| id | title | url | text | category | word_count |
|---|---|---|---|---|---|
698 | Atlantic Ocean | https://en.wikipedia.org/wiki/Atlantic_Ocean |
The Atlantic Ocean is the second largest of the world's five oceanic divisions. It covers approximately 17% of Earth's surface and about 24% of its water surface area. During the Age of Discovery, it was known for separating the New World of the Americas (North America and South America) from the Old World of Afro-Eurasia (Africa, Asia, and Europe).
Through its separation of Afro-Eurasia from the Americas, the Atlantic Ocean has played a central role in the development of human society, globalization, and the histories of many nations. While the Norse were the first known humans to cross the Atlantic, it was the expedition of Christopher Columbus in 1492 that proved to be the most consequential. Columbus's expedition ushered in an age of exploration and colonization of the Americas by European powers, most notably Portugal, Spain, France, and the United Kingdom. From the 16th to 19th centuries, the Atlantic Ocean was the center of both an eponymous slave trade and the Columbian exchange while occasionally hosting naval battles. Such naval battles, as well as growing trade from regional American powers like the United States and Brazil, both increased in degree during the early 20th century. After World War II, major military operations became rarer, though notable postwar conflicts include the Cuban Missile Crisis and the Falklands War. The ocean remains a core component of trade around the world.
The Atlantic Ocean's temperatures vary by location. For example, the South Atlantic maintains warm temperatures year-round, as its basin countries are tropical. The North Atlantic maintains a temperate climate, as its basin countries are temperate and have seasons of extremely low temperatures and high temperatures.
The Atlantic Ocean occupies an elongated, S-shaped basin extending longitudinally between Europe and Africa to the east, and the Americas to the west. As one component of the interconnected World Ocean, it is connected in the north to the Arctic Ocean, to the Pacific Ocean in the southwest, the Indian Ocean in the southeast, and the Southern Ocean in the south. Other definitions describe the Atlantic as extending southward to Antarctica. The Atlantic Ocean is divided into two parts, the northern and southern Atlantic, by the Equator (International Hydrographic Organization, Limits of Oceans and Seas, 3rd ed., 1953, pages 4 and 13).
Toponymy
The oldest known mentions of an "Atlantic" sea come from Stesichorus around the mid-sixth century BC (Sch. A. R. 1. 211) and from The Histories of Herodotus around 450 BC (Hdt. 1.202.4), where the name refers to "the sea beyond the pillars of Hercules", said to be part of the sea that surrounds all land. In these uses, the name refers to Atlas, the Titan in Greek mythology, who supported the heavens and who later appeared as a frontispiece in medieval maps and also lent his name to modern atlases. To early Greek sailors and in ancient Greek mythological literature such as the Iliad and the Odyssey, this all-encompassing ocean was instead known as Oceanus, the gigantic river that encircled the world, in contrast to the enclosed seas well known to the Greeks: the Mediterranean and the Black Sea. The term "Atlantic" itself originally referred specifically to the Atlas Mountains in Morocco and the sea off the Strait of Gibraltar and the West African coast.
The term "Aethiopian Ocean", derived from Ancient Ethiopia, was applied to the southern Atlantic as late as the mid-19th century. During the Age of Discovery, the Atlantic was also known to English cartographers as the Great Western Ocean.
The pond is a term often used by British and American speakers in reference to the northern Atlantic Ocean, as a form of meiosis, or ironic understatement. It is used mostly when referring to events or circumstances "on this side of the pond" or "on the other side of the pond" or "across the pond", rather than to discuss the ocean itself. The term dates to 1640, first appearing in print in a pamphlet released during the reign of Charles I, and reproduced in 1869 in Nehemiah Wallington's Historical Notices of Events Occurring Chiefly in The Reign of Charles I, where "great Pond" is used in reference to the Atlantic Ocean by Francis Windebank, Charles I's Secretary of State.
Extent and data
The International Hydrographic Organization (IHO) defined the limits of the oceans and seas in 1953, but some of these definitions have been revised since then and some are not recognized by various authorities, institutions, and countries, for example the CIA World Factbook. Correspondingly, the extent and number of oceans and seas vary.
The Atlantic Ocean is bounded on the west by North and South America. It connects to the Arctic Ocean through the Labrador Sea, Denmark Strait, Greenland Sea, Norwegian Sea and Barents Sea with the northern divider passing through Iceland and Svalbard. To the east, the boundaries of the ocean proper are Europe and Africa: the Strait of Gibraltar (where it connects with the Mediterranean Sea – one of its marginal seas – and, in turn, the Black Sea, both of which also touch upon Asia).
In the southeast, the Atlantic merges into the Indian Ocean. The 20° East meridian, running south from Cape Agulhas to Antarctica, defines its border. In the 1953 definition the Atlantic extends south to Antarctica, while in later maps it is bounded at the 60th parallel south by the Southern Ocean.
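To make the two southern limits above concrete, here is a minimal, illustrative sketch in Python (not from the source, and deliberately ignoring the far more detailed coastline-based IHO limits) that classifies a point only against the 20°E meridian and the 60°S parallel:

```python
# Minimal sketch (not from the source): classify a point against the two
# southern limits described above, namely the 20 deg E meridian off Cape
# Agulhas and the 60 deg S parallel. It ignores the real, coastline-based
# IHO limits, so it is only meaningful near these two boundaries.
def classify_southern_limit(lat_deg: float, lon_deg: float) -> str:
    if lat_deg < -60.0:
        return "Southern Ocean (south of the 60th parallel south)"
    if lon_deg >= 20.0:
        return "Indian Ocean side of the Cape Agulhas meridian"
    return "Atlantic side"

# Approximate example coordinates:
print(classify_southern_limit(-33.9, 18.4))  # off Cape Town -> Atlantic side
print(classify_southern_limit(-34.0, 25.6))  # off Gqeberha -> Indian Ocean side
print(classify_southern_limit(-65.0, 0.0))   # -> Southern Ocean
```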
The Atlantic has irregular coasts indented by numerous bays, gulfs and seas. These include the Baltic Sea, Black Sea, Caribbean Sea, Davis Strait, Denmark Strait, part of the Drake Passage, Gulf of Mexico, Labrador Sea, Mediterranean Sea, North Sea, Norwegian Sea, almost all of the Scotia Sea, and other tributary water bodies. Even including these marginal seas, the coastline of the Atlantic is shorter than that of the Pacific.
Including its marginal seas, the Atlantic covers 23.5% of the global ocean by area and holds 23.3% of its total volume; the coverage is somewhat smaller when the marginal seas are excluded. The North Atlantic accounts for about 11.5% and the South Atlantic for about 11.1% of the global ocean. The average depth is close to that of the global ocean, and the maximum depth is reached at the Milwaukee Deep in the Puerto Rico Trench.
Bathymetry
The bathymetry of the Atlantic is dominated by a submarine mountain range called the Mid-Atlantic Ridge (MAR). It runs from 87°N (roughly 330 km south of the North Pole) to the subantarctic Bouvet Island at 54°S. Expeditions to explore the bathymetry of the Atlantic include the Challenger expedition and the German Meteor expedition; Columbia University's Lamont–Doherty Earth Observatory and the United States Navy Hydrographic Office continue to conduct research on the ocean.
Mid-Atlantic Ridge
The MAR divides the Atlantic longitudinally into two halves, in each of which a series of basins is delimited by secondary, transverse ridges. The MAR rises high above the seafloor along most of its length, but is interrupted by larger transform faults at two places: the Romanche Trench near the Equator and the Gibbs fracture zone at 53°N. The MAR is a barrier for bottom water, but at these two transform faults deep-water currents can pass from one side to the other.
The MAR rises above the surrounding ocean floor, and its rift valley is the divergent boundary between the North American and Eurasian plates in the North Atlantic and the South American and African plates in the South Atlantic. The MAR produces basaltic volcanoes, such as Eyjafjallajökull in Iceland, and pillow lava on the ocean floor. The water at the apex of the ridge is comparatively shallow in most places, while the base of the ridge is roughly three times as deep.
The MAR is intersected by two perpendicular ridges: the Azores–Gibraltar transform fault, the boundary between the Nubian and Eurasian plates, intersects the MAR at the Azores triple junction, on either side of the Azores microplate, near 40°N. A much vaguer, nameless boundary between the North American and South American plates intersects the MAR near or just north of the Fifteen-Twenty fracture zone, at approximately 16°N.
In the 1870s, the Challenger expedition discovered parts of what is now known as the Mid-Atlantic Ridge.
The remainder of the ridge was discovered in the 1920s by the German Meteor expedition using echo-sounding equipment. The exploration of the MAR in the 1950s led to the general acceptance of seafloor spreading and plate tectonics.
Most of the MAR runs under water, but where it reaches the surface it has produced volcanic islands. While nine of these have collectively been nominated as a World Heritage Site for their geological value, four of them are considered of "Outstanding Universal Value" based on their cultural and natural criteria: Þingvellir, Iceland; the Landscape of the Pico Island Vineyard Culture, Portugal; Gough and Inaccessible Islands, United Kingdom; and the Brazilian Atlantic Islands: Fernando de Noronha and Atol das Rocas Reserves, Brazil.
Ocean floor
Continental shelves in the Atlantic are wide off Newfoundland, southernmost South America, and northeastern Europe.
In the western Atlantic carbonate platforms dominate large areas, for example, the Blake Plateau and Bermuda Rise.
The Atlantic is surrounded by passive margins except at a few locations where active margins form deep trenches: the Puerto Rico Trench in the western Atlantic and the South Sandwich Trench in the South Atlantic. There are numerous submarine canyons off northeastern North America, western Europe, and northwestern Africa. Some of these canyons extend along the continental rises and farther into the abyssal plains as deep-sea channels.
In 1922, a historic moment in cartography and oceanography occurred when the USS Stewart used a Navy Sonic Depth Finder to draw a continuous map across the bed of the Atlantic. The method involved little guesswork: sonar pulses are sent from the vessel, bounce off the ocean floor, and return to the vessel, and the travel time indicates the depth. The deep ocean floor is thought to be fairly flat, with occasional deeps, abyssal plains, trenches, seamounts, basins, plateaus, canyons, and some guyots. The various shelves along the margins of the continents constitute about 11% of the bottom topography, with few deep channels cut across the continental rise.
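As a small illustration of the echo-sounding principle described above (a sketch, not the Navy's actual 1922 procedure; the 1,500 m/s sound speed is an assumed typical value for seawater):

```python
# Illustrative sketch of echo sounding (not the Navy's actual procedure).
# The 1,500 m/s figure is an assumed typical speed of sound in seawater.
SOUND_SPEED_SEAWATER_M_S = 1500.0

def depth_from_echo(two_way_travel_time_s: float,
                    sound_speed_m_s: float = SOUND_SPEED_SEAWATER_M_S) -> float:
    """Return depth in metres: the pulse travels down and back, so halve the path."""
    return sound_speed_m_s * two_way_travel_time_s / 2.0

# Example: an echo returning after 8 seconds implies roughly 6,000 m of water.
print(f"{depth_from_echo(8.0):.0f} m")  # -> 6000 m
```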
The mean depth between 60°N and 60°S is close to the average for the global ocean.
In the South Atlantic the Walvis Ridge and Rio Grande Rise form barriers to ocean currents.
The Laurentian Abyss is found off the eastern coast of Canada.
Water characteristics
Path of the thermohaline circulation. Purple paths represent deep-water currents, while blue paths represent surface currents.
Surface water temperatures, which vary with latitude, current systems, and season and reflect the latitudinal distribution of solar energy, range from below freezing to tropical warmth. Maximum temperatures occur north of the equator, and minimum values are found in the polar regions. The middle latitudes are the area of maximum seasonal temperature variation.
From October to June the surface is usually covered with sea ice in the Labrador Sea, Denmark Strait, and Baltic Sea.
The Coriolis effect circulates North Atlantic water in a clockwise direction, whereas South Atlantic water circulates counter-clockwise. The tides in the Atlantic Ocean are semi-diurnal; that is, two high tides occur every 24 lunar hours. In latitudes above 40° North some east–west oscillation, known as the North Atlantic oscillation, occurs.
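As a quick worked example of the semi-diurnal rhythm (standard tidal arithmetic, not a figure from the source): a lunar day lasts roughly 24 hours 50 minutes of clock time, so successive high tides arrive about

$$\frac{24\,\mathrm{h}\,50\,\mathrm{min}}{2} \approx 12\,\mathrm{h}\,25\,\mathrm{min}$$

apart, which is why each high tide falls somewhat later than the previous day's.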
Salinity
On average, the Atlantic is the saltiest major ocean; surface water salinity in the open ocean ranges from 33 to 37 parts per thousand (3.3–3.7%) by mass and varies with latitude and season. Evaporation, precipitation, river inflow and sea ice melting influence surface salinity values. Although the lowest salinity values are just north of the equator (because of heavy tropical rainfall), in general, the lowest values are in the high latitudes and along coasts where large rivers enter. Maximum salinity values occur at about 25° north and south, in subtropical regions with low rainfall and high evaporation.
The high surface salinity in the Atlantic, on which the Atlantic thermohaline circulation depends, is maintained by two processes: the Agulhas Leakage/Rings, which brings salty Indian Ocean waters into the South Atlantic, and the "atmospheric bridge", which evaporates subtropical Atlantic waters and exports the moisture to the Pacific.
Water masses
Temperature–salinity characteristics for Atlantic water masses:

| Water mass | Temperature | Salinity |
|---|---|---|
| Upper waters | | |
| Atlantic Subarctic Upper Water (ASUW) | 0.0–4.0 °C | 34.0–35.0 |
| Western North Atlantic Central Water (WNACW) | 7.0–20 °C | 35.0–36.7 |
| Eastern North Atlantic Central Water (ENACW) | 8.0–18.0 °C | 35.2–36.7 |
| South Atlantic Central Water (SACW) | 5.0–18.0 °C | 34.3–35.8 |
| Intermediate waters | | |
| Western Atlantic Subarctic Intermediate Water (WASIW) | 3.0–9.0 °C | 34.0–35.1 |
| Eastern Atlantic Subarctic Intermediate Water (EASIW) | 3.0–9.0 °C | 34.4–35.3 |
| Mediterranean Water (MW) | 2.6–11.0 °C | 35.0–36.2 |
| Arctic Intermediate Water (AIW) | −1.5–3.0 °C | 34.7–34.9 |
| Deep and abyssal waters (1,500 m / 4,900 ft to bottom) | | |
| North Atlantic Deep Water (NADW) | 1.5–4.0 °C | 34.8–35.0 |
| Antarctic Bottom Water (AABW) | −0.9–1.7 °C | 34.6–34.7 |
| Arctic Bottom Water (ABW) | −1.8 to −0.5 °C | 34.9 |
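To illustrate how the ranges in the table can be read, here is a minimal Python sketch (illustrative only; the names and ranges are copied from the upper-water rows above, and real water-mass analysis uses full temperature–salinity diagrams rather than rectangular boxes):

```python
# Illustrative sketch only: names and ranges copied from the upper-water rows
# of the table above. Real water-mass analysis uses full temperature-salinity
# diagrams rather than simple rectangular boxes.
UPPER_WATER_MASSES = {
    "ASUW":  {"temp": (0.0, 4.0),  "sal": (34.0, 35.0)},
    "WNACW": {"temp": (7.0, 20.0), "sal": (35.0, 36.7)},
    "ENACW": {"temp": (8.0, 18.0), "sal": (35.2, 36.7)},
    "SACW":  {"temp": (5.0, 18.0), "sal": (34.3, 35.8)},
}

def candidate_masses(temp_c: float, salinity: float) -> list:
    """Return the upper water masses whose temperature-salinity box contains the sample."""
    return [
        name for name, box in UPPER_WATER_MASSES.items()
        if box["temp"][0] <= temp_c <= box["temp"][1]
        and box["sal"][0] <= salinity <= box["sal"][1]
    ]

# Example: a 10 degC sample with salinity 36.0 falls in both North Atlantic
# central water boxes.
print(candidate_masses(10.0, 36.0))  # -> ['WNACW', 'ENACW']
```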
The Atlantic Ocean consists of four major, upper water masses with distinct temperature and salinity. The Atlantic subarctic upper water in the northernmost North Atlantic is the source for subarctic intermediate water and North Atlantic intermediate water. North Atlantic central water can be divided into the eastern and western North Atlantic central water since the western part is strongly affected by the Gulf Stream and therefore the upper layer is closer to underlying fresher subpolar intermediate water. The eastern water is saltier because of its proximity to Mediterranean water. North Atlantic central water flows into South Atlantic central water at 15°N.
There are five intermediate waters: four low-salinity waters formed at subpolar latitudes and one high-salinity formed through evaporation. Arctic intermediate water flows from the north to become the source for North Atlantic deep water, south of the Greenland-Scotland sill. These two intermediate waters have different salinity in the western and eastern basins. The wide range of salinities in the North Atlantic is caused by the asymmetry of the northern subtropical gyre and a large number of contributions from a wide range of sources: Labrador Sea, Norwegian-Greenland Sea, Mediterranean, and South Atlantic Intermediate Water.
The North Atlantic Deep Water (NADW) is a complex of four water masses: two that form by deep convection in the open ocean (Classical and Upper Labrador Sea Water) and two that form from the inflow of dense water across the Greenland–Iceland–Scotland sill (Denmark Strait and Iceland–Scotland Overflow Water). Along its path across the Earth, the composition of the NADW is affected by other water masses, especially Antarctic Bottom Water and Mediterranean Overflow Water.
The NADW is fed by a flow of warm shallow water into the northern North Atlantic which is responsible for the anomalous warm climate in Europe. Changes in the formation of NADW have been linked to global climate changes in the past. Since human-made substances were introduced into the environment, the path of the NADW can be traced throughout its course by measuring tritium and radiocarbon from nuclear weapon tests in the 1960s and CFCs.
Gyres
The clockwise warm-water North Atlantic Gyre occupies the northern Atlantic, and the counter-clockwise warm-water South Atlantic Gyre appears in the southern Atlantic.
In the North Atlantic, surface circulation is dominated by three inter-connected currents: the Gulf Stream which flows north-east from the North American coast at Cape Hatteras; the North Atlantic Current, a branch of the Gulf Stream which flows northward from the Grand Banks; and the Subpolar Front, an extension of the North Atlantic Current, a wide, vaguely defined region separating the subtropical gyre from the subpolar gyre. This system of currents transports warm water into the North Atlantic, without which temperatures in the North Atlantic and Europe would plunge dramatically.
North of the North Atlantic Gyre, the cyclonic North Atlantic Subpolar Gyre plays a key role in climate variability. It is governed by ocean currents from marginal seas and regional topography, rather than being steered by wind, both in the deep ocean and at sea level.
The subpolar gyre forms an important part of the global thermohaline circulation. Its eastern portion includes eddying branches of the North Atlantic Current which transport warm, saline waters from the subtropics to the northeastern Atlantic. There this water is cooled during winter and forms return currents that merge along the eastern continental slope of Greenland where they form an intense (40–50 Sv) current which flows around the continental margins of the Labrador Sea. A third of this water becomes part of the deep portion of the North Atlantic Deep Water (NADW). The NADW, in turn, feeds the meridional overturning circulation (MOC), the northward heat transport of which is threatened by anthropogenic climate change. Large variations in the subpolar gyre on a decade-century scale, associated with the North Atlantic oscillation, are especially pronounced in Labrador Sea Water, the upper layers of the MOC.
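For scale, the 40–50 Sv figure quoted above can be unpacked with the standard definition of the sverdrup (the unit conversion is textbook material, not a number from the source): one sverdrup equals one million cubic metres per second, so

$$40\text{--}50\ \mathrm{Sv} = (40\text{--}50)\times 10^{6}\ \mathrm{m^{3}\,s^{-1}},$$

that is, tens of millions of cubic metres of water flowing past every second.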
The South Atlantic is dominated by the anti-cyclonic southern subtropical gyre. The South Atlantic Central Water originates in this gyre, while Antarctic Intermediate Water originates in the upper layers of the circumpolar region, near the Drake Passage and the Falkland Islands. Both these currents receive some contribution from the Indian Ocean. On the African east coast, the small cyclonic Angola Gyre lies embedded in the large subtropical gyre.
The southern subtropical gyre is partly masked by a wind-induced Ekman layer. The residence time of the gyre is 4.4–8.5 years. North Atlantic Deep Water flows southward below the thermocline of the subtropical gyre.
Sargasso Sea
The Sargasso Sea in the western North Atlantic can be defined as the area where two species of Sargassum (S. fluitans and S. natans) float, an area encircled by the Gulf Stream, North Atlantic Drift, and North Equatorial Current. This population of seaweed probably originated from Tertiary ancestors on the European shores of the former Tethys Ocean and has, if so, maintained itself by vegetative growth, floating in the ocean for millions of years.
Other species endemic to the Sargasso Sea include the sargassum fish, a predator with algae-like appendages which hovers motionless among the Sargassum. Fossils of similar fishes have been found in fossil bays of the former Tethys Ocean, in what is now the Carpathian region, bays that resembled the Sargasso Sea. It is possible that the population in the Sargasso Sea migrated to the Atlantic as the Tethys closed at the end of the Miocene around 17 Ma. The origin of the Sargasso fauna and flora remained enigmatic for centuries. The fossils found in the Carpathians in the mid-20th century, often called the "quasi-Sargasso assemblage", finally showed that this assemblage originated in the Carpathian Basin, from where it migrated over Sicily to the central Atlantic, where it evolved into the modern species of the Sargasso Sea.
The location of the spawning ground for European eels remained unknown for decades. In the early 20th century it was discovered that the southern Sargasso Sea is the spawning ground for both the European and American eel, and that the former migrate considerably farther than the latter. Ocean currents such as the Gulf Stream transport eel larvae from the Sargasso Sea to foraging areas in North America, Europe, and northern Africa. Recent but disputed research suggests that eels may use Earth's magnetic field to navigate through the ocean both as larvae and as adults.
Climate
The climate is influenced by the temperatures of the surface waters and water currents as well as winds. Because of the ocean's great capacity to store and release heat, maritime climates are more moderate and have less extreme seasonal variations than inland climates. Precipitation can be approximated from coastal weather data and air temperature from water temperatures.
The oceans are the major source of atmospheric moisture that is obtained through evaporation. Climatic zones vary with latitude; the warmest zones stretch across the Atlantic north of the equator. The coldest zones are in high latitudes, with the coldest regions corresponding to the areas covered by sea ice. Ocean currents influence the climate by transporting warm and cold waters to other regions. The winds that are cooled or warmed when blowing over these currents influence adjacent land areas.
The Gulf Stream and its northern extension towards Europe, the North Atlantic Drift, are thought to have at least some influence on climate. For example, the Gulf Stream helps moderate winter temperatures along the coastline of southeastern North America, keeping the coast warmer in winter than inland areas. The Gulf Stream also keeps extreme temperatures from occurring on the Florida Peninsula. In the higher latitudes, the North Atlantic Drift warms the atmosphere over the oceans, keeping the British Isles and northwestern Europe mild and cloudy rather than severely cold in winter, unlike other locations at the same high latitude. Cold water currents contribute to heavy fog off the coast of eastern Canada (the Grand Banks of Newfoundland area) and Africa's northwestern coast. In general, winds transport moisture and air over land areas.
Natural hazards
Every winter, the Icelandic Low produces frequent storms. Icebergs are common from early February to the end of July across the shipping lanes near the Grand Banks of Newfoundland. The ice season is longer in the polar regions, but there is little shipping in those areas.
Hurricanes are a hazard in the western parts of the North Atlantic during the summer and autumn. Due to a consistently strong wind shear and a weak Intertropical Convergence Zone, South Atlantic tropical cyclones are rare.
Geology and plate tectonics
The Atlantic Ocean is underlain mostly by dense mafic oceanic crust made up of basalt and gabbro and overlain by fine clay, silt, and siliceous ooze on the abyssal plain. The continental margins and continental shelf mark lower-density but thicker felsic continental rock that is often much older than that of the seafloor. The oldest oceanic crust in the Atlantic is up to 145 million years old and is situated off the west coast of Africa and the east coast of North America, or on either side of the South Atlantic.
In many places, the continental shelf and continental slope are covered in thick sedimentary layers. For instance, on the North American side of the ocean, large carbonate deposits formed in warm shallow waters such as Florida and the Bahamas, while coarse river outwash sands and silt are common in shallow shelf areas like the Georges Bank. Coarse sand, boulders, and rocks were transported into some areas, such as off the coast of Nova Scotia or the Gulf of Maine during the Pleistocene ice ages.
Central Atlantic
The break-up of Pangaea began in the central Atlantic, between North America and Northwest Africa, where rift basins opened during the Late Triassic and Early Jurassic. This period also saw the first stages of the uplift of the Atlas Mountains. The exact timing is controversial with estimates ranging from 200 to 170 Ma.
The opening of the Atlantic Ocean coincided with the initial break-up of the supercontinent Pangaea, both of which were initiated by the eruption of the Central Atlantic Magmatic Province (CAMP), one of the most extensive and voluminous large igneous provinces in Earth's history associated with the Triassic–Jurassic extinction event, one of Earth's major extinction events.
Tholeiitic dikes, sills, and lava flows from the CAMP eruption at 200 Ma have been found in West Africa, eastern North America, and northern South America. The volcanism is estimated to have covered a vast area, part of which lies in what is now northern and central Brazil.
The formation of the Central American Isthmus closed the Central American Seaway at the end of the Pliocene 2.8 Ma ago. The formation of the isthmus resulted in the migration and extinction of many land-living animals, known as the Great American Interchange, but the closure of the seaway resulted in a "Great American Schism" as it affected ocean currents, salinity, and temperatures in both the Atlantic and Pacific. Marine organisms on both sides of the isthmus became isolated and either diverged or went extinct.
North Atlantic
Geologically, the North Atlantic is the area delimited to the south by two conjugate margins, Newfoundland and Iberia, and to the north by the Arctic Eurasian Basin. The opening of the North Atlantic closely followed the margins of its predecessor, the Iapetus Ocean, and spread from the central Atlantic in six stages: Iberia–Newfoundland, Porcupine–North America, Eurasia–Greenland, Eurasia–North America. Active and inactive spreading systems in this area are marked by the interaction with the Iceland hotspot.
Seafloor spreading led to the extension of the crust and the formation of troughs and sedimentary basins. The Rockall Trough opened between 105 and 84 million years ago although the rift failed along with one leading into the Bay of Biscay.
Spreading began opening the Labrador Sea around 61 million years ago, continuing until 36 million years ago. Geologists distinguish two magmatic phases. One from 62 to 58 million years ago predates the separation of Greenland from northern Europe while the second from 56 to 52 million years ago happened as the separation occurred.
Iceland began to form 62 million years ago due to a particularly concentrated mantle plume. Large quantities of basalt erupted during this period are found on Baffin Island, Greenland, the Faroe Islands, and Scotland, with ash falls in Western Europe acting as a stratigraphic marker. The opening of the North Atlantic caused significant uplift of continental crust along the coast. For instance, despite a basalt cover up to 7 km thick, Gunnbjørn Fjeld in East Greenland is the highest point on the island, elevated enough that it exposes older Mesozoic sedimentary rocks at its base, similar to the old lava fields above sedimentary rocks in the uplifted Hebrides of western Scotland.
The North Atlantic Ocean contains about 810 seamounts, most of them situated along the Mid-Atlantic Ridge (Gubbay, S. 2003. Seamounts of the Northeast Atlantic. OASIS (Oceanic Seamounts: an Integrated Study). Hamburg & WWF, Frankfurt am Main, Germany). The OSPAR database (Convention for the Protection of the Marine Environment of the North-East Atlantic) mentions 104 seamounts, 74 of which lie within national exclusive economic zones. Of these seamounts, 46 are located close to the Iberian Peninsula.
South Atlantic
West Gondwana (South America and Africa) broke up in the Early Cretaceous to form the South Atlantic. The apparent fit between the coastlines of the two continents was noted on the first maps that included the South Atlantic and it was also the subject of the first computer-assisted plate tectonic reconstructions in 1965. This magnificent fit, however, has since then proven problematic and later reconstructions have introduced various deformation zones along the shorelines to accommodate the northward-propagating break-up. Intra-continental rifts and deformations have also been introduced to subdivide both continental plates into sub-plates.
Geologically, the South Atlantic can be divided into four segments: equatorial segment, from 10°N to the Romanche fracture zone (RFZ); central segment, from RFZ to Florianopolis fracture zone (FFZ, north of Walvis Ridge and Rio Grande Rise); southern segment, from FFZ to the Agulhas–Falkland fracture zone (AFFZ); and Falkland segment, south of AFFZ.
In the southern segment, the Early Cretaceous (133–130 Ma) intensive magmatism of the Paraná–Etendeka Large Igneous Province, produced by the Tristan hotspot, generated an enormous volume of lava. It covered large areas of Brazil, Paraguay, and Uruguay as well as adjoining parts of Africa. Dyke swarms in Brazil, Angola, eastern Paraguay, and Namibia, however, suggest the LIP originally covered a much larger area and also indicate failed rifts in all these areas. Associated offshore basaltic flows reach as far south as the Falkland Islands and South Africa. Traces of magmatism in both offshore and onshore basins in the central and southern segments have been dated to 147–49 Ma, with two peaks between 143 and 121 Ma and 90–60 Ma.
In the Falkland segment rifting began with dextral movements between the Patagonia and Colorado sub-plates between the Early Jurassic (190 Ma) and the Early Cretaceous (126.7 Ma). Around 150 Ma sea-floor spreading propagated northward into the southern segment. No later than 130 Ma rifting had reached the Walvis Ridge–Rio Grande Rise.
In the central segment, rifting started to break Africa in two by opening the Benue Trough around 118 Ma. Rifting in the central segment, however, coincided with the Cretaceous Normal Superchron (also known as the Cretaceous quiet period), a 40 Ma period without magnetic reversals, which makes it difficult to date sea-floor spreading in this segment.
The equatorial segment is the last phase of the break-up, but, because it is located on the Equator, magnetic anomalies cannot be used for dating. Various estimates date the propagation of seafloor spreading in this segment and consequent opening of the Equatorial Atlantic Gateway (EAG) to the period 120–96 Ma. This final stage, nevertheless, coincided with or resulted in the end of continental extension in Africa.
About 50 Ma, the opening of the Drake Passage resulted from a change in the motions and separation rate of the South American and Antarctic plates. First, small ocean basins opened and a shallow gateway appeared during the Middle Eocene. Around 34–30 Ma, a deeper seaway developed, followed by an Eocene–Oligocene climatic deterioration and the growth of the Antarctic ice sheet.
Closure of the Atlantic
An embryonic subduction margin is potentially developing west of Gibraltar. The Gibraltar Arc in the western Mediterranean is migrating westward into the central Atlantic where it joins the converging African and Eurasian plates. Together these three tectonic forces are slowly developing into a new subduction system in the eastern Atlantic Basin. Meanwhile, the Scotia Arc and Caribbean plate in the western Atlantic Basin are eastward-propagating subduction systems that might, together with the Gibraltar system, represent the beginning of the closure of the Atlantic Ocean and the final stage of the Atlantic Wilson cycle.
History
Old World
Mitochondrial DNA (mtDNA) studies indicate that 80,000–60,000 years ago a major demographic expansion within Africa, derived from a single, small population, coincided with the emergence of behavioral complexity and the rapid MIS 5–4 environmental changes. This group of people not only expanded over the whole of Africa, but also started to disperse out of Africa into Asia, Europe, and Australasia around 65,000 years ago and quickly replaced the archaic humans in these regions. During the Last Glacial Maximum (LGM) 20,000 years ago, humans had to abandon their initial settlements along the European North Atlantic coast and retreat to the Mediterranean. Following rapid climate changes at the end of the LGM, this region was repopulated by the Magdalenian culture. Other hunter-gatherers followed in waves interrupted by hazards such as the Laacher See volcanic eruption, the inundation of Doggerland (now the North Sea), and the formation of the Baltic Sea. The European coasts of the North Atlantic were permanently populated about 9,000–8,500 years ago.
This human dispersal left abundant traces along the coasts of the Atlantic Ocean. 50 kya-old, deeply stratified shell middens found in Ysterfontein on the western coast of South Africa are associated with the Middle Stone Age (MSA). The MSA population was small and dispersed and the rate of their reproduction and exploitation was less intense than those of later generations. While their middens resemble 12–11 kya-old Late Stone Age (LSA) middens found on every inhabited continent, the 50–45 kya-old Enkapune Ya Muto in Kenya probably represents the oldest traces of the first modern humans to disperse out of Africa.
The same development can be seen in Europe. In La Riera Cave (23–13 kya) in Asturias, Spain, only some 26,600 molluscs were deposited over 10 kya. In contrast, 8–7 kya-old shell middens in Portugal, Denmark, and Brazil generated thousands of tons of debris and artefacts. The Ertebølle middens in Denmark, for example, accumulated shell deposits representing some 50 million molluscs over only a thousand years. This intensification in the exploitation of marine resources has been described as being accompanied by new technologies, such as boats, harpoons, and fish hooks, because many caves found in the Mediterranean and on the European Atlantic coast have increased quantities of marine shells in their upper levels and reduced quantities in their lower. The earliest exploitation, however, took place on the now-submerged shelves, and most settlements now excavated were then located several kilometers from these shelves. The reduced quantities of shells in the lower levels may represent the few shells that were exported inland.
New World
During the LGM the Laurentide Ice Sheet covered most of northern North America while Beringia connected Siberia to Alaska. In 1973, late American geoscientist Paul S. Martin proposed a "blitzkrieg" colonization of the Americas by which Clovis hunters migrated into North America around 13,000 years ago in a single wave through an ice-free corridor in the ice sheet and "spread southward explosively, briefly attaining a density sufficiently large to overkill much of their prey." Others later proposed a "three-wave" migration over the Bering Land Bridge. These hypotheses remained the long-held view regarding the settlement of the Americas, a view challenged by more recent archaeological discoveries: the oldest archaeological sites in the Americas have been found in South America; sites in northeast Siberia report virtually no human presence there during the LGM; and most Clovis artefacts have been found in eastern North America along the Atlantic coast. Furthermore, colonisation models based on mtDNA, yDNA, and atDNA data respectively support neither the "blitzkrieg" nor the "three-wave" hypotheses but they also deliver mutually ambiguous results. Contradictory data from archaeology and genetics will most likely deliver future hypotheses that will, eventually, confirm each other. A proposed route across the Pacific to South America could explain early South American finds and another hypothesis proposes a northern path, through the Canadian Arctic and down the North American Atlantic coast.
Early settlements across the Atlantic have been suggested by alternative theories, ranging from purely hypothetical to mostly disputed, including the Solutrean hypothesis and some of the Pre-Columbian trans-oceanic contact theories.
The Norse settlement of the Faroe Islands and Iceland began during the 9th and 10th centuries. A settlement on Greenland was established before 1000 CE, but contact with it was lost in 1409 and it was finally abandoned during the early Little Ice Age. This setback was caused by a range of factors: an unsustainable economy resulted in erosion and denudation, while conflicts with the local Inuit resulted in the failure to adapt their Arctic technologies; a colder climate resulted in starvation, and the colony became economically marginalized as the Great Plague took its toll on Iceland in the 15th century.
Iceland was initially settled 865–930 CE, following a warm period when winter temperatures were mild enough to make farming favorable at high latitudes. This did not last, however, and temperatures quickly dropped; by 1080 CE summer temperatures had fallen markedly. The Landnámabók (Book of Settlement) records disastrous famines during the first century of settlement ("men ate foxes and ravens" and "the old and helpless were killed and thrown over cliffs"), and by the early 1200s hay had to be abandoned for short-season crops such as barley.
Atlantic World
Christopher Columbus reached the Americas in 1492, sailing under the Spanish flag. Six years later, Vasco da Gama reached India under the Portuguese flag by navigating south around the Cape of Good Hope, thus proving that the Atlantic and Indian Oceans are connected. In 1500, on his voyage to India following Vasco da Gama, Pedro Álvares Cabral reached Brazil, carried by the currents of the South Atlantic Gyre. Following these explorations, Spain and Portugal quickly conquered and colonized large territories in the New World and forced the Amerindian population into slavery in order to exploit the vast quantities of silver and gold they found. Spain and Portugal monopolized this trade in order to keep other European nations out, but conflicting interests nevertheless led to a series of Spanish-Portuguese wars. A peace treaty mediated by the Pope divided the conquered territories into Spanish and Portuguese sectors while keeping other colonial powers away. England, France, and the Dutch Republic enviously watched the Spanish and Portuguese wealth grow and allied themselves with pirates such as Henry Mainwaring and Alexandre Exquemelin. They could prey on the convoys leaving the Americas because prevailing winds and currents made the transport of heavy metals slow and predictable.
In the colonies of the Americas, depredation, smallpox and other diseases, and slavery quickly reduced the indigenous population of the Americas to the extent that the Atlantic slave trade was introduced by colonists to replace them, a trade that became the norm and an integral part of the colonization. Between the 15th century and 1888, when Brazil became the last part of the Americas to end the slave trade, an estimated 9.5 million enslaved Africans were shipped into the New World, most of them destined for agricultural labor. The slave trade was officially abolished in the British Empire and the United States in 1808, and slavery itself was abolished in the British Empire in 1838 and in the United States in 1865 after the Civil War.
From Columbus to the Industrial Revolution trans-Atlantic trade, including colonialism and slavery, became crucial for Western Europe. For European countries with direct access to the Atlantic (including Britain, France, the Netherlands, Portugal, and Spain) 1500–1800 was a period of sustained growth during which these countries grew richer than those in Eastern Europe and Asia. Colonialism evolved as part of the trans-Atlantic trade, but this trade also strengthened the position of merchant groups at the expense of monarchs. Growth was more rapid in non-absolutist countries, such as Britain and the Netherlands, and more limited in absolutist monarchies, such as Portugal, Spain, and France, where profit mostly or exclusively benefited the monarchy and its allies.
Trans-Atlantic trade also resulted in increasing urbanization: in European countries facing the Atlantic, urbanization grew from 8% in 1300, 10.1% in 1500, to 24.5% in 1850; in other European countries from 10% in 1300, 11.4% in 1500, to 17% in 1850. Likewise, GDP doubled in Atlantic countries but rose by only 30% in the rest of Europe. By the end of the 17th century, the volume of the Trans-Atlantic trade had surpassed that of the Mediterranean trade.
Economy
The Atlantic has contributed significantly to the development and economy of surrounding countries. Besides major transatlantic transportation and communication routes, the Atlantic offers abundant petroleum deposits in the sedimentary rocks of the continental shelves.
The Atlantic harbors petroleum and gas fields, fish, marine mammals (seals and whales), sand and gravel aggregates, placer deposits, polymetallic nodules, and precious stones. Gold deposits lie a mile or two underwater on the ocean floor; however, the deposits are encased in rock that must be mined through, and there is currently no cost-effective way to mine or extract gold from the ocean at a profit. Various international treaties attempt to reduce pollution caused by environmental threats such as oil spills, marine debris, and the incineration of toxic wastes at sea.
Fisheries
The shelves of the Atlantic host one of the world's richest fishing resources. The most productive areas include the Grand Banks of Newfoundland, the Scotian Shelf, Georges Bank off Cape Cod, the Bahama Banks, the waters around Iceland, the Irish Sea, the Bay of Fundy, the Dogger Bank of the North Sea, and the Falkland Banks. Fisheries have undergone significant changes since the 1950s, and global catches can now be divided into three groups, of which only two are observed in the Atlantic: fisheries in the eastern-central and southwest Atlantic oscillate around a globally stable value, while the rest of the Atlantic is in overall decline following historical peaks. The third group, showing a continuously increasing trend since 1950, is found only in the Indian Ocean and western Pacific. The UN FAO partitions the Atlantic into major fishing areas:
Northeast Atlantic
The Northeast Atlantic fishing area is schematically bounded in the west by the 40°00′ W meridian (except around Greenland), in the south by the 36°00′ N parallel, and in the east by the 68°30′ E meridian, with both the western and eastern longitude limits reaching to the North Pole. Its subareas include: Barents Sea; Norwegian Sea, Spitzbergen, and Bear Island; Skagerrak, Kattegat, Sound, Belt Sea, and Baltic Sea; North Sea; Iceland and Faroes Grounds; Rockall, Northwest Coast of Scotland, and North Ireland; Irish Sea, West of Ireland, Porcupine Bank, and eastern and western English Channel; Bay of Biscay; Portuguese Waters; Azores Grounds and Northeast Atlantic South; North of Azores; and East Greenland. There are also two defunct subareas.
In the Northeast Atlantic, total catches decreased between the mid-1970s and the 1990s and reached 8.7 million tons in 2013. Blue whiting reached a 2.4 million ton peak in 2004 but was down to 628,000 tons in 2013. Recovery plans for cod, sole, and plaice have reduced mortality in these species. Arctic cod reached its lowest levels in the 1960s–1980s but has now recovered. Arctic saithe and haddock are considered fully fished; sand eel is overfished, as was capelin, which has since recovered to fully fished. Limited data make the state of redfishes and deep-water species difficult to assess, but most likely they remain vulnerable to overfishing. Stocks of northern shrimp and Norwegian lobster are in good condition. In the Northeast Atlantic, 21% of stocks are considered overfished.
This zone accounted for almost three-quarters (72.8%) of European Union fishing catches in 2020. The main EU fishing countries are Denmark, France, the Netherlands, and Spain. The most common species include herring, mackerel, and sprats.
Northwest Atlantic
In the Northwest Atlantic, landings have decreased from 4.2 million tons in the early 1970s to 1.9 million tons in 2013. During the 21st century, some species have shown weak signs of recovery, including Greenland halibut, yellowtail flounder, Atlantic halibut, haddock, and spiny dogfish, while other stocks have shown no such signs, including cod, witch flounder, and redfish. Stocks of invertebrates, in contrast, remain at record levels of abundance. 31% of stocks are overfished in the Northwest Atlantic.
In 1497, John Cabot became the first Western European since the Vikings to explore mainland North America, and one of his major discoveries was the abundant resource of Atlantic cod off Newfoundland. Referred to as "Newfoundland Currency", this discovery yielded some 200 million tons of fish over five centuries. In the late 19th and early 20th centuries, new fisheries started to exploit haddock, mackerel, and lobster. From the 1950s to the 1970s, the introduction of European and Asian distant-water fleets in the area dramatically increased the fishing capacity and the number of exploited species. It also expanded the exploited areas from near-shore to the open sea and to great depths to include deep-water species such as redfish, Greenland halibut, witch flounder, and grenadiers. Overfishing in the area was recognized as early as the 1960s, but because this was occurring in international waters, it took until the late 1970s before any attempts at regulation were made. In the early 1990s, this finally resulted in the collapse of the Atlantic northwest cod fishery. The populations of a number of deep-sea fishes also collapsed in the process, including American plaice, redfish, and Greenland halibut, together with flounder and grenadier.
Eastern Central Atlantic
In the eastern central Atlantic, small pelagic fishes constitute about 50% of landings, with sardine reaching 0.6–1.0 million tons per year. Pelagic fish stocks are considered fully fished or overfished, with sardines south of Cape Bojador the notable exception. Almost half of the stocks are fished at biologically unsustainable levels. Total catches have been fluctuating since the 1970s, reaching 3.9 million tons in 2013, or slightly less than the peak production in 2010.
Western Central Atlantic
In the western central Atlantic, catches have been decreasing since 2000 and reached 1.3 million tons in 2013. The most important species in the area, Gulf menhaden, reached a million tons in the mid-1980s but only half a million tons in 2013 and is now considered fully fished. Round sardinella was an important species in the 1990s but is now considered overfished. Groupers and snappers are overfished, and northern brown shrimp and American cupped oyster are considered fully fished, approaching overfished. 44% of stocks are being fished at unsustainable levels.
Southeast Atlantic
In the southeast Atlantic, catches have decreased from 3.3 million tons in the early 1970s to 1.3 million tons in 2013. Horse mackerel and hake are the most important species, together representing almost half of the landings. Off South Africa and Namibia, deep-water hake and shallow-water Cape hake have recovered to sustainable levels since regulations were introduced in 2006, and the status of southern African pilchard and anchovy improved to fully fished in 2013.
Southwest Atlantic
In the southwest Atlantic, a peak was reached in the mid-1980s and catches now fluctuate between 1.7 and 2.6 million tons. The most important species, the Argentine shortfin squid, which reached half a million tons in 2013, or half the peak value, is considered fully fished to overfished. Another formerly important species, the Brazilian sardinella, had a production of 100,000 tons in 2013 and is now considered overfished. Half the stocks in this area are being fished at unsustainable levels: Whitehead's round herring has not yet reached fully fished, but Cunene horse mackerel is overfished. The sea snail perlemoen abalone is targeted by illegal fishing and remains overfished.
Environmental issues
Endangered species
Endangered marine species include the manatee, seals, sea lions, turtles, and whales. Drift net fishing can kill dolphins, albatrosses and other seabirds (petrels, auks), hastening the fish stock decline and contributing to international disputes.
List
Green sea turtle
Kemp's ridley sea turtle
Leatherback sea turtle
Loggerhead sea turtle
Smalltooth sawfish
Shortnose sturgeon
Atlantic sturgeon
Oceanic whitetip shark
Giant oceanic manta ray
Fin whale
Blue whale
Waste and pollution
Marine pollution is a generic term for the entry into the ocean of potentially hazardous chemicals or particles. The biggest culprits are rivers, which carry with them agricultural fertilizer chemicals as well as livestock and human waste. The excess of oxygen-depleting chemicals leads to hypoxia and the creation of a dead zone (Sebastian A. Gerlach, Marine Pollution, Springer, Berlin, 1975).
Marine debris, which is also known as marine litter, describes human-created waste floating in a body of water. Oceanic debris tends to accumulate at the center of gyres and coastlines, frequently washing aground where it is known as beach litter. The North Atlantic garbage patch is estimated to be hundreds of kilometers across in size.
Other pollution concerns include agricultural and municipal waste: municipal pollution from the eastern United States, southern Brazil, and eastern Argentina; oil pollution in the Caribbean Sea, Gulf of Mexico, Lake Maracaibo, Mediterranean Sea, and North Sea; and industrial waste and municipal sewage pollution in the Baltic Sea, North Sea, and Mediterranean Sea.
A USAF C-124 aircraft from Dover Air Force Base, Delaware, was carrying three nuclear bombs over the Atlantic Ocean when it experienced a loss of power. For their own safety, the crew jettisoned two of the bombs, which were never recovered.
Climate change
North Atlantic hurricane activity has increased over past decades because of increased sea surface temperature (SST) at tropical latitudes, changes that can be attributed to either the natural Atlantic Multidecadal Oscillation (AMO) or to anthropogenic climate change.
A 2005 report indicated that the Atlantic meridional overturning circulation (AMOC) slowed down by 30% between 1957 and 2004. In 2024, research highlighted a significant weakening of the AMOC, by approximately 12% over the preceding two decades. If the AMO were responsible for SST variability, the AMOC would have increased in strength, which is apparently not the case. Furthermore, statistical analyses of annual tropical cyclones make clear that these changes do not display multidecadal cyclicity. Therefore, these changes in SST must be caused by human activities.
The ocean mixed layer plays an important role in heat storage over seasonal and decadal time scales, whereas deeper layers are affected over millennia and have a heat capacity about 50 times that of the mixed layer. This heat uptake provides a time lag for climate change, but it also results in thermal expansion of the oceans, which contributes to sea-level rise. 21st-century global warming will probably result in an equilibrium sea-level rise five times greater than today's, whilst melting of glaciers, including that of the Greenland ice sheet, expected to have virtually no effect during the 21st century, will likely result in a further substantial sea-level rise over a millennium.
See also
Atlantic Revolutions
List of countries and territories bordering the Atlantic Ocean
Seven Seas
Shipwrecks in the Atlantic Ocean
Atlantic hurricanes
Piracy in the Atlantic World
Transatlantic crossing
South Atlantic Peace and Cooperation Zone
Natural delimitation between the Pacific and South Atlantic oceans by the Scotia Arc
References
Sources
map
Further reading
External links
Atlantic Ocean. Cartage.org.lb (archived)
"Map of Atlantic Coast of North America from the Chesapeake Bay to Florida" from 1639 via the Library of Congress
Category:Oceans
Category:Articles containing video clips
Category:Oceans surrounding Antarctica
| geography | 8,122 |

736 | Albert Einstein | https://en.wikipedia.org/wiki/Albert_Einstein |
Albert Einstein (14 March 1879 – 18 April 1955) was a German-born theoretical physicist best known for developing the theory of relativity. Einstein also made important contributions to quantum theory. His mass–energy equivalence formula E = mc², which arises from special relativity, has been called "the world's most famous equation". He received the 1921 Nobel Prize in Physics for "his services to theoretical physics, and especially for his discovery of the law of the photoelectric effect".
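As a standard worked illustration of the mass–energy equivalence formula (the numbers below are textbook values, not from the source), converting one kilogram of mass entirely into energy would yield

$$E = mc^{2} = (1\ \mathrm{kg})\times(2.998\times10^{8}\ \mathrm{m\,s^{-1}})^{2} \approx 9\times10^{16}\ \mathrm{J},$$

roughly the energy released by a very large thermonuclear explosion.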
Born in the German Empire, Einstein moved to Switzerland in 1895, forsaking his German citizenship (as a subject of the Kingdom of Württemberg) the following year. In 1897, at the age of seventeen, he enrolled in the mathematics and physics teaching diploma program at the Swiss federal polytechnic school in Zurich, graduating in 1900. He acquired Swiss citizenship a year later, which he kept for the rest of his life, and afterwards secured a permanent position at the Swiss Patent Office in Bern. In 1905, he submitted a successful PhD dissertation to the University of Zurich. In 1914, he moved to Berlin to join the Prussian Academy of Sciences and the Humboldt University of Berlin, becoming director of the Kaiser Wilhelm Institute for Physics in 1917; he also became a German citizen again, this time as a subject of the Kingdom of Prussia. In 1933, while Einstein was visiting the United States, Adolf Hitler came to power in Germany. Horrified by the Nazi persecution of his fellow Jews, he decided to remain in the US, and was granted American citizenship in 1940. On the eve of World War II, he endorsed a letter to President Franklin D. Roosevelt alerting him to the potential German nuclear weapons program and recommending that the US begin similar research, later carried out as the Manhattan Project.
In 1905, sometimes described as his annus mirabilis (miracle year), he published four groundbreaking papers. In them, he outlined a theory of the photoelectric effect, explained Brownian motion, introduced his special theory of relativity, and demonstrated that if the special theory is correct, mass and energy are equivalent to each other. In 1915, he proposed a general theory of relativity that extended his system of mechanics to incorporate gravitation. A cosmological paper that he published the following year laid out the implications of general relativity for the modeling of the structure and evolution of the universe as a whole. In 1917, Einstein wrote a paper which introduced the concepts of spontaneous emission and stimulated emission, the latter of which is the core mechanism behind the laser and maser, and which contained a trove of information that would be beneficial to developments in physics later on, such as quantum electrodynamics and quantum optics.
In the middle part of his career, Einstein made important contributions to statistical mechanics and quantum theory. Especially notable was his work on the quantum physics of radiation, in which light consists of particles, subsequently called photons. With physicist Satyendra Nath Bose, he laid the groundwork for Bose–Einstein statistics. For much of the last phase of his academic life, Einstein worked on two endeavors that ultimately proved unsuccessful. First, he advocated against quantum theory's introduction of fundamental randomness into science's picture of the world, objecting that "God does not play dice". Second, he attempted to devise a unified field theory by generalizing his geometric theory of gravitation to include electromagnetism. As a result, he became increasingly isolated from mainstream modern physics.
Life and career
Childhood, youth and education
Albert Einstein was born in Ulm, in the Kingdom of Württemberg in the German Empire, on 14 March 1879. His parents, secular Ashkenazi Jews, were Hermann Einstein, a salesman and engineer, and Pauline Koch. In 1880, the family moved to Munich's borough of Ludwigsvorstadt-Isarvorstadt, where Einstein's father and his uncle Jakob founded Elektrotechnische Fabrik J. Einstein & Cie, a company that manufactured electrical equipment based on direct current.
When he was very young, his parents worried that he had a learning disability because he was very slow to learn to talk. When he was five and sick in bed, his father brought him a compass. This sparked his lifelong fascination with electromagnetism. He realized that "Something deeply hidden had to be behind things."
Einstein attended St. Peter's Catholic elementary school in Munich from the age of five. When he was eight, he was transferred to the Luitpold Gymnasium, where he received advanced primary and then secondary school education.
In 1894, Hermann and Jakob's company tendered for a contract to install electric lighting in Munich, but without success—they lacked the capital that would have been required to update their technology from direct current to the more efficient alternating current standard. The failure of their bid forced them to sell their Munich factory and search for new opportunities elsewhere. The Einstein family moved to Italy, first to Milan and a few months later to Pavia, where they settled in Palazzo Cornazzani. Einstein, then fifteen, stayed behind in Munich in order to finish his schooling. His father wanted him to study electrical engineering, but he was a fractious pupil who found the Gymnasium's regimen and teaching methods far from congenial. He later wrote that the school's policy of strict rote learning was harmful to creativity. At the end of December 1894, a letter from a doctor persuaded the Luitpold's authorities to release him from its care, and he joined his family in Pavia. While in Italy as a teenager, he wrote an essay entitled "On the Investigation of the State of the Ether in a Magnetic Field".Stachel, et al (2008). Vol. 1 (1987), doc. 5.
Einstein excelled at physics and mathematics from an early age, and soon acquired the mathematical expertise normally only found in a child several years his senior. He began teaching himself algebra, calculus and Euclidean geometry when he was twelve; he made such rapid progress that he discovered an original proof of the Pythagorean theorem before his thirteenth birthday. A family tutor, Max Talmud, said that only a short time after he had given the twelve year old Einstein a geometry textbook, the boy "had worked through the whole book. He thereupon devoted himself to higher mathematics... Soon the flight of his mathematical genius was so high I could not follow." Einstein recorded that he had "mastered integral and differential calculus" while still just fourteen. His love of algebra and geometry was so great that at twelve, he was already confident that nature could be understood as a "mathematical structure".
At thirteen, when his range of enthusiasms had broadened to include music and philosophy, Talmud introduced Einstein to Kant's Critique of Pure Reason. Kant became his favorite philosopher; according to Talmud, "At the time he was still a child, only thirteen years old, yet Kant's works, incomprehensible to ordinary mortals, seemed to be clear to him."
In 1895, at the age of sixteen, Einstein sat the entrance examination for the federal polytechnic school (later the Eidgenössische Technische Hochschule, ETH) in Zurich, Switzerland. He failed to reach the required standard in the general part of the test,Stachel, et al (2008). Vol. 1 (1987), p. 11. but performed with distinction in physics and mathematics. On the advice of the polytechnic's principal, he completed his secondary education at the Argovian cantonal school (a gymnasium) in Aarau, Switzerland, graduating in 1896. While lodging in Aarau with the family of Jost Winteler, he fell in love with Winteler's daughter, Marie. (His sister, Maja, later married Winteler's son Paul.)
In January 1896, with his father's approval, Einstein renounced his citizenship of the German Kingdom of Württemberg in order to avoid conscription into military service. The Matura (graduation for the successful completion of higher secondary schooling), awarded to him in September 1896, acknowledged him to have performed well across most of the curriculum, allotting him a top grade of 6 for history, physics, algebra, geometry, and descriptive geometry.Stachel, et al (2008). Vol. 1 (1987), docs. 21–27. At seventeen, he enrolled in the four-year mathematics and physics teaching diploma program at the federal polytechnic school. He befriended fellow student Marcel Grossmann, who would help him there to get by despite his loose study habits, and later to mathematically underpin his revolutionary insights into physics. Marie Winteler, a year older than him, took up a teaching post in Olsberg, Switzerland.
The five other polytechnic school freshmen following the same course as Einstein included just one woman, a twenty-year-old Serbian, Mileva Marić. Over the next few years, the pair spent many hours discussing their shared interests and learning about topics in physics that the polytechnic school's lectures did not cover. In his letters to Marić, Einstein confessed that exploring science with her by his side was much more enjoyable than reading a textbook in solitude. Eventually the two students became not only friends but also lovers.
Historians of physics are divided on the question of the extent to which Marić contributed to the insights of Einstein's annus mirabilis publications. There is at least some evidence that he was influenced by her scientific ideas, but there are scholars who doubt whether her impact on his thought was of any great significance at all.
Marriages, relationships and children
Correspondence between Einstein and Marić, discovered and published in 1987, revealed that in early 1902, while Marić was visiting her parents in Novi Sad, she gave birth to a daughter, Lieserl. When Marić returned to Switzerland it was without the child, whose fate is uncertain. A letter of Einstein's that he wrote in September 1903 suggests that the girl was either given up for adoption or died of scarlet fever in infancy.
Einstein and Marić married in January 1903. In May 1904, their son Hans Albert was born in Bern, Switzerland. Their son Eduard was born in Zurich in July 1910. In letters that Einstein wrote to Marie Winteler in the months before Eduard's arrival, he described his love for his wife as "misguided" and mourned the "missed life" that he imagined he would have enjoyed if he had married Winteler instead: "I think of you in heartfelt love every spare minute and am so unhappy as only a man can be."
[Image: Albert and Elsa Einstein arriving in New York, 1921.]
In 1912, Einstein entered into a relationship with Elsa Löwenthal, who was both his first cousin on his mother's side and his second cousin on his father's. When Marić learned of his infidelity soon after moving to Berlin with him in April 1914, she returned to Zurich, taking Hans Albert and Eduard with her. Einstein and Marić were granted a divorce on 14 February 1919 on the grounds of having lived apart for five years. As part of the divorce settlement, Einstein agreed that if he were to win a Nobel Prize, he would give the money that he received to Marić; he won the prize two years later.
Einstein married Löwenthal in 1919. In 1923, he began a relationship with a secretary named Betty Neumann, the niece of his close friend Hans Mühsam. Löwenthal nevertheless remained loyal to him, accompanying him when he emigrated to the United States in 1933. In 1935, she was diagnosed with heart and kidney problems. She died in December 1936.
A volume of Einstein's letters released by Hebrew University of Jerusalem in 2006 added some other women with whom he was romantically involved. They included Margarete Lebach (a married Austrian), Estella Katzenellenbogen (the rich owner of a florist business), Toni Mendel (a wealthy Jewish widow) and Ethel Michanowski (a Berlin socialite), with whom he spent time and from whom he accepted gifts while married to Löwenthal. After being widowed, Einstein was briefly in a relationship with Margarita Konenkova, thought by some to be a Russian spy; her husband, the Russian sculptor Sergei Konenkov, created the bronze bust of Einstein at the Institute for Advanced Study at Princeton.
Following an episode of acute mental illness at about the age of twenty, Einstein's son Eduard was diagnosed with schizophrenia. He spent the remainder of his life either in the care of his mother or in temporary confinement in an asylum. After her death, he was committed permanently to Burghölzli, the Psychiatric University Hospital in Zurich.
Assistant at the Swiss Patent Office (1902–1909)
[Image: Einstein at the Swiss patent office, 1904.]
Einstein graduated from the federal polytechnic school in 1900, duly certified as competent to teach mathematics and physics.Stachel, et al (2008). Vol. 1 (1987), doc. 67. His successful acquisition of Swiss citizenship in February 1901 was not followed by the usual sequel of conscription; the Swiss authorities deemed him medically unfit for military service. He found that Swiss schools too appeared to have no use for him, failing to offer him a teaching position despite the almost two years that he spent applying for one. Eventually it was with the help of Marcel Grossmann's father that he secured a post in Bern at the Swiss Patent Office, as an assistant examiner – level III.
Patent applications that landed on Einstein's desk for his evaluation included ideas for a gravel sorter and an electric typewriter. His employers were pleased enough with his work to make his position permanent in 1903, although they did not think that he should be promoted until he had "fully mastered machine technology". It is conceivable that his labors at the patent office had a bearing on his development of his special theory of relativity. He arrived at his revolutionary ideas about space, time and light through thought experiments about the transmission of signals and the synchronization of clocks, matters which also figured in some of the inventions submitted to him for assessment.
In 1902, Einstein and some friends whom he had met in Bern formed a group that held regular meetings to discuss science and philosophy. Their choice of a name for their club, the Olympia Academy, was an ironic comment upon its far from Olympian status. Sometimes they were joined by Marić, who limited her participation in their proceedings to careful listening. The thinkers whose works they reflected upon included Henri Poincaré, Ernst Mach and David Hume, all of whom significantly influenced Einstein's own subsequent ideas and beliefs.
First scientific papers (1900–1905)
Einstein's first paper, "Folgerungen aus den Capillaritätserscheinungen" ("Conclusions drawn from the phenomena of capillarity"), in which he proposed a model of intermolecular attraction that he afterwards disavowed as worthless, was published in the journal Annalen der Physik in 1901.Einstein (1901). His 24-page doctoral dissertation also addressed a topic in molecular physics. Titled "Eine neue Bestimmung der Moleküldimensionen" ("A New Determination of Molecular Dimensions") and dedicated "Meinem Freunde Herr Dr. Marcel Grossmann gewidmet" (dedicated to my friend Dr. Marcel Grossmann), it was completed on 30 April 1905Einstein (1905b). and approved by Professor Alfred Kleiner of the University of Zurich three months later. (Einstein was formally awarded his PhD on 15 January 1906.)Einstein (1926b). A New Determination of Molecular Dimensions. Four other pieces of work that Einstein completed in 1905—his famous papers on the photoelectric effect, Brownian motion, his special theory of relativity and the equivalence of mass and energy—have led to the year being celebrated as an annus mirabilis for physics akin to the miracle year of 1666 when Isaac Newton experienced his greatest epiphanies. The publications deeply impressed Einstein's contemporaries.
Academic career in Europe (1908–1933)
Einstein's sabbatical as a civil servant approached its end in 1908, when he secured a junior teaching position at the University of Bern. In 1909, a lecture on relativistic electrodynamics that he gave at the University of Zurich, much admired by Alfred Kleiner, led to Zurich's luring him away from Bern with a newly created associate professorship. Promotion to a full professorship followed in April 1911, when he took up a chair at the German Charles-Ferdinand University in Prague, a move which required him to become an Austrian citizen of the Austro-Hungarian Empire. His time in Prague saw him producing eleven research papers.
From 30 October to 3 November 1911, Einstein attended the first Solvay Conference on Physics.Paul Langevin and Maurice de Broglie, eds., La théorie du rayonnement et les quanta. Rapports et discussions de la réunion tenue à Bruxelles, du 30 octobre au 3 novembre 1911, sous les auspices de M. E. Solvay. Paris: , 1912. See also: The Collected Papers of Albert Einstein, Vol. 3: Writings 1909–1911, Doc. 26, p. 402 (English translation supplement).
In July 1912, he returned to his alma mater, the ETH Zurich, to take up a chair in theoretical physics. His teaching activities there centered on thermodynamics and analytical mechanics, and his research interests included the molecular theory of heat, continuum mechanics and the development of a relativistic theory of gravitation. In his work on the latter topic, he was assisted by his friend Marcel Grossmann, whose knowledge of the kind of mathematics required was greater than his own.
In the spring of 1913, two German visitors, Max Planck and Walther Nernst, called upon Einstein in Zurich in the hope of persuading him to relocate to Berlin. They offered him membership of the Prussian Academy of Sciences, the directorship of the planned Kaiser Wilhelm Institute for Physics and a chair at the Humboldt University of Berlin that would allow him to pursue his research supported by a professorial salary but with no teaching duties to burden him. Their invitation was all the more appealing to him because Berlin happened to be the home of his latest girlfriend, Elsa Löwenthal. He duly joined the Academy on 24 July 1913, and moved into an apartment in the Berlin district of Dahlem on 1 April 1914. He was installed in his Humboldt University position shortly thereafter.
The outbreak of the First World War in July 1914 marked the beginning of Einstein's gradual estrangement from the nation of his birth. When the "Manifesto of the Ninety-Three" was published in October 1914—a document signed by a host of prominent German thinkers that justified Germany's belligerence—Einstein was one of the few German intellectuals to distance himself from it and sign the alternative, eirenic "Manifesto to the Europeans" instead. However, this expression of his doubts about German policy did not prevent him from being elected to a two-year term as president of the German Physical Society in 1916. When the Kaiser Wilhelm Institute for Physics opened its doors the following year—its foundation delayed because of the war—Einstein was appointed its first director, just as Planck and Nernst had promised.
Einstein was elected a Foreign Member of the Royal Netherlands Academy of Arts and Sciences in 1920, and a Foreign Member of the Royal Society in 1921. In 1922, he was awarded the 1921 Nobel Prize in Physics "for his services to Theoretical Physics, and especially for his discovery of the law of the photoelectric effect". At this point some physicists still regarded the general theory of relativity skeptically, and the Nobel citation displayed a degree of doubt even about the work on photoelectricity that it acknowledged: it did not assent to Einstein's notion of the particulate nature of light, which only won over the entire scientific community when S. N. Bose derived the Planck spectrum in 1924. That same year, Einstein was elected an International Honorary Member of the American Academy of Arts and Sciences. Britain's closest equivalent of the Nobel award, the Royal Society's Copley Medal, was not hung around Einstein's neck until 1925. He was elected an International Member of the American Philosophical Society in 1930.
Einstein resigned from the Prussian Academy in March 1933. His accomplishments in Berlin had included the completion of the general theory of relativity, proving the Einstein–de Haas effect, contributing to the quantum theory of radiation, and the development of Bose–Einstein statistics.
Putting general relativity to the test (1919)
In 1907, Einstein reached a milestone on his long journey from his special theory of relativity to a new idea of gravitation with the formulation of his equivalence principle, which asserts that an observer in a box falling freely in a gravitational field would be unable to find any evidence that the field exists. In 1911, he used the principle to estimate the amount by which a ray of light from a distant star would be bent by the gravitational pull of the Sun as it passed close to the Sun's photosphere (that is, the Sun's apparent surface). He reworked his calculation in 1913, having now found a way to model gravitation with the Riemann curvature tensor of a non-Euclidean four-dimensional spacetime. By the fall of 1915, his reimagining of the mathematics of gravitation in terms of Riemannian geometry was complete, and he applied his new theory not just to the behavior of the Sun as a gravitational lens but also to another astronomical phenomenon, the precession of the perihelion of Mercury (a slow drift in the point in Mercury's elliptical orbit at which it approaches the Sun most closely). A total eclipse of the Sun that took place on 29 May 1919 provided an opportunity to put his theory of gravitational lensing to the test, and observations performed by Sir Arthur Eddington yielded results that were consistent with his calculations. Eddington's work was reported at length in newspapers around the world. On 7 November 1919, for example, the leading British newspaper, The Times, printed a banner headline that read: "Revolution in Science– New Theory of the Universe– Newtonian Ideas Overthrown".
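As a rough illustration of the scale of the Mercury effect mentioned above, the leading-order general-relativistic advance of the perihelion per orbit is 6πGM/(a(1−e²)c²). The short Python sketch below is only an illustrative calculation using standard modern values for the constants and Mercury's orbit (it is not drawn from Einstein's original papers); it recovers the familiar figure of roughly 43 arcseconds per century.

```python
import math

# Illustrative constants (SI units); modern reference values, assumed here
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg

# Mercury's orbital elements (approximate)
a = 5.791e10           # semi-major axis, m
e = 0.2056             # eccentricity
T_days = 87.97         # orbital period, days

# Leading-order GR perihelion advance per orbit (radians)
dphi = 6 * math.pi * G * M_sun / (a * (1 - e**2) * c**2)

orbits_per_century = 36525 / T_days
arcsec = math.degrees(dphi * orbits_per_century) * 3600
print(f"Perihelion advance: {arcsec:.1f} arcsec per century")  # ~43
```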
Coming to terms with fame (1921–1923)
With Eddington's eclipse observations widely reported not just in academic journals but by the popular press as well, Einstein became "perhaps the world's first celebrity scientist", a genius who had shattered a paradigm that had been basic to physicists' understanding of the universe since the seventeenth century.
Einstein began his new life as an intellectual icon in America, where he arrived on 2 April 1921. He was welcomed to New York City by Mayor John Francis Hylan, and then spent three weeks giving lectures and attending receptions. He spoke several times at Columbia University and Princeton, and in Washington, he visited the White House with representatives of the National Academy of Sciences. He returned to Europe via London, where he was the guest of the philosopher and statesman Viscount Haldane. He used his time in the British capital to meet several people prominent in British scientific, political or intellectual life, and to deliver a lecture at King's College. In July 1921, he published an essay, "My First Impression of the U.S.A.", in which he sought to sketch the American character, much as had Alexis de Tocqueville in Democracy in America (1835). He wrote of his transatlantic hosts in highly approving terms: "What strikes a visitor is the joyous, positive attitude to life ... The American is friendly, self-confident, optimistic, and without envy."
In 1922, Einstein's travels were to the old world rather than the new. He devoted six months to a tour of Asia that saw him speaking in Japan, Singapore and Sri Lanka (then known as Ceylon). After his first public lecture in Tokyo, he met Emperor Yoshihito and his wife at the Imperial Palace, with thousands of spectators thronging the streets in the hope of catching a glimpse of him. (In a letter to his sons, he wrote that Japanese people seemed to him to be generally modest, intelligent and considerate, and to have a true appreciation of art. But his picture of them in his diary was less flattering: "[the] intellectual needs of this nation seem to be weaker than their artistic ones – natural disposition?" His journal also contains views of China and India which were uncomplimentary. Of Chinese people, he wrote that "even the children are spiritless and look obtuse... It would be a pity if these Chinese supplant all other races. For the likes of us the mere thought is unspeakably dreary".) He was greeted with even greater enthusiasm on the last leg of his tour, in which he spent twelve days in Mandatory Palestine, newly entrusted to British rule by the League of Nations in the aftermath of the First World War. Sir Herbert Samuel, the British High Commissioner, welcomed him with a degree of ceremony normally only accorded to a visiting head of state, including a cannon salute. One reception held in his honor was stormed by people determined to hear him speak: he told them that he was happy that Jews were beginning to be recognized as a force in the world.
Einstein's decision to tour the eastern hemisphere in 1922 meant that he was unable to go to Stockholm in the December of that year to participate in the Nobel prize ceremony. His place at the traditional Nobel banquet was taken by a German diplomat, who gave a speech praising him not only as a physicist but also as a campaigner for peace. A two-week visit to Spain that he undertook in 1923 saw him collecting another award, a membership of the Spanish Academy of Sciences signified by a diploma handed to him by King Alfonso XIII. (His Spanish trip also gave him a chance to meet a fellow Nobel laureate, the neuroanatomist Santiago Ramón y Cajal.)
Serving the League of Nations (1922–1932)
From 1922 until 1932, with the exception of a few months in 1923 and 1924, Einstein was a member of the Geneva-based International Committee on Intellectual Cooperation of the League of Nations, a group set up by the League to encourage scientists, artists, scholars, teachers and other people engaged in the life of the mind to work more closely with their counterparts in other countries. He was appointed as a German delegate rather than as a representative of Switzerland because of the machinations of two Catholic activists, Oskar Halecki and Giuseppe Motta. By persuading Secretary General Eric Drummond to deny Einstein the place on the committee reserved for a Swiss thinker, they created an opening for Gonzague de Reynold, who used his League of Nations position as a platform from which to promote traditional Catholic doctrine. Einstein's former physics professor Hendrik Lorentz and the Polish chemist Marie Curie were also members of the committee.
Touring South America (1925)
In March and April 1925, Einstein and his wife visited South America, where they spent about a week in Brazil, a week in Uruguay and a month in Argentina. Their tour was suggested by Jorge Duclout (1856–1927) and Mauricio Nirenstein (1877–1935) with the support of several Argentine scholars, including Julio Rey Pastor, Jakob Laub, and Leopoldo Lugones, and was financed primarily by the Council of the University of Buenos Aires and the Asociación Hebraica Argentina (Argentine Hebraic Association) with a smaller contribution from the Argentine-Germanic Cultural Institution.
Touring the US (1930–1931)
In December 1930, Einstein began another significant sojourn in the United States, drawn back by the offer of a two-month research fellowship at the California Institute of Technology. Caltech supported him in his wish not to be exposed to quite as much attention from the media as he had experienced when visiting the US in 1921, and he therefore declined all the invitations to receive prizes or make speeches that his admirers showered upon him. But he remained willing to allow his fans at least some of the time with him that they requested.
After arriving in New York City, Einstein was taken to various places and events, including Chinatown, a lunch with the editors of The New York Times, and a performance of Carmen at the Metropolitan Opera, where he was cheered by the audience on his arrival. During the days following, he was given the keys to the city by Mayor Jimmy Walker and met Nicholas Murray Butler, the president of Columbia University, who described Einstein as "the ruling monarch of the mind". Harry Emerson Fosdick, pastor at New York's Riverside Church, gave Einstein a tour of the church and showed him a full-size statue that the church made of Einstein, standing at the entrance. Also during his stay in New York, he joined a crowd of 15,000 people at Madison Square Garden during a Hanukkah celebration.
Einstein next traveled to California, where he met Caltech president and Nobel laureate Robert A. Millikan. His friendship with Millikan was "awkward", as Millikan "had a penchant for patriotic militarism", whereas Einstein was a pronounced pacifist. During an address to Caltech's students, Einstein noted that science was often inclined to do more harm than good.
This aversion to war also led Einstein to befriend author Upton Sinclair and film star Charlie Chaplin, both noted for their pacifism. Carl Laemmle, head of Universal Studios, gave Einstein a tour of his studio and introduced him to Chaplin. They had an instant rapport, with Chaplin inviting Einstein and his wife, Elsa, to his home for dinner. Chaplin said Einstein's outward persona, calm and gentle, seemed to conceal a "highly emotional temperament", from which came his "extraordinary intellectual energy".
Chaplin's film City Lights was to premiere a few days later in Hollywood, and Chaplin invited Einstein and Elsa to join him as his special guests. Walter Isaacson, Einstein's biographer, described this as "one of the most memorable scenes in the new era of celebrity". Chaplin visited Einstein at his home on a later trip to Berlin and recalled his "modest little flat" and the piano at which he had begun writing his theory. Chaplin speculated that it was "possibly used as kindling wood by the Nazis". Einstein and Chaplin were cheered at the premiere of the film. Chaplin said to Einstein, "They cheer me because they understand me, and they cheer you because no one understands you."
Emigration to the US (1933)
In February 1933, while on a visit to the United States, Einstein knew he could not return to Germany with the rise to power of the Nazis under Germany's new chancellor, Adolf Hitler.
While at American universities in early 1933, he undertook his third two-month visiting professorship at the California Institute of Technology in Pasadena. In February and March 1933, the Gestapo repeatedly raided his family's apartment in Berlin. He and his wife Elsa returned to Europe in March, and during the trip, they learned that the German Reichstag had passed the Enabling Act on 23 March, transforming Hitler's government into a de facto legal dictatorship, and that they would not be able to proceed to Berlin. Later on, they heard that their cottage had been raided by the Nazis and Einstein's personal sailboat confiscated. Upon landing in Antwerp, Belgium on 28 March, Einstein immediately went to the German consulate and surrendered his passport, formally renouncing his German citizenship. The Nazis later sold his boat and converted his cottage into a Hitler Youth camp.
Refugee status
In April 1933, Einstein discovered that the new German government had passed laws barring Jews from holding any official positions, including teaching at universities. Historian Gerald Holton describes how, with "virtually no audible protest being raised by their colleagues", thousands of Jewish scientists were suddenly forced to give up their university positions and their names were removed from the rolls of institutions where they were employed.
A month later, Einstein's works were among those targeted by the German Student Union in the Nazi book burnings, with Nazi propaganda minister Joseph Goebbels proclaiming, "Jewish intellectualism is dead." One German magazine included him in a list of enemies of the German regime with the phrase, "not yet hanged", offering a $5,000 bounty on his head. In a subsequent letter to physicist and friend Max Born, who had already emigrated from Germany to England, Einstein wrote, "...I must confess that the degree of their brutality and cowardice came as something of a surprise." After moving to the US, he described the book burnings as a "spontaneous emotional outburst" by those who "shun popular enlightenment", and "more than anything else in the world, fear the influence of men of intellectual independence".Einstein (1954), p. 197.
Einstein was now without a permanent home, unsure where he would live and work, and equally worried about the fate of countless other scientists still in Germany. Aided by the Academic Assistance Council, founded in April 1933 by British Liberal politician William Beveridge to help academics escape Nazi persecution, Einstein was able to leave Germany. He rented a house in De Haan, Belgium, where he lived for a few months. In late July 1933, he visited England for about six weeks at the invitation of the British Member of Parliament Commander Oliver Locker-Lampson, who had become friends with him in the preceding years. Locker-Lampson invited him to stay near his Cromer home in a secluded wooden cabin on Roughton Heath in the Parish of Roughton, Norfolk. To protect Einstein, Locker-Lampson had two bodyguards watch over him; a photo of them carrying shotguns and guarding Einstein was published in the Daily Herald on 24 July 1933.
Locker-Lampson took Einstein to meet Winston Churchill at his home, and later, Austen Chamberlain and former Prime Minister Lloyd George. Einstein asked them to help bring Jewish scientists out of Germany. British historian Martin Gilbert notes that Churchill responded immediately, and sent his friend physicist Frederick Lindemann to Germany to seek out Jewish scientists and place them in British universities. Churchill later observed that as a result of Germany having driven the Jews out, they had lowered their "technical standards" and put the Allies' technology ahead of theirs.
Einstein later contacted leaders of other nations, including Turkey's Prime Minister, İsmet İnönü, to whom he wrote in September 1933, requesting placement of unemployed German-Jewish scientists. As a result of Einstein's letter, Jewish invitees to Turkey eventually totaled over "1,000 saved individuals".
Locker-Lampson also submitted a bill to parliament to extend British citizenship to Einstein, during which period Einstein made a number of public appearances describing the crisis brewing in Europe. In one of his speeches Locker-Lampson denounced Germany's treatment of Jews, while at the same time introducing a bill promoting Jewish citizenship in Palestine, as Jews were being denied citizenship elsewhere. In the same speech he described Einstein as a "citizen of the world" who should be offered a temporary shelter in the UK. Both bills failed, however, and Einstein then accepted an earlier offer from the Institute for Advanced Study, in Princeton, New Jersey, US, to become a resident scholar.
Resident scholar at the Institute for Advanced Study
On 3 October 1933, Einstein delivered a speech on the importance of academic freedom before a packed audience at the Royal Albert Hall in London, with The Times reporting he was wildly cheered throughout. Four days later he returned to the US and took up a position at the Institute for Advanced Study, noted for having become a refuge for scientists fleeing Nazi Germany. At the time, most American universities, including Harvard, Princeton and Yale, had minimal or no Jewish faculty or students, as a result of their Jewish quotas, which lasted until the late 1940s.
Einstein was still undecided about his future. He had offers from several European universities, including Christ Church, Oxford, where he stayed for three short periods between May 1931 and June 1933 and was offered a five-year research fellowship (called a "studentship" at Christ Church), but in 1935, he arrived at the decision to remain permanently in the United States and apply for citizenship.
Einstein's affiliation with the Institute for Advanced Study would last until his death in 1955. He was one of the first four scholars selected (along with John von Neumann, Kurt Gödel and Hermann Weyl) at the new Institute. He soon developed a close friendship with Gödel; the two would take long walks together discussing their work. Bruria Kaufman, his assistant, later became a physicist. During this period, Einstein tried to develop a unified field theory and to refute the accepted interpretation of quantum physics, both unsuccessfully. He lived at his home in Princeton from 1935 onwards. The Albert Einstein House was made a National Historic Landmark in 1976.
World War II and the Manhattan Project
In 1939, a group of Hungarian scientists that included émigré physicist Leó Szilárd attempted to alert Washington, D.C. to ongoing Nazi atomic bomb research. The group's warnings were discounted. Einstein and Szilárd, along with other refugees such as Edward Teller and Eugene Wigner, "regarded it as their responsibility to alert Americans to the possibility that German scientists might win the race to build an atomic bomb, and to warn that Hitler would be more than willing to resort to such a weapon." To make certain the US was aware of the danger, in July 1939, a few months before the beginning of World War II in Europe, Szilárd and Wigner visited Einstein to explain the possibility of atomic bombs, which Einstein, a pacifist, said he had never considered. He was asked to lend his support by writing a letter, with Szilárd, to President Franklin D. Roosevelt, recommending the US pay attention and engage in its own nuclear weapons research.
The letter is believed to be "arguably the key stimulus for the U.S. adoption of serious investigations into nuclear weapons on the eve of the U.S. entry into World War II". In addition to the letter, Einstein used his connections with the Belgian royal family and the Belgian queen mother to get access with a personal envoy to the White House's Oval Office. Some say that as a result of Einstein's letter and his meetings with Roosevelt, the US entered the "race" to develop the bomb, drawing on its "immense material, financial, and scientific resources" to initiate the Manhattan Project.
For Einstein, "war was a disease... [and] he called for resistance to war." By signing the letter to Roosevelt, some argue he went against his pacifist principles. In 1954, a year before his death, Einstein said to his old friend, Linus Pauling, "I made one great mistake in my life—when I signed the letter to President Roosevelt recommending that atom bombs be made; but there was some justification—the danger that the Germans would make them..." In 1955, Einstein and ten other intellectuals and scientists, including British philosopher Bertrand Russell, signed a manifesto highlighting the danger of nuclear weapons. In 1960 Einstein was included posthumously as a charter member of the World Academy of Art and Science (WAAS), an organization founded by distinguished scientists and intellectuals who committed themselves to the responsible and ethical advances of science, particularly in light of the development of nuclear weapons.
US citizenship
Einstein became an American citizen in 1940. Not long after settling into his career at the Institute for Advanced Study in Princeton, New Jersey, he expressed his appreciation of the meritocracy in American culture compared to Europe. He recognized the "right of individuals to say and think what they pleased" without social barriers. As a result, individuals were encouraged, he said, to be more creative, a trait he valued from his early education.
Einstein joined the National Association for the Advancement of Colored People (NAACP) in Princeton, where he campaigned for the civil rights of African Americans. He considered racism America's "worst disease", seeing it as "handed down from one generation to the next". As part of his involvement, he corresponded with civil rights activist W. E. B. Du Bois and was prepared to testify on his behalf during his trial as an alleged foreign agent in 1951. When Einstein offered to be a character witness for Du Bois, the judge decided to drop the case.
In 1946, Einstein visited Lincoln University in Pennsylvania, a historically black college, where he was awarded an honorary degree. Lincoln was the first university in the United States to grant college degrees to African Americans; alumni include Langston Hughes and Thurgood Marshall. Einstein gave a speech about racism in America, adding, "I do not intend to be quiet about it." A resident of Princeton recalls that Einstein had once paid the college tuition for a black student. Einstein said, "Being a Jew myself, perhaps I can understand and empathize with how black people feel as victims of discrimination". Isaacson writes that "When Marian Anderson, the black contralto, came to Princeton for a concert in 1937, the Nassau Inn refused her a room. So Einstein invited her to stay at his house on Main Street, in what was a deeply personal as well as symbolic gesture ... Whenever she returned to Princeton, she stayed with Einstein, her last visit coming just two months before he died."
Personal views
Political views
[Image: Albert Einstein and Elsa Einstein arriving in New York in 1921. Accompanying them are Zionist leaders Chaim Weizmann (future president of Israel), Weizmann's wife Vera Weizmann, Menahem Ussishkin, and Ben-Zion Mossinson.]
In 1918, Einstein was one of the signatories of the founding proclamation of the German Democratic Party, a liberal party. Later in his life, Einstein's political views favored socialism and were critical of capitalism, as he detailed in essays such as "Why Socialism?".Einstein (1949), pp. 9–15. His opinions on the Bolsheviks also changed with time. In 1925, he criticized them for not having a "well-regulated system of government" and called their rule a "regime of terror and a tragedy in human history". He later adopted a more moderate view, criticizing their methods but praising them, as shown by his 1929 remark on Vladimir Lenin:
Einstein offered and was called on to give judgments and opinions on matters often unrelated to theoretical physics or mathematics. He strongly advocated the idea of a democratic global government that would check the power of nation-states in the framework of a world federation. He wrote "I advocate world government because I am convinced that there is no other possible way of eliminating the most terrible danger in which man has ever found himself."Bulletin of the Atomic Scientists 4 (February 1948), No. 2 35–37: 'A Reply to the Soviet Scientists, December 1947' The FBI created a secret dossier on Einstein in 1932; by the time of his death, it was 1,427 pages long.
Einstein was deeply impressed by Mahatma Gandhi, with whom he corresponded. He described Gandhi as "a role model for the generations to come". The initial connection was established on 27 September 1931, when Wilfrid Israel took his Indian guest V. A. Sundaram to meet his friend Einstein at his summer home in the town of Caputh. Sundaram was Gandhi's disciple and special envoy, whom Wilfrid Israel had met while visiting the Indian leader's home in India in 1925. During the visit, Einstein wrote a short letter to Gandhi that was delivered to him through his envoy, and Gandhi responded quickly with his own letter. Although in the end Einstein and Gandhi were unable to meet as they had hoped, the direct connection between them was established through Wilfrid Israel. (gandhiserve.org)
Relationship with Zionism
Einstein was a figurehead leader in the establishment of the Hebrew University of Jerusalem, which opened in 1925. Earlier, in 1921, he was asked by the biochemist and president of the World Zionist Organization, Chaim Weizmann, to help raise funds for the planned university. He made suggestions for the creation of an Institute of Agriculture, a Chemical Institute and an Institute of Microbiology in order to fight the various ongoing epidemics such as malaria, which he called an "evil" that was undermining a third of the country's development. He also promoted the establishment of an Oriental Studies Institute, to include language courses given in both Hebrew and Arabic.
Einstein was not a nationalist and opposed the creation of an independent Jewish state. He felt that the waves of arriving Jews of the Aliyah could live alongside existing Arabs in Palestine. The state of Israel was established without his help in 1948; Einstein was limited to a marginal role in the Zionist movement. Upon the death of Israeli president Weizmann in November 1952, Prime Minister David Ben-Gurion offered Einstein the largely ceremonial position of President of Israel at the urging of Ezriel Carlebach. The offer was presented by Israel's ambassador in Washington, Abba Eban, who explained that the offer "embodies the deepest respect which the Jewish people can repose in any of its sons". Einstein wrote that he was "deeply moved", but "at once saddened and ashamed" that he could not accept it. Einstein did not want the office, and Israel did not want him to accept, but felt obliged to make the offer. Yitzhak Navon, Ben-Gurion's political secretary, and later president, reports Ben-Gurion as saying "Tell me what to do if he says yes! I've had to offer the post to him because it's impossible not to. But if he accepts, we are in for trouble."
Religious and philosophical views
"Ladies (coughs) and gentlemen, our age is proud of the progress it has made in man's intellectual development. The search and striving for truth and knowledge is one of the highest of man's qualities..."
Per Lee Smolin, "I believe what allowed Einstein to achieve so much was primarily a moral quality. He simply cared far more than most of his colleagues that the laws of physics have to explain everything in nature coherently and consistently." Einstein expounded his spiritual outlook in a wide array of writings and interviews. He said he had sympathy for the impersonal pantheistic God of Baruch Spinoza's philosophy. He did not believe in a personal god who concerns himself with fates and actions of human beings, a view which he described as naïve. He clarified, however, that "I am not an atheist", preferring to call himself an agnostic, or a "deeply religious nonbeliever". He wrote that "A spirit is manifest in the laws of the universe—a spirit vastly superior to that of man, and one in the face of which we with our modest powers must feel humble. In this way the pursuit of science leads to a religious feeling of a special sort."
Einstein was primarily affiliated with non-religious humanist and Ethical Culture groups in both the UK and US. He served on the advisory board of the First Humanist Society of New York, and was an honorary associate of the Rationalist Association, which publishes New Humanist in Britain. For the 75th anniversary of the New York Society for Ethical Culture, he stated that the idea of Ethical Culture embodied his personal conception of what is most valuable and enduring in religious idealism. He observed, "Without 'ethical culture' there is no salvation for humanity."Einstein (1995), p. 62.
In a German-language letter to philosopher Eric Gutkind, dated 3 January 1954, Einstein wrote:
Einstein had been sympathetic toward vegetarianism for a long time. In a letter in 1930 to Hermann Huth, vice-president of the German Vegetarian Federation (Deutsche Vegetarier-Bund), he wrote:
He became a vegetarian himself only during the last part of his life. In March 1954 he wrote in a letter: "So I am living without fats, without meat, without fish, but am feeling quite well this way. It almost seems to me that man was not born to be a carnivore."
"Albert Einstein [...] also read Blavatsky and attended lectures by Rudolf Steiner."
Love of music
Einstein developed an appreciation for music at an early age. In his late journals he wrote:
His mother played the piano reasonably well and wanted her son to learn the violin, not only to instill in him a love of music but also to help him assimilate into German culture. According to conductor Leon Botstein, Einstein began playing when he was 5. However, he did not enjoy it at that age.
When he turned 13, he discovered Mozart's violin sonatas, whereupon he became enamored of Mozart's compositions and studied music more willingly. Einstein taught himself to play without "ever practicing systematically". He said that "love is a better teacher than a sense of duty". At the age of 17, he was heard by a school examiner in Aarau while playing Beethoven's violin sonatas. The examiner stated afterward that his playing was "remarkable and revealing of great insight". What struck the examiner, writes Botstein, was that Einstein "displayed a deep love of the music, a quality that was and remains in short supply. Music possessed an unusual meaning for this student."
Music took on a pivotal and permanent role in Einstein's life from that period on. Although the idea of becoming a professional musician himself was not on his mind at any time, among those with whom Einstein played chamber music were a few professionals, including Kurt Appelbaum, and he performed for private audiences and friends. Chamber music had also become a regular part of his social life while living in Bern, Zurich, and Berlin, where he played with Max Planck and his son, among others. He is sometimes erroneously credited as the editor of the 1937 edition of the Köchel catalog of Mozart's work; that edition was prepared by Alfred Einstein, who may have been a distant relation. Mozart was a special favorite; he said that "Mozart's music is so pure it seems to have been ever-present in the universe." However, he preferred Bach to Beethoven, once saying: "Give me Bach, rather, and then more Bach."
In 1931, while engaged in research at the California Institute of Technology, he visited the Zoellner family conservatory in Los Angeles, where he played some of Beethoven and Mozart's works with members of the Zoellner Quartet. Near the end of his life, when the young Juilliard Quartet visited him in Princeton, he played his violin with them, and the quartet was "impressed by Einstein's level of coordination and intonation".
Death
On 17 April 1955, Einstein experienced internal bleeding caused by the rupture of an abdominal aortic aneurysm, which had previously been reinforced surgically by Rudolph Nissen in 1948. He took the draft of a speech he was preparing for a television appearance commemorating the state of Israel's seventh anniversary with him to the hospital, but he did not live to complete it.
Einstein refused surgery, saying, "I want to go when I want. It is tasteless to prolong life artificially. I have done my share; it is time to go. I will do it elegantly." He died in the Princeton Hospital early the next morning at the age of 76, having continued to work until near the end.
During the autopsy, the pathologist Thomas Stoltz Harvey removed Einstein's brain for preservation without the permission of his family, in the hope that the neuroscience of the future would be able to discover what made Einstein so intelligent. Einstein's remains were cremated in Trenton, New Jersey, and his ashes were scattered at an undisclosed location.
In a memorial lecture delivered on 13 December 1965 at UNESCO headquarters, nuclear physicist J. Robert Oppenheimer summarized his impression of Einstein as a person: "He was almost wholly without sophistication and wholly without worldliness... There was always with him a wonderful purity at once childlike and profoundly stubborn."
Einstein bequeathed his personal archives, library, and intellectual assets to the Hebrew University of Jerusalem in Israel.
Scientific career
Throughout his life, Einstein published hundreds of books and articles. He published more than 300 scientific papers and 150 non-scientific ones. On 5 December 2014, universities and archives announced the release of Einstein's papers, comprising more than 30,000 unique documents.Stachel et al (2008). In addition to the work he did by himself, he also collaborated with other scientists on additional projects, including the Bose–Einstein statistics, the Einstein refrigerator and others.
Statistical mechanics
Thermodynamic fluctuations and statistical physics
Einstein's first paper, submitted in 1900 to Annalen der Physik, was on capillary attraction. It was published in 1901 with the title "Folgerungen aus den Capillaritätserscheinungen", which translates as "Conclusions from the capillarity phenomena". Two papers he published in 1902–1903 (thermodynamics) attempted to interpret atomic phenomena from a statistical point of view. These papers were the foundation for the 1905 paper on Brownian motion, which showed that Brownian movement can be construed as firm evidence that molecules exist. His research in 1903 and 1904 was mainly concerned with the effect of finite atomic size on diffusion phenomena.
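The quantitative heart of the 1905 Brownian-motion paper is the relation between a suspended particle's mean squared displacement and the diffusion coefficient, ⟨x²⟩ = 2Dt with D = k_BT/(6πηr). The sketch below is a minimal illustration with assumed, roughly water-like values for the temperature, viscosity and particle size; it shows that a micrometre-sized particle should wander by an observable fraction of a micrometre each second, which is what made the molecular hypothesis testable under a microscope.

```python
import math

# Assumed illustrative values (SI units)
k_B = 1.381e-23   # Boltzmann constant, J/K
T = 293.0         # temperature, K (room temperature)
eta = 1.0e-3      # viscosity of water, Pa*s
r = 0.5e-6        # particle radius, m (half-micron sphere)

# Einstein's diffusion coefficient for a small sphere in a viscous fluid
D = k_B * T / (6 * math.pi * eta * r)

# Root-mean-square displacement along one axis after time t
t = 1.0  # seconds
x_rms = math.sqrt(2 * D * t)

print(f"D = {D:.2e} m^2/s")
print(f"RMS displacement in {t:.0f} s: {x_rms * 1e6:.2f} micrometres")
```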
Theory of critical opalescence
Einstein returned to the problem of thermodynamic fluctuations, giving a treatment of the density variations in a fluid at its critical point. Ordinarily the density fluctuations are controlled by the second derivative of the free energy with respect to the density. At the critical point, this derivative is zero, leading to large fluctuations. The effect of density fluctuations is that light of all wavelengths is scattered, making the fluid look milky white. Einstein relates this to Rayleigh scattering, which is what happens when the fluctuation size is much smaller than the wavelength, and which explains why the sky is blue. Einstein quantitatively derived critical opalescence from a treatment of density fluctuations, and demonstrated how both the effect and Rayleigh scattering originate from the atomistic constitution of matter.
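Two simple numerical points make the paragraph above concrete: the size of the density fluctuations in a small subvolume grows as the second derivative of the free energy shrinks, diverging at the critical point, and Rayleigh's law says the scattered intensity scales as 1/λ⁴, which is why, away from the critical point, short (blue) wavelengths dominate the scattered light. The following toy calculation is purely illustrative and is not Einstein's derivation.

```python
# Relative fluctuation amplitude ~ 1/sqrt(f''), where f'' stands for the second
# derivative of the free energy with respect to density (arbitrary toy units).
for f2 in [1.0, 0.1, 0.01, 0.001]:
    print(f"f'' = {f2:6.3f}  ->  relative fluctuation ~ {f2 ** -0.5:6.1f}")
# As f'' -> 0 at the critical point, the fluctuations blow up: critical opalescence.

# Rayleigh scattering: intensity proportional to 1/wavelength^4.
blue, red = 450e-9, 650e-9   # wavelengths in metres
print(f"Blue/red scattered-intensity ratio: {(red / blue) ** 4:.1f}")  # ~4.4
```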
1905 – Annus Mirabilis papers
The Annus Mirabilis papers are four articles pertaining to the photoelectric effect (which gave rise to quantum theory), Brownian motion, the special theory of relativity, and mass–energy equivalence (E = mc²) that Einstein published in the Annalen der Physik scientific journal in 1905. These four works contributed substantially to the foundation of modern physics and changed views on space, time, and matter. The four papers are:
"On a Heuristic Viewpoint Concerning the Production and Transformation of Light"Einstein (1905a). (Photoelectric effect; received 18 March, published 9 June.) Resolved an unsolved puzzle by suggesting that energy is exchanged only in discrete amounts (quanta). This idea was pivotal to the early development of quantum theory.
"On the Motion of Small Particles Suspended in a Stationary Liquid, as Required by the Molecular Kinetic Theory of Heat"Einstein (1905c). (Brownian motion; received 11 May, published 18 July.) Explained empirical evidence for the atomic theory, supporting the application of statistical physics.
"On the Electrodynamics of Moving Bodies"Einstein (1905d). (Special relativity; received 30 June, published 26 September.) Reconciled Maxwell's equations for electricity and magnetism with the laws of mechanics by introducing changes to mechanics, resulting from analysis based on the independence of the speed of light from the motion of the observer. Discredited the concept of a "luminiferous ether".
"Does the Inertia of a Body Depend Upon Its Energy Content?"Einstein (1905e). (Mass–energy equivalence; received 27 September, published 21 November.) Equivalence of matter and energy, E = mc², the existence of "rest energy", and the basis of nuclear energy.
Special relativity
Einstein's "" ("On the Electrodynamics of Moving Bodies") was received on 30 June 1905 and published 26 September of that same year. It reconciled conflicts between Maxwell's equations (the laws of electricity and magnetism) and the laws of Newtonian mechanics by introducing changes to the laws of mechanics. Observationally, the effects of these changes are most apparent at high speeds (where objects are moving at speeds close to the speed of light). The theory developed in this paper later became known as Einstein's special theory of relativity.
This paper predicted that, when measured in the frame of a relatively moving observer, a clock carried by a moving body would appear to slow down, and the body itself would contract in its direction of motion. This paper also argued that the idea of a luminiferous aether—one of the leading theoretical entities in physics at the time—was superfluous.
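Both effects are governed by the Lorentz factor γ = 1/√(1 − v²/c²): a moving clock is observed to tick slower by a factor of γ, and a moving rod is observed shorter by the same factor. The following is a minimal numerical sketch with illustrative values only.

```python
import math

c = 2.998e8           # speed of light, m/s (assumed reference value)
v = 0.8 * c           # example speed: 80% of the speed of light

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

proper_time = 1.0     # 1 second elapsed on the moving clock
proper_length = 1.0   # 1 metre rod at rest in the moving frame

print(f"gamma = {gamma:.3f}")                                    # ~1.667 at 0.8c
print(f"observed clock interval: {gamma * proper_time:.3f} s")   # time dilation
print(f"observed rod length:     {proper_length / gamma:.3f} m") # length contraction
```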
In his paper on mass–energy equivalence, Einstein produced E = mc² as a consequence of his special relativity equations. Einstein's 1905 work on relativity remained controversial for many years, but was accepted by leading physicists, starting with Max Planck.
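As a back-of-the-envelope illustration of what the equivalence implies (not a calculation from the 1905 paper itself), converting even a gram of mass entirely into energy yields an enormous amount:

```python
c = 2.998e8                 # speed of light, m/s
m = 1.0e-3                  # one gram, expressed in kilograms

E = m * c ** 2              # rest energy, joules
print(f"E = {E:.2e} J")                        # ~9.0e13 J
print(f"  = {E / 3.6e12:.0f} GWh equivalent")  # roughly 25 GWh
```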
Einstein originally framed special relativity in terms of kinematics (the study of moving bodies). In 1908, Hermann Minkowski reinterpreted special relativity in geometric terms as a theory of spacetime. Einstein adopted Minkowski's formalism in his 1915 general theory of relativity.
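In Minkowski's geometric picture, what all inertial observers agree on is not time or distance separately but the spacetime interval s² = c²Δt² − Δx². The short sketch below, with an arbitrary example event and boost speed, checks numerically that a Lorentz transformation leaves the interval unchanged.

```python
import math

c = 2.998e8
v = 0.6 * c                                   # boost speed (example value)
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# An arbitrary event measured in the original frame
t, x = 2.0e-6, 300.0                          # seconds, metres

# Lorentz transformation to the moving frame
t_p = gamma * (t - v * x / c ** 2)
x_p = gamma * (x - v * t)

interval = (c * t) ** 2 - x ** 2
interval_p = (c * t_p) ** 2 - x_p ** 2

print(f"interval (original frame): {interval:.6e}")
print(f"interval (boosted frame):  {interval_p:.6e}")  # equal up to rounding
```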
General relativity
General relativity and the equivalence principle
[Image: Eddington's photograph of a solar eclipse.]
General relativity (GR) is a theory of gravitation that was developed by Einstein between 1907 and 1915. According to it, the observed gravitational attraction between masses results from the warping of spacetime by those masses. General relativity has developed into an essential tool in modern astrophysics; it provides the foundation for the current understanding of black holes, regions of space where gravitational attraction is so strong that not even light can escape.
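One way to get a feel for the "not even light can escape" threshold is the Schwarzschild radius, r_s = 2GM/c², the size to which a mass would have to be compressed to become a black hole (the Schwarzschild solution itself was found by Karl Schwarzschild in 1916, building on Einstein's field equations). The sketch below is an illustrative calculation with standard modern values.

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Radius below which a mass of mass_kg would trap light."""
    return 2 * G * mass_kg / c ** 2

M_sun = 1.989e30     # kg
M_earth = 5.972e24   # kg

print(f"Sun:   {schwarzschild_radius(M_sun) / 1e3:.1f} km")    # ~3 km
print(f"Earth: {schwarzschild_radius(M_earth) * 1e3:.1f} mm")  # ~9 mm
```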
As Einstein later said, the reason for the development of general relativity was that the preference of inertial motions within special relativity was unsatisfactory, while a theory which from the outset prefers no state of motion (even accelerated ones) should appear more satisfactory.Einstein (1923). Consequently, in 1907 he published an article on acceleration under special relativity. In that article titled "On the Relativity Principle and the Conclusions Drawn from It", he argued that free fall is really inertial motion, and that for a free-falling observer the rules of special relativity must apply. This argument is called the equivalence principle. In the same article, Einstein also predicted the phenomena of gravitational time dilation, gravitational redshift and gravitational lensing.Stachel, et al (2008). Vol. 2: The Swiss Years—Writings, 1900–1909, pp. 273–274.
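The gravitational redshift predicted in the 1907 article can be estimated in the weak-field limit as Δν/ν ≈ GM/(rc²) for light climbing out of the potential well of a mass M from radius r. The sketch below is an illustrative modern calculation, not Einstein's own; it gives the roughly two-parts-per-million shift for light leaving the Sun's surface.

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
R_sun = 6.96e8     # solar radius, m

# Weak-field gravitational redshift of light escaping from the solar surface
z = G * M_sun / (R_sun * c ** 2)
print(f"Fractional redshift from the Sun's surface: {z:.2e}")  # ~2.1e-6
```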
In 1911, Einstein published another article "On the Influence of Gravitation on the Propagation of Light" expanding on the 1907 article, in which he estimated the amount of deflection of light by massive bodies. Thus, the theoretical prediction of general relativity could for the first time be tested experimentally.
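For light grazing the Sun, the 1911 calculation (based on the equivalence principle alone) gives a deflection of 2GM/(c²R), about 0.87 arcseconds; the completed 1915 theory, which also accounts for the curvature of space, doubles this to 4GM/(c²R), about 1.75 arcseconds, the value tested during the 1919 eclipse. The following is a minimal numerical sketch with assumed modern constants.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
R_sun = 6.96e8     # closest approach: light ray grazing the solar limb, m

rad_to_arcsec = math.degrees(1) * 3600

alpha_1911 = 2 * G * M_sun / (c ** 2 * R_sun)   # equivalence principle only
alpha_1915 = 4 * G * M_sun / (c ** 2 * R_sun)   # full general relativity

print(f"1911 estimate: {alpha_1911 * rad_to_arcsec:.2f} arcsec")  # ~0.87
print(f"1915 theory:   {alpha_1915 * rad_to_arcsec:.2f} arcsec")  # ~1.75
```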
Gravitational waves
In 1916, Einstein predicted gravitational waves,Einstein (1916).Einstein (1918). ripples in the curvature of spacetime which propagate as waves, traveling outward from the source, transporting energy as gravitational radiation. The existence of gravitational waves is possible under general relativity due to its Lorentz invariance which brings the concept of a finite speed of propagation of the physical interactions of gravity with it. By contrast, gravitational waves cannot exist in the Newtonian theory of gravitation, which postulates that the physical interactions of gravity propagate at infinite speed.
The first, indirect, detection of gravitational waves came in the 1970s through observation of a pair of closely orbiting neutron stars, PSR B1913+16. The explanation for the decay in their orbital period was that they were emitting gravitational waves. Einstein's prediction was confirmed on 11 February 2016, when researchers at LIGO published the first observation of gravitational waves, detected on Earth on 14 September 2015, nearly one hundred years after the prediction.
Hole argument and Entwurf theory
While developing general relativity, Einstein became confused about the gauge invariance in the theory. He formulated an argument that led him to conclude that a general relativistic field theory is impossible. He gave up looking for fully generally covariant tensor equations and searched for equations that would be invariant under general linear transformations only.
In June 1913, the Entwurf ('draft') theory was the result of these investigations. As its name suggests, it was a sketch of a theory, less elegant and more difficult than general relativity, with the equations of motion supplemented by additional gauge fixing conditions. After more than two years of intensive work, Einstein realized that the hole argument was mistaken and abandoned the theory in November 1915.
Physical cosmology
In 1917, Einstein applied the general theory of relativity to the structure of the universe as a whole.Einstein (1917a). He discovered that the general field equations predicted a universe that was dynamic, either contracting or expanding. As observational evidence for a dynamic universe was lacking at the time, Einstein introduced a new term, the cosmological constant, into the field equations, in order to allow the theory to predict a static universe. The modified field equations predicted a static universe of closed curvature, in accordance with Einstein's understanding of Mach's principle in these years. This model became known as the Einstein World or Einstein's static universe.
Following the discovery of the recession of the galaxies by Edwin Hubble in 1929, Einstein abandoned his static model of the universe, and proposed two dynamic models of the cosmos, the Friedmann–Einstein universe of 1931Einstein (1931). and the Einstein–de Sitter universe of 1932.Einstein & de Sitter (1932). In each of these models, Einstein discarded the cosmological constant, claiming that it was "in any case theoretically unsatisfactory".
In many Einstein biographies, it is claimed that Einstein referred to the cosmological constant in later years as his "biggest blunder", based on a letter George Gamow claimed to have received from him. The astrophysicist Mario Livio has cast doubt on this claim.
In late 2013, a team led by the Irish physicist Cormac O'Raifeartaigh discovered evidence that, shortly after learning of Hubble's observations of the recession of the galaxies, Einstein considered a steady-state model of the universe. In a hitherto overlooked manuscript, apparently written in early 1931, Einstein explored a model of the expanding universe in which the density of matter remains constant due to a continuous creation of matter, a process that he associated with the cosmological constant. As he stated in the paper, "In what follows, I would like to draw attention to a solution to equation (1) that can account for Hubbel's facts, and in which the density is constant over time [...] If one considers a physically bounded volume, particles of matter will be continually leaving it. For the density to remain constant, new particles of matter must be continually formed in the volume from space."
It thus appears that Einstein considered a steady-state model of the expanding universe many years before Hoyle, Bondi and Gold. However, Einstein's steady-state model contained a fundamental flaw and he quickly abandoned the idea.
Energy momentum pseudotensor
General relativity includes a dynamical spacetime, so it is difficult to see how to identify the conserved energy and momentum. Noether's theorem allows these quantities to be determined from a Lagrangian with translation invariance, but general covariance makes translation invariance into something of a gauge symmetry. For this reason, the energy and momentum derived within general relativity by Noether's prescriptions do not form a true tensor.
Einstein argued that this is true for a fundamental reason: the gravitational field could be made to vanish by a choice of coordinates. He maintained that the non-covariant energy momentum pseudotensor was, in fact, the best description of the energy momentum distribution in a gravitational field. While the use of non-covariant objects like pseudotensors was criticized by Erwin Schrödinger and others, Einstein's approach has been echoed by physicists including Lev Landau and Evgeny Lifshitz.
Wormholes
In 1935, Einstein collaborated with Nathan Rosen to produce a model of a wormhole, a construction now often called an Einstein–Rosen bridge.Einstein & Rosen (1935). His motivation was to model elementary particles with charge as a solution of gravitational field equations, in line with the program outlined in the paper "Do Gravitational Fields play an Important Role in the Constitution of the Elementary Particles?". These solutions cut and pasted Schwarzschild black holes to make a bridge between two patches of spacetime. Because these solutions included spacetime curvature without the presence of a physical body, Einstein and Rosen suggested that they could provide the beginnings of a theory that avoided the notion of point particles. However, it was later found that Einstein–Rosen bridges are not stable.
Einstein–Cartan theory
Photograph: Einstein at his office, University of Berlin, 1920.
In order to incorporate spinning point particles into general relativity, the affine connection needed to be generalized to include an antisymmetric part, called the torsion. This modification was made by Einstein and Cartan in the 1920s.
Equations of motion
In general relativity, gravitational force is reimagined as curvature of spacetime. A curved path like an orbit is not the result of a force deflecting a body from an ideal straight-line path, but rather the body's attempt to fall freely through a background that is itself curved by the presence of other masses. A remark by John Archibald Wheeler that has become proverbial among physicists summarizes the theory: "Spacetime tells matter how to move; matter tells spacetime how to curve." The Einstein field equations cover the latter aspect of the theory, relating the curvature of spacetime to the distribution of matter and energy. The geodesic equation covers the former aspect, stating that freely falling bodies follow lines that are as straight as possible in a curved spacetime. Einstein regarded this as an "independent fundamental assumption" that had to be postulated in addition to the field equations in order to complete the theory. Believing this to be a shortcoming in how general relativity was originally presented, he wished to derive it from the field equations themselves. Since the equations of general relativity are non-linear, a lump of energy made out of pure gravitational fields, like a black hole, would move on a trajectory which is determined by the Einstein field equations themselves, not by a new law. Accordingly, Einstein proposed that the field equations would determine the path of a singular solution, like a black hole, to be a geodesic. Both physicists and philosophers have often repeated the assertion that the geodesic equation can be obtained from applying the field equations to the motion of a gravitational singularity, but this claim remains disputed.
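The two ingredients described in this paragraph can be written down explicitly (standard modern notation, not Einstein's original typography). The Einstein field equations relate the curvature of spacetime on the left to its matter and energy content on the right, with Λ the cosmological constant discussed in the cosmology section above, and the geodesic equation states that a freely falling body follows the straightest possible path through that curved spacetime:
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}} T_{\mu\nu}, \qquad \frac{d^{2}x^{\mu}}{d\tau^{2}} + \Gamma^{\mu}{}_{\alpha\beta}\,\frac{dx^{\alpha}}{d\tau}\frac{dx^{\beta}}{d\tau} = 0.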
Old quantum theory
Photons and energy quanta
Illustration: the photoelectric effect. Incoming photons strike a metal plate and eject electrons.
In a 1905 paper, Einstein postulated that light itself consists of localized particles (quanta). Einstein's light quanta were at first rejected by nearly all physicists, including Max Planck and Niels Bohr. The idea only became universally accepted in 1919, with Robert Millikan's detailed experiments on the photoelectric effect, and with the measurement of Compton scattering.
Einstein concluded that each wave of frequency f is associated with a collection of photons with energy hf each, where h is the Planck constant. He did not say much more, because he was not sure how the particles were related to the wave. But he did suggest that this idea would explain certain experimental results, notably the photoelectric effect. Light quanta were dubbed photons by Gilbert N. Lewis in 1926.
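As a worked example of the relation E = hf described above (a minimal illustrative sketch, not code associated with Einstein or this article), the energy of a single photon of green light can be computed directly from the Planck constant:

# Energy of one photon, E = h * f, illustrating the light-quantum hypothesis.
h = 6.626e-34          # Planck constant in joule-seconds (rounded CODATA value)
c = 2.998e8            # speed of light in metres per second
wavelength = 530e-9    # green light, 530 nanometres (illustrative choice)

frequency = c / wavelength              # f = c / lambda
energy_joules = h * frequency           # E = h * f
energy_ev = energy_joules / 1.602e-19   # convert joules to electronvolts

print(f"f = {frequency:.3e} Hz, E = {energy_joules:.3e} J = {energy_ev:.2f} eV")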
Quantized atomic vibrations
In 1907, Einstein proposed a model of matter where each atom in a lattice structure is an independent harmonic oscillator. In the Einstein model, each atom oscillates independently—a series of equally spaced quantized states for each oscillator. Einstein was aware that getting the frequency of the actual oscillations would be difficult, but he nevertheless proposed this theory because it was a particularly clear demonstration that quantum mechanics could solve the specific heat problem in classical mechanics. Peter Debye refined this model.
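In modern textbook notation (a standard later form, not Einstein's original expression), the Einstein model gives the heat capacity of a crystal of N atoms, each treated as three independent oscillators with a single Einstein temperature \theta_E = h\nu/k_B, as
C_V = 3 N k_B \left(\frac{\theta_E}{T}\right)^{2} \frac{e^{\theta_E/T}}{\left(e^{\theta_E/T} - 1\right)^{2}},
which approaches the classical Dulong–Petit value 3Nk_B at high temperature and falls toward zero as T approaches zero, resolving the specific-heat problem mentioned above.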
Bose–Einstein statistics
In 1924, Einstein received a description of a statistical model from Indian physicist Satyendra Nath Bose, based on a counting method that assumed that light could be understood as a gas of indistinguishable particles. Einstein noted that Bose's statistics applied to some atoms as well as to the proposed light particles, and submitted his translation of Bose's paper to the Zeitschrift für Physik. Einstein also published his own articles describing the model and its implications, among them the prediction of the Bose–Einstein condensate, in which a large fraction of the particles collects in the lowest-energy state at very low temperatures.Einstein (1924). It was not until 1995 that the first such condensate was produced experimentally by Eric Allin Cornell and Carl Wieman using ultra-cooling equipment built at the NIST–JILA laboratory at the University of Colorado at Boulder. Bose–Einstein statistics are now used to describe the behaviors of any assembly of bosons. Einstein's sketches for this project may be seen in the Einstein Archive in the library of Leiden University.
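The statistics that emerged from this exchange are summarized, in modern textbook notation rather than Bose's or Einstein's original form, by the Bose–Einstein distribution for the mean occupation of a single-particle state of energy \varepsilon at temperature T and chemical potential \mu:
\langle n(\varepsilon) \rangle = \frac{1}{e^{(\varepsilon - \mu)/k_B T} - 1}.
Because the denominator can approach zero for the lowest-energy state as T falls, a macroscopic fraction of the particles can accumulate in that state, which is the condensation phenomenon Einstein predicted.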
Wave–particle duality
Although the patent office promoted Einstein to Technical Examiner Second Class in 1906, he had not given up on academia. In 1908, he became a Privatdozent at the University of Bern. In "Über die Entwicklung unserer Anschauungen über das Wesen und die Konstitution der Strahlung" ("The Development of our Views on the Composition and Essence of Radiation"), on the quantization of light, and in an earlier 1909 paper, Einstein showed that Max Planck's energy quanta must have well-defined momenta and act in some respects as independent, point-like particles. This paper introduced the photon concept and inspired the notion of wave–particle duality in quantum mechanics. Einstein saw this wave–particle duality in radiation as concrete evidence for his conviction that physics needed a new, unified foundation.
Zero-point energy
In a series of works completed from 1911 to 1913, Planck reformulated his 1900 quantum theory and introduced the idea of zero-point energy in his "second quantum theory". Soon, this idea attracted the attention of Einstein and his assistant Otto Stern. Assuming the energy of rotating diatomic molecules contains zero-point energy, they then compared the theoretical specific heat of hydrogen gas with the experimental data. The numbers matched nicely. However, after publishing the findings, they promptly withdrew their support, because they no longer had confidence in the correctness of the idea of zero-point energy.Stachel et al (2008) Vol. 4: The Swiss Years—Writings, 1912–1914, pp. 270 ff.
Stimulated emission
In 1917, at the height of his work on relativity, Einstein published an article in Physikalische Zeitschrift that proposed the possibility of stimulated emission, the physical process that makes possible the maser and the laser.Einstein (1917b).
This article showed that the statistics of absorption and emission of light would only be consistent with Planck's distribution law if the emission of light into a mode with n photons would be enhanced statistically compared to the emission of light into an empty mode. This paper was enormously influential in the later development of quantum mechanics, because it was the first paper to show that the statistics of atomic transitions had simple laws.
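The 1917 argument is usually summarized with the Einstein coefficients (standard later notation): A21 for spontaneous emission, B21 for stimulated emission and B12 for absorption, with the B coefficients defined with respect to the spectral energy density. Requiring that the rates balance in thermal equilibrium and reproduce Planck's law forces the relations
g_1 B_{12} = g_2 B_{21}, \qquad \frac{A_{21}}{B_{21}} = \frac{8\pi h \nu^{3}}{c^{3}},
where g1 and g2 are the degeneracies of the lower and upper levels; the B21 term is the stimulated emission exploited in masers and lasers.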
Matter waves
Einstein discovered Louis de Broglie's work and supported his ideas, which were received skeptically at first. In another major paper from this era, Einstein observed that de Broglie waves could explain the quantization rules of Bohr and Sommerfeld. This paper would inspire Schrödinger's work of 1926.
Quantum mechanics
Einstein's objections to quantum mechanics
Einstein played a major role in developing quantum theory, beginning with his 1905 paper on the photoelectric effect. However, he became displeased with modern quantum mechanics as it had evolved after 1925, despite its acceptance by other physicists. He was skeptical that the randomness of quantum mechanics was fundamental rather than the result of determinism, stating that God "is not playing at dice". Until the end of his life, he continued to maintain that quantum mechanics was incomplete.
Bohr versus Einstein
Einstein–Podolsky–Rosen paradox
Einstein never fully accepted quantum mechanics. While he recognized that it made correct predictions, he believed a more fundamental description of nature must be possible. Over the years he presented multiple arguments to this effect, but the one he preferred most dated to a debate with Bohr in 1930. Einstein suggested a thought experiment in which two objects are allowed to interact and then moved apart a great distance from each other. The quantum-mechanical description of the two objects is a mathematical entity known as a wavefunction. If the wavefunction that describes the two objects before their interaction is given, then the Schrödinger equation provides the wavefunction that describes them after their interaction. But because of what would later be called quantum entanglement, measuring one object would lead to an instantaneous change of the wavefunction describing the other object, no matter how far away it is. Moreover, the choice of which measurement to perform upon the first object would affect what wavefunction could result for the second object. Einstein reasoned that no influence could propagate from the first object to the second instantaneously fast. Indeed, he argued, physics depends on being able to tell one thing apart from another, and such instantaneous influences would call that into question. Because the true "physical condition" of the second object could not be immediately altered by an action done to the first, Einstein concluded, the wavefunction could not be that true physical condition, only an incomplete description of it.
A more famous version of this argument came in 1935, when Einstein published a paper with Boris Podolsky and Nathan Rosen that laid out what would become known as the EPR paradox.Einstein, Podolsky & Rosen (1935). In this thought experiment, two particles interact in such a way that the wavefunction describing them is entangled. Then, no matter how far the two particles were separated, a precise position measurement on one particle would imply the ability to predict, perfectly, the result of measuring the position of the other particle. Likewise, a precise momentum measurement of one particle would result in an equally precise prediction of the momentum of the other particle, without needing to disturb the other particle in any way. They argued that no action taken on the first particle could instantaneously affect the other, since this would involve information being transmitted faster than light, which is forbidden by the theory of relativity. They invoked a principle, later known as the "EPR criterion of reality", positing that: "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity." From this, they inferred that the second particle must have a definite value of both position and of momentum prior to either quantity being measured. But quantum mechanics considers these two observables incompatible and thus does not associate simultaneous values for both to any system. Einstein, Podolsky, and Rosen therefore concluded that quantum theory does not provide a complete description of reality.
In 1964, John Stewart Bell carried the analysis of quantum entanglement much further. He deduced that if measurements are performed independently on the two separated particles of an entangled pair, then the assumption that the outcomes depend upon hidden variables within each half implies a mathematical constraint on how the outcomes on the two measurements are correlated. This constraint would later be called a Bell inequality. Bell then showed that quantum physics predicts correlations that violate this inequality. Consequently, the only way that hidden variables could explain the predictions of quantum physics is if they are "nonlocal", which is to say that somehow the two particles are able to interact instantaneously no matter how widely separated they become. Bell argued that because an explanation of quantum phenomena in terms of hidden variables would require nonlocality, the EPR paradox "is resolved in the way which Einstein would have liked least".
Despite this, and although Einstein personally found the argument in the EPR paper overly complicated, that paper became among the most influential papers published in Physical Review. It is considered a centerpiece of the development of quantum information theory.
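Bell's constraint can be stated concretely in its CHSH form (a later, widely used variant due to Clauser, Horne, Shimony and Holt rather than to the EPR paper itself). If E(a, b) denotes the correlation of outcomes for detector settings a and b, any local hidden-variable theory obeys
|E(a,b) - E(a,b')| + |E(a',b) + E(a',b')| \le 2,
whereas quantum mechanics predicts values up to 2\sqrt{2} for suitably chosen settings on an entangled pair, and experiments agree with the quantum prediction.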
Unified field theory
Encouraged by his success with general relativity, Einstein sought an even more ambitious geometrical theory that would treat gravitation and electromagnetism as aspects of a single entity. In 1950, he described his unified field theory in a Scientific American article titled "On the Generalized Theory of Gravitation".Einstein (1950). His attempt to find the most fundamental laws of nature won him praise but not success: a particularly conspicuous blemish of his model was that it did not accommodate the strong and weak nuclear forces, neither of which was well understood until many years after his death. Although most researchers now believe that Einstein's approach to unifying physics was mistaken, his goal of a theory of everything is one to which his successors still aspire.
Other investigations
Einstein conducted other investigations that were unsuccessful and abandoned. These pertain to force, superconductivity, and other research.
Collaboration with other scientists
In addition to longtime collaborators Leopold Infeld, Nathan Rosen, Peter Bergmann and others, Einstein also had some one-shot collaborations with various scientists.
Einstein–de Haas experiment
In 1908, Owen Willans Richardson predicted that a change in the magnetic moment of a free body will cause this body to rotate. This effect is a consequence of the conservation of angular momentum and is strong enough to be observable in ferromagnetic materials. Einstein and Wander Johannes de Haas published two papers in 1915 claiming the first experimental observation of the effect. Measurements of this kind demonstrate that the phenomenon of magnetization is caused by the alignment (polarization) of the angular momenta of the electrons in the material along the axis of magnetization. They also allow the two contributions to the magnetization to be separated: the part associated with the spin and the part associated with the orbital motion of the electrons. The Einstein–de Haas experiment is the only experiment conceived, realized and published by Einstein himself.
A complete original version of the Einstein–de Haas experimental equipment was donated by Geertruida de Haas-Lorentz, wife of de Haas and daughter of Lorentz, to the Ampère Museum in Lyon, France, in 1961. It was later lost among the museum's holdings, rediscovered in 2023, and is now on display.
Einstein as an inventor
In 1926, Einstein and his former student Leó Szilárd co-invented (and in 1930, patented) the Einstein refrigerator. This absorption refrigerator was then revolutionary for having no moving parts and using only heat as an input. On 11 November 1930, a US patent for the refrigerator was awarded to Einstein and Szilárd. Their invention was not immediately put into commercial production, but the most promising of their patents were acquired by the Swedish company Electrolux.
Einstein also invented an electromagnetic pump, a sound reproduction device, and several other household devices.M. Trainer, "Albert Einstein's patents", World Patent Information 28(2), 2006, pp. 159–165. doi:10.1016/j.wpi.2005.10.012
Legacy
Non-scientific
While traveling, Einstein wrote daily to his wife Elsa and adopted stepdaughters Margot and Ilse. The letters were included in the papers bequeathed to the Hebrew University of Jerusalem. Margot Einstein permitted the personal letters to be made available to the public, but requested that it not be done until twenty years after her death (she died in 1986). Barbara Wolff, of the Hebrew University's Albert Einstein Archives, told the BBC that there are about 3,500 pages of private correspondence written between 1912 and 1955.
In his final four years, Einstein was involved with the establishment of the Albert Einstein College of Medicine in New York City.
In 1979, the Albert Einstein Memorial was unveiled outside the National Academy of Sciences building in Washington, D.C. for the Einstein centenary. It was sculpted by Robert Berks. Einstein can be seen holding a paper with three of his most important equations: for the photoelectric effect, general relativity and mass-energy equivalence.
Einstein's right of publicity was litigated in 2015 in a federal district court in California. Although the court initially held that the right had expired, that ruling was immediately appealed, and the decision was later vacated in its entirety. The underlying claims between the parties in that lawsuit were ultimately settled. The right is enforceable, and the Hebrew University of Jerusalem is the exclusive representative of that right. Corbis, successor to The Roger Richman Agency, licenses the use of his name and associated imagery, as agent for the university.
Mount Einstein in the Chugach Mountains of Alaska was named in 1955. Mount Einstein in New Zealand's Paparoa Range was named after him in 1970 by the Department of Scientific and Industrial Research.
In 1999, Einstein was named Time's Person of the Century.
Scientific
In 1999, a survey of the top 100 physicists voted for Einstein as the "greatest physicist ever", while a parallel survey of rank-and-file physicists gave the top spot to Isaac Newton, with Einstein second.
Physicist Lev Landau ranked physicists from 0 to 5 on a logarithmic scale of productivity and genius, with Newton and Einstein belonging in a "super league", with Newton receiving the highest ranking of 0, followed by Einstein with 0.5, while fathers of quantum mechanics such as Werner Heisenberg and Paul Dirac were ranked 1, with Landau himself a 2.
Physicist Eugene Wigner noted that while John von Neumann had the quickest and most acute mind he ever knew, it was Einstein who had the more penetrating and original mind of the two.
The International Union of Pure and Applied Physics declared 2005 the "World Year of Physics", also known as "Einstein Year", in recognition of Einstein's "miracle year" in 1905. It was also declared the "International Year of Physics" by the United Nations.
In popular culture
Einstein became one of the most famous scientific celebrities after the confirmation of his general theory of relativity in 1919. Although most of the public had little understanding of his work, he was widely recognized and admired. In the period before World War II, The New Yorker published a vignette in their "The Talk of the Town" feature saying that Einstein was so well known in America that he would be stopped on the street by people wanting him to explain "that theory". Eventually he came to cope with unwanted enquirers by pretending to be someone else: "Pardon me, sorry! Always I am mistaken for Professor Einstein."
Einstein has been the subject of or inspiration for many novels, films, plays, and works of music. He is a favorite model for depictions of absent-minded professors; his expressive face and distinctive hairstyle have been widely copied and exaggerated. Time magazine's Frederic Golden wrote that Einstein was "a cartoonist's dream come true". His intellectual achievements and originality made Einstein broadly synonymous with genius.
Many popular quotations are often misattributed to him.
Awards and honors
Einstein received numerous awards and honors, and in 1922, he was awarded the 1921 Nobel Prize in Physics "for his services to Theoretical Physics, and especially for his discovery of the law of the photoelectric effect". None of the nominations in 1921 met the criteria set by Alfred Nobel, so the 1921 prize was carried forward and awarded to Einstein in 1922.
Einsteinium, a synthetic chemical element, was named in his honor in 1955, a few months after his death.
Publications
Scientific
First of a series of papers on this topic.
A reprint of this book was published by Edition Erbrich in 1982, .
Further information about the volumes published so far can be found on the webpages of the Einstein Papers Project and on the Princeton University Press Einstein Page.
Popular
The chasing a light beam thought experiment is described on pages 48–51.
Political
Einstein, Albert (September 1960). Foreword to Gandhi Wields the Weapon of Moral Power: Three Case Histories. Introduction by Bharatan Kumarappa. Ahmedabad: Navajivan Publishing House. pp. v–vi. . Foreword originally written in April 1953.
See also
Bern Historical Museum – Einstein Museum
Frist Campus Center at Princeton University Room 302 is associated with Einstein. The center was once the Palmer Physical Laboratory.
History of gravitational theory
List of German inventors and discoverers
List of Jewish Nobel laureates
List of peace activists
Notes
References
His non-scientific works include: About Zionism: Speeches and Lectures by Professor Albert Einstein (1930), "Why War?" (1933, co-authored by Sigmund Freud), The World As I See It (1934), Out of My Later Years (1950), and a book on science for the general reader, The Evolution of Physics (1938, co-authored by Leopold Infeld).
Gilbert, Martin. Churchill and the Jews, Henry Holt and Company, N.Y. (2007) pp. 101, 176
"Denunciation of German Policy is a Stirring Event", Associated Press, 27 July 1933
"Stateless Jews: The Exiles from Germany, Nationality Plan", The Guardian (UK) 27 July 1933
, Harvard Gazette, 12 April 2007
Einstein Archive 59–215.
"". Instituut-Lorentz. 2005. Retrieved 21 November 2005.
From Albert Einstein: Philosopher-Scientist (1949), publ. Cambridge University Press, 1949. Niels Bohr's report of conversations with Einstein.
Goettling, Gary. Georgia Tech Alumni Magazine. 1998. Retrieved 12 November 2014. Leó Szilárd, a Hungarian physicist who later worked on the Manhattan Project, is credited with the discovery of the chain reaction
Barry R. Parker (2003). Einstein: The Passions of a Scientist, Prometheus Books, p. 31
The Three-body Problem from Pythagoras to Hawking, Mauri Valtonen, Joanna Anosova, Konstantin Kholshevnikov, Aleksandr Mylläri, Victor Orlov, Kiyotaka Tanikawa, (Springer 2016), p. 43, Simon and Schuster, 2008
Holton, G., Einstein, History, and Other Passions, Harvard University Press, 1996, pp. 177–193.
Martinez, A. A., "Handling evidence in history: the case of Einstein's wife", School Science Review, 86 (316), March 2005, pp. 49–56.
, Einstein's World, a 1931 reprint with minor changes, of his 1921 essay.
Article "Alfred Einstein", in The New Grove Dictionary of Music and Musicians, ed. Stanley Sadie. 20 vol. London, Macmillan Publishers Ltd., 1980.
The Concise Edition of Baker's Biographical Dictionary of Musicians, 8th ed. Revised by Nicolas Slonimsky. New York, Schirmer Books, 1993.
Dowbiggin, Ian (2003). A Merciful End. New York: Oxford University Press,
van Dongen, Jeroen (2010) Einstein's Unification Cambridge University Press, p. 23.
Works cited
Further reading
External links
Home page of Albert Einstein at The Institute for Advanced Study
Einstein and his love of music (archived 2015), Physics World, Jan 2005
including the Nobel Lecture 11 July 1923 Fundamental ideas and problems of the theory of relativity
Einstein's declaration of intention for American citizenship (archived 2014) on the World Digital Library
Archival materials collections
Albert Einstein Historical Letters, Documents & Papers from Shapell Manuscript Foundation
Albert Einstein in FBI Records: The Vault
Albert Einstein Archives Online (80,000+ Documents, currently offline) from The Hebrew University of Jerusalem (MSNBC coverage in 19 March 2012)
The Albert Einstein Archives at The Hebrew University of Jerusalem
Finding aid to Albert Einstein Collection (archived 2013) at Brandeis University
Finding aid to Albert Einstein collection from Boston University
Finding aid to Albert Einstein Collection in Harry Ransom Center of University of Texas at Austin
Finding aid to Albert Einstein Collection from Center for Jewish History
Digital collections
The Digital Einstein Papers An open-access site for The Collected Papers of Albert Einstein, from Princeton University
Albert Einstein Digital Collection from Vassar College Digital Collections
Albert – The Digital Repository of the IAS, which contains many digitized original documents and photographs
Algorithm
In mathematics and computer science, an algorithm () is a finite sequence of mathematically rigorous instructions, typically used to solve a class of specific problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing. More advanced algorithms can use conditionals to divert the code execution through various routes (referred to as automated decision-making) and deduce valid inferences (referred to as automated reasoning).
In contrast, a heuristic is an approach to solving problems without well-defined correct or optimal results.David A. Grossman, Ophir Frieder, Information Retrieval: Algorithms and Heuristics, 2nd edition, 2004, For example, although social media recommender systems are commonly called "algorithms", they actually rely on heuristics as there is no truly "correct" recommendation.
As an effective method, an algorithm can be expressed within a finite amount of space and time"Any classical mathematical algorithm, for example, can be described in a finite number of English words" (Rogers 1987:2). and in a well-defined formal languageWell defined concerning the agent that executes the algorithm: "There is a computing agent, usually human, which can react to the instructions and carry out the computations" (Rogers 1987:2). for calculating a function."an algorithm is a procedure for computing a function (concerning some chosen notation for integers) ... this limitation (to numerical functions) results in no loss of generality", (Rogers 1977:1). Starting from an initial state and initial input (perhaps empty),"An algorithm has zero or more inputs, i.e., quantities which are given to it initially before the algorithm begins" (Knuth 1973:5). the instructions describe a computation that, when executed, proceeds through a finite"A procedure which has all the characteristics of an algorithm except that it possibly lacks finiteness may be called a 'computational method (Knuth 1971:5). number of well-defined successive states, eventually producing "output""An algorithm has one or more outputs, i.e., quantities which have a specified relation to the inputs" (Knuth 1973:5). and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.Whether or not a process with random interior processes (not including the input) is an algorithm is debatable. Rogers opines that: "a computation is carried out in a discrete stepwise fashion, without the use of continuous methods or analog devices ... carried forward deterministically, without resort to random methods or devices, e.g., dice" (Rogers 1987:2).
Etymology
Around 825 AD, Persian scientist and polymath Muḥammad ibn Mūsā al-Khwārizmī wrote kitāb al-ḥisāb al-hindī ("Book of Indian computation") and kitab al-jam' wa'l-tafriq al-ḥisāb al-hindī ("Addition and subtraction in Indian arithmetic"). In the early 12th century, Latin translations of these texts involving the Hindu–Arabic numeral system and arithmetic appeared, for example Liber Alghoarismi de practica arismetrice, attributed to John of Seville, and Liber Algorismi de numero Indorum, attributed to Adelard of Bath.Blair, Ann, Duguid, Paul, Goeing, Anja-Silvia and Grafton, Anthony. Information: A Historical Companion, Princeton: Princeton University Press, 2021. p. 247 Here, alghoarismi or algorismi is the Latinization of Al-Khwarizmi's name; the text starts with the phrase Dixit Algorismi, or "Thus spoke Al-Khwarizmi".
The word algorism in English came to mean the use of place-value notation in calculations; it occurs in the Ancrene Wisse from circa 1225. By the time Geoffrey Chaucer wrote The Canterbury Tales in the late 14th century, he used a variant of the same word in describing augrym stones, stones used for place-value calculation. In the 15th century, under the influence of the Greek word ἀριθμός (arithmos, "number"; cf. "arithmetic"), the Latin word was altered to algorithmus. By 1596, this form of the word was used in English, as algorithm, by Thomas Hood.
Definition
One informal definition is "a set of rules that precisely defines a sequence of operations", which would include all computer programs (including programs that do not perform numeric calculations), and any prescribed bureaucratic procedure
or cook-book recipe. In general, a program is an algorithm only if it stops eventuallyStone requires that "it must terminate in a finite number of steps" (Stone 1973:7–8).—even though infinite loops may sometimes prove desirable. Boolos and Jeffrey define an algorithm to be an explicit set of instructions for determining an output, that can be followed by a computing machine or a human who could only carry out specific elementary operations on symbols.Boolos and Jeffrey 1974, 1999:19
Most algorithms are intended to be implemented as computer programs. However, algorithms are also implemented by other means, such as in a biological neural network (for example, the human brain performing arithmetic or an insect looking for food), in an electrical circuit, or a mechanical device.
History
Ancient algorithms
Step-by-step procedures for solving mathematical problems have been recorded since antiquity. These include procedures from Babylonian mathematics (around 2500 BC), Egyptian mathematics (around 1550 BC), Indian mathematics (around 800 BC and later),Hayashi, T. (2023, January 1). Brahmagupta. Encyclopedia Britannica. the Ifa Oracle (around 500 BC), Greek mathematics (around 240 BC), Chinese mathematics (around 200 BC and later), and Arabic mathematics (around 800 AD).
The earliest evidence of algorithms is found in ancient Mesopotamian mathematics. A Sumerian clay tablet found in Shuruppak near Baghdad and dated to around 2500 BC describes the earliest division algorithm. During the Hammurabi dynasty, Babylonian clay tablets described algorithms for computing formulas. Algorithms were also used in Babylonian astronomy: Babylonian clay tablets describe and employ algorithmic procedures to compute the time and place of significant astronomical events.
Algorithms for arithmetic are also found in ancient Egyptian mathematics, dating back to the Rhind Mathematical Papyrus. Algorithms were later used in ancient Hellenistic mathematics. Two examples are the Sieve of Eratosthenes, which was described in the Introduction to Arithmetic by Nicomachus, and the Euclidean algorithm, which was first described in Euclid's Elements. Examples of ancient Indian mathematics include the Shulba Sutras, the Kerala School, and the Brāhmasphuṭasiddhānta.
The first cryptographic algorithm for deciphering encrypted code was developed by Al-Kindi, a 9th-century Arab mathematician, in A Manuscript On Deciphering Cryptographic Messages. He gave the first description of cryptanalysis by frequency analysis, the earliest codebreaking algorithm.
Computers
Weight-driven clocks
Bolter credits the invention of the weight-driven clock as "the key invention [of Europe in the Middle Ages]," specifically the verge escapement mechanismBolter 1984:24 producing the tick and tock of a mechanical clock. "The accurate automatic machine"Bolter 1984:26 led immediately to "mechanical automata" in the 13th century and "computational machines"—the difference and analytical engines of Charles Babbage and Ada Lovelace in the mid-19th century.Bolter 1984:33–34, 204–206. Lovelace designed the first algorithm intended for processing on a computer, Babbage's analytical engine, which is the first device considered a real Turing-complete computer instead of just a calculator. Although the full implementation of Babbage's second device was not realized for decades after her lifetime, Lovelace has been called "history's first programmer".
Electromechanical relay
Bell and Newell (1971) write that the Jacquard loom, a precursor to Hollerith cards (punch cards), and "telephone switching technologies" led to the development of the first computers.Bell and Newell diagram 1971:39, cf. Davis 2000 By the mid-19th century, the telegraph, the precursor of the telephone, was in use throughout the world. By the late 19th century, the ticker tape was in use, as were Hollerith cards (c. 1890). Then came the teleprinter with its punched-paper use of Baudot code on tape.
Telephone-switching networks of electromechanical relays were invented in 1835. These led to the invention of the digital adding device by George Stibitz in 1937. While working in Bell Laboratories, he observed the "burdensome" use of mechanical calculators with gears. "He went home one evening in 1937 intending to test his idea... When the tinkering was over, Stibitz had constructed a binary adding device".Melina Hill, Valley News Correspondent, A Tinkerer Gets a Place in History, Valley News West Lebanon NH, Thursday, March 31, 1983, p. 13.Davis 2000:14
Formalization
In 1928, a partial formalization of the modern concept of algorithms began with attempts to solve the Entscheidungsproblem (decision problem) posed by David Hilbert. Later formalizations were framed as attempts to define "effective calculability"Kleene 1943 in Davis 1965:274 or "effective method".Rosser 1939 in Davis 1965:225 Those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's Formulation 1 of 1936, and Alan Turing's Turing machines of 1936–37 and 1939.
Modern algorithms
Algorithms have continued to evolve and improve over time. Common uses of algorithms today include the recommendation systems of social media apps such as Instagram and YouTube, which analyze what users interact with and promote more of that content to them. Quantum computing uses quantum algorithms to solve certain problems faster. More recently, in 2024, NIST updated its post-quantum encryption standards, which include new encryption algorithms intended to resist attacks that use quantum computing.
Representations
Algorithms can be expressed in many kinds of notation, including natural languages, pseudocode, flowcharts, drakon-charts, programming languages or control tables (processed by interpreters). Natural language expressions of algorithms tend to be verbose and ambiguous and are rarely used for complex or technical algorithms. Pseudocode, flowcharts, drakon-charts, and control tables are structured expressions of algorithms that avoid common ambiguities of natural language. Programming languages are primarily for expressing algorithms in a computer-executable form but are also used to define or document algorithms.
Turing machines
There are many possible representations and Turing machine programs can be expressed as a sequence of machine tables (see finite-state machine, state-transition table, and control table for more), as flowcharts and drakon-charts (see state diagram for more), as a form of rudimentary machine code or assembly code called "sets of quadruples", and more. Algorithm representations can also be classified into three accepted levels of Turing machine description: high-level description, implementation description, and formal description.Sipser 2006:157 A high-level description describes the qualities of the algorithm itself, ignoring how it is implemented on the Turing machine. An implementation description describes the general manner in which the machine moves its head and stores data to carry out the algorithm, but does not give exact states. In the most detail, a formal description gives the exact state table and list of transitions of the Turing machine.
Flowchart representation
The graphical aid called a flowchart offers a way to describe and document an algorithm (and a computer program corresponding to it). It has four primary symbols: arrows showing program flow, rectangles (SEQUENCE, GOTO), diamonds (IF-THEN-ELSE), and dots (OR-tie). Sub-structures can "nest" in rectangles, but only if a single exit occurs from the superstructure.
Algorithmic analysis
It is often important to know how much time, storage, or other cost an algorithm may require. Methods have been developed for the analysis of algorithms to obtain such quantitative answers (estimates); for example, an algorithm that adds up the elements of a list of n numbers would have a time requirement of O(n), using big O notation. The algorithm only needs to remember two values: the sum of all the elements so far, and its current position in the input list. If the space required to store the input numbers is not counted, it has a space requirement of O(1); otherwise O(n) is required.
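A minimal sketch of the algorithm analyzed above (illustrative Python, not part of the original article): it reads each of the n numbers once, so its running time grows linearly with n, while it stores only the running total.

def sum_list(numbers):
    # O(n) time: each element is visited exactly once.
    # O(1) extra space: only the running total is kept in addition to the input.
    total = 0
    for x in numbers:
        total += x
    return total

print(sum_list([3, 1, 4, 1, 5, 9]))  # 23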
Different algorithms may complete the same task with a different set of instructions in less or more time, space, or 'effort' than others. For example, a binary search algorithm (with cost O(log n)) outperforms a sequential search (cost O(n)) when used for table lookups on sorted lists or arrays.
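The contrast can be made concrete with a short sketch (illustrative Python): sequential search inspects items one by one, while binary search repeatedly halves a sorted list, so the number of comparisons grows roughly with log n rather than n.

def sequential_search(items, target):
    # O(n): may inspect every element before finding the target.
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

def binary_search(sorted_items, target):
    # O(log n): halves the remaining range on every step; requires sorted input.
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = [2, 3, 5, 7, 11, 13, 17]
print(sequential_search(data, 11), binary_search(data, 11))  # 4 4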
Formal versus empirical
The analysis and study of algorithms is a discipline of computer science. Algorithms are often studied abstractly, without referencing any specific programming language or implementation. Algorithm analysis resembles other mathematical disciplines as it focuses on the algorithm's properties, not implementation. Pseudocode is typical for analysis as it is a simple and general representation. Most algorithms are implemented on particular hardware/software platforms and their algorithmic efficiency is tested using real code. The efficiency of a particular algorithm may be insignificant for many "one-off" problems but it may be critical for algorithms designed for fast interactive, commercial, or long-life scientific usage. Scaling from small n to large n frequently exposes inefficient algorithms that are otherwise benign.
Empirical testing is useful for uncovering unexpected interactions that affect performance. Benchmarks may be used to compare before/after potential improvements to an algorithm after program optimization.
Empirical tests cannot replace formal analysis, though, and are non-trivial to perform fairly.
Execution efficiency
To illustrate the potential improvements possible even in well-established algorithms, a recent significant innovation, relating to FFT algorithms (used heavily in the field of image processing), can decrease processing time up to 1,000 times for applications like medical imaging. In general, speed improvements depend on special properties of the problem, which are very common in practical applications.Haitham Hassanieh, Piotr Indyk, Dina Katabi, and Eric Price, ACM-SIAM Symposium on Discrete Algorithms (SODA), Kyoto, January 2012. See also the sFFT Web Page. Speedups of this magnitude enable computing devices that make extensive use of image processing (like digital cameras and medical equipment) to consume less power.
Best case and worst case
The best case of an algorithm refers to the scenario or input for which the algorithm or data structure takes the least time and resources to complete its tasks. The worst case of an algorithm is the case that causes the algorithm or data structure to consume the maximum period of time and computational resources.
Design
Algorithm design is a method or mathematical process for problem-solving and engineering algorithms. The design of algorithms is part of many solution theories, such as divide-and-conquer or dynamic programming within operations research. Techniques for designing and implementing algorithm designs are also called algorithm design patterns, with examples including the template method pattern and the decorator pattern. One of the most important aspects of algorithm design is resource (run-time, memory usage) efficiency; the big O notation is used to describe, for example, an algorithm's run-time growth as the size of its input increases.
Structured programming
Per the Church–Turing thesis, any algorithm can be computed by any Turing complete model. Turing completeness only requires four instruction types—conditional GOTO, unconditional GOTO, assignment, HALT. However, Kemeny and Kurtz observe that, while "undisciplined" use of unconditional GOTOs and conditional IF-THEN GOTOs can result in "spaghetti code", a programmer can write structured programs using only these instructions; on the other hand "it is also possible, and not too hard, to write badly structured programs in a structured language".John G. Kemeny and Thomas E. Kurtz 1985 Back to Basic: The History, Corruption, and Future of the Language, Addison-Wesley Publishing Company, Inc. Reading, MA, . Tausworthe augments the three Böhm-Jacopini canonical structures:Tausworthe 1977:101 SEQUENCE, IF-THEN-ELSE, and WHILE-DO, with two more: DO-WHILE and CASE.Tausworthe 1977:142 An additional benefit of a structured program is that it lends itself to proofs of correctness using mathematical induction.Knuth 1973 section 1.2.1, expanded by Tausworthe 1977 at pages 100ff and Chapter 9.1
Legal status
By themselves, algorithms are not usually patentable. In the United States, a claim consisting solely of simple manipulations of abstract concepts, numbers, or signals does not constitute "processes" (USPTO 2006), so algorithms are not patentable (as in Gottschalk v. Benson). However practical applications of algorithms are sometimes patentable. For example, in Diamond v. Diehr, the application of a simple feedback algorithm to aid in the curing of synthetic rubber was deemed patentable. The patenting of software is controversial, and there are criticized patents involving algorithms, especially data compression algorithms, such as Unisys's LZW patent. Additionally, some cryptographic algorithms have export restrictions (see export of cryptography).
Classification
By implementation
Recursion
A recursive algorithm invokes itself repeatedly until meeting a termination condition and is a common functional programming method. Iterative algorithms use repetitions such as loops or data structures like stacks to solve problems. Problems may be suited for one implementation or the other. The Tower of Hanoi is a puzzle commonly solved using recursive implementation. Every recursive version has an equivalent (but possibly more or less complex) iterative version, and vice versa.
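A minimal recursive sketch of the Tower of Hanoi mentioned above (illustrative Python, not part of the original article): moving n disks reduces to moving n − 1 disks aside, moving the largest disk, then moving the n − 1 disks back on top. An equivalent iterative version exists but is less direct.

def hanoi(n, source, target, spare, moves):
    # Recursive case: move n-1 disks aside, move the largest, then move them back.
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)
    moves.append((source, target))
    hanoi(n - 1, spare, target, source, moves)

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves), moves)  # 7 moves in total, i.e. 2**3 - 1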
Serial, parallel or distributed
Algorithms are usually discussed with the assumption that computers execute one instruction of an algorithm at a time on serial computers. Serial algorithms are designed for these environments, unlike parallel or distributed algorithms. Parallel algorithms take advantage of computer architectures where multiple processors can work on a problem at the same time. Distributed algorithms use multiple machines connected via a computer network. Parallel and distributed algorithms divide the problem into subproblems and collect the results back together. Resource consumption in these algorithms is not only processor cycles on each processor but also the communication overhead between the processors. Some sorting algorithms can be parallelized efficiently, but their communication overhead is expensive. Iterative algorithms are generally parallelizable, but some problems have no parallel algorithms and are called inherently serial problems.
Deterministic or non-deterministic
Deterministic algorithms solve the problem with exact decisions at every step; whereas non-deterministic algorithms solve problems via guessing. Guesses are typically made more accurate through the use of heuristics.
Exact or approximate
While many algorithms reach an exact solution, approximation algorithms seek an approximation that is close to the true solution. Such algorithms have practical value for many hard problems. An example is the knapsack problem, where there is a set of items and the goal is to pack the knapsack so as to get the maximum total value. Each item has some weight and some value, and the total weight that can be carried is no more than some fixed number X. So, the solution must consider the weights of items as well as their value.
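A minimal sketch of one common approach to the 0–1 knapsack problem (illustrative Python; a greedy heuristic by value-to-weight ratio, which is fast but not guaranteed to be optimal and is not the only approximation scheme for this problem):

def greedy_knapsack(items, capacity):
    # items: list of (value, weight) pairs; capacity: maximum total weight X.
    # Take items in order of decreasing value per unit weight while they still fit.
    chosen, total_value, total_weight = [], 0, 0
    for value, weight in sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True):
        if total_weight + weight <= capacity:
            chosen.append((value, weight))
            total_value += value
            total_weight += weight
    return total_value, chosen

items = [(60, 10), (100, 20), (120, 30)]
print(greedy_knapsack(items, 50))  # (160, [(60, 10), (100, 20)]); the true optimum here is 220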
Quantum algorithm
Quantum algorithms run on a realistic model of quantum computation. The term is usually used for those algorithms that seem inherently quantum or use some essential feature of quantum computing such as quantum superposition or quantum entanglement.
By design paradigm
Another way of classifying algorithms is by their design methodology or paradigm. Some common paradigms are:
Brute-force or exhaustive search
Brute force is a problem-solving method of systematically trying every possible option until the optimal solution is found. This approach can be very time-consuming, testing every possible combination of variables. It is often used when other methods are unavailable or too complex. Brute force can solve a variety of problems, including finding the shortest path between two points and cracking passwords.
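A minimal brute-force sketch (illustrative Python; the PIN-cracking scenario is a hypothetical example): try every possible combination until one matches. The running time grows exponentially with the length of the PIN, which is why brute force quickly becomes impractical.

from itertools import product

def brute_force_pin(is_correct, length=4, digits="0123456789"):
    # Systematically try every possible PIN of the given length.
    for attempt in product(digits, repeat=length):
        candidate = "".join(attempt)
        if is_correct(candidate):
            return candidate
    return None

secret = "2718"
print(brute_force_pin(lambda guess: guess == secret))  # "2718", after at most 10**4 tries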
Divide and conquer
A divide-and-conquer algorithm repeatedly reduces a problem to one or more smaller instances of itself (usually recursively) until the instances are small enough to solve easily. Merge sort is an example of divide and conquer, where an unordered list is repeatedly split into smaller lists, which are sorted in the same way and then merged. A simpler variant of divide and conquer is called prune and search or decrease and conquer; it solves one smaller instance of itself and does not require a merge step. An example of a prune and search algorithm is the binary search algorithm.
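A short merge sort sketch (illustrative Python) showing the divide-and-conquer pattern: split the list, sort each half the same way, then merge the two sorted halves.

def merge_sort(items):
    # Base case: a list of zero or one element is already sorted.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # divide and recurse on each half
    right = merge_sort(items[mid:])
    # Conquer: merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]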
Search and enumeration
Many problems (such as playing chess) can be modelled as problems on graphs. A graph exploration algorithm specifies rules for moving around a graph and is useful for such problems. This category also includes search algorithms, branch and bound enumeration, and backtracking.
Randomized algorithm
Such algorithms make some choices randomly (or pseudo-randomly). They find approximate solutions when finding exact solutions may be impractical (see heuristic method below). For some problems, the fastest approximations must involve some randomness.For instance, the volume of a convex polytope (described using a membership oracle) can be approximated to high accuracy by a randomized polynomial time algorithm, but not by a deterministic one. Whether randomized algorithms with polynomial time complexity can be the fastest algorithms for some problems is an open question known as the P versus NP problem. There are two large classes of such algorithms, illustrated by the sketch after this list:
Monte Carlo algorithms return a correct answer with high probability. E.g. RP is the subclass of these that run in polynomial time.
Las Vegas algorithms always return the correct answer, but their running time is only probabilistically bound, e.g. ZPP.
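A classic Monte Carlo example is Freivalds' algorithm (an illustrative Python sketch, not from the article): to check whether A·B = C for n-by-n matrices without performing the full matrix multiplication, multiply both sides by a random 0/1 vector and compare. A single round accepts a wrong C with probability at most 1/2, so repeating k rounds drives the error probability below 2 to the power of minus k.

import random

def freivalds(A, B, C, rounds=20):
    # Monte Carlo check of A @ B == C for square matrices given as lists of lists.
    n = len(A)
    def mat_vec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        # Compare A(Br) with Cr; each comparison costs only O(n^2) per round.
        if mat_vec(A, mat_vec(B, r)) != mat_vec(C, r):
            return False          # definitely not equal
    return True                   # equal with high probability

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]          # the true product of A and B
print(freivalds(A, B, C))         # True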
Reduction of complexity
This technique transforms difficult problems into better-known problems solvable with (hopefully) asymptotically optimal algorithms. The goal is to find a reducing algorithm whose complexity is not dominated by the resulting reduced algorithms. For example, one selection algorithm finds the median of an unsorted list by first sorting the list (the expensive portion), and then pulling out the middle element in the sorted list (the cheap portion). This technique is also known as transform and conquer.
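A minimal sketch of the selection-by-sorting reduction described above (illustrative Python): the expensive step is the O(n log n) sort; reading off the middle element afterwards is trivial. Specialized selection algorithms can find the median in linear time, but the reduction keeps the code simple.

def median_by_sorting(numbers):
    # Reduce "find the median" to the better-known problem "sort a list".
    ordered = sorted(numbers)          # expensive part: O(n log n)
    return ordered[len(ordered) // 2]  # cheap part: the middle element (upper median for even lengths)

print(median_by_sorting([7, 1, 5, 3, 9]))  # 5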
Backtracking
In this approach, multiple solutions are built incrementally and abandoned when it is determined that they cannot lead to a valid full solution.
Optimization problems
For optimization problems there is a more specific classification of algorithms; an algorithm for such problems may fall into one or more of the general categories described above as well as into one of the following:
Linear programming
When searching for optimal solutions to a linear function bound by linear equality and inequality constraints, the constraints can be used directly to produce optimal solutions. There are algorithms that can solve any problem in this category, such as the popular simplex algorithm.George B. Dantzig and Mukund N. Thapa. 2003. Linear Programming 2: Theory and Extensions. Springer-Verlag. Problems that can be solved with linear programming include the maximum flow problem for directed graphs. If a problem also requires that any of the unknowns be integers, then it is classified in integer programming. A linear programming algorithm can solve such a problem if it can be proved that all restrictions for integer values are superficial, i.e., the solutions satisfy these restrictions anyway. In the general case, a specialized algorithm or an algorithm that finds approximate solutions is used, depending on the difficulty of the problem.
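A small linear program solved with an off-the-shelf implementation (an illustrative sketch assuming the SciPy library is available; the particular objective and constraints are arbitrary choices): maximize x + 2y subject to x + y <= 4, x <= 3, and x, y >= 0. Because scipy.optimize.linprog minimizes, the objective is negated.

from scipy.optimize import linprog

# Maximize x + 2y  <=>  minimize -x - 2y
c = [-1, -2]
A_ub = [[1, 1],       # x + y <= 4
        [1, 0]]       # x     <= 3
b_ub = [4, 3]
result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(result.x, -result.fun)   # optimal point (x, y) = (0, 4) and maximized value 8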
Dynamic programming
When a problem shows optimal substructure—meaning the optimal solution can be constructed from optimal solutions to subproblems—and overlapping subproblems, meaning the same subproblems are used to solve many different problem instances, a quicker approach called dynamic programming avoids recomputing solutions. For example, in the Floyd–Warshall algorithm, the shortest path between a start and goal vertex in a weighted graph can be found using the shortest path to the goal from all adjacent vertices. Dynamic programming and memoization go together. Unlike divide and conquer, dynamic programming subproblems often overlap. The difference between dynamic programming and simple recursion is the caching or memoization of recursive calls. When subproblems are independent and do not repeat, memoization does not help; hence dynamic programming is not applicable to all complex problems. Using memoization, dynamic programming reduces the complexity of many problems from exponential to polynomial.
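A compact Floyd–Warshall sketch (illustrative Python): the table dist holds the best known distance between every pair of vertices, and each pass allows paths through one more intermediate vertex k, reusing previously computed subproblem results instead of recomputing them.

def floyd_warshall(weights):
    # weights: adjacency matrix with float('inf') where there is no edge.
    n = len(weights)
    dist = [row[:] for row in weights]        # copy so the input is not modified
    for k in range(n):                        # allow vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

INF = float("inf")
graph = [[0, 3, INF],
         [INF, 0, 1],
         [4, INF, 0]]
print(floyd_warshall(graph))  # the shortest path from vertex 0 to 2 becomes 3 + 1 = 4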
The greedy method
Greedy algorithms, similarly to dynamic programming, work by examining substructures, in this case not of the problem but of a given solution. Such algorithms start with some solution and improve it by making a series of locally optimal modifications. For some problems they always find the optimal solution, but for others they may stop at local optima. The most popular use of greedy algorithms is finding minimum spanning trees of weighted graphs; Kruskal's, Prim's, and Sollin's (Borůvka's) algorithms are greedy algorithms that solve this optimization problem, and Huffman coding is another well-known greedy algorithm.
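The following Python sketch (an illustration added here; the edge list is made up) implements Kruskal's greedy rule for a minimum spanning tree: repeatedly take the cheapest remaining edge that does not close a cycle, detected with a simple union–find structure.

def kruskal_mst(num_vertices, edges):
    # edges is a list of (weight, u, v) tuples; returns the chosen MST edges.
    parent = list(range(num_vertices))

    def find(v):                            # representative of v's component
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    mst = []
    for weight, u, v in sorted(edges):      # greedy: cheapest edges first
        root_u, root_v = find(u), find(v)
        if root_u != root_v:                # keep the edge only if it joins two components
            parent[root_u] = root_v
            mst.append((weight, u, v))
    return mst

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal_mst(4, edges))   # [(1, 0, 1), (2, 1, 3), (3, 1, 2)]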
The heuristic method
In optimization problems, heuristic algorithms find solutions close to the optimal solution when finding the optimal solution itself is impractical. Many of them improve their candidate solution as they progress, and some, such as simulated annealing with a suitable cooling schedule, will in principle find the optimal solution if run for long enough; in practice the goal is a solution very close to the optimum in a relatively short time. Such algorithms include local search, tabu search, simulated annealing, and genetic algorithms. Some, like simulated annealing, are non-deterministic, while others, like tabu search, are deterministic. When a bound on the error of the non-optimal solution is known, the algorithm is further categorized as an approximation algorithm.
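A toy simulated-annealing sketch in Python (added here for illustration; the objective function, parameters, and starting point are all made up): random moves that worsen the objective are occasionally accepted, with a probability that shrinks as the temperature falls, which lets the search escape shallow local optima.

import math, random

def simulated_annealing(f, x, temperature=10.0, cooling=0.95, steps=2000):
    # Heuristic minimization: worsening moves are sometimes accepted,
    # with probability depending on the current temperature.
    best_x, best_val = x, f(x)
    current_val = best_val
    for _ in range(steps):
        candidate = x + random.uniform(-1.0, 1.0)
        candidate_val = f(candidate)
        delta = candidate_val - current_val
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            x, current_val = candidate, candidate_val
            if current_val < best_val:
                best_x, best_val = x, current_val
        temperature *= cooling            # gradually become more conservative
    return best_x, best_val

# This objective has a shallow local minimum near x ~ 3.8 and a global minimum near x ~ -1.3.
print(simulated_annealing(lambda x: x * x + 10 * math.sin(x), x=4.0))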
Examples
One of the simplest algorithms finds the largest number in a list of numbers in arbitrary order. Finding the solution requires looking at every number in the list. From this follows a simple algorithm, which can be described in plain English as:
High-level description:
If a set of numbers is empty, then there is no highest number.
Assume the first number in the set is the largest.
For each remaining number in the set: if this number is greater than the current largest, it becomes the new largest.
When there are no unchecked numbers left in the set, consider the current largest number to be the largest in the set.
(Quasi-)formal description:
Written in prose but much closer to the high-level language of a computer program, the following is the more formal coding of the algorithm in pseudocode or pidgin code:
Input: A list of numbers L.
Output: The largest number in the list L.
if L.size = 0 return null
largest ← L[0]
for each item in L, do
if item > largest, then
largest ← item
return largest
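The same algorithm, rendered as runnable Python for comparison (the pseudocode above remains the reference version; the function name and test list are illustrative):

def largest_number(numbers):
    # Return the largest element of the list, or None if the list is empty.
    if len(numbers) == 0:
        return None
    largest = numbers[0]
    for item in numbers:
        if item > largest:
            largest = item
    return largest

print(largest_number([31, 41, 59, 26, 53]))  # 59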
See also
Abstract machine
ALGOL
Algorithm = Logic + Control
Algorithm aversion
Algorithm engineering
Algorithm characterizations
Algorithmic bias
Algorithmic composition
Algorithmic entities
Algorithmic synthesis
Algorithmic technique
Algorithmic topology
Computational mathematics
Garbage in, garbage out
Introduction to Algorithms (textbook)
Government by algorithm
List of algorithms
List of algorithm books
List of algorithm general topics
Medium is the message
Regulation of algorithms
Theory of computation
Computability theory
Computational complexity theory
Notes
Bibliography
Bell, C. Gordon and Newell, Allen (1971), Computer Structures: Readings and Examples, McGraw–Hill Book Company, New York. .
Includes a bibliography of 56 references.
: cf. Chapter 3 Turing machines where they discuss "certain enumerable sets not effectively (mechanically) enumerable".
Campagnolo, M.L., Moore, C., and Costa, J.F. (2000) An analog characterization of the subrecursive functions. In Proc. of the 4th Conference on Real Numbers and Computers, Odense University, pp. 91–109
Reprinted in The Undecidable, p. 89ff. The first expression of "Church's Thesis". See in particular page 100 (The Undecidable) where he defines the notion of "effective calculability" in terms of "an algorithm", and he uses the word "terminates", etc.
Reprinted in The Undecidable, p. 110ff. Church shows that the Entscheidungsproblem is unsolvable in about 3 pages of text and 3 pages of footnotes.
Davis gives commentary before each article. Papers of Gödel, Alonzo Church, Turing, Rosser, Kleene, and Emil Post are included; those cited in the article are listed here by author's name.
Davis offers concise biographies of Leibniz, Boole, Frege, Cantor, Hilbert, Gödel and Turing with von Neumann as the show-stealing villain. Very brief bios of Joseph-Marie Jacquard, Babbage, Ada Lovelace, Claude Shannon, Howard Aiken, etc.
Yuri Gurevich, Sequential Abstract State Machines Capture Sequential Algorithms, ACM Transactions on Computational Logic, Vol 1, no 1 (July 2000), pp. 77–111. Includes bibliography of 33 sources.
Cf. Chapter "The Spirit of Truth" for a history leading to, and a discussion of, his proof.
Presented to the American Mathematical Society, September 1935. Reprinted in The Undecidable, p. 237ff. Kleene's definition of "general recursion" (known now as mu-recursion) was used by Church in his 1935 paper An Unsolvable Problem of Elementary Number Theory that proved the "decision problem" to be "undecidable" (i.e., a negative result).
Reprinted in The Undecidable, p. 255ff. Kleene refined his definition of "general recursion" and proceeded in his chapter "12. Algorithmic theories" to posit "Thesis I" (p. 274); he would later repeat this thesis (in Kleene 1952:300) and name it "Church's Thesis"(Kleene 1952:317) (i.e., the Church thesis).
Kosovsky, N.K. Elements of Mathematical Logic and its Application to the theory of Subrecursive Algorithms, LSU Publ., Leningrad, 1981
A.A. Markov (1954) Theory of algorithms. [Translated by Jacques J. Schorr-Kon and PST staff] Imprint Moscow, Academy of Sciences of the USSR, 1954 [i.e., Jerusalem, Israel Program for Scientific Translations, 1961; available from the Office of Technical Services, U.S. Dept. of Commerce, Washington] Description 444 p. 28 cm. Added t.p. in Russian Translation of Works of the Mathematical Institute, Academy of Sciences of the USSR, v. 42. Original title: Teoriya algerifmov. [QA248.M2943 Dartmouth College library. U.S. Dept. of Commerce, Office of Technical Services, number OTS .]
Minsky expands his "...idea of an algorithm – an effective procedure..." in chapter 5.1 Computability, Effective Procedures and Algorithms. Infinite machines.
Reprinted in The Undecidable, pp. 289ff. Post defines a simple algorithmic-like process of a man writing marks or erasing marks and going from box to box and eventually halting, as he follows a list of simple instructions. This is cited by Kleene as one source of his "Thesis I", the so-called Church–Turing thesis.
Reprinted in The Undecidable, p. 223ff. Herein is Rosser's famous definition of "effective method": "...a method each step of which is precisely predetermined and which is certain to produce the answer in a finite number of steps... a machine which will then solve any problem of the set with no human intervention beyond inserting the question and (later) reading the answer" (p. 225–226, The Undecidable)
Cf. in particular the first chapter titled: Algorithms, Turing Machines, and Programs. His succinct informal definition: "...any sequence of instructions that can be obeyed by a robot, is called an algorithm" (p. 4).
. Corrections, ibid, vol. 43(1937) pp. 544–546. Reprinted in The Undecidable, p. 116ff. Turing's famous paper completed as a Master's dissertation while at King's College Cambridge UK.
Reprinted in The Undecidable, pp. 155ff. Turing's paper that defined "the oracle" was his PhD thesis while at Princeton.
United States Patent and Trademark Office (2006), 2106.02 **>Mathematical Algorithms: 2100 Patentability, Manual of Patent Examining Procedure (MPEP). Latest revision August 2006
Zaslavsky, C. (1970). Mathematics of the Yoruba People and of Their Neighbors in Southern Nigeria. The Two-Year College Mathematics Journal, 1(2), 76–99. https://doi.org/10.2307/3027363
NIST Releases First 3 Finalized Post-Quantum Encryption Standards. https://www.nist.gov/news-events/news/2024/08/nist-releases-first-3-finalized-post-quantum-encryption-standards
Further reading
Jon Kleinberg, Éva Tardos (2006): Algorithm Design, Pearson/Addison-Wesley, ISBN 978-0-32129535-4
Knuth, Donald E. (2000). Selected Papers on Analysis of Algorithms . Stanford, California: Center for the Study of Language and Information.
Knuth, Donald E. (2010). Selected Papers on Design of Algorithms . Stanford, California: Center for the Study of Language and Information.
External links
Dictionary of Algorithms and Data Structures – National Institute of Standards and Technology
Algorithm repositories
The Stony Brook Algorithm Repository – State University of New York at Stony Brook
Collected Algorithms of the ACM – Association for Computing Machinery
The Stanford GraphBase – Stanford University
Category:Articles with example pseudocode
Category:Mathematical logic
Category:Theoretical computer science
|
computer_science
| 5,020
|
842
|
Aegean Sea
|
https://en.wikipedia.org/wiki/Aegean_Sea
|
The Aegean Sea is an elongated embayment of the Mediterranean Sea between Europe and Asia. It is located between the Balkans and Anatolia, and covers an area of some . In the north, the Aegean is connected to the Marmara Sea, which in turn connects to the Black Sea, by the straits of the Dardanelles and the Bosphorus, respectively. The Aegean Islands are located within the sea and some bound it on its southern periphery, including Crete and Rhodes. The sea reaches a maximum depth of 2,639 m (8,658 ft) to the west of Karpathos. The Thracian Sea and the Sea of Crete are main subdivisions of the Aegean Sea.
The Aegean Islands can be divided into several island groups, including the Dodecanese, the Cyclades, the Sporades, the Saronic islands and the North Aegean Islands, as well as Crete and its surrounding islands. The Dodecanese, located to the southeast, includes the islands of Rhodes, Kos, and Patmos; the islands of Delos and Naxos are within the Cyclades to the south of the sea. Lesbos is part of the North Aegean Islands. Euboea, the second-largest island in Greece, is located in the Aegean, despite being administered as part of Central Greece. Nine out of twelve of the Administrative regions of Greece border the sea, along with the Turkish provinces of Edirne, Çanakkale, Balıkesir, İzmir, Aydın and Muğla to the east of the sea. Various Turkish islands in the sea are Imbros, Tenedos, Cunda Island, and the Foça Islands.
The Aegean Sea has been historically important, especially regarding the civilization of Ancient Greece, which inhabited the area around the coast of the Aegean and the Aegean islands. The Aegean islands facilitated contact between the people of the area and between Europe and Asia. Along with the Greeks, Thracians lived along the northern coasts. The Romans conquered the area under the Roman Empire, and later the Byzantine Empire held it against advances by the First Bulgarian Empire. The Fourth Crusade weakened Byzantine control of the area, and it was eventually conquered by the Ottoman Empire, with the exception of Crete, which was a Venetian colony until 1669. The Greek War of Independence allowed a Greek state on the coast of the Aegean from 1829 onwards. The Ottoman Empire held a presence over the sea for over 500 years until it was replaced by modern Turkey.
The rocks making up the floor of the Aegean are mainly limestone, though often greatly altered by volcanic activity that has convulsed the region in relatively recent geologic times. Of particular interest are the richly colored sediments in the region of the islands of Santorini and Milos, in the south Aegean. Notable cities on the Aegean coastline include Athens, Thessaloniki, Volos, Kavala, and Heraklion in Greece, and İzmir and Bodrum in Turkey.
Several issues concerning sovereignty within the Aegean Sea are disputed between Greece and Turkey. The Aegean dispute has had a large effect on Greece-Turkey relations since the 1970s. Issues include the delimitation of territorial waters, national airspace, exclusive economic zones, and flight information regions.
Name and etymology
The name Aegaeus, used by Late Latin authors, referred to Aegeus, who was said to have jumped into that sea to drown himself (rather than throw himself from the Athenian acropolis, as told by some Greek authors). He was the father of Theseus, the mythical king and founder-hero of Athens. Aegeus had told Theseus to put up white sails when returning if he was successful in killing the Minotaur. When Theseus returned, he forgot these instructions, and Aegeus thought his son had died, so he drowned himself in the sea.Hyginus, Fab. 43; Serv. Verg. A. 3.74; Scriptores rerum mythicarum Latini, ed. Bode, i. p. 117 (Second Vatican Mythographer 125).
The sea was known in Latin as Mare Aegaeum while under the control of the Roman Empire. The Venetians, who ruled many Greek islands in the High and Late Middle Ages, popularized the name Archipelago (meaning "main sea" or "chief sea"), a name that held on in many European countries until the early modern period. In some South Slavic languages, the Aegean is called the White Sea.Zbornik Matice srpske za društvene nauke: (1961), Volumes 28–31, p.74 The Turkish name for the sea is Ege Denizi, which is derived from the Greek name, and Adalar Denizi, meaning "Sea of Islands".
Geography
The Aegean Sea is an elongated embayment of the Mediterranean Sea and covers about in area, measuring about longitudinally and latitudinally. The sea's maximum depth is , located at a point west of Karpathos. The Aegean Islands are found within its waters, with the following islands delimiting the sea on the south, generally from west to east: Kythera, Antikythera, Crete, Kasos, Karpathos and Rhodes. The Anatolian peninsula marks the eastern boundary of the sea, while the Greek mainland marks the west. Several seas are contained within the Aegean Sea; the Thracian Sea is a section of the Aegean located to the north, the Icarian Sea to the east, the Myrtoan Sea to the west, while the Sea of Crete is the southern section.
The Greek regions that border the sea, in alphabetical order, are Attica, Central Greece, Central Macedonia, Crete, Eastern Macedonia and Thrace, North Aegean, Peloponnese, South Aegean, and Thessaly. The traditional Greek region of Macedonia also borders the sea, to the north.
The Aegean Islands, which almost all belong to Greece, can be divided into seven groups:
Northeastern Aegean Islands, which lie in the Thracian Sea
East Aegean Islands (Euboea)
Northern Sporades
Cyclades
Saronic Islands (or Argo-Saronic Islands)
Dodecanese (or Southern Sporades)Administratively, the Greek Dodecanese also contains Kastellorizo, situated further east outside the Aegean proper.
Crete
Many of the Aegean islands or island chains, are geographical extensions of the mountains on the mainland. One chain extends across the sea to Chios, another extends across Euboea to Samos, and a third extends across the Peloponnese and Crete to Rhodes, dividing the Aegean from the Mediterranean.
The bays and gulfs of the Aegean beginning at the South and moving clockwise include on Crete, the Mirabello, Almyros, Souda and Chania bays or gulfs, on the mainland the Myrtoan Sea to the west with the Argolic Gulf, the Saronic Gulf northwestward, the Petalies Gulf which connects with the South Euboic Sea, the Pagasetic Gulf which connects with the North Euboic Sea, the Thermian Gulf northwestward, the Chalkidiki Peninsula including the Cassandra and the Singitic Gulfs, northward the Strymonian Gulf and the Gulf of Kavala and the rest are in Turkey; Saros Gulf, Edremit Gulf, Dikili Gulf, Gulf of Çandarlı, Gulf of İzmir, Gulf of Kuşadası, Gulf of Gökova, Güllük Gulf.
The Aegean Sea is connected to the Sea of Marmara by the Dardanelles, also known from Classical Antiquity as the Hellespont. The Dardanelles are located to the northeast of the sea. It ultimately connects with the Black Sea through the Bosporus strait, upon which lies the city of Istanbul. The Dardanelles and the Bosporus are known as the Turkish Straits.
Extent
According to the International Hydrographic Organization, the limits of the Aegean Sea are as follows:
On the south: A line running from Cape Aspro (28°16′E) in Asia Minor, to Cum Burnù (Capo della Sabbia) the Northeast extreme of the Island of Rhodes, through the island to Cape Prasonisi, the Southwest point thereof, on to Vrontos Point (35°33′N) in Skarpanto [Karpathos], through this island to Castello Point, the South extreme thereof, across to Cape Plaka (East extremity of Crete), through Crete to Agria Grabusa, the Northwest extreme thereof, thence to Cape Apolytares in Antikythera Island, through the island to Psira Rock (off the Northwest point) and across to Cape Trakhili in Kythira Island, through Kythira to the Northwest point (Cape Karavugia) and thence to Cape Santa Maria () in the Morea.
In the Dardanelles: A line joining Kum Kale (26°11′E) and Cape Helles.
Hydrography
Aegean surface water circulates in a counterclockwise gyre, with hypersaline Mediterranean water moving northward along the west coast of Turkey, before being displaced by less dense Black Sea outflow. The dense Mediterranean water sinks below the Black Sea inflow to a depth of , then flows through the Dardanelles Strait and into the Sea of Marmara at velocities of . The Black Sea outflow moves westward along the northern Aegean Sea, then flows southwards along the east coast of Greece.
The physical oceanography of the Aegean Sea is controlled mainly by the regional climate, the fresh water discharge from major rivers draining southeastern Europe, and the seasonal variations in the Black Sea surface water outflow through the Dardanelles Strait.
Analysis of the Aegean during 1991 and 1992 revealed three distinct water masses:Yaşar, D., 1994. Late glacial-Holocene evolution of the Aegean Sea. Ph.D. Thesis, Inst. Mar. Sci. Technol., Dokuz Eylül Univ., 329 pp. (Unpubl.)
Aegean Sea Surface Water – thick veneer, with summer temperatures of 19–24 °C and winter temperatures ranging from in the north to in the very south.
Aegean Sea Intermediate Water – Aegean Sea Intermediate Water extends from to with temperatures ranging from .
Aegean Sea Bottom Water – occurring at depths below with a very uniform temperature () and salinity (3.91–3.92%).
Climate
The climate of the Aegean Sea largely reflects the climate of Greece and Western Turkey, which is to say, predominantly Mediterranean. According to the Köppen climate classification, most of the Aegean is classified as Hot-summer Mediterranean (Csa), with hotter and drier summers along with milder and wetter winters. However, high temperatures during summers are generally not quite as high as those in arid or semiarid climates due to the presence of a large body of water. This is most predominant in the west and east coasts of the Aegean, and within the Aegean islands. In the north of the Aegean Sea, the climate is instead classified as Cold semi-arid (BSk), which feature cooler summers than Hot-summer Mediterranean climates. The Etesian winds are a dominant weather influence in the Aegean Basin.
The below table lists climate conditions of some major Aegean cities:
Climate characteristics of some major cities on the Aegean coast:
City | January mean daily high °C (°F) | July mean daily high °C (°F) | January rainfall mm (in) / rain days | July rainfall mm (in) / rain days
Alexandroupolis | 8.4 (47.1) | 30.1 (86.2) | 60.4 (2.38) / 6.8 | 17.6 (0.69) / 2.5
Bodrum | 15.1 (59.2) | 34.2 (93.6) | 134.1 (5.28) / 12.3 | 1.3 (0.05) / 1.5
Heraklion | 15.2 (59.4) | 28.6 (83.5) | 91.5 (3.6) / 10.1 | 1.0 (0.04) / 0.1
İzmir | 12.4 (54.3) | 33.2 (91.8) | 132.7 (5.22) / 12.6 | 1.7 (0.07) / 0.4
Thessaloniki | 9.3 (48.7) | 32.5 (90.5) | 35.2 (1.39) / 8.8 | 27.3 (1.07) / 3.8
Source: World Meteorological Organization; Turkish State Meteorological Service, "Resmi İstatistikler: İllerimize Ait Genel İstatistik Verileri" (in Turkish), retrieved 4 May 2019.
Population
Numerous Greek and Turkish settlements are located along their mainland coasts, as well as in towns on the Aegean islands. The largest cities are Athens and Thessaloniki in Greece and İzmir in Turkey. The most populated of the Aegean islands is Crete, followed by Euboea and Rhodes.
Most populous urban areas on the Aegean coast:
Rank | City | Country | Region/County | Population (urban)
1 | Athens | Greece | Central Greece | 3,090,508
2 | İzmir | Turkey | İzmir Province | 2,948,609
3 | Thessaloniki | Greece | Macedonia | 824,676
4 | Bodrum | Turkey | Muğla Province | 198,335
5 | Çanakkale | Turkey | Çanakkale Province | 182,389
6 | Heraklion | Greece | Crete | 173,993
7 | Volos | Greece | Thessaly | 144,449
8 | Kuşadası | Turkey | Aydın Province | 133,177
9 | Chania | Greece | Crete | 108,642
10 | Didim | Turkey | Aydın Province | 100,189
Biogeography and ecology
Protected areas
Greece has established several marine protected areas along its coasts. According to the Network of Managers of Marine Protected Areas in the Mediterranean (MedPAN), four Greek MPAs participate in the Network. These include the Alonnisos Marine Park in the Aegean, while others, such as the Missolonghi–Aitoliko Lagoons and the island of Zakynthos, are not on the Aegean.
History
Ancient history
Image: Female figure from Naxos (2800–2300 BC)
The current coastline dates back to about 4000 BC. Before that time, at the peak of the last ice age (about 18,000 years ago) sea levels everywhere were lower, and there were large well-watered coastal plains instead of much of the northern Aegean. When they were first occupied, the present-day islands including Milos with its important obsidian production were probably still connected to the mainland. The present coastal arrangement appeared around 9,000 years ago, with post-ice age sea levels continuing to rise for another 3,000 years after that.
The subsequent Bronze Age civilizations of Greece and the Aegean Sea have given rise to the general term Aegean civilization. In ancient times, the sea was the birthplace of two ancient civilizations – the Minoans of Crete and the Mycenaeans of the Peloponnese.Tracey Cullen, Aegean Prehistory: A Review (American Journal of Archaeology. Supplement, 1); Oliver Dickinson, The Aegean Bronze Age (Cambridge World Archaeology).
The Minoan civilization was a Bronze Age civilization on the island of Crete and other Aegean islands, flourishing from around 3000 to 1450 BC before a period of decline, finally ending at around 1100 BC. It represented the first advanced civilization in Europe, leaving behind massive building complexes, tools, stunning artwork, writing systems, and a massive network of trade. The Minoan period saw extensive trade between Crete, Aegean, and Mediterranean settlements, particularly the Near East. The most notable Minoan palace is that of Knossos, followed by that of Phaistos. The Mycenaean Greeks arose on the mainland, becoming the first advanced civilization in mainland Greece, which lasted from approximately 1600 to 1100 BC. It is believed that the site of Mycenae, which sits close to the Aegean coast, was the center of Mycenaean civilization. The Mycenaeans introduced several innovations in the fields of engineering, architecture and military infrastructure, while trade over vast areas of the Mediterranean, including the Aegean, was essential for the Mycenaean economy. Their syllabic script, the Linear B, offers the first written records of the Greek language and their religion already included several deities that can also be found in the Olympic Pantheon. Mycenaean Greece was dominated by a warrior elite society and consisted of a network of palace-centered states that developed rigid hierarchical, political, social and economic systems. At the head of this society was the king, known as wanax.
The civilization of the Mycenaean Greeks perished with the collapse of Bronze Age culture in the eastern Mediterranean, to be followed by the so-called Greek Dark Ages. It is undetermined what caused the collapse of the Mycenaeans. During the Greek Dark Ages, writing in the Linear B script ceased, vital trade links were lost, and towns and villages were abandoned.
Ancient Greece
Image: A fleet of Athenian triremes
Image: The Library of Celsus, a Roman structure in the important seaport of Ephesus
The Archaic period followed the Greek Dark Ages in the 8th century BC. Greece became divided into small self-governing communities, and adopted the Phoenician alphabet, modifying it to create the Greek alphabet. By the 6th century BC several cities had emerged as dominant in Greek affairs: Athens, Sparta, Corinth, and Thebes, of which Athens, Sparta, and Corinth were closest to the Aegean Sea. Each of them had brought the surrounding rural areas and smaller towns under their control, and Athens and Corinth had become major maritime and mercantile powers as well. In the 8th and 7th centuries BC many Greeks migrated to form colonies in Magna Graecia (Southern Italy and Sicily), Asia Minor and further afield. The Aegean Sea was the setting for one of the most pivotal naval engagements in history, when, on 20 September 480 BC, the Athenian fleet gained a decisive victory over the Persian fleet of Xerxes I of Persia at the Battle of Salamis, ending any further attempt at westward expansion by the Achaemenid Empire.
The Aegean Sea would later come to be under the control, albeit briefly, of the Kingdom of Macedonia. Philip II and his son Alexander the Great led a series of conquests that led not only to the unification of the Greek mainland and the control of the Aegean Sea under his rule, but also the destruction of the Achaemenid Empire. After Alexander the Great's death, his empire was divided among his generals. Cassander became king of the Hellenistic kingdom of Macedon, which held territory along the western coast of the Aegean, roughly corresponding to modern-day Greece. The Kingdom of Lysimachus had control over the sea's eastern coast. Greece had entered the Hellenistic period.
Roman rule
The Macedonian Wars were a series of conflicts fought by the Roman Republic and its Greek allies in the eastern Mediterranean against several different major Greek kingdoms. They resulted in Roman control or influence over the eastern Mediterranean basin, including the Aegean, in addition to their hegemony in the western Mediterranean after the Punic Wars. During Roman rule, the land around the Aegean Sea fell under the provinces of Achaea, Macedonia, Thracia, Asia and Creta et Cyrenaica (island of Crete).
Medieval period
Image: The Emirate of Crete, after the early Arab conquest
The fall of the Western Roman Empire allowed its successor state, the Byzantine Empire, to continue Roman control over the Aegean Sea. However, their territory would later be threatened by the early Muslim conquests initiated by Muhammad in the 7th century. Although the Rashidun Caliphate did not manage to obtain land along the coast of the Aegean Sea, its conquest of the Eastern Anatolian peninsula as well as Egypt, the Levant, and North Africa left the Byzantine Empire weakened. The Umayyad Caliphate expanded the territorial gains of the Rashidun Caliphate, conquering much of North Africa, and threatened the Byzantine Empire's control of Western Anatolia, where it meets the Aegean Sea.
During the 820s, Crete was conquered by a group of Andalusian exiles led by Abu Hafs Umar al-Iqritishi, and it became an independent Islamic state. The Byzantine Empire launched a campaign that took most of the island back in 842 and 843 under Theoktistos, but the re-conquest was not completed and was soon reversed. Later attempts by the Byzantine Empire to recover the island were without success. For the approximately 135 years of its existence, the Emirate of Crete was one of the major foes of Byzantium. Crete commanded the sea lanes of the Eastern Mediterranean and functioned as a forward base and haven for Muslim corsair fleets that ravaged the Byzantine-controlled shores of the Aegean Sea. Crete returned to Byzantine rule under Nikephoros II Phokas, who launched a huge campaign against the Emirate of Crete in 960–961.
Meanwhile, the Bulgarian Empire threatened Byzantine control of Northern Greece and the Aegean coast to the south. Under Presian and his successor Boris I, the Bulgarian Empire managed to obtain a small portion of the northern Aegean coast. Simeon I of Bulgaria led Bulgaria to its greatest territorial expansion, and managed to conquer much of the northern and western coasts of the Aegean. The Byzantines later regained control. The Second Bulgarian Empire achieved similar success along the northern and western coasts under Ivan Asen II of Bulgaria.
Image: A 1528 map of the Aegean Sea by Turkish geographer Piri Reis
The Seljuk Turks, under the Seljuk Empire, invaded the Byzantine Empire in 1068, from which they annexed almost all the territories of Anatolia, including the east coast of the Aegean Sea, during the reign of Alp Arslan, the second Sultan of the Seljuk Empire. After the death of his successor, Malik Shah I, the empire was divided, and Malik Shah was succeeded in Anatolia by Kilij Arslan I, who founded the Sultanate of Rum. The Byzantines yet again recaptured the eastern coast of the Aegean.
After Constantinople was occupied by Western European and Venetian forces during the Fourth Crusade, the area around the Aegean Sea was fragmented into multiple entities, including the Latin Empire, the Kingdom of Thessalonica, the Empire of Nicaea, the Principality of Achaea, and the Duchy of Athens. The Venetians created the maritime state of the Duchy of the Archipelago, which included all the Cyclades except Mykonos and Tinos. The Empire of Nicaea, a Byzantine rump state, managed to effect the recapture of Constantinople from the Latins in 1261 and defeat Epirus. Byzantine successes were not to last; the Ottomans would conquer the area around the Aegean coast, but before their expansion the Byzantine Empire had already been weakened by internal conflict. By the late 14th century, the Byzantine Empire had lost all control of the coast of the Aegean Sea and could exercise power only around its capital, Constantinople. The Ottoman Empire then gained control of all the Aegean coast with the exception of Crete, which was a Venetian colony until 1669.
Modern period
Image: German tanks on Rhodes during World War II
The Greek War of Independence allowed a Greek state on the coast of the Aegean from 1829 onward. The Ottoman Empire held a presence over the sea for over 500 years until its dissolution following World War I, when it was replaced by modern Turkey. During the war, Greece gained control over the area around the northern coast of the Aegean. By the 1930s, Greece and Turkey had roughly assumed their present-day borders.
In the Italo-Turkish War of 1911–1912, Italy captured the Dodecanese islands and occupied them thereafter, reneging on the 1919 Venizelos–Tittoni agreement to cede them to Greece. The Greco-Italian War took place from October 1940 to April 1941 as part of the Balkans Campaign of World War II. The Italian war aim was to establish a Greek puppet state, which would permit the Italian annexation of the Sporades and Cyclades islands in the Aegean Sea, to be administered as a part of the Italian Aegean Islands. The German invasion resulted in the Axis occupation of Greece. The German troops evacuated Athens on 12 October 1944, and by the end of the month, they had withdrawn from mainland Greece. Greece was then liberated by Allied troops.
Economy and politics
Many of the islands in the Aegean have safe harbours and bays. In ancient times, navigation through the sea was easier than travelling across the rough terrain of the Greek mainland, and to some extent, the coastal areas of Anatolia. Many of the islands are volcanic, and marble and iron are mined on other islands. The larger islands have some fertile valleys and plains.
Of the main islands in the Aegean Sea, two belong to Turkey – Bozcaada (Tenedos) and Gökçeada (Imbros); the rest belong to Greece. Between the two countries, there are political disputes over several aspects of political control over the Aegean space, including the size of territorial waters, air control and the delimitation of economic rights to the continental shelf. These issues are known as the Aegean dispute.
Transport
Multiple ports are located along the Greek and Turkish coasts of the Aegean Sea. The port of Piraeus in Athens is the chief port in Greece, the largest passenger port in Europe"Presentation". http://www.olp.gr. Archived from the original on 20 December 2008. Retrieved 27 December 2008. and the third largest in the world,"ANEK Lines – Piraeus". http://www.anek.gr. Archived from the original on 3 December 2008. Retrieved 27 December 2008. servicing about 20 million passengers annually. With a throughput of 1.4 million TEUs, Piraeus is placed among the top ten ports in container traffic in Europe and the top container port in the Eastern Mediterranean."Container terminal". http://www.olp.gr . Archived from the original on 20 December 2008. Retrieved 27 December 2008. Piraeus is also the commercial hub of Greek shipping. Piraeus bi-annually acts as the focus for a major shipping convention, known as Posidonia, which attracts maritime industry professionals from all over the world. Piraeus is currently Greece's third-busiest port in terms of tons of goods transported, behind Agioi Theodoroi and Thessaloniki. The central port serves ferry routes to almost every island in the eastern portion of Greece, the island of Crete, the Cyclades, the Dodecanese, and much of the northern and the eastern Aegean Sea, while the western part of the port is used for cargo services.
As of 2007, the Port of Thessaloniki was the second-largest container port in Greece after the port of Piraeus, making it one of the country's busiest ports. In 2007 it handled 14,373,245 tonnes of cargo and 222,824 TEUs. Paloukia, on the island of Salamis, is a major passenger port.
Fishing
Fish are Greece's second-largest agricultural export, and Greece has Europe's largest fishing fleet. Fish captured include sardines, mackerel, grouper, grey mullets, sea bass, and seabream. There is a considerable difference between fish catches between the pelagic and demersal zones; with respect to pelagic fisheries, the catches from the northern, central and southern Aegean area groupings are dominated, respectively, by anchovy, horse mackerels, and boops. For demersal fisheries, the catches from the northern and southern Aegean area groupings are dominated by grey mullets and pickerel (Spicara smaris) respectively.
The industry was impacted by the Great Recession. Overfishing and habitat destruction are also concerns, threatening grouper and seabream populations and resulting in perhaps a 50% decline of the fish catch. To address these concerns, Greek fishermen have been offered compensation by the government. Although some species are defined as protected or threatened under EU legislation, several protected species, such as the molluscs Pinna nobilis, Charonia tritonis and Lithophaga lithophaga, are nonetheless sold illegally in restaurants and fish markets around Greece.
Tourism
The islands of the Aegean Sea are significant tourist destinations. Tourism to the Aegean islands contributes a significant portion of tourism in Greece, especially since the second half of the 20th century. A total of five UNESCO World Heritage sites are located in the Aegean Islands; these include the Monastery of Saint John the Theologian and the Cave of the Apocalypse on Patmos,Centre, UNESCO World Heritage. "The Historic Centre (Chorá) with the Monastery of Saint-John the Theologian and the Cave of the Apocalypse on the Island of Pátmos". whc.unesco.org. Retrieved 8 September 2016. the Pythagoreion and Heraion of Samos in Samos, the Nea Moni of Chios,"Monasteries of Daphni, Hosios Loukas and Nea Moni of Chios". UNESCO. Retrieved 30 September 2012. the island of Delos,Centre, UNESCO World Heritage. "Delos". whc.unesco.org. Retrieved 7 September 2016. and the Medieval City of Rhodes.Centre, UNESCO World Heritage. "Medieval City of Rhodes". whc.unesco.org. Retrieved 7 September 2016.
Greece is one of the most visited countries in Europe and the world with over 33 million visitors in 2018,"Tourism Ministry statistics impress". Retrieved 30 January 2019. and the tourism industry accounts for around a quarter of Greece's Gross Domestic Product. The islands of Santorini, Crete, Lesbos, Delos, and Mykonos are common tourist destinations. An estimated 2 million tourists visit Santorini annually. However, concerns relating to overtourism have arisen in recent years, such as issues of inadequate infrastructure and overcrowding. Alongside Greece, Turkey has also been successful in developing resort areas and attracting large numbers of tourists, contributing to tourism in Turkey. The phrase "Blue Cruise" refers to recreational voyages along the Turkish Riviera, including across the Aegean. The ancient city of Troy, a World Heritage Site, is on the Turkish coast of the Aegean.
Greece and Turkey both take part in the Blue Flag beach certification programme of the Foundation for Environmental Education. The certification is awarded for beaches and marinas meeting strict quality standards including environmental protection, water quality, safety and services criteria. As of 2015, the Blue Flag has been awarded to 395 beaches and 9 marinas in Greece. On the Turkish side of the southern Aegean, the province of Muğla has 102 Blue Flag beaches, while İzmir and Aydın have 49 and 30 awarded beaches respectively.
See also
Exclusive economic zone of Greece
Geography of Turkey
List of Greek place names
Aegean Boat Report
Notes
References
External links
Category:Seas of Greece
Category:Seas of Turkey
Category:Marginal seas of the Mediterranean
Category:European seas
Category:Seas of Asia
Category:Geography of West Asia
Category:Landforms of Çanakkale Province
Category:Landforms of Muğla Province
Category:Landforms of İzmir Province
Category:Landforms of Balıkesir Province
Category:Landforms of Edirne Province
Category:Landforms of Aydın Province
|
geography
| 4,488
|
874
|
Ancient Egypt
|
https://en.wikipedia.org/wiki/Ancient_Egypt
|
Ancient Egypt was a cradle of civilization concentrated along the lower reaches of the Nile River in Northeast Africa. It emerged from prehistoric Egypt around 3150BC (according to conventional Egyptian chronology), when Upper and Lower Egypt were amalgamated by Menes, who is believed by the majority of Egyptologists to have been the same person as Narmer. The history of ancient Egypt unfolded as a series of stable kingdoms interspersed by the "Intermediate Periods" of relative instability. These stable kingdoms existed in one of three periods: the Old Kingdom of the Early Bronze Age; the Middle Kingdom of the Middle Bronze Age; or the New Kingdom of the Late Bronze Age.
The pinnacle of ancient Egyptian power was achieved during the New Kingdom, which extended its rule to much of Nubia and a considerable portion of the Levant. After this period, Egypt entered an era of slow decline. Over the course of its history, it was invaded or conquered by a number of foreign civilizations, including the Hyksos, the Kushites, the Assyrians, the Persians, and the Greeks and then the Romans. The end of ancient Egypt is variously defined as occurring with the end of the Late Period during the Wars of Alexander the Great in 332 BC or with the end of the Greek-ruled Ptolemaic Kingdom during the Roman conquest of Egypt in 30 BC. In AD 642, the Arab conquest of Egypt brought an end to the region's millennium-long Greco-Roman period.
The success of ancient Egyptian civilization came partly from its ability to adapt to the Nile's conditions for agriculture. The predictable flooding of the Nile and controlled irrigation of its fertile valley produced surplus crops, which supported a more dense population, and thereby substantial social and cultural development. With resources to spare, the administration sponsored the mineral exploitation of the valley and its surrounding desert regions, the early development of an independent writing system, the organization of collective construction and agricultural projects, trade with other civilizations, and a military to assert Egyptian dominance throughout the Near East. Motivating and organizing these activities was a bureaucracy of elite scribes, religious leaders, and administrators under the control of the reigning pharaoh, who ensured the cooperation and unity of the Egyptian people in the context of an elaborate system of religious beliefs.
Among the many achievements of ancient Egypt are: the quarrying, surveying, and construction techniques that supported the building of monumental pyramids, temples, and obelisks; a system of mathematics; a practical and effective system of medicine; irrigation systems and agricultural production techniques; the first known planked boats; Egyptian faience and glass technology; new forms of literature; and the earliest known peace treaty, which was ratified with the Anatolia-based Hittite Empire. Its art and architecture were widely copied and its antiquities were carried off to be studied, admired, or coveted in the far corners of the world. Likewise, its monumental ruins inspired the imaginations of travelers and writers for millennia. A newfound European and Egyptian respect for antiquities and excavations that began in earnest in the early modern period has led to much scientific investigation of ancient Egypt and its society, as well as a greater appreciation of its cultural legacy.
History
The Nile has been the lifeline of its region for much of human history. The fertile floodplain of the Nile gave humans the opportunity to develop a settled agricultural economy and a more sophisticated, centralized society that became a cornerstone in the history of human civilization.
Predynastic period
In Predynastic and Early Dynastic times, the Egyptian climate was much less arid than it is today. Large regions of Egypt were savanna and traversed by herds of grazing ungulates. Foliage and fauna were far more prolific in all environs, and the Nile region supported large populations of waterfowl. Hunting would have been common for Egyptians, and this is also the period when many animals were first domesticated.
By about 5500 BC, small tribes living in the Nile valley had developed into a series of cultures demonstrating firm control of agriculture and animal husbandry, and identifiable by their pottery and personal items, such as combs, bracelets, and beads. The largest of these early cultures in upper (Southern) Egypt was the Badarian culture, which probably originated in the Western Desert; it was known for its high-quality ceramics, stone tools, and its use of copper.
The Badari was followed by the Naqada culture: the Naqada I (Amratian), the Naqada II (Gerzeh), and Naqada III (Semainean). These brought a number of technological improvements. As early as the Naqada I Period, predynastic Egyptians imported obsidian from Ethiopia, used to shape blades and other objects from flakes. Mutual trade with the Levant was established during Naqada II (); this period was also the beginning of trade with Mesopotamia, which continued into the early dynastic period and beyond. Over a period of about 1,000 years, the Naqada culture developed from a few small farming communities into a powerful civilization whose leaders were in complete control of the people and resources of the Nile valley. Establishing a power center at Nekhen, and later at Abydos, Naqada III leaders expanded their control of Egypt northwards along the Nile. They also traded with Nubia to the south, the oases of the western desert to the west, and the cultures of the eastern Mediterranean and Near East to the east.
The Naqada culture manufactured a diverse selection of material goods, reflective of the increasing power and wealth of the elite, as well as societal personal-use items, which included combs, small statuary, painted pottery, high quality decorative stone vases, cosmetic palettes, and jewelry made of gold, lapis, and ivory. They also developed a ceramic glaze known as faience, which was used well into the Roman Period to decorate cups, amulets, and figurines. During the last predynastic phase, the Naqada culture began using written symbols that eventually were developed into a full system of hieroglyphs for writing the ancient Egyptian language.
Early Dynastic Period ( BC)
The Early Dynastic Period was approximately contemporary to the early Sumerian-Akkadian civilization of Mesopotamia and of ancient Elam. The third-centuryBC Egyptian priest Manetho grouped the long line of kings from Menes to his own time into 30 dynasties, a system still used today. He began his official history with the king named "Meni" (or Menes in Greek), who was believed to have united the two kingdoms of Upper and Lower Egypt.
The transition to a unified state happened more gradually than ancient Egyptian writers represented, and there is no contemporary record of Menes. Some scholars now believe, however, that the mythical Menes may have been the king Narmer, who is depicted wearing royal regalia on the ceremonial Narmer Palette, in a symbolic act of unification. In the Early Dynastic Period, which began about 3000BC, the first of the Dynastic kings solidified control over Lower Egypt by establishing a capital at Memphis, from which he could control the labor force and agriculture of the fertile delta region, as well as the lucrative and critical trade routes to the Levant. The increasing power and wealth of the kings during the early dynastic period was reflected in their elaborate mastaba tombs and mortuary cult structures at Abydos, which were used to celebrate the deified king after his death. The strong institution of kingship developed by the kings served to legitimize state control over the land, labor, and resources that were essential to the survival and growth of ancient Egyptian civilization.
Old Kingdom (2686–2181 BC)
Major advances in architecture, art, and technology were made during the Old Kingdom, fueled by the increased agricultural productivity and resulting population growth, made possible by a well-developed central administration. Some of ancient Egypt's crowning achievements, the Giza pyramids and Great Sphinx, were constructed during the Old Kingdom. Under the direction of the vizier, state officials collected taxes, coordinated irrigation projects to improve crop yield, and drafted peasants to work on construction projects.
With the rise of central administration in Egypt, a new class of educated scribes and officials emerged and were granted estates by the king as payment for their services. Kings also made land grants to their mortuary cults and local temples, to ensure that these institutions had the resources to worship the king after his death. Scholars believe that five centuries of these practices slowly eroded the economic vitality of Egypt, and that the economy could no longer afford to support a large centralized administration. As the power of the kings diminished, regional governors called nomarchs began to challenge the supremacy of the office of king. This, coupled with severe droughts between 2200 and 2150BC, is believed to have caused the country to enter the 140-year period of famine and strife known as the First Intermediate Period.
First Intermediate Period (2181–2055 BC)
After Egypt's central government collapsed at the end of the Old Kingdom, the administration could no longer support or stabilize the country's economy. The ensuing food shortages and political disputes escalated into famines and small-scale civil wars. Yet despite difficult problems, local leaders, owing no tribute to the king, used their new-found independence to establish a thriving culture in the provinces. Once in control of their own resources, the provinces became economically richer—which was demonstrated by larger and better burials among all social classes.
Free from their loyalties to the king, local rulers began competing with each other for territorial control and political power. By 2160BC, rulers in Herakleopolis controlled Lower Egypt in the north, while a rival clan based in Thebes, the Intef family, took control of Upper Egypt in the south. As the Intefs grew in power and expanded their control northward, a clash between the two rival dynasties became inevitable. Around 2055BC the northern Theban forces under Nebhepetre Mentuhotep II finally defeated the Herakleopolitan rulers, reuniting the Two Lands. They inaugurated a period of economic and cultural renaissance known as the Middle Kingdom.
Middle Kingdom (2134–1690 BC)
The kings of the Middle Kingdom restored the country's stability, which saw a resurgence of art and monumental building projects, and a new flourishing of literature. Mentuhotep II and his Eleventh Dynasty successors ruled from Thebes, but the vizier Amenemhat I, upon assuming the kingship at the beginning of the Twelfth Dynasty around 1985BC, shifted the kingdom's capital to the city of Itjtawy, located in Faiyum. From Itjtawy, the kings of the Twelfth Dynasty undertook a far-sighted land reclamation and irrigation scheme to increase agricultural output in the region. Moreover, the military reconquered territory in Nubia that was rich in quarries and gold mines, while laborers built a defensive structure in the Eastern Delta, called the "Walls of the Ruler", to defend against foreign attack.
With the kings having secured the country militarily and politically and with vast agricultural and mineral wealth at their disposal, the nation's population, arts, and religion flourished. The Middle Kingdom displayed an increase in expressions of personal piety toward the gods. Middle Kingdom literature featured sophisticated themes and characters written in a confident, eloquent style. The relief and portrait sculpture of the period captured subtle, individual details that reached new heights of technical sophistication.
Second Intermediate Period (1674–1549 BC) and the Hyksos
Around 1785BC, as the power of the Middle Kingdom kings weakened, a Western Asian people called the Hyksos, who had already settled in the Delta, seized control of Egypt and established their capital at Avaris, forcing the former central government to retreat to Thebes. The king was treated as a vassal and expected to pay tribute. The Hyksos ('foreign rulers') retained Egyptian models of government and identified as kings, thereby integrating Egyptian elements into their culture.
After retreating south, the native Theban kings found themselves trapped between the Canaanite Hyksos ruling the north and the Hyksos' Nubian allies, the Kushites, to the south. After years of vassalage, Thebes gathered enough strength to challenge the Hyksos in a conflict that lasted more than 30 years, until 1555BC. Ahmose I waged a series of campaigns that permanently eradicated the Hyksos' presence in Egypt. He is considered the founder of the Eighteenth Dynasty, and the military became a central priority for his successors, who sought to expand Egypt's borders and attempted to gain mastery of the Near East.
New Kingdom (1549–1069 BC)
The New Kingdom pharaohs established a period of unprecedented prosperity by securing their borders and strengthening diplomatic ties with their neighbours, including the Mitanni Empire, Assyria, and Canaan. Military campaigns waged under Tuthmosis I and his grandson Tuthmosis III extended the influence of the pharaohs to the largest empire Egypt had ever seen.
Between their reigns, Hatshepsut, a queen who established herself as pharaoh, launched many building projects, including the restoration of temples damaged by the Hyksos, and sent trading expeditions to Punt and the Sinai. When Tuthmosis III died in 1425BC, Egypt had an empire extending from Niya in north west Syria to the Fourth Cataract of the Nile in Nubia, cementing loyalties and opening access to critical imports such as bronze and wood.
The New Kingdom pharaohs began a large-scale building campaign to promote the god Amun, whose growing cult was based in Karnak. They also constructed monuments to glorify their own achievements, both real and imagined. The Karnak temple is the largest Egyptian temple ever built.
Around 1350BC, the stability of the New Kingdom was threatened when Amenhotep IV ascended the throne and instituted a series of radical and chaotic reforms. Changing his name to Akhenaten, he touted the previously obscure sun deity Aten as the supreme deity, suppressed the worship of most other deities, and moved the capital to the new city of Akhetaten (modern-day Amarna). He was devoted to his new religion and artistic style. After his death, the cult of the Aten was quickly abandoned and the traditional religious order restored. The subsequent pharaohs, Tutankhamun, Ay, and Horemheb, worked to erase all mention of Akhenaten's heresy, now known as the Amarna Period.
Around 1279BC, Ramesses II, also known as Ramesses the Great, ascended the throne, and went on to build more temples, erect more statues and obelisks, and sire more children than any other pharaoh in history. A bold military leader, Ramesses II led his army against the Hittites in the Battle of Kadesh (in modern Syria) and, after fighting to a stalemate, finally agreed to the first recorded peace treaty, around 1258BC.
Egypt's wealth, however, made it a tempting target for invasion, particularly by the Libyan Berbers to the west, and the Sea Peoples, a conjectured confederation of seafarers from the Aegean Sea. Initially, the military was able to repel these invasions, but Egypt eventually lost control of its remaining territories in southern Canaan, much of it falling to the Assyrians. The effects of external threats were exacerbated by internal problems such as corruption, tomb robbery, and civil unrest. After regaining their power, the high priests at the temple of Amun in Thebes accumulated vast tracts of land and wealth, and their expanded power splintered the country during the Third Intermediate Period.
Third Intermediate Period (1069–653 BC)
Following the death of Ramesses XI in 1078BC, Smendes assumed authority over the northern part of Egypt, ruling from the city of Tanis. The south was effectively controlled by the High Priests of Amun at Thebes, who recognized Smendes in name only. During this time, Libyans had been settling in the western delta, and chieftains of these settlers began increasing their autonomy. Libyan princes took control of the delta under Shoshenq I in 945BC, founding the so-called Libyan or Bubastite dynasty that would rule for some 200 years. Shoshenq also gained control of southern Egypt by placing his family members in important priestly positions. Libyan control began to erode as a rival dynasty in the delta arose in Leontopolis, and Kushites threatened from the south.
Around 727BC the Kushite king Piye invaded northward, seizing control of Thebes and eventually the Delta, which established the 25th Dynasty. During the 25th Dynasty, Pharaoh Taharqa created an empire nearly as large as the New Kingdom's. Twenty-fifth Dynasty pharaohs built, or restored, temples and monuments throughout the Nile valley, including at Memphis, Karnak, Kawa, and Jebel Barkal. During this period, the Nile valley saw the first widespread construction of pyramids (many in modern Sudan) since the Middle Kingdom.
Egypt's far-reaching prestige declined considerably toward the end of the Third Intermediate Period. Its foreign allies had fallen into the Assyrian sphere of influence, and by 700BC war between the two states became inevitable. Between 671 and 667BC the Assyrians began the Assyrian conquest of Egypt. The reigns of both Taharqa and his successor, Tanutamun, were filled with frequent conflict with the Assyrians. Ultimately, the Assyrians pushed the Kushites back into Nubia, occupied Memphis, and sacked the temples of Thebes.
Late Period (653–332 BC)
The Assyrians left control of Egypt to a series of vassals who became known as the Saite kings of the Twenty-Sixth Dynasty. By 653BC, the Saite king Psamtik I was able to oust the Assyrians with the help of Greek mercenaries, who were recruited to form Egypt's first navy. Greek influence expanded greatly as the city-state of Naucratis became the home of Greeks in the Nile Delta. The Saite kings based in the new capital of Sais witnessed a brief but spirited resurgence in the economy and culture, but in 525BC, the Persian Empire, led by Cambyses II, began its conquest of Egypt, eventually defeating the pharaoh Psamtik III at the Battle of Pelusium. Cambyses II then assumed the formal title of pharaoh, but ruled Egypt from Iran, leaving Egypt under the control of a satrap. A few revolts against the Persians marked the 5th centuryBC, but Egypt was never able to overthrow the Persians until the end of the century.
Following its annexation by Persia, Egypt was joined with Cyprus and Phoenicia in the sixth satrapy of the Achaemenid Persian Empire. This first period of Persian rule over Egypt, also known as the Twenty-Seventh Dynasty, ended in 402BC, when Egypt regained independence under a series of native dynasties. The last of these dynasties, the Thirtieth, proved to be the last native royal house of ancient Egypt, ending with the kingship of Nectanebo II. A brief restoration of Persian rule, sometimes known as the Thirty-First Dynasty, began in 343BC, but shortly after, in 332BC, the Persian ruler Mazaces handed Egypt over to Alexander the Great without a fight.
Ptolemaic period (332–30 BC)
In 332BC, Alexander the Great conquered Egypt with little resistance from the Persians and was welcomed by the Egyptians as a deliverer. The administration established by Alexander's successors, the Macedonian Ptolemaic Kingdom, was based on an Egyptian model and based in the new capital city of Alexandria. The city showcased the power and prestige of Hellenistic rule, and became a centre of learning and culture that included the famous Library of Alexandria and the Mouseion. The Lighthouse of Alexandria lit the way for the many ships that kept trade flowing through the city—as the Ptolemies made commerce and revenue-generating enterprises, such as papyrus manufacturing, their top priority.
Hellenistic culture did not supplant native Egyptian culture, as the Ptolemies supported time-honored traditions in an effort to secure the loyalty of the populace. They built new temples in Egyptian style, supported traditional cults, and portrayed themselves as pharaohs. Some traditions merged, as Greek and Egyptian gods were syncretized into composite deities, such as Serapis, and classical Greek forms of sculpture influenced traditional Egyptian motifs. Despite their efforts to appease the Egyptians, the Ptolemies were challenged by native rebellion, bitter family rivalries, and frequent mob violence in Alexandria. In addition, as Rome relied more heavily on imports of grain from Egypt, the Romans took great interest in the political situation in the country. Continued Egyptian revolts, ambitious politicians, and powerful opponents from the Near East made this situation unstable, leading Rome to send forces to secure the country as a province of its empire.
Roman period (30 BC – AD 642)
Egypt became a province of the Roman Empire in 30BC, following the defeat of Mark Antony and Ptolemaic Queen Cleopatra VII by Octavian (later Emperor Augustus) in the Battle of Actium. The Romans relied heavily on grain shipments from Egypt, and the Roman army, under the control of a prefect appointed by the emperor, quelled rebellions, strictly enforced the collection of heavy taxes, and prevented attacks by bandits, which had become a notorious problem during the period. Alexandria became an increasingly important center on the trade route with the orient, as exotic luxuries were in high demand in Rome.
Although the Romans had a more hostile attitude than the Greeks towards the Egyptians, some traditions such as mummification and worship of the traditional gods continued. The art of mummy portraiture flourished, and some Roman emperors had themselves depicted as pharaohs, though not to the extent that the Ptolemies had. The emperors, however, lived outside Egypt and did not perform the ceremonial functions of Egyptian kingship. Local administration became Roman in style and closed to native Egyptians.
From the mid-first century AD, Christianity took root in Egypt and it was originally seen as another cult that could be accepted. However, it was an uncompromising religion that sought to win converts from the pagan Egyptian and Greco-Roman religions and threatened popular religious traditions. This led to the persecution of converts to Christianity, culminating in the great purges of Diocletian starting in 303, but eventually Christianity won out. In 391, the Christian emperor Theodosius introduced legislation that banned pagan rites and closed temples. Alexandria became the scene of great anti-pagan riots with public and private religious imagery destroyed. As a consequence, Egypt's native religious culture was continually in decline. While the native population continued to speak their language, the ability to read hieroglyphic writing slowly disappeared as the role of the Egyptian temple priests and priestesses diminished. The temples themselves were sometimes converted to churches or abandoned to the desert.
Government and economy
Administration and commerce
The pharaoh was the absolute monarch of the country and, at least in theory, wielded complete control of the land and its resources. The king was the supreme military commander and head of the government, who relied on a bureaucracy of officials to manage his affairs. In charge of the administration was his second in command, the vizier, who acted as the king's representative and coordinated land surveys, the treasury, building projects, the legal system, and the archives. At a regional level, the country was divided into as many as 42 administrative regions called nomes each governed by a nomarch, who was accountable to the vizier for his jurisdiction. The temples formed the backbone of the economy. Not only were they places of worship, but were also responsible for collecting and storing the kingdom's wealth in a system of granaries and treasuries administered by overseers, who redistributed grain and goods.
Much of the economy was centrally organized and strictly controlled. Although the ancient Egyptians did not use coinage until the Late Period, they did use a type of money-barter system, with standard sacks of grain and the deben, a standard weight of copper or silver, forming a common denominator. Workers were paid in grain: a simple laborer earned a fixed monthly ration measured in standard sacks, while a foreman earned a larger allowance. Prices were fixed across the country and recorded in lists to facilitate trading; for example, a shirt cost five copper deben, while a cow cost 140deben. Grain could be traded for other goods, according to the fixed price list. During the fifth centuryBC coined money was introduced into Egypt from abroad. At first the coins were used as standardized pieces of precious metal rather than true money, but in the following centuries international traders came to rely on coinage.
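Because every listed commodity had a fixed deben value, any exchange could be settled by comparing those values even when no metal actually changed hands. The sketch below illustrates that bookkeeping in Python; the shirt (5 deben) and cow (140 deben) prices are the ones quoted in the text above, while the grain entry and the helper function are purely hypothetical illustrations, not attested figures.

```python
# Illustrative sketch of deben-based barter accounting; not a historical source.
# Shirt and cow prices are those quoted in the text; the grain value is hypothetical.

PRICE_LIST_DEBEN = {
    "shirt": 5,          # copper deben, as quoted above
    "cow": 140,          # copper deben, as quoted above
    "sack_of_grain": 2,  # hypothetical value, for illustration only
}

def barter_equivalent(offered_item: str, offered_qty: float, wanted_item: str) -> float:
    """How many units of wanted_item match offered_qty of offered_item,
    using the fixed deben price list as the common denominator."""
    value_in_deben = offered_qty * PRICE_LIST_DEBEN[offered_item]
    return value_in_deben / PRICE_LIST_DEBEN[wanted_item]

# One cow (140 deben) trades for 28 shirts at 5 deben each.
print(barter_equivalent("cow", 1, "shirt"))  # -> 28.0
```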
Social status
Egyptian society was highly stratified, and social status was expressly displayed. Farmers made up the bulk of the population, but agricultural produce was owned directly by the state, temple, or noble family that owned the land. Farmers were also subject to a labor tax and were required to work on irrigation or construction projects in a corvée system. Artists and craftsmen were of higher status than farmers, but they were also under state control, working in the shops attached to the temples and paid directly from the state treasury. Scribes and officials formed the upper class in ancient Egypt, known as the "white kilt class" in reference to the bleached linen garments that served as a mark of their rank. The upper class prominently displayed their social status in art and literature. Below the nobility were the priests, physicians, and engineers with specialized training in their fields. It is unclear whether slavery as understood today existed in ancient Egypt; opinions differ among authors.
The ancient Egyptians viewed men and women, including people from all social classes, as essentially equal under the law, and even the lowliest peasant was entitled to petition the vizier and his court for redress. Although slaves were mostly used as indentured servants, they were able to buy and sell their servitude, work their way to freedom or nobility, and were usually treated by doctors in the workplace. Both men and women had the right to own and sell property, make contracts, marry and divorce, receive inheritance, and pursue legal disputes in court. Married couples could own property jointly and protect themselves from divorce by agreeing to marriage contracts, which stipulated the financial obligations of the husband to his wife and children should the marriage end. Compared with their counterparts in ancient Greece, Rome, and even more modern places around the world, ancient Egyptian women had a greater range of personal choices, legal rights, and opportunities for achievement. Women such as Hatshepsut and Cleopatra VII even became pharaohs, while others wielded power as Divine Wives of Amun. Despite these freedoms, ancient Egyptian women did not often take part in official roles in the administration (aside from the royal high priestesses), apparently served only secondary roles in the temples (though data are sparse for many dynasties), and were probably not as educated as men.
Legal system
The head of the legal system was officially the pharaoh, who was responsible for enacting laws, delivering justice, and maintaining law and order, a concept the ancient Egyptians referred to as Ma'at. Although no legal codes from ancient Egypt survive, court documents show that Egyptian law was based on a common-sense view of right and wrong that emphasized reaching agreements and resolving conflicts rather than strictly adhering to a complicated set of statutes. Local councils of elders, known as Kenbet in the New Kingdom, were responsible for ruling in court cases involving small claims and minor disputes. More serious cases involving murder, major land transactions, and tomb robbery were referred to the Great Kenbet, over which the vizier or pharaoh presided. Plaintiffs and defendants were expected to represent themselves and were required to swear an oath that they had told the truth. In some cases, the state took on both the role of prosecutor and judge, and it could torture the accused with beatings to obtain a confession and the names of any co-conspirators. Whether the charges were trivial or serious, court scribes documented the complaint, testimony, and verdict of the case for future reference.
Punishment for minor crimes involved either imposition of fines, beatings, facial mutilation, or exile, depending on the severity of the offense. Serious crimes such as murder and tomb robbery were punished by execution, carried out by decapitation, drowning, or impaling the criminal on a stake. Punishment could also be extended to the criminal's family. Beginning in the New Kingdom, oracles played a major role in the legal system, dispensing justice in both civil and criminal cases. The procedure was to ask the god a "yes" or "no" question concerning the right or wrong of an issue. The god, carried by a number of priests, rendered judgement by choosing one or the other, moving forward or backward, or pointing to one of the answers written on a piece of papyrus or an ostracon.
Agriculture
A combination of favorable geographical features contributed to the success of ancient Egyptian culture, the most important of which was the rich fertile soil resulting from annual inundations of the Nile River. The ancient Egyptians were thus able to produce an abundance of food, allowing the population to devote more time and resources to cultural, technological, and artistic pursuits. Land management was crucial in ancient Egypt because taxes were assessed based on the amount of land a person owned.
Farming in Egypt was dependent on the cycle of the Nile River. The Egyptians recognized three seasons: Akhet (flooding), Peret (planting), and Shemu (harvesting). The flooding season lasted from June to September, depositing on the river's banks a layer of mineral-rich silt ideal for growing crops. After the floodwaters had receded, the growing season lasted from October to February. Farmers plowed and planted seeds in the fields, which were irrigated with ditches and canals. Egypt received little rainfall, so farmers relied on the Nile to water their crops. From March to May, farmers used sickles to harvest their crops, which were then threshed with a flail to separate the straw from the grain. Winnowing removed the chaff from the grain, and the grain was then ground into flour, brewed to make beer, or stored for later use.
The ancient Egyptians cultivated emmer and barley, and several other cereal grains, all of which were used to make the two main food staples of bread and beer. Flax plants, uprooted before they started flowering, were grown for the fibers of their stems. These fibers were split along their length and spun into thread, which was used to weave sheets of linen and to make clothing. Papyrus growing on the banks of the Nile River was used to make paper. Vegetables and fruits were grown in garden plots, close to habitations and on higher ground, and had to be watered by hand. Vegetables included leeks, garlic, melons, squashes, pulses, lettuce, and other crops, in addition to grapes that were made into wine.
Animals
The Egyptians believed that a balanced relationship between people and animals was an essential element of the cosmic order; thus humans, animals and plants were believed to be members of a single whole. Animals, both domesticated and wild, were therefore a critical source of spirituality, companionship, and sustenance to the ancient Egyptians. Cattle were the most important livestock; the administration collected taxes on livestock in regular censuses, and the size of a herd reflected the prestige and importance of the estate or temple that owned them. In addition to cattle, the ancient Egyptians kept sheep, goats, and pigs. Poultry, such as ducks, geese, and pigeons, were captured in nets and bred on farms, where they were force-fed with dough to fatten them. The Nile provided a plentiful source of fish. Bees were also domesticated from at least the Old Kingdom, and provided both honey and wax.
The ancient Egyptians used donkeys and oxen as beasts of burden, and they were responsible for plowing the fields and trampling seed into the soil. The slaughter of a fattened ox was also a central part of an offering ritual. Horses were introduced by the Hyksos in the Second Intermediate Period. Camels, although known from the New Kingdom, were not used as beasts of burden until the Late Period. There is also evidence to suggest that elephants were briefly used in the Late Period but largely abandoned due to lack of grazing land. Cats, dogs, and monkeys were common family pets, while more exotic pets imported from the heart of Africa, such as Sub-Saharan African lions, were reserved for royalty. Herodotus observed that the Egyptians were the only people to keep their animals with them in their houses. During the Late Period, the worship of the gods in their animal form was extremely popular, such as the cat goddess Bastet and the ibis god Thoth, and these animals were kept in large numbers for the purpose of ritual sacrifice.
Natural resources
Egypt is rich in building and decorative stone, copper and lead ores, gold, and semiprecious stones. These natural resources allowed the ancient Egyptians to build monuments, sculpt statues, make tools, and fashion jewelry. Embalmers used salts from the Wadi Natrun for mummification, which also provided the gypsum needed to make plaster. Ore-bearing rock formations were found in distant, inhospitable wadis in the Eastern Desert and the Sinai, requiring large, state-controlled expeditions to obtain natural resources found there. There were extensive gold mines in Nubia, and one of the first maps known is of a gold mine in this region. The Wadi Hammamat was a notable source of granite, greywacke, and gold. Flint was the first mineral collected and used to make tools, and flint handaxes are the earliest pieces of evidence of habitation in the Nile valley. Nodules of the mineral were carefully flaked to make blades and arrowheads of moderate hardness and durability even after copper was adopted for this purpose. Ancient Egyptians were among the first to use minerals such as sulfur as cosmetic substances.
The Egyptians worked deposits of the lead ore galena at Gebel Rosas to make net sinkers, plumb bobs, and small figurines. Copper was the most important metal for toolmaking in ancient Egypt and was smelted in furnaces from malachite ore mined in the Sinai. Workers collected gold by washing the nuggets out of sediment in alluvial deposits, or by the more labor-intensive process of grinding and washing gold-bearing quartzite. Iron deposits found in upper Egypt were used in the Late Period. High-quality building stones were abundant in Egypt; the ancient Egyptians quarried limestone all along the Nile valley, granite from Aswan, and basalt and sandstone from the wadis of the Eastern Desert. Deposits of decorative stones such as porphyry, greywacke, alabaster, and carnelian dotted the Eastern Desert and were collected even before the First Dynasty. In the Ptolemaic and Roman Periods, miners worked deposits of emeralds in Wadi Sikait and amethyst in Wadi el-Hudi.
Trade
The ancient Egyptians engaged in trade with their foreign neighbors to obtain rare, exotic goods not found in Egypt. In the Predynastic Period, they established trade with Nubia to obtain gold and incense. They also established trade with Palestine, as evidenced by Palestinian-style oil jugs found in the burials of the First Dynasty pharaohs. An Egyptian colony stationed in southern Canaan dates to slightly before the First Dynasty. Tell es-Sakan in present-day Gaza was established as an Egyptian settlement in the late 4th millennium BC, and is theorised to have been the main Egyptian colonial site in the region. Narmer had Egyptian pottery produced in Canaan and exported back to Egypt.
By the Second Dynasty at latest, ancient Egyptian trade with Byblos yielded a critical source of quality timber not found in Egypt. By the Fifth Dynasty, trade with Punt provided gold, aromatic resins, ebony, ivory, and wild animals such as monkeys and baboons. Egypt relied on trade with Anatolia for essential quantities of tin as well as supplementary supplies of copper, both metals being necessary for the manufacture of bronze. The ancient Egyptians prized the blue stone lapis lazuli, which had to be imported from far-away Afghanistan. Egypt's Mediterranean trade partners also included Greece and Crete, which provided, among other goods, supplies of olive oil.
Language
Historical development
The Egyptian language is a northern Afro-Asiatic language closely related to the Berber and Semitic languages. It has the longest known history of any language, having been written from the late fourth millennium BC to the Middle Ages and remaining a spoken language for longer. The phases of ancient Egyptian are Old Egyptian, Middle Egyptian (Classical Egyptian), Late Egyptian, Demotic and Coptic. Egyptian writings do not show dialect differences before Coptic, but it was probably spoken in regional dialects around Memphis and later Thebes.
Ancient Egyptian was a synthetic language, but it became more analytic later on. Late Egyptian developed prefixal definite and indefinite articles, which replaced the older inflectional suffixes. There was a change from the older verb–subject–object word order to subject–verb–object. The Egyptian hieroglyphic, hieratic, and demotic scripts were eventually replaced by the more phonetic Coptic alphabet. Coptic is still used in the liturgy of the Egyptian Orthodox Church, and traces of it are found in modern Egyptian Arabic.
Sounds and grammar
Ancient Egyptian has 25 consonants similar to those of other Afro-Asiatic languages. These include pharyngeal and emphatic consonants, voiced and voiceless stops, voiceless fricatives and voiced and voiceless affricates. It has three long and three short vowels, which expanded in Late Egyptian to about nine. The basic word in Egyptian, similar to Semitic and Berber, is a triliteral or biliteral root of consonants and semiconsonants. Suffixes are added to form words. The verb conjugation corresponds to the person. For example, the triconsonantal skeleton sḏm is the semantic core of the word 'hear'; its basic conjugation is sḏm.f, 'he hears'. If the subject is a noun, suffixes are not added to the verb: sḏm ḥmt, 'the woman hears'.
Adjectives are derived from nouns through a process that Egyptologists call nisbation because of its similarity with Arabic. The word order is predicate–subject in verbal and adjectival sentences, and subject–predicate in nominal and adverbial sentences. The subject can be moved to the beginning of sentences if it is long and is followed by a resumptive pronoun. Verbs and nouns are negated by the particle n, but nn is used for adverbial and adjectival sentences. Stress falls on the ultimate or penultimate syllable, which can be open (CV) or closed (CVC).
Writing
Hieroglyphic writing dates from the late fourth millennium BC and is composed of hundreds of symbols. A hieroglyph can represent a word, a sound, or a silent determinative; and the same symbol can serve different purposes in different contexts. Hieroglyphs were a formal script, used on stone monuments and in tombs, that could be as detailed as individual works of art. In day-to-day writing, scribes used a cursive form of writing, called hieratic, which was quicker and easier. While formal hieroglyphs may be read in rows or columns in either direction (though typically written from right to left), hieratic was always written from right to left, usually in horizontal rows. A new form of writing, Demotic, became the prevalent writing style, and it is this form of writing—along with formal hieroglyphs—that accompanies the Greek text on the Rosetta Stone.
Around the first century AD, the Coptic alphabet started to be used alongside the Demotic script. Coptic is a modified Greek alphabet with the addition of some Demotic signs. Although formal hieroglyphs were used in a ceremonial role until the fourth century, towards the end only a small handful of priests could still read them. As the traditional religious establishments were disbanded, knowledge of hieroglyphic writing was mostly lost. Attempts to decipher them date to the Byzantine and Islamic periods in Egypt, but only in the 1820s, after the discovery of the Rosetta Stone and years of research by Thomas Young and Jean-François Champollion, were hieroglyphs substantially deciphered.
Literature
Writing first appeared in association with kingship on labels and tags for items found in royal tombs. It was primarily an occupation of the scribes, who worked out of the Per Ankh institution or the House of Life. The latter comprised offices, libraries (called the House of Books), laboratories, and observatories. Some of the best-known pieces of ancient Egyptian literature, such as the Pyramid and Coffin Texts, were written in Classical Egyptian, which continued to be the language of writing until about 1300BC. Late Egyptian was spoken from the New Kingdom onward and is represented in Ramesside administrative documents, love poetry and tales, as well as in Demotic and Coptic texts. From the Old Kingdom, the tradition of writing evolved into the tomb autobiography, such as those of Harkhuf and Weni. The genre known as Sebayt ('instructions') was developed to communicate teachings and guidance from famous nobles; the Ipuwer papyrus, a poem of lamentations describing natural disasters and social upheaval, is a famous example.
The Story of Sinuhe, written in Middle Egyptian, might be the classic of Egyptian literature. Also written at this time was the Westcar Papyrus, a set of stories told to Khufu by his sons relating the marvels performed by priests. The Instruction of Amenemope is considered a masterpiece of Near Eastern literature. Towards the end of the New Kingdom, the vernacular language was more often employed to write popular pieces such as the Story of Wenamun and the Instruction of Any. The former tells the story of a noble who is robbed on his way to buy cedar from Lebanon and of his struggle to return to Egypt. From about 700BC, narrative stories and instructions, such as the popular Instructions of Onchsheshonqy, as well as personal and business documents were written in the demotic script and phase of Egyptian. Many stories written in demotic during the Greco-Roman period were set in previous historical eras, when Egypt was an independent nation ruled by great pharaohs such as Ramesses II.
Culture
Daily life
Most ancient Egyptians were farmers tied to the land. Their dwellings were restricted to immediate family members, and were constructed of mudbrick designed to remain cool in the heat of the day. Each home had a kitchen with an open roof, which contained a grindstone for milling grain and a small oven for baking the bread. Ceramics served as household wares for the storage, preparation, transport, and consumption of food, drink, and raw materials. Walls were painted white and could be covered with dyed linen wall hangings. Floors were covered with reed mats, while wooden stools, beds raised from the floor and individual tables comprised the furniture.
The ancient Egyptians placed a great value on hygiene and appearance. Most bathed in the Nile and used a pasty soap made from animal fat and chalk. Men shaved their entire bodies for cleanliness; perfumes and aromatic ointments covered bad odors and soothed skin. Clothing was made from simple linen sheets that were bleached white, and both men and women of the upper classes wore wigs, jewelry, and cosmetics. Children went without clothing until maturity, at about age 12, and at this age males were circumcised and had their heads shaved. Mothers were responsible for taking care of the children, while the father provided the family's income.
Music and dance were popular entertainments for those who could afford them. Early instruments included flutes and harps, while instruments similar to trumpets, oboes, and pipes developed later and became popular. In the New Kingdom, the Egyptians played on bells, cymbals, tambourines, drums, and imported lutes and lyres from Asia. The sistrum was a rattle-like musical instrument that was especially important in religious ceremonies.
The ancient Egyptians enjoyed a variety of leisure activities, including games and music. Senet, a board game where pieces moved according to random chance, was particularly popular from the earliest times; another similar game was mehen, which had a circular gaming board. "Hounds and Jackals", also known as 58 holes, is another example of a board game played in ancient Egypt. The first complete set of this game was discovered in a Theban tomb of the Egyptian pharaoh Amenemhat IV that dates to the 13th Dynasty. Juggling and ball games were popular with children, and wrestling is also documented in a tomb at Beni Hasan. The wealthy members of ancient Egyptian society enjoyed hunting, fishing, and boating as well.
The excavation of the workers' village of Deir el-Medina has resulted in one of the most thoroughly documented accounts of community life in the ancient world, which spans almost four hundred years. There is no comparable site in which the organization, social interactions, and working and living conditions of a community have been studied in such detail.
Cuisine
Egyptian cuisine remained remarkably stable over time; indeed, the cuisine of modern Egypt retains some striking similarities to the cuisine of the ancients. The staple diet consisted of bread and beer, supplemented with vegetables such as onions and garlic, and fruit such as dates and figs. Wine and meat were enjoyed by all on feast days while the upper classes indulged on a more regular basis. Fish, meat, and fowl could be salted or dried, and could be cooked in stews or roasted on a grill.
Architecture
The architecture of ancient Egypt includes some of the most famous structures in the world: the Great Pyramids of Giza and the temples at Thebes. Building projects were organized and funded by the state for religious and commemorative purposes, but also to reinforce the wide-ranging power of the pharaoh. The ancient Egyptians were skilled builders; using only simple but effective tools and sighting instruments, architects could build large stone structures with great accuracy and precision that is still envied today.
The domestic dwellings of elite and ordinary Egyptians alike were constructed from perishable materials such as mudbricks and wood, and have not survived. Peasants lived in simple homes, while the palaces of the elite and the pharaoh were more elaborate structures. A few surviving New Kingdom palaces, such as those in Malkata and Amarna, show richly decorated walls and floors with scenes of people, birds, water pools, deities and geometric designs. Important structures such as temples and tombs that were intended to last forever were constructed of stone instead of mudbricks. The architectural elements used in the world's first large-scale stone building, Djoser's mortuary complex, include post and lintel supports in the papyrus and lotus motif.
The earliest preserved ancient Egyptian temples, such as those at Giza, consist of single, enclosed halls with roof slabs supported by columns. In the New Kingdom, architects added the pylon, the open courtyard, and the enclosed hypostyle hall to the front of the temple's sanctuary, a style that was standard until the Greco-Roman period. The earliest and most popular tomb architecture in the Old Kingdom was the mastaba, a flat-roofed rectangular structure of mudbrick or stone built over an underground burial chamber. The step pyramid of Djoser is a series of stone mastabas stacked on top of each other. Pyramids were built during the Old and Middle Kingdoms, but most later rulers abandoned them in favor of less conspicuous rock-cut tombs. The use of the pyramid form continued in private tomb chapels of the New Kingdom and in the royal pyramids of Nubia.
Art
The ancient Egyptians produced art to serve functional purposes. For over 3500 years, artists adhered to artistic forms and iconography that were developed during the Old Kingdom, following a strict set of principles that resisted foreign influence and internal change. These artistic standards—simple lines, shapes, and flat areas of color combined with the characteristic flat projection of figures with no indication of spatial depth—created a sense of order and balance within a composition. Images and text were intimately interwoven on tomb and temple walls, coffins, stelae, and even statues. The Narmer Palette, for example, displays figures that can also be read as hieroglyphs. Because of the rigid rules that governed its highly stylized and symbolic appearance, ancient Egyptian art served its political and religious purposes with precision and clarity.
Ancient Egyptian artisans used stone as a medium for carving statues and fine reliefs, but used wood as a cheap and easily carved substitute. Paints were obtained from minerals such as iron ores (red and yellow ochres), copper ores (blue and green), soot or charcoal (black), and limestone (white). Paints could be mixed with gum arabic as a binder and pressed into cakes, which could be moistened with water when needed.
Pharaohs used reliefs to record victories in battle, royal decrees, and religious scenes. Common citizens had access to pieces of funerary art, such as shabti statues and books of the dead, which they believed would protect them in the afterlife. During the Middle Kingdom, wooden or clay models depicting scenes from everyday life became popular additions to the tomb. In an attempt to duplicate the activities of the living in the afterlife, these models show laborers, houses, boats, and even military formations that are scale representations of the ideal ancient Egyptian afterlife.
Despite the homogeneity of ancient Egyptian art, the styles of particular times and places sometimes reflected changing cultural or political attitudes. After the invasion of the Hyksos in the Second Intermediate Period, Minoan-style frescoes were found in Avaris. The most striking example of a politically driven change in artistic forms comes from the Amarna Period, where figures were radically altered to conform to Akhenaten's revolutionary religious ideas. This style, known as Amarna art, was quickly abandoned after Akhenaten's death and replaced by the traditional forms.
Religious beliefs
Beliefs in the divine and in the afterlife were ingrained in ancient Egyptian civilization from its inception; pharaonic rule was based on the divine right of kings. The Egyptian pantheon was populated by gods who had supernatural powers and were called on for help or protection. However, the gods were not always viewed as benevolent, and Egyptians believed they had to be appeased with offerings and prayers. The structure of this pantheon changed continually as new deities were promoted in the hierarchy, but priests made no effort to organize the diverse and sometimes conflicting myths and stories into a coherent system. These various conceptions of divinity were not considered contradictory but rather layers in the multiple facets of reality.
The gods Osiris, Anubis, and Horus in the tomb of Horemheb (KV57) in the Valley of the Kings (image caption).
Gods were worshiped in cult temples administered by priests acting on the king's behalf. At the center of the temple was the cult statue in a shrine. Temples were not places of public worship or congregation, and only on select feast days and celebrations was a shrine carrying the statue of the god brought out for public worship. Normally, the god's domain was sealed off from the outside world and was only accessible to temple officials. Common citizens could worship private statues in their homes, and amulets offered protection against the forces of chaos. After the New Kingdom, the pharaoh's role as a spiritual intermediary was de-emphasized as religious customs shifted to direct worship of the gods. As a result, priests developed a system of oracles to communicate the will of the gods directly to the people.
The Egyptians believed that every human being was composed of physical and spiritual parts or aspects. In addition to the body, each person had a šwt (shadow), a ba (personality or soul), a ka (life-force), and a name. The heart, rather than the brain, was considered the seat of thoughts and emotions. After death, the spiritual aspects were released from the body and could move at will, but they required the physical remains (or a substitute, such as a statue) as a permanent home. The ultimate goal of the deceased was to rejoin his ka and ba and become one of the "blessed dead", living on as an akh, or "effective one". For this to happen, the deceased had to be judged worthy in a trial, in which the heart was weighed against a "feather of truth". If deemed worthy, the deceased could continue their existence on earth in spiritual form. If they were not deemed worthy, their heart was eaten by Ammit the Devourer and they were erased from the Universe.
Burial customs
The ancient Egyptians maintained an elaborate set of burial customs that they believed were necessary to ensure immortality after death. These customs involved preserving the body by mummification, performing burial ceremonies, and interring with the body goods the deceased would use in the afterlife. Before the Old Kingdom, bodies buried in desert pits were naturally preserved by desiccation. The arid, desert conditions were a boon throughout the history of ancient Egypt for burials of the poor, who could not afford the elaborate burial preparations available to the elite. Wealthier Egyptians began to bury their dead in stone tombs and use artificial mummification, which involved removing the internal organs, wrapping the body in linen, and burying it in a rectangular stone sarcophagus or wooden coffin. Beginning in the Fourth Dynasty, some parts were preserved separately in canopic jars.
By the New Kingdom, the ancient Egyptians had perfected the art of mummification; the best technique took 70 days and involved removing the internal organs, removing the brain through the nose, and desiccating the body in a mixture of salts called natron. The body was then wrapped in linen with protective amulets inserted between layers and placed in a decorated anthropoid coffin. Mummies of the Late Period were also placed in painted cartonnage mummy cases. Actual preservation practices declined during the Ptolemaic and Roman eras, while greater emphasis was placed on the outer appearance of the mummy, which was decorated.
Wealthy Egyptians were buried with larger quantities of luxury items, but all burials, regardless of social status, included goods for the deceased. Funerary texts were often included in the grave, and, beginning in the New Kingdom, so were shabti statues that were believed to perform manual labor for them in the afterlife. Rituals in which the deceased was magically re-animated accompanied burials. After burial, living relatives were expected to occasionally bring food to the tomb and recite prayers on behalf of the deceased.
Military
The ancient Egyptian military was responsible for defending Egypt against foreign invasion, and for maintaining Egypt's domination in the ancient Near East. The military protected mining expeditions to the Sinai during the Old Kingdom and fought civil wars during the First and Second Intermediate Periods. The military was responsible for maintaining fortifications along important trade routes, such as those found at the city of Buhen on the way to Nubia. Forts also were constructed to serve as military bases, such as the fortress at Sile, which was a base of operations for expeditions to the Levant. In the New Kingdom, a series of pharaohs used the standing Egyptian army to attack and conquer Kush and parts of the Levant.
Typical military equipment included bows and arrows, spears, and round-topped shields made by stretching animal skin over a wooden frame. In the New Kingdom, the military began using chariots that had earlier been introduced by the Hyksos invaders. Weapons and armor continued to improve after the adoption of bronze: shields were now made from solid wood with a bronze buckle, spears were tipped with a bronze point, and the khopesh was adopted from Asiatic soldiers. The pharaoh was usually depicted in art and literature riding at the head of the army; it has been suggested that at least a few pharaohs, such as Seqenenre Tao II and his sons, did do so. However, it has also been argued that "kings of this period did not personally act as frontline war leaders, fighting alongside their troops". Soldiers were recruited from the general population, but during, and especially after, the New Kingdom, mercenaries from Nubia, Kush, and Libya were hired to fight for Egypt.
Technology, medicine and mathematics
Technology
In technology, medicine, and mathematics, ancient Egypt achieved a relatively high standard of productivity and sophistication. Traditional empiricism, as evidenced by the Edwin Smith and Ebers papyri, is first credited to Egypt. The Egyptians created their own alphabet and decimal system.
Faience and glass
Even before the Old Kingdom, the ancient Egyptians had developed a glassy material known as faience, which they treated as a type of artificial semi-precious stone. Faience is a non-clay ceramic made of silica, small amounts of lime and soda, and a colorant, typically copper. The material was used to make beads, tiles, figurines, and small wares. Several methods can be used to create faience, but typically production involved application of the powdered materials in the form of a paste over a clay core, which was then fired. By a related technique, the ancient Egyptians produced a pigment known as Egyptian blue, also called blue frit, which is produced by fusing (or sintering) silica, copper, lime, and an alkali such as natron. The product can be ground up and used as a pigment.
The ancient Egyptians could fabricate a wide variety of objects from glass with great skill, but it is not clear whether they developed the process independently. It is also unclear whether they made their own raw glass or merely imported pre-made ingots, which they melted and finished. However, they did have technical expertise in making objects, as well as adding trace elements to control the color of the finished glass. A range of colors could be produced, including yellow, red, green, blue, purple, and white, and the glass could be made either transparent or opaque.
Medicine
The medical problems of the ancient Egyptians stemmed directly from their environment. Living and working close to the Nile brought hazards from malaria and debilitating schistosomiasis parasites, which caused liver and intestinal damage. Dangerous wildlife such as crocodiles and hippos were also a common threat. The lifelong labors of farming and building put stress on the spine and joints, and traumatic injuries from construction and warfare all took a significant toll on the body. The grit and sand from stone-ground flour abraded teeth, leaving them susceptible to abscesses (though caries were rare).
The diets of the wealthy were rich in sugars, which promoted periodontal disease. Despite the flattering physiques portrayed on tomb walls, the overweight mummies of many of the upper class show the effects of a life of overindulgence. Adult life expectancy was about 35 for men and 30 for women, but reaching adulthood was difficult as about one-third of the population died in infancy.
Ancient Egyptian physicians were renowned in the ancient Near East for their healing skills, and some, such as Imhotep, remained famous long after their deaths. Herodotus remarked that there was a high degree of specialization among Egyptian physicians, with some treating only the head or the stomach, while others were eye-doctors and dentists. Training of physicians took place at the Per Ankh or "House of Life" institution, most notably those headquartered in Per-Bastet during the New Kingdom and at Abydos and Saïs in the Late period. Medical papyri show empirical knowledge of anatomy, injuries, and practical treatments.
Wounds were treated by bandaging with raw meat, white linen, sutures, nets, pads, and swabs soaked with honey to prevent infection, while opium, thyme, and belladonna were used to relieve pain. The earliest records of burn treatment describe burn dressings that use the milk from mothers of male babies. Prayers were made to the goddess Isis. Moldy bread, honey, and copper salts were also used to prevent infection from dirt in burns. Garlic and onions were used regularly to promote good health and were thought to relieve asthma symptoms. Ancient Egyptian surgeons stitched wounds, set broken bones, and amputated diseased limbs, but they recognized that some injuries were so serious that they could only make the patient comfortable until death occurred.
Maritime technology
Early Egyptians knew how to assemble planks of wood into a ship hull and had mastered advanced forms of shipbuilding as early as 3000BC. The Archaeological Institute of America reports that the oldest planked ships known are the Abydos boats, a group of 14 ships discovered at Abydos that were constructed of wooden planks "sewn" together. Excavated by Egyptologist David O'Connor of New York University, the boats were lashed together with woven straps, and reeds or grass stuffed between the planks helped to seal the seams. Because the ships were all buried together near a mortuary belonging to Pharaoh Khasekhemwy, they were originally all thought to have belonged to him, but one of the 14 ships dates to 3000BC, and the associated pottery jars buried with the vessels also suggest earlier dating. This oldest ship is now thought to have belonged to an earlier pharaoh, perhaps one as early as Hor-Aha.
Early Egyptians also knew how to assemble planks of wood with treenails to fasten them together, using pitch for caulking the seams. The "Khufu ship", a vessel sealed into a pit in the Giza pyramid complex at the foot of the Great Pyramid of Giza in the Fourth Dynasty around 2500BC, is a full-size surviving example that may have filled the symbolic function of a solar barque. Early Egyptians also knew how to fasten the planks of this ship together with mortise and tenon joints.
In 1977, an ancient north–south canal was discovered extending from Lake Timsah to the Ballah Lakes. It was dated to the Middle Kingdom of Egypt by extrapolating dates of ancient sites constructed along its course.
In 2011, archaeologists from Italy, the United States, and Egypt, excavating a dried-up lagoon known as Mersa Gawasis, unearthed traces of an ancient harbor that once launched early voyages, such as Hatshepsut's expedition to Punt, onto the open ocean. Some of the site's most evocative evidence of the ancient Egyptians' seafaring prowess includes large ship timbers and hundreds of feet of rope, made from papyrus, coiled in huge bundles. In 2013, a team of Franco-Egyptian archaeologists discovered what is believed to be the world's oldest port, dating back about 4,500 years to the time of King Khufu, on the Red Sea coast near Wadi el-Jarf (about 110 miles south of Suez).
Mathematics
The earliest attested examples of mathematical calculations date to the predynastic Naqada period, and show a fully developed numeral system. The importance of mathematics to an educated Egyptian is suggested by a New Kingdom fictional letter in which the writer proposes a scholarly competition between himself and another scribe regarding everyday calculation tasks such as accounting of land, labor, and grain. Texts such as the Rhind Mathematical Papyrus and the Moscow Mathematical Papyrus show that the ancient Egyptians could perform the four basic mathematical operations—addition, subtraction, multiplication, and division—use fractions, calculate the areas of rectangles, triangles, and circles and compute the volumes of boxes, columns and pyramids. They understood basic concepts of algebra and geometry, and could solve systems of equations.
Mathematical notation was decimal, and based on hieroglyphic signs for each power of ten up to one million. Each of these could be written as many times as necessary to add up to the desired number; so to write the number eighty or eight hundred, the symbol for ten or one hundred was written eight times respectively. Because their methods of calculation could not handle most fractions with a numerator greater than one, they had to write fractions as the sum of several fractions. For example, they resolved the fraction two-fifths into the sum of one-third + one-fifteenth. Standard tables of values facilitated this. Some common fractions, however, were written with a special glyph, such as the equivalent of the modern two-thirds.
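To make the additive notation and the unit-fraction practice concrete, the sketch below counts how many of each power-of-ten sign are needed to write a number, and decomposes a fraction into distinct unit fractions using the greedy method. This is a modern illustration only; as noted above, Egyptian scribes relied on prepared tables rather than an algorithm of this kind.

```python
from fractions import Fraction
from math import ceil

def additive_decimal(n: int) -> dict:
    """Count how many of each power-of-ten symbol is needed to write n additively,
    mirroring the repeated hieroglyphic signs described above."""
    counts, power = {}, 1_000_000
    while power >= 1:
        counts[power], n = divmod(n, power)
        power //= 10
    return {p: c for p, c in counts.items() if c}

def unit_fractions(frac: Fraction) -> list:
    """Greedy decomposition into distinct unit fractions (a modern stand-in
    for the scribes' table lookups)."""
    parts = []
    while frac > 0:
        unit = Fraction(1, ceil(frac.denominator / frac.numerator))
        parts.append(unit)
        frac -= unit
    return parts

print(additive_decimal(800))           # {100: 8} -> the hundred-sign written eight times
print(unit_fractions(Fraction(2, 5)))  # [Fraction(1, 3), Fraction(1, 15)]
```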
Ancient Egyptian mathematicians knew the Pythagorean theorem as an empirical formula. They were aware, for example, that a triangle had a right angle opposite the hypotenuse when its sides were in a 3–4–5 ratio. They were able to estimate the area of a circle by subtracting one-ninth from its diameter and squaring the result:
Area ≈ [(8/9)D]² = (256/81)r² ≈ 3.16r²,
a reasonable approximation of the formula πr².
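A quick numerical check of the rule (a modern illustration, not an ancient procedure): applying it to a circle of diameter 9, chosen so that one-ninth of the diameter is a whole unit, gives a value about 0.6% above the exact area.

```python
import math

def egyptian_circle_area(diameter: float) -> float:
    """Rule described above: subtract one-ninth of the diameter, then square the result."""
    return (diameter - diameter / 9) ** 2

d = 9.0                                   # one-ninth of the diameter is a whole unit
r = d / 2
approx = egyptian_circle_area(d)          # (8/9 * 9)**2 = 64.0
exact = math.pi * r ** 2                  # ~63.62
print(approx, exact, approx / exact)      # ratio ~1.006, i.e. about 0.6% high
```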
Population
Estimates of the size of the population range from 1–1.5 million in the 3rd millennium BC to possibly 2–3 million by the 1st millennium BC, before growing significantly towards the end of that millennium.
Historical scholarship, drawing on archaeological and biological data, has generally regarded the peopling of the Egyptian Nile Valley as the result of interaction between coastal northern Africans, "neolithic" Saharans, Nilotic hunters, and riverine proto-Nubians, with some influence and migration from the Levant.
In 2025, the UNESCO International Scientific Committee members drafting the General History of Africa Volumes IX–XI reached the view that Egypt had African and Eurasian populations, with Upper Egypt repositioned as the origin of pharaonic unification and close archaeological, genetic, linguistic and biological anthropological affinities identified between the Upper Egyptian populations and Sub-Saharan groups.
Archaeogenetics
According to historian William Stiebling and archaeologist Susan N. Helft, conflicting DNA analysis on recent genetic samples such as the Amarna royal mummies has led to a lack of consensus on the genetic makeup of the ancient Egyptians and their geographic origins.
The genetic history of Ancient Egypt remains a developing field and is relevant to understanding the population demographic events connecting Africa and Eurasia. To date, the number of genome-wide aDNA analyses of ancient specimens from Egypt and Sudan remains small, although studies of uniparental haplogroups in ancient individuals have been carried out several times, pointing broadly to affinities with other African and Eurasian groups.
The most advanced full-genome analysis to date was published in 2025 in the scientific journal Nature: a whole-genome genetic study of a relatively high-status Old Kingdom adult male Egyptian, codenamed "Old Kingdom individual (NUE001)", who was radiocarbon-dated to 2855–2570 BC, with funerary practices archeologically attributed to the Third and Fourth Dynasties, excavated at Nuwayrat (Nuerat, نويرات), in a cliff 265 km south of Cairo. Before this study, whole-genome sequencing of ancient Egyptians from the early periods of Egyptian Dynastic history had not been accomplished, mainly because of the problematic DNA preservation conditions in Egypt. The corpse had been placed intact in a large circular clay pot without embalming, and then installed inside a cliff tomb, which accounts for the comparatively good preservation of the skeleton and its DNA. Most of his genome was found to be associated with North African Neolithic ancestry, but about 20% of his genetic ancestry could be sourced to the eastern Fertile Crescent, including Mesopotamia. Overall, the 2025 study "provides direct evidence of genetic ancestry related to the eastern Fertile Crescent in ancient Egypt". This genetic connection suggests that there had been ancient migration flows from the eastern Fertile Crescent to Egypt, in addition to the exchanges of objects and imagery (domesticated animals and plants, writing systems, and so on) already observed, and points to a pattern of wide cultural and demographic expansion from the Mesopotamian region that affected both Anatolia and Egypt during this period. The authors acknowledged limitations of the study, such as the results deriving from a single Egyptian genome and the known difficulty of predicting specific phenotypic traits in understudied populations (Morez Jacobs, Supplementary Table S10, predicted phenotypes from the HirisPlexS system).
Earlier partial genomic analyses had been carried out on much later specimens recovered from Abusir el-Meleq in the Nile River Valley, Egypt, dating from 787 BC to AD 23. Two of the individuals were dated to the pre-Ptolemaic period (New Kingdom to Late Period), and one individual to the Ptolemaic Period. These results point to a genetic continuity between ancient and modern Egyptians. The results further point to a close genetic affinity between ancient Egyptians and Middle Eastern populations, especially ancient groups from the Levant.
Ancient Egyptians also displayed affinities with Nubians to the south of Egypt, in modern-day Sudan. Archaeological and historical evidence supports interactions between Egyptian and Nubian populations more than 5,000 years ago, with socio-political dynamics between Egyptians and Nubians ranging from peaceful coexistence to variably successful attempts at conquest. A study of sixty-six ancient Nubian individuals revealed significant contact with ancient Egyptians, characterized by the presence of a substantial Neolithic/Bronze Age Levantine ancestry component in these individuals. Such gene flow of Levantine-like ancestry corresponds with archaeological and botanical evidence pointing to a Neolithic movement around 7,000 years ago.
Modern Egyptians, like modern Nubians, also underwent subsequent admixture events since the Roman period, contributing both Sub-Saharan African-like and West Asian-like ancestries, notably in connection with the African slave trade and the spread of Islam.
Genetic analysis of a modern Upper Egyptian population at Adaima by Eric Crubézy identified genetic markers common across Africa, with 71% of the Adaima samples carrying the E1b1 haplogroup and 3% carrying the L0f mitochondrial haplogroup. A secondary review, published in 2025 in the UNESCO General History of Africa Volume IX, noted that the results were preliminary and need to be confirmed by other laboratories using new sequencing methods. This was supported by an anthropological study that found a notable presence of dental markers characteristic of Khoisan people in a predynastic-era cemetery at Adaïma. The genetic marker E1b1 was identified in a number of genetic studies as having a wide distribution across Egypt, with "P2/215/M35.1 (E1b1b), for short M35, likely also originated in eastern tropical Africa, and is predominantly distributed in an arc from the Horn of Africa up through Egypt". Multiple STR analyses of the Amarna royal mummies (including Rameses III, Tutankhamun and Amenhotep III), deployed to estimate their ethnicity, found strong affinities with modern Sub-Saharan populations. Nonetheless, these forms of analysis were not exhaustive, as only 8 of the 13 CODIS markers were used.
Some scholars, such as Christopher Ehret, caution that a wider sampling area is needed and argue that the current data is inconclusive on the origin of ancient Egyptians. They also point out issues with the previously used methodology, such as the sampling size, comparative approach, and a "biased interpretation" of the genetic data. They argue in favor of a link between Ancient Egypt and the northern Horn of Africa. This latter view has been attributed to the corresponding archaeological, genetic, linguistic and biological anthropological sources of evidence, which broadly indicate that the earliest Egyptians and Nubians were the descendants of populations in northeast Africa.
Legacy
The culture and monuments of ancient Egypt have left a lasting legacy on the world. Egyptian civilization significantly influenced the Kingdom of Kush and Meroë, both of which adopted Egyptian religious and architectural norms (hundreds of pyramids, 6–30 meters high, were built in Egypt and Sudan) and used Egyptian writing as the basis of the Meroitic script. Meroitic is the oldest written language in Africa, other than Egyptian, and was used from the 2nd century BC until the early 5th century AD. The cult of the goddess Isis, for example, became popular in the Roman Empire, as obelisks and other relics were transported back to Rome. The Romans also imported building materials from Egypt to erect Egyptian-style structures. Early historians such as Herodotus, Strabo, and Diodorus Siculus studied and wrote about the land, which Romans came to view as a place of mystery.
During the Middle Ages and the Renaissance, Egyptian pagan culture was in decline after the rise of Christianity and later Islam, but interest in Egyptian antiquity continued in the writings of medieval scholars such as Dhul-Nun al-Misri and al-Maqrizi. In the seventeenth and eighteenth centuries, European travelers and tourists brought back antiquities and wrote stories of their journeys, leading to a wave of Egyptomania across Europe, as evident in symbolism such as the Eye of Providence and the Great Seal of the United States. This renewed interest sent collectors to Egypt, who took, purchased, or were given many important antiquities. Napoleon arranged the first studies in Egyptology when he brought some 150 scientists and artists to study and document Egypt's natural history, which was published in the Description de l'Égypte.
In the 20th century, the Egyptian Government and archaeologists alike recognized the importance of cultural respect and integrity in excavations. Since the 2010s, the Ministry of Tourism and Antiquities has overseen excavations and the recovery of artifacts.
See also
Dugurasu
Egyptology
Glossary of ancient Egypt artifacts
Index of ancient Egypt–related articles
Outline of ancient Egypt
List of ancient Egyptians
List of Ancient Egyptian inventions and discoveries
Archaeology of Ancient Egypt
Archeological Map of Egypt
British school of diffusionism
Notes
References
Citations
Works cited
Further reading
External links
BBC History: Egyptians provides a reliable general overview and further links
Ancient Egyptian Science: A Source Book by Marshall Clagett, 1989
Napoleon on the Nile: Soldiers, Artists, and the Rediscovery of Egypt, Art History.
Digital Egypt for Universities. Scholarly treatment with broad coverage and cross references (internal and external). Artifacts used extensively to illustrate topics.
Priests of Ancient Egypt In-depth information about Ancient Egypt's priests, religious services and temples. Much picture material and bibliography. In English and German.
UCLA Encyclopedia of Egyptology
Ancient Egypt and the Role of Women by Joann Fletcher
Achill Island
https://en.wikipedia.org/wiki/Achill_Island
Achill Island (; ) is located off the west coast of Ireland in the historical barony of Burrishoole, County Mayo. It is the largest of the Irish isles and has an area of approximately . Achill had a population of 2,345 in the 2022 census. The island, which has been connected to the mainland by a bridge since 1887, is served by Michael Davitt Bridge, between the villages of Achill Sound and Polranny. Other centres of population include the villages of Keel, Dooagh, Dooega, Dooniver, and Dugort. There are a number of peat bogs on the island.
Roughly half of the island, including the villages of Achill Sound and Bun an Churraigh, is in the Gaeltacht (traditional Irish-speaking region) of Ireland, although the vast majority of the island's population speaks English as their daily language.
The island is within a civil parish, also called Achill, that includes Achillbeg, Inishbiggle and the Corraun Peninsula.
History
It is believed that at the end of the Neolithic Period (around 4000 BC), Achill had a population of 500–1,000 people. The island was mostly forest until the Neolithic people began crop cultivation. Settlement increased during the Iron Age, and the dispersal of small promontory forts around the coast indicates the warlike nature of the times. Megalithic tombs and forts can be seen at Slievemore, along the Atlantic Drive and on Achillbeg.
Overlords
Achill Island lies in the historical barony of Burrishoole, in the territory of ancient Umhall (Umhall Uactarach and Umhall Ioctarach), that originally encompassed an area extending from the County Galway/Mayo border to Achill Head.
The hereditary chieftains of Umhall were the O'Malleys, recorded in the area in 814 AD when they successfully repelled an incursion by Viking attackers in Clew Bay. The Anglo-Norman invasion of Connacht in 1235 AD saw the territory of Umhall taken over by the Butlers and later by the de Burgos. The Butler Lordship of Burrishoole continued into the late 14th century when Thomas le Botiller was recorded as being in possession of Akkyll and Owyll.
Immigration
In the 17th and 18th centuries, there was migration to Achill from other parts of Ireland, including from Ulster, due to the political and religious turmoil of the time. For a period, there were two different dialects of Irish being spoken on Achill. This led to several townlands being recorded as having two names during the 1824 Ordnance Survey, and some maps today give different names for the same place. Achill Irish has been described as having an Ulster Irish superstratum on top of a northern Connacht Irish substratum. In the 19th and early 20th centuries, seasonal migration of farm workers to East Lothian to pick potatoes took place; these groups of 'tattie howkers' were known as Achill workers, although not all were from Achill, and were organised for potato merchants by gaffers or gangers. Groups travelled from farm to farm to harvest the crop and were allocated basic accommodation. On 15 September 1937, ten young migrant potato pickers from Achill died in a fire at Kirkintilloch in Scotland.
Achill was connected to the mainland by Michael Davitt Bridge, a bridge connecting Achill Sound and Polranny, in 1887.
Specific historical sites and events
Grace O'Malley's Castle
Carrickkildavnet Castle is a 15th-century tower house associated with the O'Malley Clan, who were once a ruling family of Achill. Grace O' Malley, or Granuaile, the most famous of the O'Malleys, was born on Clare Island around 1530. Her father was the chieftain of the barony of Murrisk. The O'Malleys were a powerful seafaring family, who traded widely. Grace became a fearless leader and gained fame as a sea captain and pirate. She is reputed to have met Queen Elizabeth I in 1593. She died around 1603 and is buried in the O'Malley family tomb on Clare Island.
Achill Mission
The Achill Mission, also known as 'the Colony' at Dugort, was founded in 1831 by the Anglican (Church of Ireland) Rev Edward Nangle. The mission included schools, cottages, an orphanage, an infirmary and a guesthouse.
The Colony gave rise to mixed assessments, particularly during the Great Famine when charges of "souperism" were leveled against Nangle. The provision of food across the Achill Mission schools - which also provided 'scriptural' religious instruction - was particularly controversial.
For almost forty years, Nangle edited a newspaper called the Achill Missionary Herald and Western Witness, which was printed in Achill. He expanded his mission into Mweelin, Kilgeever, West Achill where a school, church, rectory, cottages and a training school were built. Edward's wife, Eliza, suffered poor health in Achill and died in 1852; she is buried with six of the Nangle children on the slopes of Slievemore in North Achill.
In 1848, at the height of the Great Famine, the Achill Mission published a prospectus seeking to raise funds for the acquisition of significant additional lands from Sir Richard O'Donnell. The document gives an overview, from the Mission's perspective, of its activities in Achill over the previous decade and a half including considerable sectarian unrest. In 1851, Edward Nangle confirmed the purchase of the land which made the Achill Mission the largest landowner on the island.
The Achill Mission began to decline slowly after Nangle was moved from Achill and it closed in the 1880s. When Edward Nangle died in 1883 there were opposing views on his legacy.
Railway
In 1894, the Westport – Newport railway line was extended to Achill Sound. The railway station is now a hostel. The train provided a great service to Achill, but it also is said to have fulfilled an ancient prophecy. Brian Rua O' Cearbhain had prophesied that 'carts on iron wheels' would carry bodies into Achill on their first and last journey. In 1894, the first train on the Achill railway carried the bodies of victims of the Clew Bay Drowning. This tragedy occurred when a boat overturned in Clew Bay, drowning thirty-two young people. They had been going to meet the steamer SS Elm which would take them to Britain for potato picking.
The Kirkintilloch Fire in 1937 almost fulfilled the second part of the prophecy when the bodies of ten victims were carried by rail to Achill. While it was not literally the last train, the railway closed just two weeks later. These people had died in a fire in a bothy in Kirkintilloch. This term referred to the temporary accommodation provided for those who went to Scotland to pick potatoes, a migratory pattern that had been established in the early nineteenth century.
Kildamhnait
Kildamhnait on the south-east coast of Achill is named after St. Damhnait, or Dymphna, who founded a church there in the 7th century. There is also a holy well just outside the graveyard. The present church was built in the 1700s and the graveyard contains memorials to the victims of two of Achill's greatest tragedies, the Kirkintilloch Fire (1937) and the Clew Bay Drowning (1894).
The Monastery
In 1852, John MacHale, Roman Catholic Archbishop of Tuam, purchased land in Bunnacurry, on which a Franciscan Monastery was established, which, for many years, provided an education for local children. The building of the monastery was marked by a conflict between the Protestants of the mission colony and the workers building the monastery. The dispute is known in the island folklore as the Battle of the Stones.
A monk who lived at the monastery for almost thirty years was Paul Carney. He wrote a biography of James Lynchehaun who was convicted for the 1894 attack on an Englishwoman named Agnes MacDonnell, which left her face disfigured, and the burning of her home, Valley House, Tonatanvally, North Achill. The home was rebuilt and MacDonnell died there in 1923, while Lynchehaun escaped to the US after serving 7 years and successfully resisted extradition but spent his last years in Scotland, where he died. Carney's great-grandniece, Patricia Byrne, wrote her own account of Mrs MacDonnell and Lynchehaun, entitled The Veiled Woman of Achill."Assault on Achill" , irishtimes.com. Accessed 27 October 2022.
Carney also wrote accounts of his lengthy fundraising trips across the U.S. at the start of the 20th century. The ruins of this monastery are still to be seen in Bunnacurry today.
Valley House
The historic Valley House is in Tonatanvally, "The Valley", near Dugort, in the northeast of Achill Island. The present building sits on the site of a hunting lodge built by the Earl of Cavan in the 19th century. Its notoriety arises from an incident in 1894 in which the then owner, an Englishwoman, Agnes McDonnell, was savagely beaten and the house set alight by a local man, James Lynchehaun. Lynchehaun had been employed by McDonnell as her land agent, but the two fell out and he was sacked and told to quit his accommodation on her estate. A lengthy legal battle ensued, with Lynchehaun refusing to leave. At the time, the issue of land ownership in Ireland was politically charged. After the events at the Valley House in 1895, Lynchehaun falsely claimed his actions were carried out on behalf of the Irish Republican Brotherhood. He escaped from custody after serving seven years and fled to the United States seeking political asylum (although Michael Davitt refused to shake his hand, calling Lynchehaun a "murderer"), where he successfully defeated legal attempts by the British authorities to have him extradited to face charges arising from the attack and the burning of the Valley House. McDonnell suffered severe injuries from the attack but survived and lived for another 23 years, dying in 1923. Lynchehaun is said to have returned to Achill on two occasions, once in disguise as an American tourist, and died in Girvan, Scotland, in 1937. The Valley House is now a hostel and bar.
Deserted Village
Close to Dugort, at the base of Slievemore mountain lies the Deserted Village. There are between 80 and 100 ruined houses in the village. The houses were built of unmortared stone. Each house consisted of just one room. In the area surrounding the Deserted Village, including on the mountain slopes, there is evidence of 'lazy beds' in which crops like potatoes were grown. In Achill, as in other areas of Ireland, a 'rundale' system was used for farming. This meant that the land around a village was rented from a landlord. This land was then shared by all the villagers to graze their cattle and sheep. Each family would then have two or three small pieces of land scattered about the village, which they used to grow crops. For many years people lived in the village and then in 1845 famine struck in Achill as it did in the rest of Ireland. Most of the families moved to the nearby village of Dooagh, which is beside the sea, while others emigrated. Living beside the sea meant that fish and shellfish could be used for food. The village was completely abandoned and is now known as the 'Deserted Village'.
While abandoned, the families that moved to Dooagh (and their descendants) continued to use the village as a 'booley village'.Deserted village, Slievemore, Achill Island, achill247.com Retrieved on 17 February 2008. This means that during the summer season, the younger members of the family, teenage boys and girls, would take the livestock to the area and tend flocks or herds on the hillside and stay in the houses of the Deserted Village. They would then return to Dooagh in the autumn. This custom continued until the 1940s. Boolying was also carried out in other areas of Achill, including Annagh on Croaghaun mountain and in Curraun. At Ailt, Kildownet, the remains of a similar deserted village can be found. This village was deserted in 1855 when the tenants were evicted by the local landlord so the land could be used for cattle grazing; the tenants were forced to rent holdings in Currane, Dooega and Slievemore. Others emigrated to America.
Archaeology
In 2009, a summer field school excavated Round House 2 on Slievemore Mountain under the direction of archaeologist Stuart Rathbone. Only the outside north wall, entrance way and inside of the Round House were completely excavated (Amanda Burt, member of Achill Field School, Summer 2009).
From 2004 to 2006, the Achill Island Maritime Archaeology Project directed by Chuck Meide was sponsored by the College of William and Mary, the Institute of Maritime History, the Achill Folklife Centre (now the Achill Archaeology Centre), and the Lighthouse Archaeological Maritime Program (LAMP). This project focused on the documentation of archaeological resources related to Achill's rich maritime heritage. Maritime archaeologists recorded a 19th-century fishing station, an ice house, boat house ruins, a number of anchors which had been salvaged from the sea, 19th-century and more recent currach pens, a number of traditional vernacular watercraft including a possibly 100-year-old Achill yawl, and the remains of four historic shipwrecks.
Other places of interest
The cliffs of Croaghaun on the western end of the island are the third highest sea cliffs in Europe but are inaccessible by road. Near the westernmost point of Achill, Achill Head, is Keem Bay. Keel Beach is visited by tourists and used as a surfing location. South of Keem beach is Moytoge Head, which with its rounded appearance drops dramatically down to the ocean. An old British observation post, built during World War I to prevent the Germans from landing arms for the Irish Republican Army, still stands on Moytoge. During the Emergency (WWII), this post was rebuilt by the Irish Defence Forces as a lookout post for the Coast Watching Service wing of the Defence Forces. It operated from 1939 to 1945.See Michael Kennedy, Guarding Neutral Ireland (Dublin, 2008), p. 50
The mountain of Slievemore (672 m) rises dramatically in the north of the island. On its slopes is an abandoned village, the "Deserted Village". West of this ruined village is an old Martello tower, again built by the British to warn of any possible French invasion during the Napoleonic Wars. The area also has an approximately 5,000-year-old Neolithic tomb.
Achillbeg (, Little Achill) is a small island just off Achill's southern tip. Its inhabitants were resettled on Achill in the 1960s.Jonathan Beaumont (2005), Achillbeg: The Life of an Island; A plaque to the boxer Johnny Kilbane is situated on Achillbeg and was erected to celebrate 100 years since his first championship win.
Caisleán Ghráinne, also known as Kildownet Castle, is a small tower house built in the early 1400s. It is located in Cloughmore, on the south of Achill Island. It is noted for its associations with Grace O'Malley, along with the larger Rockfleet Castle in Newport.
Economy and tourism
While a number of attempts at setting up small industrial units on the island have been made, its economy is largely dependent on tourism. Subventions from Achill people working abroad allowed a number of families to remain living in Achill throughout the 19th and 20th centuries. In the past, fishing was a significant activity but this aspect of the economy has since reduced. At one stage, the island was known for its shark fishing, and basking shark in particular was fished for its valuable shark liver oil.
During the 1960s and 1970s, there was growth in tourism. The largest employers on Achill include its two hotels. The island has several bars, cafes and restaurants. The island's Atlantic location means that seafood, including lobster, mussels, salmon, trout and winkles, are common. Lamb and beef are also popular.
Religion
Most people on Achill are either Roman Catholic or Anglican (Church of Ireland).
Catholic churches on the island include: Bunnacurry Church (Saint Josephs), The Valley Church (only open for certain events), Pollagh Church, Dooega Church and Achill Sound Church.
There is a Church of Ireland church (St. Thomas's church) at Dugort.
The House of Prayer, a controversial "religious retreat" on the island, was established in 1993.
Artists
For almost two centuries, a number of artists have had a close relationship with Achill Island, including the landscape painter Paul Henry. Within the emerging Irish Free State, Paul Henry's landscapes from Achill and other areas reinforced a vision of Ireland of communities living in harmony with the land. He lived in Achill for almost a decade with his wife, artist Grace Henry and, while using similar subject-matter, the pair developed very different styles.
This relationship of artists with Achill was particularly intense in the early decades of the twentieth century when Eva O'Flaherty (1874–1963) became a focal point for artistic networking on the island. A network of over 200 artists linked to Achill is charted in "Achill Painters - An Island History" and includes painters such as the Belgian Marie Howet, the American Robert Henri, the modernist painter Mainie Jellett and contemporary artist Camille Souter.
The 2018 Coming Home Art & The Great Hunger exhibition, in partnership with The Great Hunger Museum of Quinnipiac University, USA, featured Achill's Deserted Village and the island lazy beds prominently in works by Geraldine O'Reilly and Alanna O'Kelly; also included was an 1873 painting, 'Cottage, Achill Island' by Alexander Williams – one of the first artists to open up the island to a wider audience.
Education
Following the 1695 introduction of Penal Laws that prohibited Catholics and Presbyterians in Ireland from practising their religion, unofficial or secret hedge schools were active in Achill during the 17th to 19th centuries.
At the turn of the 21st century, nine national schools operated on the island; as of 2022, there were six. The island also had two secondary schools, McHale College and Scoil Damhnait, which amalgamated in 2011 to form Coláiste Pobail Acla.
Transport
Rail
Achill railway station, located on the mainland rather than on the island itself, was opened by the Midland Great Western Railway on 13 May 1895 as the terminus of its line from Westport via Newport and Mulranny. The station, and the line, were closed by the Great Southern Railways on 1 October 1937. The Great Western Greenway, created during 2010 and 2011, follows the line's route and has proved to be very successful in attracting visitors to Achill and the surrounding areas.
Road
The R319 road is the main road onto the island.
Bus Éireann's route 450 operates several times daily to Westport and Louisburgh from the island. Bus Éireann also provides transport for the area's secondary school children.
Sport
Achill has a Gaelic football club which competes in the junior championship and division 1E of the Mayo League. There are also Achill Rovers which play in the Mayo Association Football League.
There is a 9-hole links golf course on the island. Outdoor activities can be done through Achill Outdoor Education Centre. Achill Island's rugged landscape and the surrounding ocean offers multiple locations for outdoor adventure activities, like surfing, kite-surfing and sea kayaking. Fishing and watersports are also common. Sailing regattas featuring a local vessel type, the Achill Yawl, have been run since the 19th century.
Demographics
In 2016, the population was 2,594, with 5.2% claiming they spoke Irish on a daily basis outside the education system. The island's population has declined from around 6,000 before the Great Famine of the mid-19th century.
Population data for Achill Island is reported in Discover the Islands of Ireland (Alex Ritsema, Collins Press, 1999) and in the census of Ireland.
Notable people
Heinrich Böll, German writer who spent several summers with his family and later lived several months per year on the island
Charles Boycott (1832–1897), unpopular landowner from whom the term boycott arose
Nancy Corrigan, pioneer aviator, second female commercial pilot in the US.
Dermot Freyer (1883–1970), writer who opened a hotel on the island
Graham Greene, author, who stayed on the island several times in the late 1940s and wrote parts of several novels there
Paul Henry, artist, stayed on the island for a number of years in the early 1900s
James Kilbane, singer, lives on the island
Johnny Kilbane, boxer
Saoirse McHugh, former Green Party politician
Danny McNamara, musician
Richard McNamara, musician
Eva O'Flaherty, Nationalist, model and milliner
Manus Patten, recipient of the Scott Medal
Thomas Patten, from Dooega. Died during the Siege of Madrid in December 1936
Honor Tracy, author, lived there until her death in 1989
In popular culture
The island is featured throughout the film The Banshees of Inisherin, with scenes shot at various locations including Keem Bay, Cloughmore, and Purteen Pier.
The island is also the primary setting of the visual novel If Found....
Further reading
Heinrich Böll: Irisches Tagebuch, Berlin, 1957
Bob Kingston The Deserted Village at Slievemore, Castlebar, 1990
Theresa McDonald: Achill: 5000 B.C. to 1900 A.D.: Archeology History Folklore, I.A.S. Publications [1992]
Rosa Meehan: The Story of Mayo, Castlebar, 2003
James Carney: The Playboy & the Yellow lady, 1986 Poolbeg
Hugo Hamilton: The Island of Talking, 2007
Mealla Nī Ghiobúin: Dugort, Achill Island 1831–1861: The Rise and Fall of a Missionary Community, 2001
Patricia Byrne: The Veiled Woman of Achill – Island Outrage & A Playboy Drama, 2012
Mary J. Murphy: Achill's Eva O'Flaherty – Forgotten Island Heroine, 2011
Patricia Byrne: The Preacher and The Prelate – The Achill Mission Colony and The Battle for Souls in Famine Ireland, 2018
Mary J. Murphy, Achill Painters – An Island History, 2020
See also
List of islands of County Mayo
References
External links
Colaiste Pobail Acla students project on the Achill area
Achill Island Maritime Archaeology Project
VisitAchill multilingual visitor's site
Atomic orbital
https://en.wikipedia.org/wiki/Atomic_orbital
In quantum mechanics, an atomic orbital is a function describing the location and wave-like behavior of an electron in an atom. This function describes an electron's charge distribution around the atom's nucleus, and can be used to calculate the probability of finding an electron in a specific region around the nucleus.
Each orbital in an atom is characterized by a set of values of three quantum numbers n, ℓ, and mℓ, which respectively correspond to an electron's energy, its orbital angular momentum, and its orbital angular momentum projected along a chosen axis (magnetic quantum number). The orbitals with a well-defined magnetic quantum number are generally complex-valued. Real-valued orbitals can be formed as linear combinations of the +mℓ and −mℓ orbitals, and are often labeled using the associated harmonic polynomials (e.g., xy, x2−y2) which describe their angular structure.
An orbital can be occupied by a maximum of two electrons, each with its own projection of spin ms. The simple names s orbital, p orbital, d orbital, and f orbital refer to orbitals with angular momentum quantum number ℓ = 0, 1, 2, and 3 respectively. These names, together with their n values, are used to describe the electron configurations of atoms. They are derived from the description by early spectroscopists of certain series of alkali metal spectroscopic lines as sharp, principal, diffuse, and fundamental. Orbitals for ℓ > 3 continue alphabetically (g, h, i, k, ...), omitting j because some languages do not distinguish between the letters "i" and "j".
Atomic orbitals are basic building blocks of the atomic orbital model (or electron cloud or wave mechanics model), a modern framework for visualizing submicroscopic behavior of electrons in matter. In this model, the electron cloud of an atom may be seen as being built up (in approximation) in an electron configuration that is a product of simpler hydrogen-like atomic orbitals. The repeating periodicity of blocks of 2, 6, 10, and 14 elements within sections of the periodic table arises naturally from the total number of electrons that occupy a complete set of s, p, d, and f orbitals, respectively, though for higher values of the quantum number n, particularly when the atom bears a positive charge, energies of certain sub-shells become very similar and therefore, the order in which they are said to be populated by electrons (e.g., Cr = [Ar]4s13d5 and Cr2+ = [Ar]3d4) can be rationalized only somewhat arbitrarily.
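As a concrete illustration of these block widths, a minimal sketch (not taken from the article; the helper name subshell_capacity is illustrative) computes the 2(2ℓ + 1) electrons that a filled s, p, d or f subshell holds:

```python
# Illustrative sketch: block widths 2, 6, 10, 14 follow from 2*(2*l + 1),
# i.e. (2l + 1) orbitals per subshell, each holding two electrons.
SUBSHELL_LETTERS = "spdfghik"  # 'j' is conventionally skipped after 'i'

def subshell_capacity(l: int) -> int:
    """Maximum number of electrons in a subshell with azimuthal quantum number l."""
    return 2 * (2 * l + 1)

if __name__ == "__main__":
    for l in range(4):
        letter = SUBSHELL_LETTERS[l]
        print(f"{letter} subshell (l={l}): {subshell_capacity(l)} electrons")
    # -> s: 2, p: 6, d: 10, f: 14, matching the periodic-table block widths
```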
Electron properties
With the development of quantum mechanics and experimental findings (such as the two slit diffraction of electrons), it was found that the electrons orbiting a nucleus could not be fully described as particles, but needed to be explained by wave–particle duality. In this sense, electrons have the following properties:
Wave-like properties:
Electrons do not orbit a nucleus in the manner of a planet orbiting a star, but instead exist as standing waves. Thus the lowest possible energy an electron can take is similar to the fundamental frequency of a wave on a string. Higher energy states are similar to harmonics of that fundamental frequency.
The electrons are never in a single point location, though the probability of interacting with the electron at a single point can be found from the electron's wave function. The electron's charge acts like it is smeared out in space in a continuous distribution, proportional at any point to the squared magnitude of the electron's wave function.
Particle-like properties:
The number of electrons orbiting a nucleus can be only an integer.
Electrons jump between orbitals like particles. For example, if one photon strikes the electrons, only one electron changes state as a result.
Electrons retain particle-like properties such as: each wave state has the same electric charge as its electron particle. Each wave state has a single discrete spin (spin up or spin down) depending on its superposition.
Thus, electrons cannot be described simply as solid particles. An analogy might be that of a large and often oddly shaped "atmosphere" (the electron), distributed around a relatively tiny planet (the nucleus). Atomic orbitals exactly describe the shape of this "atmosphere" only when one electron is present. When more electrons are added, the additional electrons tend to more evenly fill in a volume of space around the nucleus so that the resulting collection ("electron cloud") tends toward a generally spherical zone of probability describing the electron's location, because of the uncertainty principle.
One should remember that these orbital 'states', as described here, are merely eigenstates of an electron in its orbit. An actual electron exists in a superposition of states, which is like a weighted average, but with complex number weights. For instance, an electron could be in a pure eigenstate (2, 1, 0), or in a mixed state formed as a superposition of (2, 1, 0) and (2, 1, 1). For each eigenstate, a property has an eigenvalue. Thus, for the states just mentioned, the value of n is 2 and the value of ℓ is 1. For the mixed state, the value of mℓ is a superposition of 0 and 1. As a superposition of states, it is ambiguous (either exactly 0 or exactly 1), not an intermediate or average value such as 1/2. A superposition of the eigenstates (2, 1, 1) and (3, 2, 1) would have an ambiguous n and ℓ, but mℓ would definitely be 1. Eigenstates make it easier to deal with the math. You can choose a different basis of eigenstates by superimposing eigenstates from any other basis (see Real orbitals below).
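A toy sketch of this idea, assuming a state is represented as a mapping from (n, ℓ, m) triples to complex amplitudes (the dictionary layout and function name are illustrative, not any standard API):

```python
# Toy sketch: probabilities of measuring a given quantum number are the
# squared magnitudes of the amplitudes summed over states sharing that value.
import math
from collections import defaultdict

state = {  # equal-weight superposition of (2, 1, 0) and (2, 1, 1)
    (2, 1, 0): 1 / math.sqrt(2),
    (2, 1, 1): 1j / math.sqrt(2),
}

def measurement_probabilities(state, index):
    """Probability distribution of one quantum number (index 0=n, 1=l, 2=m)."""
    probs = defaultdict(float)
    for qnums, amplitude in state.items():
        probs[qnums[index]] += abs(amplitude) ** 2
    return dict(probs)

print(measurement_probabilities(state, 2))  # {0: 0.5, 1: 0.5} -> m is ambiguous
print(measurement_probabilities(state, 0))  # {2: 1.0}         -> n is definitely 2
```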
Formal quantum mechanical definition
Atomic orbitals may be defined more precisely in formal quantum mechanical language. They are approximate solutions to the Schrödinger equation for the electrons bound to the atom by the electric field of the atom's nucleus. Specifically, in quantum mechanics, the state of an atom, i.e., an eigenstate of the atomic Hamiltonian, is approximated by an expansion (see configuration interaction expansion and basis set) into linear combinations of anti-symmetrized products (Slater determinants) of one-electron functions. The spatial components of these one-electron functions are called atomic orbitals. (When one considers also their spin component, one speaks of atomic spin orbitals.) A state is actually a function of the coordinates of all the electrons, so that their motion is correlated, but this is often approximated by this independent-particle model of products of single electron wave functions.Roger Penrose, The Road to Reality. (The London dispersion force, for example, depends on the correlations of the motion of the electrons.)
In atomic physics, the atomic spectral lines correspond to transitions (quantum leaps) between quantum states of an atom. These states are labeled by a set of quantum numbers summarized in the term symbol and usually associated with particular electron configurations, i.e., by occupation schemes of atomic orbitals (for example, 1s2 2s2 2p6 for the ground state of neon; term symbol: 1S0).
This notation means that the corresponding Slater determinants have a clear higher weight in the configuration interaction expansion. The atomic orbital concept is therefore a key concept for visualizing the excitation process associated with a given transition. For example, one can say for a given transition that it corresponds to the excitation of an electron from an occupied orbital to a given unoccupied orbital. Nevertheless, one has to keep in mind that electrons are fermions ruled by the Pauli exclusion principle and cannot be distinguished from each other. Moreover, it sometimes happens that the configuration interaction expansion converges very slowly and that one cannot speak about simple one-determinant wave function at all. This is the case when electron correlation is large.
Fundamentally, an atomic orbital is a one-electron wave function, even though many electrons are not in one-electron atoms, and so the one-electron view is an approximation. When thinking about orbitals, we are often given an orbital visualization heavily influenced by the Hartree–Fock approximation, which is one way to reduce the complexities of molecular orbital theory.
Types of orbital
Atomic orbitals can be the hydrogen-like "orbitals" which are exact solutions to the Schrödinger equation for a hydrogen-like "atom" (i.e., an atom with one electron). Alternatively, atomic orbitals refer to functions that depend on the coordinates of one electron (i.e., orbitals) but are used as starting points for approximating wave functions that depend on the simultaneous coordinates of all the electrons in an atom or molecule. The coordinate systems chosen for orbitals are usually spherical coordinates in atoms and Cartesian in polyatomic molecules. The advantage of spherical coordinates here is that an orbital wave function is a product of three factors, each dependent on a single coordinate: ψ(r, θ, φ) = R(r) Θ(θ) Φ(φ). The angular factors of atomic orbitals generate s, p, d, etc. functions as real combinations of spherical harmonics Yℓm(θ, φ) (where ℓ and m are quantum numbers). There are typically three mathematical forms for the radial functions R(r) which can be chosen as a starting point for the calculation of the properties of atoms and molecules with many electrons:
The hydrogen-like orbitals are derived from the exact solutions of the Schrödinger equation for one electron and a nucleus, for a hydrogen-like atom. The part of the function that depends on the distance r from the nucleus has n − ℓ − 1 radial nodes and decays exponentially, as e−αr.
The Slater-type orbital (STO) is a form without radial nodes but decays from the nucleus as does a hydrogen-like orbital.
The form of the Gaussian type orbital (Gaussians) has no radial nodes and decays as e−αr2.
Although hydrogen-like orbitals are still used as pedagogical tools, the advent of computers has made STOs preferable for atoms and diatomic molecules since combinations of STOs can replace the nodes in hydrogen-like orbitals. Gaussians are typically used in molecules with three or more atoms. Although not as accurate by themselves as STOs, combinations of many Gaussians can attain the accuracy of hydrogen-like orbitals.
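The difference in radial decay between Slater-type and Gaussian-type orbitals can be shown numerically. In the sketch below the exponents zeta and alpha are arbitrary illustrative values, not fitted parameters:

```python
# Sketch: compare the radial decay of a Slater-type orbital, exp(-zeta*r),
# with a Gaussian-type orbital, exp(-alpha*r**2).
import math

def slater_radial(r, zeta=1.0):
    return math.exp(-zeta * r)

def gaussian_radial(r, alpha=0.5):
    return math.exp(-alpha * r * r)

for r in [0.5, 1.0, 2.0, 4.0, 8.0]:
    print(f"r={r:4.1f}  STO={slater_radial(r):.2e}  GTO={gaussian_radial(r):.2e}")
# The Gaussian falls off much faster at large r and lacks the cusp at r = 0,
# which is why several Gaussians are usually combined to mimic a single STO.
```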
History
The term orbital was introduced by Robert S. Mulliken in 1932 as short for one-electron orbital wave function. Niels Bohr explained around 1913 that electrons might revolve around a compact nucleus with definite angular momentum. Bohr's model was an improvement on the 1911 explanations of Ernest Rutherford, that of the electron moving around a nucleus. Japanese physicist Hantaro Nagaoka published an orbit-based hypothesis for electron behavior as early as 1904. These theories were each built upon new observations starting with simple understanding and becoming more correct and complex. Explaining the behavior of these electron "orbits" was one of the driving forces behind the development of quantum mechanics.
Early models
With J. J. Thomson's discovery of the electron in 1897, it became clear that atoms were not the smallest building blocks of nature, but were rather composite particles. The newly discovered structure within atoms tempted many to imagine how the atom's constituent parts might interact with each other. Thomson theorized that multiple electrons revolve in orbit-like rings within a positively charged jelly-like substance, and between the electron's discovery and 1909, this "plum pudding model" was the most widely accepted explanation of atomic structure.
Shortly after Thomson's discovery, Hantaro Nagaoka predicted a different model for electronic structure. Unlike the plum pudding model, the positive charge in Nagaoka's "Saturnian Model" was concentrated into a central core, pulling the electrons into circular orbits reminiscent of Saturn's rings. Few people took notice of Nagaoka's work at the time, and Nagaoka himself recognized a fundamental defect in the theory even at its conception, namely that a classical charged object cannot sustain orbital motion because it is accelerating and therefore loses energy due to electromagnetic radiation. Nevertheless, the Saturnian model turned out to have more in common with modern theory than any of its contemporaries.
Bohr atom
In 1909, Ernest Rutherford discovered that the bulk of the atomic mass was tightly condensed into a nucleus, which was also found to be positively charged. It became clear from his analysis in 1911 that the plum pudding model could not explain atomic structure. In 1913, Rutherford's post-doctoral student, Niels Bohr, proposed a new model of the atom, wherein electrons orbited the nucleus with classical periods, but were permitted to have only discrete values of angular momentum, quantized in units ħ. This constraint automatically allowed only certain electron energies. The Bohr model of the atom fixed the problem of energy loss from radiation from a ground state (by declaring that there was no state below this), and more importantly explained the origin of spectral lines.
After Bohr's use of Einstein's explanation of the photoelectric effect to relate energy levels in atoms with the wavelength of emitted light, the connection between the structure of electrons in atoms and the emission and absorption spectra of atoms became an increasingly useful tool in the understanding of electrons in atoms. The most prominent feature of emission and absorption spectra (known experimentally since the middle of the 19th century), was that these atomic spectra contained discrete lines. The significance of the Bohr model was that it related the lines in emission and absorption spectra to the energy differences between the orbits that electrons could take around an atom. This was, however, not achieved by Bohr through giving the electrons some kind of wave-like properties, since the idea that electrons could behave as matter waves was not suggested until eleven years later. Still, the Bohr model's use of quantized angular momenta and therefore quantized energy levels was a significant step toward the understanding of electrons in atoms, and also a significant step towards the development of quantum mechanics in suggesting that quantized restraints must account for all discontinuous energy levels and spectra in atoms.
With de Broglie's suggestion of the existence of electron matter waves in 1924, and for a short time before the full 1926 Schrödinger equation treatment of hydrogen-like atoms, a Bohr electron "wavelength" could be seen to be a function of its momentum; so a Bohr orbiting electron was seen to orbit in a circle at a multiple of its half-wavelength. The Bohr model for a short time could be seen as a classical model with an additional constraint provided by the 'wavelength' argument. However, this period was immediately superseded by the full three-dimensional wave mechanics of 1926. In our current understanding of physics, the Bohr model is called a semi-classical model because of its quantization of angular momentum, not primarily because of its relationship with electron wavelength, which appeared in hindsight a dozen years after the Bohr model was proposed.
The Bohr model was able to explain the emission and absorption spectra of hydrogen. The energies of electrons in the n = 1, 2, 3, etc. states in the Bohr model match those of current physics. However, this did not explain similarities between different atoms, as expressed by the periodic table, such as the fact that helium (two electrons), neon (10 electrons), and argon (18 electrons) exhibit similar chemical inertness. Modern quantum mechanics explains this in terms of electron shells and subshells which can each hold a number of electrons determined by the Pauli exclusion principle. Thus the n = 1 state can hold one or two electrons, while the n = 2 state can hold up to eight electrons in 2s and 2p subshells. In helium, all n = 1 states are fully occupied; the same is true for n = 1 and n = 2 in neon. In argon, the 3s and 3p subshells are similarly fully occupied by eight electrons; quantum mechanics also allows a 3d subshell but this is at higher energy than the 3s and 3p in argon (contrary to the situation for hydrogen) and remains empty.
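A small sketch of this counting, assuming the simple filling order 1s, then 2s and 2p, then 3s and 3p, with no exceptions, reproduces the noble-gas electron counts mentioned above:

```python
# Minimal sketch: noble-gas electron counts 2, 10, 18 from subshell capacities.
def capacity(l):
    return 2 * (2 * l + 1)  # electrons held by a full subshell

filling_steps = [
    ("He", [(1, 0)]),                                  # 1s
    ("Ne", [(1, 0), (2, 0), (2, 1)]),                  # 1s 2s 2p
    ("Ar", [(1, 0), (2, 0), (2, 1), (3, 0), (3, 1)]),  # 1s 2s 2p 3s 3p
]
for symbol, subshells in filling_steps:
    total = sum(capacity(l) for _, l in subshells)
    print(symbol, total)   # He 2, Ne 10, Ar 18
```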
Modern conceptions and connections to the Heisenberg uncertainty principle
Immediately after Heisenberg discovered his uncertainty principle, Bohr noted that the existence of any sort of wave packet implies uncertainty in the wave frequency and wavelength, since a spread of frequencies is needed to create the packet itself. In quantum mechanics, where all particle momenta are associated with waves, it is the formation of such a wave packet which localizes the wave, and thus the particle, in space. In states where a quantum mechanical particle is bound, it must be localized as a wave packet, and the existence of the packet and its minimum size implies a spread and minimal value in particle wavelength, and thus also momentum and energy. In quantum mechanics, as a particle is localized to a smaller region in space, the associated compressed wave packet requires a larger and larger range of momenta, and thus larger kinetic energy. Thus the binding energy to contain or trap a particle in a smaller region of space increases without bound as the region of space grows smaller. Particles cannot be restricted to a geometric point in space, since this would require infinite particle momentum.
In chemistry, Erwin Schrödinger, Linus Pauling, Mulliken and others noted that the consequence of Heisenberg's relation was that the electron, as a wave packet, could not be considered to have an exact location in its orbital. Max Born suggested that the electron's position needed to be described by a probability distribution which was connected with finding the electron at some point in the wave-function which described its associated wave packet. The new quantum mechanics did not give exact results, but only the probabilities for the occurrence of a variety of possible such results. Heisenberg held that the path of a moving particle has no meaning if we cannot observe it, as we cannot with electrons in an atom.
Orbital names
Orbital notation and subshells
Orbitals have been given names, which are usually given in the form "X type",
where X is the energy level corresponding to the principal quantum number n, and type is a lower-case letter denoting the shape or subshell of the orbital, corresponding to the angular momentum quantum number ℓ.
For example, the orbital 1s (pronounced as the individual numbers and letters: "'one' 'ess'") is the lowest energy level (n = 1) and has an angular momentum quantum number of ℓ = 0, denoted as s. Orbitals with ℓ = 1, 2 and 3 are denoted as p, d and f respectively.
The set of orbitals for a given n and ℓ is called a subshell, denoted by the value of n followed by the subshell letter and a superscript y.
The superscript y shows the number of electrons in the subshell. For example, the notation 2p4 indicates that the 2p subshell of an atom contains 4 electrons. This subshell has 3 orbitals, each with n = 2 and ℓ = 1.
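A hypothetical helper (not part of any standard library) can make the notation concrete by parsing a label such as 2p4 back into its quantum numbers and checking the electron count against the subshell capacity:

```python
# Illustrative parser for subshell labels like "2p4".
import re

LETTER_TO_L = {"s": 0, "p": 1, "d": 2, "f": 3, "g": 4}

def parse_subshell(label: str):
    match = re.fullmatch(r"(\d+)([spdfg])(\d*)", label)
    if not match:
        raise ValueError(f"not a subshell label: {label!r}")
    n = int(match.group(1))
    l = LETTER_TO_L[match.group(2)]
    electrons = int(match.group(3) or "0")
    if l >= n:
        raise ValueError(f"l={l} is not allowed for n={n}")
    if electrons > 2 * (2 * l + 1):
        raise ValueError(f"{label!r} exceeds the subshell capacity")
    return n, l, electrons

print(parse_subshell("2p4"))  # (2, 1, 4): n=2, l=1, four electrons in three orbitals
```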
X-ray notation
There is also another, less common system still used in X-ray science known as X-ray notation, which is a continuation of the notations used before orbital theory was well understood. In this system, the principal quantum number is given a letter associated with it. For n = 1, 2, 3, 4, 5, ..., the letters associated with those numbers are K, L, M, N, O, ... respectively.
Hydrogen-like orbitals
The simplest atomic orbitals are those that are calculated for systems with a single electron, such as the hydrogen atom. An atom of any other element ionized down to a single electron (He+, Li2+, etc.) is very similar to hydrogen, and the orbitals take the same form. In the Schrödinger equation for this system of one negative and one positive particle, the atomic orbitals are the eigenstates of the Hamiltonian operator for the energy. They can be obtained analytically, meaning that the resulting orbitals are products of a polynomial series, and exponential and trigonometric functions. (see hydrogen atom).
For atoms with two or more electrons, the governing equations can be solved only with the use of methods of iterative approximation. Orbitals of multi-electron atoms are qualitatively similar to those of hydrogen, and in the simplest models, they are taken to have the same form. For more rigorous and precise analysis, numerical approximations must be used.
A given (hydrogen-like) atomic orbital is identified by unique values of three quantum numbers: , , and . The rules restricting the values of the quantum numbers, and their energies (see below), explain the electron configuration of the atoms and the periodic table.
The stationary states (quantum states) of a hydrogen-like atom are its atomic orbitals. However, in general, an electron's behavior is not fully described by a single orbital. Electron states are best represented by time-dependent "mixtures" (linear combinations) of multiple orbitals. See Linear combination of atomic orbitals molecular orbital method.
The quantum number n first appeared in the Bohr model, where it determines the radius of each circular electron orbit. In modern quantum mechanics, however, n determines the mean distance of the electron from the nucleus; all electrons with the same value of n lie at the same average distance. For this reason, orbitals with the same value of n are said to comprise a "shell". Orbitals with the same value of n and also the same value of ℓ are even more closely related, and are said to comprise a "subshell".
Quantum numbers
Because of the quantum mechanical nature of the electrons around a nucleus, atomic orbitals can be uniquely defined by a set of integers known as quantum numbers. These quantum numbers occur only in certain combinations of values, and their physical interpretation changes depending on whether real or complex versions of the atomic orbitals are employed.
Complex orbitals
In physics, the most common orbital descriptions are based on the solutions to the hydrogen atom, where orbitals are given by the product between a radial function and a pure spherical harmonic. The quantum numbers, together with the rules governing their possible values, are as follows:
The principal quantum number n describes the energy of the electron and is always a positive integer. In fact, it can be any positive integer, but for reasons discussed below, large numbers are seldom encountered. Each atom has, in general, many orbitals associated with each value of n; these orbitals together are sometimes called electron shells.
The azimuthal quantum number ℓ describes the orbital angular momentum of each electron and is a non-negative integer. Within a shell where n is some integer n0, ℓ ranges across all (integer) values satisfying the relation 0 ≤ ℓ ≤ n0 − 1. For instance, the n = 1 shell has only orbitals with ℓ = 0, and the n = 2 shell has only orbitals with ℓ = 0 and ℓ = 1. The set of orbitals associated with a particular value of ℓ are sometimes collectively called a subshell.
The magnetic quantum number, mℓ, describes the projection of the orbital angular momentum along a chosen axis. It determines the magnitude of the current circulating around that axis and the orbital contribution to the magnetic moment of an electron via the Ampèrian loop model. Within a subshell with azimuthal quantum number ℓ, mℓ obtains the integer values in the range −ℓ ≤ mℓ ≤ ℓ.
The above results may be summarized as follows. Each entry represents a subshell and lists the values of mℓ available in that subshell; subshells not listed for a given n do not exist.
n = 1: ℓ = 0 (mℓ = 0)
n = 2: ℓ = 0 (0); ℓ = 1 (−1, 0, 1)
n = 3: ℓ = 0 (0); ℓ = 1 (−1, 0, 1); ℓ = 2 (−2, −1, 0, 1, 2)
n = 4: ℓ = 0 (0); ℓ = 1 (−1, 0, 1); ℓ = 2 (−2, −1, 0, 1, 2); ℓ = 3 (−3, −2, −1, 0, 1, 2, 3)
n = 5: ℓ = 0 (0); ℓ = 1 (−1, 0, 1); ℓ = 2 (−2, −1, 0, 1, 2); ℓ = 3 (−3, −2, −1, 0, 1, 2, 3); ℓ = 4 (−4, −3, −2, −1, 0, 1, 2, 3, 4)
and so on for higher n.
Subshells are usually identified by their n- and ℓ-values. n is represented by its numerical value, but ℓ is represented by a letter as follows: 0 is represented by 's', 1 by 'p', 2 by 'd', 3 by 'f', and 4 by 'g'. For instance, one may speak of the subshell with n = 2 and ℓ = 0 as a '2s subshell'.
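The counting rules above can be summarized in a short sketch; the function and variable names are illustrative only:

```python
# Sketch of the rules: for each shell n, l runs from 0 to n-1 and each
# subshell has m = -l .. +l.
LETTERS = "spdfg"

def subshells(n_max: int):
    for n in range(1, n_max + 1):
        for l in range(n):                      # 0 <= l <= n - 1
            m_values = list(range(-l, l + 1))   # -l <= m <= +l
            yield n, l, f"{n}{LETTERS[l]}", m_values

for n, l, label, m_values in subshells(3):
    print(f"{label}: l={l}, m in {m_values}")
# 1s: l=0, m in [0]; 2s, 2p, 3s, 3p, 3d follow, reproducing the list above.
```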
Each electron also has angular momentum in the form of quantum mechanical spin, with spin quantum number s = 1/2. Its projection along a specified axis is given by the spin magnetic quantum number, ms, which can be +1/2 or −1/2. These values are also called "spin up" or "spin down" respectively.
The Pauli exclusion principle states that no two electrons in an atom can have the same values of all four quantum numbers. If there are two electrons in an orbital with given values for three quantum numbers (n, ℓ, mℓ), these two electrons must differ in their spin projection ms.
The above conventions imply a preferred axis (for example, the z direction in Cartesian coordinates), and they also imply a preferred direction along this preferred axis. Otherwise there would be no sense in distinguishing mℓ = +1 from mℓ = −1. As such, the model is most useful when applied to physical systems that share these symmetries. The Stern–Gerlach experiment, where an atom is exposed to a magnetic field, provides one such example.
Real orbitals
Instead of the complex orbitals described above, it is common, especially in the chemistry literature, to use real atomic orbitals. These real orbitals arise from simple linear combinations of complex orbitals. Using the Condon–Shortley phase convention, real orbitals are related to complex orbitals in the same way that the real spherical harmonics are related to complex spherical harmonics. Letting ψn,ℓ,m denote a complex orbital with quantum numbers n, ℓ, and m, the real orbitals may be defined by
ψn,ℓ,m(real) = √2 (−1)m Im[ψn,ℓ,|m|] for m < 0, ψn,ℓ,0 for m = 0, and √2 (−1)m Re[ψn,ℓ,|m|] for m > 0.
If ψn,ℓ,m = Rnℓ(r) Yℓm(θ, φ), with Rnℓ the radial part of the orbital, this definition is equivalent to ψn,ℓ,m(real) = Rnℓ(r) Yℓm(real)(θ, φ), where Yℓm(real) is the real spherical harmonic related to either the real or imaginary part of the complex spherical harmonic Yℓm.
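As a numerical check of this relationship (a sketch, not code from any reference), the ℓ = 1 complex spherical harmonics can be hand-coded in the Condon–Shortley convention and combined into the angular parts of the real px and py orbitals, which come out proportional to x/r and y/r:

```python
# Numerical check: real p_x and p_y as combinations of the complex m = +1 and
# m = -1 spherical harmonics for l = 1 (Condon-Shortley phase).
import cmath, math, random

def Y1(m, theta, phi):
    """Complex spherical harmonics for l = 1."""
    if m == 0:
        return math.sqrt(3 / (4 * math.pi)) * math.cos(theta)
    if m == 1:
        return -math.sqrt(3 / (8 * math.pi)) * math.sin(theta) * cmath.exp(1j * phi)
    if m == -1:
        return math.sqrt(3 / (8 * math.pi)) * math.sin(theta) * cmath.exp(-1j * phi)
    raise ValueError("l = 1 only has m = -1, 0, +1")

for _ in range(5):
    theta = random.uniform(0, math.pi)
    phi = random.uniform(0, 2 * math.pi)
    p_x = (Y1(-1, theta, phi) - Y1(1, theta, phi)) / math.sqrt(2)
    p_y = 1j * (Y1(-1, theta, phi) + Y1(1, theta, phi)) / math.sqrt(2)
    x_over_r = math.sin(theta) * math.cos(phi)
    y_over_r = math.sin(theta) * math.sin(phi)
    # Both combinations are real and proportional to x/r and y/r respectively.
    assert abs(p_x - math.sqrt(3 / (4 * math.pi)) * x_over_r) < 1e-12
    assert abs(p_y - math.sqrt(3 / (4 * math.pi)) * y_over_r) < 1e-12
print("real p_x and p_y recovered from the complex m = +1 and m = -1 orbitals")
```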
Real spherical harmonics are physically relevant when an atom is embedded in a crystalline solid, in which case there are multiple preferred symmetry axes but no single preferred direction. Real atomic orbitals are also more frequently encountered in introductory chemistry textbooks and shown in common orbital visualizations. In real hydrogen-like orbitals, the quantum numbers n and ℓ have the same interpretation and significance as their complex counterparts, but m is no longer a good quantum number (though its absolute value is).
Some real orbitals are given specific names beyond the simple nℓm designation. With this one can already assign names to complex orbitals such as 2p+1: the first symbol is the n quantum number, the second character is the letter for that particular ℓ quantum number, and the subscript is the m quantum number.
As an example of how the full orbital names are generated for real orbitals, one may calculate the combination ψn,1,x = (1/√2)(ψn,1,−1 − ψn,1,1). From the table of spherical harmonics, Y1,±1 = ∓√(3/8π) sin θ e±iφ = ∓√(3/8π)(x ± iy)/r. Then
ψn,1,x ∝ Rn1(r) · x, which is why this orbital is labeled px. Likewise ψn,1,y = (i/√2)(ψn,1,−1 + ψn,1,1) ∝ Rn1(r) · y, giving py. More complicated combinations, such as those for the d and f orbitals, are generated in the same way.
In all these cases we generate a Cartesian label for the orbital by examining, and abbreviating, the polynomial in x, y and z appearing in the numerator. We ignore any terms in the polynomial except for the term with the highest exponent.
We then use the abbreviated polynomial as a subscript label for the atomic state, using the same nomenclature as above to indicate the n and ℓ quantum numbers.
The expressions above all use the Condon–Shortley phase convention, which is favored by quantum physicists. Other conventions exist for the phase of the spherical harmonics. Under these different conventions the px and py orbitals may appear, for example, as the sum and difference of ψn,1,1 and ψn,1,−1, contrary to what is shown above.
Below is a list of these Cartesian polynomial names for the atomic orbitals. There does not seem to be a reference in the literature as to how to abbreviate the long Cartesian spherical harmonic polynomials for ℓ greater than 3, so there does not seem to be a consensus on the naming of g orbitals or higher according to this nomenclature.
Shapes of orbitals
Simple pictures showing orbital shapes are intended to describe the angular forms of regions in space where the electrons occupying the orbital are likely to be found. The diagrams cannot show the entire region where an electron can be found, since according to quantum mechanics there is a non-zero probability of finding the electron (almost) anywhere in space. Instead the diagrams are approximate representations of boundary or contour surfaces where the probability density has a constant value, chosen so that there is a certain probability (for example 90%) of finding the electron within the contour. Although the probability density, as the square of an absolute value, is everywhere non-negative, the sign of the wave function is often indicated in each subregion of the orbital picture.
Sometimes the ψ function is graphed to show its phases, rather than |ψ|2, which shows probability density but has no phase (the phase is lost when taking the absolute value, since ψ is a complex number). ψ orbital graphs tend to have less spherical, thinner lobes than |ψ|2 graphs, but have the same number of lobes in the same places, and otherwise are recognizable. This article, to show wave function phase, shows mostly ψ graphs.
The lobes can be seen as standing wave interference patterns between the two counter-rotating, ring-resonant traveling wave m and −m modes; the projection of the orbital onto the xy plane fits m wavelengths around the circumference. Although rarely shown, the traveling wave solutions can be seen as rotating banded tori; the bands represent phase information. For each m there are two standing wave solutions, (m) + (−m) and (m) − (−m). If m = 0, the orbital is vertical, counter-rotating information is unknown, and the orbital is z-axis symmetric. If ℓ = 0 there are no counter-rotating modes; there are only radial modes and the shape is spherically symmetric.
Nodal planes and nodal spheres are surfaces on which the probability density vanishes. The number of nodal surfaces is controlled by the quantum numbers n and ℓ. An orbital with azimuthal quantum number ℓ has ℓ nodal planes passing through the origin. For example, the s orbitals (ℓ = 0) are spherically symmetric and have no nodal planes, whereas the p orbitals (ℓ = 1) have a single nodal plane between the lobes. The number of nodal spheres equals n − ℓ − 1, consistent with the restriction ℓ ≤ n − 1 on the quantum numbers. The principal quantum number controls the total number of nodal surfaces, which is n − 1. Loosely speaking, n is energy, ℓ is analogous to eccentricity, and m is orientation.
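A short sketch of the node-counting rules just stated (the helper name is illustrative only):

```python
# Sketch: an (n, l) orbital has l angular nodes, n - l - 1 radial nodes,
# and n - 1 nodal surfaces in total.
LETTERS = "spdf"

def node_counts(n, l):
    angular = l
    radial = n - l - 1
    return angular, radial, angular + radial

for n, l in [(1, 0), (2, 0), (2, 1), (3, 1), (3, 2)]:
    a, r, t = node_counts(n, l)
    print(f"{n}{LETTERS[l]}: {a} angular + {r} radial = {t} total nodes")
# e.g. 2p: 1 angular + 0 radial = 1 total; 3p: 1 angular + 1 radial = 2 total
```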
In general, n determines the size and energy of the orbital for a given nucleus; as n increases, the size of the orbital increases. The higher nuclear charge of heavier elements causes their orbitals to contract by comparison to lighter ones, so that the size of the atom remains very roughly constant, even as the number of electrons increases.
Also in general terms, ℓ determines an orbital's shape, and mℓ its orientation. However, since some orbitals are described by equations in complex numbers, the shape sometimes depends on mℓ also. Together, the whole set of orbitals for a given ℓ and n fill space as symmetrically as possible, though with increasingly complex sets of lobes and nodes.
The single s orbitals (ℓ = 0) are shaped like spheres. For n = 1 the orbital is roughly a solid ball (densest at the center and fading outward exponentially), but for n ≥ 2, each single s orbital is made of spherically symmetric surfaces which are nested shells (i.e., the "wave-structure" is radial, following a sinusoidal radial component as well). See the illustration of a cross-section of these nested shells. The s orbitals for all values of n are the only orbitals with an anti-node (a region of high wave function density) at the center of the nucleus. All other orbitals (p, d, f, etc.) have angular momentum, and thus avoid the nucleus (having a wave node at the nucleus). Recently, there has been an effort to experimentally image the 1s and 2p orbitals in a SrTiO3 crystal using scanning transmission electron microscopy with energy dispersive x-ray spectroscopy. Because the imaging was conducted using an electron beam, Coulombic beam-orbital interaction, often termed the impact parameter effect, is included in the outcome.
The shapes of p, d and f orbitals are described verbally here and shown graphically in the Orbitals table below. The three p orbitals for n = 2 have the form of two ellipsoids with a point of tangency at the nucleus (the two-lobed shape is sometimes referred to as a "dumbbell", with two lobes pointing in opposite directions from each other). The three p orbitals in each shell are oriented at right angles to each other, as determined by their respective linear combination of values of mℓ. The overall result is a lobe pointing along each direction of the primary axes.
Four of the five d orbitals for n = 3 look similar, each with four pear-shaped lobes, each lobe tangent at right angles to two others, and the centers of all four lying in one plane. Three of these planes are the xy-, xz-, and yz-planes, with the lobes lying between the pairs of primary axes, and the fourth has its lobes along the x and y axes themselves. The fifth and final d orbital consists of three regions of high probability density: a torus in between two pear-shaped regions placed symmetrically on its z axis. The overall total of 18 directional lobes point in every primary axis direction and between every pair.
There are seven f orbitals, each with shapes more complex than those of the d orbitals.
Additionally, as is the case with the s orbitals, individual p, d, f and g orbitals with n values higher than the lowest possible value exhibit an additional radial node structure which is reminiscent of harmonic waves of the same type, as compared with the lowest (or fundamental) mode of the wave. As with s orbitals, this phenomenon provides p, d, f, and g orbitals at the next higher possible value of n (for example, 3p orbitals vs. the fundamental 2p) with an additional node in each lobe. Still higher values of n further increase the number of radial nodes, for each type of orbital.
The shapes of atomic orbitals in a one-electron atom are related to 3-dimensional spherical harmonics. These shapes are not unique, and any linear combination is valid, like a transformation to cubic harmonics; in fact it is possible to generate sets where all the d's are the same shape, just as the px, py, and pz are the same shape.
Although individual orbitals are most often shown independent of each other, the orbitals coexist around the nucleus at the same time. Also, in 1927, Albrecht Unsöld proved that if one sums the electron density of all orbitals of a particular azimuthal quantum number ℓ of the same shell (e.g., all three 2p orbitals, or all five 3d orbitals) where each orbital is occupied by an electron or each is occupied by an electron pair, then all angular dependence disappears; that is, the resulting total density of all the atomic orbitals in that subshell (those with the same ℓ) is spherical. This is known as Unsöld's theorem.
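Unsöld's theorem can be checked numerically for ℓ = 1. The sketch below (illustrative code, not from the article) hand-codes the three ℓ = 1 spherical harmonics and verifies that the summed density is the constant 3/(4π) in every direction:

```python
# Numerical sketch of Unsold's theorem for l = 1: sum over m of |Y_1^m|^2
# is angle-independent, i.e. spherically symmetric.
import cmath, math, random

def Y1(m, theta, phi):
    if m == 0:
        return math.sqrt(3 / (4 * math.pi)) * math.cos(theta)
    sign = -1 if m == 1 else 1   # Condon-Shortley phase
    return sign * math.sqrt(3 / (8 * math.pi)) * math.sin(theta) * cmath.exp(1j * m * phi)

for _ in range(5):
    theta, phi = random.uniform(0, math.pi), random.uniform(0, 2 * math.pi)
    total = sum(abs(Y1(m, theta, phi)) ** 2 for m in (-1, 0, 1))
    assert abs(total - 3 / (4 * math.pi)) < 1e-12
print("sum over m of |Y_1^m|^2 is spherically symmetric:", 3 / (4 * math.pi))
```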
Orbitals table
This table shows the real hydrogen-like wave functions for all atomic orbitals up to 7s, and therefore covers the occupied orbitals in the ground state of all elements in the periodic table up to radium. "ψ" graphs are shown with − and + wave function phases shown in two different colors (arbitrarily red and blue). The pz orbital is the same as the p0 orbital, but the px and py are formed by taking linear combinations of the p+1 and p−1 orbitals (which is why they are listed under the m = ±1 label). Also, the p+1 and p−1 are not the same shape as the p0, since they are pure spherical harmonics.
[Orbital image table not reproduced. Columns: s (ℓ = 0); p (ℓ = 1): pz, px, py; d (ℓ = 2): dz2, dxz, dyz, dxy, dx2−y2; f (ℓ = 3): fz3, fxz2, fyz2, fxyz, fz(x2−y2), fx(x2−3y2), fy(3x2−y2). Rows: n = 1 through 7. Cells marked ‡, † and * are covered by the notes below.]
* No elements with 6f, 7d or 7f electrons have been discovered yet.
† Elements with 7p electrons have been discovered, but their electronic configurations are only predicted – save the exceptional Lr, which fills 7p1 instead of 6d1.
‡ For the elements whose highest occupied orbital is a 6d orbital, only some electronic configurations have been confirmed. (Mt, Ds, Rg and Cn are still missing).
These are the real-valued orbitals commonly used in chemistry. Only the orbitals with m = 0 are eigenstates of the orbital angular momentum operator, Lz. The columns with m = ±1, ±2, ±3 are combinations of two such eigenstates. See the comparison in the accompanying figure (caption: "Atomic orbitals spdf m-eigenstates (right) and superpositions (left)").
Qualitative understanding of shapes
The shapes of atomic orbitals can be qualitatively understood by considering the analogous case of standing waves on a circular drum. To see the analogy, the mean vibrational displacement of each bit of drum membrane from the equilibrium point over many cycles (a measure of average drum membrane velocity and momentum at that point) must be considered relative to that point's distance from the center of the drum head. If this displacement is taken as being analogous to the probability of finding an electron at a given distance from the nucleus, then it will be seen that the many modes of the vibrating disk form patterns that trace the various shapes of atomic orbitals. The basic reason for this correspondence lies in the fact that the distribution of kinetic energy and momentum in a matter-wave is predictive of where the particle associated with the wave will be. That is, the probability of finding an electron at a given place is also a function of the electron's average momentum at that point, since high electron momentum at a given position tends to "localize" the electron in that position, via the properties of electron wave-packets (see the Heisenberg uncertainty principle for details of the mechanism).
This relationship means that certain key features can be observed in both drum membrane modes and atomic orbitals. For example, in all of the modes analogous to s orbitals (the top row in the animated illustration below), it can be seen that the very center of the drum membrane vibrates most strongly, corresponding to the antinode in all s orbitals in an atom. This antinode means the electron is most likely to be at the physical position of the nucleus (which it passes straight through without scattering or striking it), since it is moving (on average) most rapidly at that point, giving it maximal momentum.
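As a rough numerical illustration of this analogy (a sketch, not part of the original treatment), the normal modes of an ideal circular membrane can be written with Bessel functions: the m = 0 modes, like s orbitals, have an antinode at the centre, while modes with m ≥ 1, like p and d orbitals, vanish there.

```python
import numpy as np
from scipy.special import jv, jn_zeros

def drum_mode(m, k, r, theta, radius=1.0):
    """Displacement of the (m, k) normal mode of an ideal circular drum.

    m: number of nodal diameters (0 for the s-like modes),
    k: radial mode index (1 = fundamental, 2 = one interior circular node, ...).
    """
    alpha = jn_zeros(m, k)[-1]          # k-th zero of the Bessel function J_m
    return jv(m, alpha * r / radius) * np.cos(m * theta)

# Central amplitude: finite for m = 0 (antinode), zero for m >= 1 (node),
# mirroring s orbitals versus p, d, ... orbitals.
for m in range(3):
    print(m, drum_mode(m, 1, r=1e-9, theta=0.0))
```

Running this prints an amplitude near 1 for m = 0 and essentially zero for m = 1 and m = 2, which is the drum-membrane counterpart of the central antinode of s orbitals and the central node of all other orbitals.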
A mental "planetary orbit" picture closest to the behavior of electrons in s orbitals, all of which have no angular momentum, might perhaps be that of a Keplerian orbit with the orbital eccentricity of 1 but a finite major axis, not physically possible (because particles were to collide), but can be imagined as a limit of orbits with equal major axes but increasing eccentricity.
Below, a number of drum membrane vibration modes and the respective wave functions of the hydrogen atom are shown. A correspondence can be considered where the wave functions of a vibrating drum head are for a two-coordinate system ψ(r, θ) and the wave functions for a vibrating sphere are for a three-coordinate system ψ(r, θ, φ).
None of the other sets of modes in a drum membrane have a central antinode, and in all of them the center of the drum does not move. These correspond to a node at the nucleus for all non-s orbitals in an atom. These orbitals all have some angular momentum, and in the planetary model, they correspond to particles in orbit with eccentricity less than 1.0, so that they do not pass straight through the center of the primary body, but keep somewhat away from it.
In addition, the drum modes analogous to p and d modes in an atom show spatial irregularity along the different radial directions from the center of the drum, whereas all of the modes analogous to s modes are perfectly symmetrical in radial direction. The non-radial-symmetry properties of non-s orbitals are necessary to localize a particle with angular momentum and a wave nature in an orbital where it must tend to stay away from the central attraction force, since any particle localized at the point of central attraction could have no angular momentum. For these modes, waves in the drum head tend to avoid the central point. Such features again emphasize that the shapes of atomic orbitals are a direct consequence of the wave nature of electrons.
Orbital energy
In atoms with one electron (hydrogen-like atoms), the energy of an orbital (and, consequently, any electron in the orbital) is determined mainly by n. The 1s orbital has the lowest possible energy in the atom. Each successively higher value of n has a higher energy, but the difference decreases as n increases. For high n, the energy becomes so high that the electron can easily escape the atom. In single-electron atoms, all levels with different ℓ within a given n are degenerate in the Schrödinger approximation, and have the same energy. This approximation is broken slightly in the solution to the Dirac equation (where the energy depends on n and another quantum number j), and by the effect of the magnetic field of the nucleus and quantum electrodynamics effects. The latter induce tiny binding energy differences, especially for s electrons that go nearer the nucleus, since these feel a very slightly different nuclear charge, even in one-electron atoms; see Lamb shift.
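For a hydrogen-like atom with nuclear charge Z, this n-dependence takes the familiar closed form (in the Schrödinger approximation):

```latex
E_n = -\frac{Z^{2}}{n^{2}}\,\mathrm{Ry} \approx -13.6\,\mathrm{eV}\,\frac{Z^{2}}{n^{2}},
```

so the levels crowd together as n grows and approach the ionization limit E = 0, which is why high-n electrons are easily removed.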
In atoms with multiple electrons, the energy of an electron depends not only on its orbital, but also on its interactions with other electrons. These interactions depend on the detail of its spatial probability distribution, and so the energy levels of orbitals depend not only on n but also on ℓ. Higher values of ℓ are associated with higher values of energy; for instance, the 2p state is higher than the 2s state. When ℓ = 2, the increase in energy of the orbital becomes so large as to push the energy of the orbital above the energy of the s orbital in the next higher shell; when ℓ = 3 the energy is pushed into the shell two steps higher. The filling of the 3d orbitals does not occur until the 4s orbitals have been filled, and the filling of the 4f orbitals does not occur until the 6s orbitals have been filled. (See Electron configurations of the elements (data page).)
The increase in energy for subshells of increasing angular momentum in larger atoms is due to electron–electron interaction effects, and it is specifically related to the ability of low-angular-momentum electrons to penetrate more effectively toward the nucleus, where they are subject to less screening from the charge of intervening electrons. Thus, in atoms of higher atomic number, the ℓ of electrons becomes more and more of a determining factor in their energy, and the principal quantum number n becomes less and less important in their energy placement.
The energy sequence of the first 35 subshells (e.g., 1s, 2p, 3d, etc.) is given in the following table. Each cell represents a subshell with n and ℓ given by its row and column indices, respectively. The number in the cell is the subshell's position in the sequence. For a linear listing of the subshells in terms of increasing energies in multielectron atoms, see the section below.
        s    p    d    f    g    h
 1      1
 2      2    3
 3      4    5    7
 4      6    8   10   13
 5      9   11   14   17   21
 6     12   15   18   22   26   31
 7     16   19   23   27   32   37
 8     20   24   28   33   38   44
 9     25   29   34   39   45   51
10     30   35   40   46   52   59
Note: empty cells indicate non-existent sublevels, while numbers in italics indicate sublevels that could (potentially) exist, but which do not hold electrons in any element currently known.
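The sequence in the table can be reproduced by the Madelung (n + ℓ) rule: subshells are ordered by increasing n + ℓ, with ties broken by increasing n. A short illustrative sketch (not from the source):

```python
# Generate the energy-ordering positions of subshells by the Madelung rule:
# sort by (n + l), breaking ties by n.
L_LABELS = "spdfghik"

def madelung_order(max_position=35):
    subshells = [(n, l) for n in range(1, 15) for l in range(n)]
    subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    return subshells[:max_position]

for position, (n, l) in enumerate(madelung_order(), start=1):
    print(f"{position:2d}: {n}{L_LABELS[l]}")
# Positions 1..7 come out as 1s, 2s, 2p, 3s, 3p, 4s, 3d, matching the table above.
```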
Electron placement and the periodic table
Several rules govern the placement of electrons in orbitals (electron configuration). The first dictates that no two electrons in an atom may have the same set of values of quantum numbers (this is the Pauli exclusion principle). These quantum numbers include the three that define orbitals, as well as the spin magnetic quantum number ms. Thus, two electrons may occupy a single orbital, so long as they have different values of ms. Because ms takes one of only two values (+1/2 or −1/2), at most two electrons can occupy each orbital.
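Combining the two possible spin values with the 2ℓ + 1 orbitals of a subshell gives the standard capacities:

```latex
N_{\text{subshell}} = 2(2\ell + 1), \qquad
N_{\text{shell}} = \sum_{\ell=0}^{n-1} 2(2\ell+1) = 2n^{2}.
```

Hence an s subshell holds 2 electrons, p holds 6, d holds 10, and f holds 14.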
Additionally, an electron always tends to fall to the lowest possible energy state. It is possible for it to occupy any orbital so long as it does not violate the Pauli exclusion principle, but if lower-energy orbitals are available, this condition is unstable. The electron will eventually lose energy (by releasing a photon) and drop into the lower orbital. Thus, electrons fill orbitals in the order specified by the energy sequence given above.
This behavior is responsible for the structure of the periodic table. The table may be divided into several rows (called 'periods'), numbered starting with 1 at the top. The presently known elements occupy seven periods. If a certain period has number i, it consists of elements whose outermost electrons fall in the ith shell. Niels Bohr was the first to propose (1923) that the periodicity in the properties of the elements might be explained by the periodic filling of the electron energy levels, resulting in the electronic structure of the atom.
The periodic table may also be divided into several numbered rectangular 'blocks'. The elements belonging to a given block have this common feature: their highest-energy electrons all belong to the same -state (but the associated with that -state depends upon the period). For instance, the leftmost two columns constitute the 's-block'. The outermost electrons of Li and Be respectively belong to the 2s subshell, and those of Na and Mg to the 3s subshell.
The following is the order for filling the "subshell" orbitals, which also gives the order of the "blocks" in the periodic table:
1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p
The "periodic" nature of the filling of orbitals, as well as emergence of the s, p, d, and f "blocks", is more obvious if this order of filling is given in matrix form, with increasing principal quantum numbers starting the new rows ("periods") in the matrix. Then, each subshell (composed of the first two quantum numbers) is repeated as many times as required for each pair of electrons it may contain. The result is a compressed periodic table, with each entry representing two successive elements:
1s 2s 2p2p2p3s 3p3p3p4s 3d3d3d3d3d4p4p4p5s 4d4d4d4d4d5p5p5p6s4f4f4f4f4f4f4f5d5d5d5d5d6p6p6p7s5f5f5f5f5f5f5f6d6d6d6d6d7p7p7p
Although this is the general order of orbital filling according to the Madelung rule, there are exceptions, and the actual electronic energies of each element are also dependent upon additional details of the atoms.
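A minimal sketch of the aufbau filling just described, which builds a ground-state configuration for a given atomic number by pouring electrons into the subshell order listed above (it deliberately ignores the exceptions mentioned, such as Cr, Cu, or Lr):

```python
# Naive aufbau filling: distribute Z electrons over subshells in Madelung order.
# Real atoms have exceptions (e.g. Cr, Cu, Pd, Lr), which this sketch ignores.
FILL_ORDER = ["1s", "2s", "2p", "3s", "3p", "4s", "3d", "4p", "5s", "4d",
              "5p", "6s", "4f", "5d", "6p", "7s", "5f", "6d", "7p"]
CAPACITY = {"s": 2, "p": 6, "d": 10, "f": 14}

def aufbau_configuration(atomic_number):
    remaining = atomic_number
    configuration = []
    for subshell in FILL_ORDER:
        if remaining == 0:
            break
        electrons = min(remaining, CAPACITY[subshell[-1]])
        configuration.append(f"{subshell}{electrons}")
        remaining -= electrons
    return " ".join(configuration)

print(aufbau_configuration(26))   # Fe: 1s2 2s2 2p6 3s2 3p6 4s2 3d6
```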
The number of electrons in an electrically neutral atom increases with the atomic number. The electrons in the outermost shell, or valence electrons, tend to be responsible for an element's chemical behavior. Elements that contain the same number of valence electrons can be grouped together and display similar chemical properties.
Relativistic effects
For elements with high atomic number Z, the effects of relativity become more pronounced, and especially so for s electrons, which move at relativistic velocities as they penetrate the screening electrons near the core of high-Z atoms. This relativistic increase in momentum for high-speed electrons causes a corresponding decrease in wavelength and contraction of 6s orbitals relative to 5d orbitals (by comparison to corresponding s and d electrons in lighter elements in the same column of the periodic table); this results in 6s valence electrons becoming lowered in energy.
Examples of significant physical outcomes of this effect include the lowered melting temperature of mercury (which results from 6s electrons not being available for metal bonding) and the golden color of gold and caesium.
In the Bohr model, an n = 1 electron has a velocity given by v = Zαc, where Z is the atomic number, α is the fine-structure constant, and c is the speed of light. In non-relativistic quantum mechanics, therefore, any atom with an atomic number greater than 137 would require its 1s electrons to be traveling faster than the speed of light. Even in the Dirac equation, which accounts for relativistic effects, the wave function of the electron for atoms with Z > 137 is oscillatory and unbounded. The significance of element 137, also known as untriseptium, was first pointed out by the physicist Richard Feynman. Element 137 is sometimes informally called feynmanium (symbol Fy). However, Feynman's approximation fails to predict the exact critical value of Z due to the non-point-charge nature of the nucleus and the very small orbital radius of inner electrons, resulting in a potential seen by inner electrons which is effectively less than Z. The critical value of Z, which makes the atom unstable with regard to high-field breakdown of the vacuum and production of electron–positron pairs, does not occur until Z is about 173. These conditions are not seen except transiently in collisions of very heavy nuclei such as lead or uranium in accelerators, where such electron–positron production from these effects has been claimed to be observed.
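A quick numerical check of the Bohr-model estimate v = Zαc quoted above (illustrative only; real many-electron atoms require a full relativistic treatment):

```python
# Bohr-model speed of a 1s electron as a fraction of the speed of light: v/c = Z * alpha.
ALPHA = 1 / 137.035999  # fine-structure constant (approximate)

for name, Z in [("H", 1), ("Au", 79), ("Hg", 80), ("Og", 118)]:
    print(f"{name:2s} (Z={Z:3d}): v/c ~ {Z * ALPHA:.2f}")
# Gold's 1s electron already moves at roughly 58% of the speed of light,
# which is why relativistic contraction of s orbitals matters for heavy elements.
```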
There are no nodes in relativistic orbital densities, although individual components of the wave function will have nodes.
pp hybridization (conjectured)
In late period 8 elements, a hybrid of 8p3/2 and 9p1/2 is expected to exist, where "3/2" and "1/2" refer to the total angular momentum quantum number. This "pp" hybrid may be responsible for the p-block of the period due to properties similar to p subshells in ordinary valence shells. Energy levels of 8p3/2 and 9p1/2 come close due to relativistic spin–orbit effects; the 9s subshell should also participate, as these elements are expected to be analogous to the respective 5p elements indium through xenon.
Transitions between orbitals
Bound quantum states have discrete energy levels. When applied to atomic orbitals, this means that the energy differences between states are also discrete. A transition between these states (i.e., an electron absorbing or emitting a photon) can thus happen only if the photon has an energy corresponding with the exact energy difference between said states.
Consider two states of the hydrogen atom:
State 1, with quantum numbers n1, ℓ1, and m1
State 2, with quantum numbers n2, ℓ2, and m2
By quantum theory, state 1 has a fixed energy E1, and state 2 has a fixed energy E2. Now, what would happen if an electron in state 1 were to move to state 2? For this to happen, the electron would need to gain an energy of exactly E2 − E1. If the electron receives energy that is less than or greater than this value, it cannot jump from state 1 to state 2. Now, suppose we irradiate the atom with a broad spectrum of light. Photons that reach the atom and have an energy of exactly E2 − E1 will be absorbed by the electron in state 1, and that electron will jump to state 2. However, photons of greater or lower energy cannot be absorbed by the electron, because the electron can only jump to one of the orbitals; it cannot jump to a state between orbitals. The result is that only photons of a specific frequency will be absorbed by the atom. This creates a line in the spectrum, known as an absorption line, which corresponds to the energy difference between states 1 and 2.
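As a worked example under the usual hydrogen energy formula E_n = −13.6 eV / n² (a sketch; states 1 and 2 are left generic above), the photon absorbed in the n = 1 to n = 2 jump of hydrogen can be computed directly:

```python
# Energy and wavelength of the photon absorbed in the hydrogen n=1 -> n=2 transition.
RYDBERG_EV = 13.605693            # hydrogen ground-state binding energy in eV
H_PLANCK = 4.135667696e-15        # Planck constant in eV*s
C_LIGHT = 2.99792458e8            # speed of light in m/s

def transition_energy_ev(n_initial, n_final):
    """Energy difference E_final - E_initial for hydrogen, in eV."""
    return RYDBERG_EV * (1 / n_initial**2 - 1 / n_final**2)

delta_e = transition_energy_ev(1, 2)                 # about 10.2 eV
wavelength_nm = H_PLANCK * C_LIGHT / delta_e * 1e9   # about 122 nm (Lyman-alpha)
print(f"dE = {delta_e:.2f} eV, wavelength = {wavelength_nm:.0f} nm")
```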
The atomic orbital model thus predicts line spectra, which are observed experimentally. This is one of the main validations of the atomic orbital model.
The atomic orbital model is nevertheless an approximation to the full quantum theory, which only recognizes many electron states. The predictions of line spectra are qualitatively useful but are not quantitatively accurate for atoms and ions other than those containing only one electron.
See also
Atomic electron configuration table
Condensed matter physics
Electron configuration
Energy level
Hund's rules
Molecular orbital
Orbital overlap
Quantum chemistry
Quantum chemistry computer programs
Solid-state physics
Wave function collapse
Wiswesser's rule
References
External links
3D representation of hydrogenic orbitals
The Orbitron, a visualization of all common and uncommon atomic orbitals, from 1s to 7g
Grand table Still images of many orbitals
Category:Atomic physics
Category:Chemical bonding
Category:Electron states
Category:Quantum chemistry
Category:Articles containing video clips
Alan Turing
https://en.wikipedia.org/wiki/Alan_Turing
Alan Mathison Turing (; 23 June 1912 – 7 June 1954) was an English mathematician, computer scientist, logician, cryptanalyst, philosopher and theoretical biologist. He was highly influential in the development of theoretical computer science, providing a formalisation of the concepts of algorithm and computation with the Turing machine, which can be considered a model of a general-purpose computer. Turing is widely considered to be the father of theoretical computer science.
Born in London, Turing was raised in southern England. He graduated from King's College, Cambridge, and in 1938, earned a doctorate degree from Princeton University. During World War II, Turing worked for the Government Code and Cypher School at Bletchley Park, Britain's codebreaking centre that produced Ultra intelligence. He led Hut 8, the section responsible for German naval cryptanalysis. Turing devised techniques for speeding the breaking of German ciphers, including improvements to the pre-war Polish bomba method, an electromechanical machine that could find settings for the Enigma machine. He played a crucial role in cracking intercepted messages that enabled the Allies to defeat the Axis powers in the Battle of the Atlantic and other engagements.A number of sources state that Winston Churchill said that Turing made the single biggest contribution to Allied victory in the war against Nazi Germany. Whilst it may be a defensible claim, both the Churchill Centre and Turing's biographer Andrew Hodges have stated they know of no documentary evidence to support it, nor the date or context in which Churchill supposedly made it, and the Churchill Centre lists it among their Churchill 'Myths', see and A BBC News profile piece that repeated the Churchill claim has subsequently been amended to say there is no evidence for it. See Official war historian Harry Hinsley estimated that this work shortened the war in Europe by more than two years but added the caveat that this did not account for the use of the atomic bomb and other eventualities. Transcript of a lecture given on Tuesday 19 October 1993 at Cambridge University
After the war, Turing worked at the National Physical Laboratory, where he designed the Automatic Computing Engine, one of the first designs for a stored-program computer. In 1948, Turing joined Max Newman's Computing Machine Laboratory at the University of Manchester, where he contributed to the development of early Manchester computers and became interested in mathematical biology. Turing wrote on the chemical basis of morphogenesis and predicted oscillating chemical reactions such as the Belousov–Zhabotinsky reaction, first observed in the 1960s. Despite these accomplishments, he was never fully recognised during his lifetime because much of his work was covered by the Official Secrets Act.
In 1952, Turing was prosecuted for homosexual acts. He accepted hormone treatment, a procedure commonly referred to as chemical castration, as an alternative to prison. Turing died on 7 June 1954, aged 41, from cyanide poisoning. An inquest determined his death as suicide, but the evidence is also consistent with accidental poisoning. Following a campaign in 2009, British prime minister Gordon Brown made an official public apology for "the appalling way [Turing] was treated". Queen Elizabeth II granted a pardon in 2013. The term "Alan Turing law" is used informally to refer to a 2017 law in the UK that retroactively pardoned men cautioned or convicted under historical legislation that outlawed homosexual acts.
Turing left an extensive legacy in mathematics and computing which has become widely recognised with statues and many things named after him, including an annual award for computing innovation. His portrait appears on the Bank of England £50 note, first released on 23 June 2021 to coincide with his birthday. The audience vote in a 2019 BBC series named Turing the greatest scientist of the 20th century.
The cognitive scientist Douglas Hofstadter writes: “Atheist, homosexual, eccentric, marathon-running mathematician, A. M. Turing was in large part responsible not only for the concept of computers, incisive theorems about their powers, and a clear vision of the possibility of computer minds, but also for the cracking of German ciphers during the Second World War. It is fair to say we owe much to Alan Turing for the fact that we are not under Nazi rule today.”
Early life and education
Family
Turing was born in Maida Vale, London, while his father, Julius Mathison Turing, was on leave from his position with the Indian Civil Service (ICS) of the British Raj government at Chatrapur, then in the Madras Presidency and presently in Odisha state, in India. Turing's father was the son of a clergyman, the Rev. John Robert Turing, from a Scottish family of merchants that had been based in the Netherlands and included a baronet. Turing's mother, Julius's wife, was Ethel Sara Turing (née Stoney), daughter of Edward Waller Stoney, chief engineer of the Madras Railways. The Stoneys were a Protestant Anglo-Irish gentry family from both County Tipperary and County Longford, while Ethel herself had spent much of her childhood in County Clare. Julius and Ethel married on 1 October 1907 at the Church of Ireland St. Bartholomew's Church on Clyde Road in Ballsbridge, Dublin.Irish Marriages 1845–1958 / Dublin South, Dublin, Ireland / Group Registration ID 1990366, SR District/Reg Area, Dublin South
Julius's work with the ICS brought the family to British India, where his grandfather had been a general in the Bengal Army. However, both Julius and Ethel wanted their children to be brought up in Britain, so they moved to Maida Vale, London, where Alan Turing was born on 23 June 1912, as recorded by a blue plaque on the outside of the house of his birth, later the Colonnade Hotel. Turing had an elder brother, John Ferrier Turing, father of Dermot Turing, 12th Baronet of the Turing baronets. In 1922, Turing discovered Natural Wonders Every Child Should Know by Edwin Tenney Brewster. He credited it with opening his eyes to science.
Turing's father's civil service commission was still active during Turing's childhood years, and his parents travelled between Hastings in the United Kingdom and India, leaving their two sons to stay with a retired Army couple. At Hastings, Turing stayed at Baston Lodge, Upper Maze Hill, St Leonards-on-Sea, now marked with a blue plaque. The plaque was unveiled on 23 June 2012, the centenary of Turing's birth.
Turing's parents purchased a house in Guildford in 1927, and Turing lived there during school holidays. The location is also marked with a blue plaque.
School
Turing's parents enrolled him at St Michael's, a primary school at 20 Charles Road, St Leonards-on-Sea, from the age of six to nine. The headmistress recognised his talent, noting that she "...had clever boys and hardworking boys, but Alan is a genius".
Between January 1922 and 1926, Turing was educated at Hazelhurst Preparatory School, an independent school in the village of Frant in Sussex (now East Sussex). In 1926, at the age of 13, he went on to Sherborne School, an independent boarding school in the market town of Sherborne in Dorset, where he boarded at Westcott House. The first day of term coincided with the 1926 General Strike, in Britain, but Turing was so determined to attend that he rode his bicycle unaccompanied from Southampton to Sherborne, stopping overnight at an inn.
Turing's natural inclination towards mathematics and science did not earn him respect from some of the teachers at Sherborne, whose definition of education placed more emphasis on the classics. His headmaster wrote to his parents: "I hope he will not fall between two stools. If he is to stay at public school, he must aim at becoming educated. If he is to be solely a Scientific Specialist, he is wasting his time at a public school". Despite this, Turing continued to show remarkable ability in the studies he loved, solving advanced problems in 1927 without having studied even elementary calculus. In 1928, aged 16, Turing encountered Albert Einstein's work; not only did he grasp it, but it is possible that he managed to deduce Einstein's questioning of Newton's laws of motion from a text in which this was never made explicit.
Christopher Morcom
At Sherborne, Turing formed a significant friendship with fellow pupil Christopher Collan Morcom (13 July 1911 – 13 February 1930), who has been described as Turing's first love. Their relationship provided inspiration in Turing's future endeavours, but it was cut short by Morcom's death, in February 1930, from complications of bovine tuberculosis, contracted after drinking infected cow's milk some years previously.
The event caused Turing great sorrow. He coped with his grief by working that much harder on the topics of science and mathematics that he had shared with Morcom. In a letter to Morcom's mother, Frances Isobel Morcom (née Swan), Turing wrote:
Turing's relationship with Morcom's mother continued long after Morcom's death, with her sending gifts to Turing, and him sending letters, typically on Morcom's birthday. A day before the third anniversary of Morcom's death (13 February 1933), he wrote to Mrs. Morcom:
Some have speculated that Morcom's death was the cause of Turing's atheism and materialism. Apparently, at this point in his life he still believed in such concepts as a spirit, independent of the body and surviving death. In a later letter, also written to Morcom's mother, Turing wrote:
University and work on computability
After graduating from Sherborne, Turing applied for several Cambridge college scholarships, including Trinity and King's, eventually earning an £80 per annum scholarship (equivalent to about £4,300 as of 2023) to study at the latter. There, Turing studied the undergraduate course in Schedule B (a three-year scheme consisting of Parts I and II of the Mathematical Tripos, with extra courses at the end of the third year, as Part III only emerged as a separate degree in 1934) from February 1931 to November 1934 at King's College, Cambridge, where he was awarded first-class honours in mathematics. His dissertation, On the Gaussian error function, written during his senior year and delivered in November 1934, proved a version of the central limit theorem. It was finally accepted on 16 March 1935. By spring of that same year, Turing started his master's course (Part III), which he completed in 1937, and, at the same time, he published his first paper, a one-page article called Equivalence of left and right almost periodicity (sent on 23 April), featured in the tenth volume of the Journal of the London Mathematical Society. Later that year, Turing was elected a Fellow of King's College on the strength of his dissertation, and he served there as a lecturer. However, unknown to Turing, the version of the theorem he proved in his paper had already been proven by Jarl Waldemar Lindeberg in 1922. Despite this, the committee found Turing's methods original and so regarded the work worthy of consideration for the fellowship. Abram Besicovitch's report for the committee went so far as to say that if Turing's work had been published before Lindeberg's, it would have been "an important event in the mathematical literature of that year".
Between the springs of 1935 and 1936, at the same time as Alonzo Church, Turing worked on the decidability of problems, starting from Gödel's incompleteness theorems. In mid-April 1936, Turing sent Max Newman the first draft typescript of his investigations. That same month, Church published his An Unsolvable Problem of Elementary Number Theory, with similar conclusions to Turing's then-unpublished work. Finally, on 28 May of that year, he finished and delivered his 36-page paper for publication called "On Computable Numbers, with an Application to the Entscheidungsproblem". It was published in the Proceedings of the London Mathematical Society journal in two parts, the first on 30 November and the second on 23 December. In this paper, Turing reformulated Kurt Gödel's 1931 results on the limits of proof and computation, replacing Gödel's universal arithmetic-based formal language with the formal and simple hypothetical devices that became known as Turing machines. The Entscheidungsproblem (decision problem) was originally posed by German mathematician David Hilbert in 1928. Turing proved that his "universal computing machine" would be capable of performing any conceivable mathematical computation if it were representable as an algorithm. He went on to prove that there was no solution to the decision problem by first showing that the halting problem for Turing machines is undecidable: it is not possible to decide algorithmically whether a Turing machine will ever halt. This paper has been called "easily the most influential math paper in history".
Although Turing's proof was published shortly after Church's equivalent proof using his lambda calculus, Turing's approach is considerably more accessible and intuitive than Church's. It also included a notion of a 'Universal Machine' (now known as a universal Turing machine), with the idea that such a machine could perform the tasks of any other computation machine (as indeed could Church's lambda calculus). According to the Church–Turing thesis, Turing machines and the lambda calculus are capable of computing anything that is computable. John von Neumann acknowledged that the central concept of the modern computer was due to Turing's paper."von Neumann ... firmly emphasised to me, and to others I am sure, that the fundamental conception is owing to Turing—insofar as not anticipated by Babbage, Lovelace and others." Letter by Stanley Frankel to Brian Randell, 1972, quoted in Jack Copeland (2004) The Essential Turing, p. 22. To this day, Turing machines are a central object of study in theory of computation.
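As an illustration of the kind of device the 1936 paper formalised, here is a minimal sketch of a Turing-machine simulator in modern Python (not Turing's original quintuple notation), running a machine that increments a binary number:

```python
# Minimal Turing-machine simulator (illustrative sketch, not Turing's 1936 formulation).
# The machine below increments a binary number written on the tape.

def run_turing_machine(rules, tape, state="start", blank=" ", max_steps=10_000):
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = [tape.get(i, blank) for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip()

# (state, read) -> (write, move, next_state)
increment_rules = {
    ("start", "0"): ("0", "R", "start"),   # scan right to the end of the number
    ("start", "1"): ("1", "R", "start"),
    ("start", " "): (" ", "L", "carry"),   # step back onto the last digit
    ("carry", "1"): ("0", "L", "carry"),   # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("1", "L", "halt"),    # 0 + carry -> 1, done
    ("carry", " "): ("1", "L", "halt"),    # carry ran past the left end
}

print(run_turing_machine(increment_rules, "1011"))  # prints 1100
```

A universal machine, in Turing's sense, is one whose rule table can read another machine's rule table from the tape and simulate it, much as the function above interprets whatever rules it is given.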
From September 1936 to July 1938, Turing spent most of his time studying under Church at Princeton University, in the second year as a Jane Eliza Procter Visiting Fellow. In addition to his purely mathematical work, he studied cryptology and also built three of four stages of an electro-mechanical binary multiplier. In June 1938, he obtained his PhD from the Department of Mathematics at Princeton; his dissertation, Systems of Logic Based on Ordinals, introduced the concept of ordinal logic and the notion of relative computing, in which Turing machines are augmented with so-called oracles, allowing the study of problems that cannot be solved by Turing machines. John von Neumann wanted to hire him as his postdoctoral assistant, but he went back to the United Kingdom.John Von Neumann: The Scientific Genius Who Pioneered the Modern Computer, Game Theory, Nuclear Deterrence, and Much More, Norman MacRae, 1999, American Mathematical Society, Chapter 8
Career and research
When Turing returned to Cambridge, he attended lectures given in 1939 by Ludwig Wittgenstein about the foundations of mathematics. The lectures have been reconstructed verbatim, including interjections from Turing and other students, from students' notes. Turing and Wittgenstein argued and disagreed, with Turing defending formalism and Wittgenstein propounding his view that mathematics does not discover any absolute truths, but rather invents them.
Cryptanalysis
During the Second World War, Turing was a leading participant in the breaking of German ciphers at Bletchley Park. The historian and wartime codebreaker Asa Briggs has said, "You needed exceptional talent, you needed genius at Bletchley and Turing's was that genius."
From September 1938, Turing worked part-time with the Government Code and Cypher School (GC&CS), the British codebreaking organisation. He concentrated on cryptanalysis of the Enigma cipher machine used by Nazi Germany, together with Dilly Knox, a senior GC&CS codebreaker. Soon after the July 1939 meeting near Warsaw at which the Polish Cipher Bureau gave the British and French details of the wiring of Enigma machine's rotors and their method of decrypting Enigma machine's messages, Turing and Knox developed a broader solution. The Polish method relied on an insecure indicator procedure that the Germans were likely to change, which they in fact did in May 1940. Turing's approach was more general, using crib-based decryption for which he produced the functional specification of the bombe (an improvement on the Polish Bomba).
On 4 September 1939, the day after the UK declared war on Germany, Turing reported to Bletchley Park, the wartime station of GC&CS.Copeland, 2006 p. 378. Like all others who came to Bletchley, he was required to sign the Official Secrets Act, in which he agreed not to disclose anything about his work at Bletchley, with severe legal penalties for violating the Act.
Specifying the bombe was the first of five major cryptanalytical advances that Turing made during the war. The others were: deducing the indicator procedure used by the German navy; developing a statistical procedure dubbed Banburismus for making much more efficient use of the bombes; developing a procedure dubbed Turingery for working out the cam settings of the wheels of the Lorenz SZ 40/42 (Tunny) cipher machine and, towards the end of the war, the development of a portable secure voice scrambler at Hanslope Park that was codenamed Delilah.
By using statistical techniques to optimise the trial of different possibilities in the code breaking process, Turing made an innovative contribution to the subject. He wrote two papers discussing mathematical approaches, titled The Applications of Probability to Cryptography and Paper on Statistics of Repetitions, which were of such value to GC&CS and its successor GCHQ that they were not released to the UK National Archives until April 2012, shortly before the centenary of his birth. A GCHQ mathematician, "who identified himself only as Richard," said at the time that the fact that the contents had been restricted under the Official Secrets Act for some 70 years demonstrated their importance, and their relevance to post-war cryptanalysis:
Turing had a reputation for eccentricity at Bletchley Park. He was known to his colleagues as "Prof" and his treatise on Enigma was known as the "Prof's Book". According to historian Ronald Lewin, Jack Good, a cryptanalyst who worked with Turing, said of his colleague:
Peter Hilton recounted his experience working with Turing in Hut 8 in his "Reminiscences of Bletchley Park" from A Century of Mathematics in America:
Hilton echoed similar thoughts in the Nova PBS documentary Decoding Nazi Secrets.
While working at Bletchley, Turing, who was a talented long-distance runner, occasionally ran to London when he was needed for meetings, and he was capable of world-class marathon standards. Turing tried out for the 1948 British Olympic team, but he was hampered by an injury. His tryout time for the marathon was only 11 minutes slower than British silver medallist Thomas Richards' Olympic race time of 2 hours 35 minutes. He was Walton Athletic Club's best runner, a fact discovered when he passed the group while running alone. When asked why he ran so hard in training he replied:
Due to the challenges answering questions concerning what an outcome would have been if a historical event did or did not occur (the realm of counterfactual history), it is hard to estimate the precise effect Ultra intelligence had on the war.See for example and However, official war historian Harry Hinsley estimated that this work shortened the war in Europe by more than two years. He added the caveat that this did not account for the use of the atomic bomb and other eventualities. Transcript of a lecture given on Tuesday 19 October 1993 at Cambridge University
At the end of the war, a memo was sent to all those who had worked at Bletchley Park, reminding them that the code of silence dictated by the Official Secrets Act did not end with the war but would continue indefinitely. Thus, even though Turing was appointed an Officer of the Order of the British Empire (OBE) in 1946 by King George VI for his wartime services, his work remained secret for many years.
Bombe
Within weeks of arriving at Bletchley Park, Turing had specified an electromechanical machine called the bombe, which could break Enigma more effectively than the Polish bomba kryptologiczna, from which its name was derived. The bombe, with an enhancement suggested by mathematician Gordon Welchman, became one of the primary tools, and the major automated one, used to attack Enigma-enciphered messages.
The bombe detected when a contradiction had occurred and ruled out that setting, moving on to the next. Most of the possible settings would cause contradictions and be discarded, leaving only a few to be investigated in detail. A contradiction would occur when an enciphered letter would be turned back into the same plaintext letter, which was impossible with the Enigma. The first bombe was installed on 18 March 1940.
Action This Day
By late 1941, Turing and his fellow cryptanalysts Gordon Welchman, Hugh Alexander and Stuart Milner-Barry were frustrated. Building on the work of the Poles, they had set up a good working system for decrypting Enigma signals, but their limited staff and bombes meant they could not translate all the signals. In the summer, they had considerable success, and shipping losses had fallen to under 100,000 tons a month; however, they badly needed more resources to keep abreast of German adjustments. They had tried to get more people and fund more bombes through the proper channels, but had failed.
On 28 October they wrote directly to Winston Churchill explaining their difficulties, with Turing as the first named. They emphasised how small their need was compared with the vast expenditure of men and money by the forces and compared with the level of assistance they could offer to the forces. As Andrew Hodges, biographer of Turing, later wrote, "This letter had an electric effect." Churchill wrote a memo to General Ismay, which read: "ACTION THIS DAY. Make sure they have all they want on extreme priority and report to me that this has been done." On 18 November, the chief of the secret service reported that every possible measure was being taken. The cryptographers at Bletchley Park did not know of the Prime Minister's response, but as Milner-Barry recalled, "All that we did notice was that almost from that day the rough ways began miraculously to be made smooth."Copeland, The Essential Turing, pp. 336–337 . More than two hundred bombes were in operation by the end of the war.
Hut 8 and the naval Enigma
Turing decided to tackle the particularly difficult problem of cracking the German naval use of Enigma "because no one else was doing anything about it and I could have it to myself". In December 1939, Turing solved the essential part of the naval indicator system, which was more complex than the indicator systems used by the other services.
That same night, he also conceived of the idea of Banburismus, a sequential statistical technique (what Abraham Wald later called sequential analysis) to assist in breaking the naval Enigma, "though I was not sure that it would work in practice, and was not, in fact, sure until some days had actually broken". For this, he invented a measure of weight of evidence that he called the ban. Banburismus could rule out certain sequences of the Enigma rotors, substantially reducing the time needed to test settings on the bombes. Later this sequential process of accumulating sufficient weight of evidence using decibans (one tenth of a ban) was used in cryptanalysis of the Lorenz cipher.
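The ban and deciban can be illustrated with a small sketch in modern Bayesian notation (not Turing's wartime worksheets): a deciban is one tenth of a ban, i.e. ten times the base-10 logarithm of a likelihood ratio, so independent pieces of evidence are combined simply by adding their scores.

```python
import math

def decibans(likelihood_ratio):
    """Weight of evidence in decibans: 10 * log10 of the likelihood ratio."""
    return 10 * math.log10(likelihood_ratio)

# Three independent observations, favouring a hypothesis 2:1, 1.5:1 and 3:1.
scores = [decibans(r) for r in (2.0, 1.5, 3.0)]
total = sum(scores)                       # evidence adds in deciban units
combined_ratio = 10 ** (total / 10)       # back to a single likelihood ratio (9:1)
print([round(s, 1) for s in scores], round(total, 1), round(combined_ratio, 1))
```

Accumulating decibans until a threshold is reached is the essence of the sequential procedure used in Banburismus to decide which rotor orders were worth running on the bombes.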
Turing travelled to the United States in November 1942 and worked with US Navy cryptanalysts on the naval Enigma and bombe construction in Washington. He also visited their Computing Machine Laboratory in Dayton, Ohio.
Turing's reaction to the American bombe design was far from enthusiastic:
During this trip, he also assisted at Bell Labs with the development of secure speech devices. He returned to Bletchley Park in March 1943. During his absence, Hugh Alexander had officially assumed the position of head of Hut 8, although Alexander had been de facto head for some time (Turing having little interest in the day-to-day running of the section). Turing became a general consultant for cryptanalysis at Bletchley Park.
Alexander wrote of Turing's contribution:
Turingery
In July 1942, Turing devised a technique termed Turingery (or jokingly Turingismus) for use against the Lorenz cipher messages produced by the Germans' new Geheimschreiber (secret writer) machine. This was a teleprinter rotor cipher attachment codenamed Tunny at Bletchley Park. Turingery was a method of wheel-breaking, i.e., a procedure for working out the cam settings of Tunny's wheels. He also introduced the Tunny team to Tommy Flowers who, under the guidance of Max Newman, went on to build the Colossus computer, the world's first programmable digital electronic computer, which replaced a simpler prior machine (the Heath Robinson), and whose superior speed allowed the statistical decryption techniques to be applied usefully to the messages. Some have mistakenly said that Turing was a key figure in the design of the Colossus computer. Turingery and the statistical approach of Banburismus undoubtedly fed into the thinking about cryptanalysis of the Lorenz cipher, but he was not directly involved in the Colossus development.
Delilah
Following his work at Bell Labs in the US, Turing pursued the idea of electronic enciphering of speech in the telephone system. In the latter part of the war, he moved to work for the Secret Service's Radio Security Service (later HMGCC) at Hanslope Park. At the park, he further developed his knowledge of electronics with the assistance of REME officer Donald Bayley. Together they undertook the design and construction of a portable secure voice communications machine codenamed Delilah. The machine was intended for different applications, but it lacked the capability for use with long-distance radio transmissions. In any case, Delilah was completed too late to be used during the war. Though the system worked fully, with Turing demonstrating it to officials by encrypting and decrypting a recording of a Winston Churchill speech, Delilah was not adopted for use. Turing also consulted with Bell Labs on the development of SIGSALY, a secure voice system that was used in the later years of the war.
Early computers and the Turing test
Between 1945 and 1947, Turing lived in Hampton, London, while he worked on the design of the ACE (Automatic Computing Engine) at the National Physical Laboratory (NPL). He presented a paper on 19 February 1946, which was the first detailed design of a stored-program computer. Von Neumann's incomplete First Draft of a Report on the EDVAC had predated Turing's paper, but it was much less detailed and, according to John R. Womersley, Superintendent of the NPL Mathematics Division, it "contains a number of ideas which are Dr. Turing's own". citing
Although ACE was a feasible design, the effect of the Official Secrets Act surrounding the wartime work at Bletchley Park made it impossible for Turing to explain the basis of his analysis of how a computer installation involving human operators would work. This led to delays in starting the project and he became disillusioned. In late 1947 he returned to Cambridge for a sabbatical year during which he produced a seminal work on Intelligent Machinery that was not published in his lifetime.See While he was at Cambridge, the Pilot ACE was being built in his absence. It executed its first program on 10 May 1950, and a number of later computers around the world owe much to it, including the English Electric DEUCE and the American Bendix G-15. The full version of Turing's ACE was not built until after his death.
According to the memoirs of the German computer pioneer Heinz Billing from the Max Planck Institute for Physics, published by Genscher, Düsseldorf, there was a meeting between Turing and Konrad Zuse. It took place in Göttingen in 1947. The interrogation had the form of a colloquium. Participants were Womersley, Turing, Porter from England and a few German researchers like Zuse, Walther, and Billing (for more details see Herbert Bruderer, Konrad Zuse und die Schweiz).
In 1948, Turing was appointed reader in the Mathematics Department at the University of Manchester. He lived at "Copper Folly", 43 Adlington Road, in Wilmslow. A year later, he became deputy director of the Computing Machine Laboratory, where he worked on software for one of the earliest stored-program computers—the Manchester Mark 1. Turing wrote the first version of the Programmer's Manual for this machine, was elected to membership of the Manchester Literary and Philosophical Society,Minute Book of the Manchester Lit & Phil, 1954 and was recruited by Ferranti as a consultant in the development of their commercialised machine, the Ferranti Mark 1. He continued to be paid consultancy fees by Ferranti until his death. During this time, he continued to do more abstract work in mathematics, and in "Computing Machinery and Intelligence", Turing addressed the problem of artificial intelligence, and proposed an experiment that became known as the Turing test, an attempt to define a standard for a machine to be called "intelligent". The idea was that a computer could be said to "think" if a human interrogator could not tell it apart, through conversation, from a human being. In the paper, Turing suggested that rather than building a program to simulate the adult mind, it would be better to produce a simpler one to simulate a child's mind and then to subject it to a course of education. A reversed form of the Turing test is widely used on the Internet; the CAPTCHA test is intended to determine whether the user is a human or a computer.
In 1948, Turing, working with his former undergraduate colleague, D.G. Champernowne, began writing a chess program for a computer that did not yet exist. By 1950, the program was completed and dubbed the Turochamp. In 1952, he tried to implement it on a Ferranti Mark 1, but lacking enough power, the computer was unable to execute the program. Instead, Turing "ran" the program by flipping through the pages of the algorithm and carrying out its instructions on a chessboard, taking about half an hour per move. The game was recorded. According to Garry Kasparov, Turing's program "played a recognizable game of chess". The program lost to Turing's colleague Alick Glennie, although it is said that it won a game against Champernowne's wife, Isabel.
His Turing test was a significant, characteristically provocative, and lasting contribution to the debate regarding artificial intelligence, which continues after more than half a century.
Pattern formation and mathematical biology
When Turing was 39 years old in 1951, he turned to mathematical biology, finally publishing his masterpiece "The Chemical Basis of Morphogenesis" in January 1952. He was interested in morphogenesis, the development of patterns and shapes in biological organisms. He suggested that a system of chemicals reacting with each other and diffusing across space, termed a reaction–diffusion system, could account for "the main phenomena of morphogenesis". He used systems of partial differential equations to model catalytic chemical reactions. For example, if a catalyst A is required for a certain chemical reaction to take place, and if the reaction produces more of the catalyst A, then we say that the reaction is autocatalytic, and there is positive feedback that can be modelled by nonlinear differential equations. Turing discovered that patterns could be created if the chemical reaction not only produced catalyst A, but also produced an inhibitor B that slowed down the production of A. If A and B then diffused through the container at different rates, there could be some regions where A dominated and some where B did. To calculate the extent of this, Turing would have needed a powerful computer, but these were not so freely available in 1951, so he had to use linear approximations to solve the equations by hand. These calculations gave the right qualitative results, and produced, for example, a uniform mixture that oddly enough had regularly spaced fixed red spots. The Russian biochemist Boris Belousov had performed experiments with similar results, but could not get his papers published because of the contemporary prejudice that any such thing violated the second law of thermodynamics. Belousov was not aware of Turing's paper in the Philosophical Transactions of the Royal Society.
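A minimal numerical sketch of the kind of activator–inhibitor behaviour Turing analysed (here the well-known Gray–Scott variant solved by finite differences, not Turing's own linearised equations) shows how two reacting chemicals diffusing at different rates can spontaneously form spots from a nearly uniform state:

```python
import numpy as np

# Gray-Scott reaction-diffusion system (a much-studied descendant of Turing's
# 1952 model), integrated with explicit finite differences on a periodic grid.
N, STEPS = 128, 5000
Du, Dv = 0.16, 0.08          # the two species diffuse at different rates
F, K = 0.035, 0.065          # feed and kill rates in the spot-forming regime

def laplacian(Z):
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

U = np.ones((N, N))
V = np.zeros((N, N))
U[N//2-8:N//2+8, N//2-8:N//2+8] = 0.50   # small perturbed square seeds the pattern
V[N//2-8:N//2+8, N//2-8:N//2+8] = 0.25

for _ in range(STEPS):
    UVV = U * V * V
    U += Du * laplacian(U) - UVV + F * (1 - U)
    V += Dv * laplacian(V) + UVV - (F + K) * V

# V now shows regularly spaced spots; visualise with, e.g.,
#   import matplotlib.pyplot as plt; plt.imshow(V); plt.show()
print(f"V range after {STEPS} steps: {V.min():.3f} to {V.max():.3f}")
```

The key ingredient, as in Turing's analysis, is that the inhibiting and activating species diffuse at different rates, which destabilises the uniform state at a characteristic wavelength.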
Although published before the structure and role of DNA was understood, Turing's work on morphogenesis remains relevant today and is considered a seminal piece of work in mathematical biology. One of the early applications of Turing's paper was the work by James Murray explaining spots and stripes on the fur of cats, large and small. Further research in the area suggests that Turing's work can partially explain the growth of "feathers, hair follicles, the branching pattern of lungs, and even the left-right asymmetry that puts the heart on the left side of the chest". In 2012, Sheth, et al. found that in mice, removal of Hox genes causes an increase in the number of digits without an increase in the overall size of the limb, suggesting that Hox genes control digit formation by tuning the wavelength of a Turing-type mechanism. Later papers were not available until Collected Works of A. M. Turing was published in 1992.
A study conducted in 2023 confirmed Turing's mathematical model hypothesis. Presented by the American Physical Society, the experiment involved growing chia seeds in even layers within trays, later adjusting the available moisture. Researchers experimentally tweaked the factors which appear in the Turing equations, and, as a result, patterns resembling those seen in natural environments emerged. This is believed to be the first time that experiments with living vegetation have verified Turing's mathematical insight.
Personal life
Treasure
In the 1940s, Turing became worried about losing his savings in the event of a German invasion. In order to protect them, he bought two silver bars worth £250 (in 2022, £8,000 adjusted for inflation, £48,000 at spot price) and buried them in a wood near Bletchley Park. Upon returning to dig them up, Turing found that he was unable to break his own code describing where exactly he had hidden them. This, along with the fact that the area had been renovated, meant that he never regained the silver.
Engagement
In 1941, Turing proposed marriage to Hut 8 colleague Joan Clarke, a fellow mathematician and cryptanalyst, but their engagement was short-lived. After admitting his homosexuality to his fiancée, who was reportedly "unfazed" by the revelation, Turing decided that he could not go through with the marriage.
Chess
Turing invented a hybrid chess game, which predates chess boxing, referred to as round-the-house chess: one player makes a chess move, then runs around the house, and the other player must make a chess move before the first player returns.
Homosexuality and indecency conviction
In December 1951, Turing met Arnold Murray, a 19-year-old unemployed man. Turing was walking along Manchester's Oxford Road when he met Murray just outside the Regal Cinema and invited him to lunch. The two agreed to meet again and in January 1952 began an intimate relationship. On 23 January, Turing's house in Wilmslow was burgled. Murray told Turing that he and the burglar were acquainted, and Turing reported the crime to the police. During the investigation, he acknowledged a sexual relationship with Murray. Homosexual acts were criminal offences in the United Kingdom at that time, and both men were charged with "gross indecency" under Section 11 of the Criminal Law Amendment Act 1885. Initial committal proceedings for the trial were held on 27 February during which Turing's solicitor "reserved his defence", i.e., did not argue or provide evidence against the allegations. The proceedings were held at the Sessions House in Knutsford.
Turing was later convinced by the advice of his brother and his own solicitor, and he entered a plea of guilty. The case, Regina v. Turing and Murray, was brought to trial on 31 March 1952. Turing was convicted and given a choice between imprisonment and probation. His probation would be conditional on his agreement to undergo hormonal physical changes designed to reduce libido, known as "chemical castration". He accepted the option of injections of what was then called stilboestrol (now known as diethylstilbestrol or DES), a synthetic oestrogen; this feminization of his body was continued for the course of one year. The treatment rendered Turing impotent and caused breast tissue to form. In a letter, Turing wrote that "no doubt I shall emerge from it all a different man, but quite who I've not found out". Murray was given a conditional discharge.
Turing's conviction led to the removal of his security clearance and barred him from continuing with his cryptographic consultancy for GCHQ, the British signals intelligence agency that had evolved from GC&CS in 1946, though he kept his academic post. His trial took place only months after the defection to the Soviet Union of Guy Burgess and Donald Maclean, in summer 1951, after which the Foreign Office started to consider anyone known to be homosexual as a potential security risk.
Turing was denied entry into the United States after his conviction in 1952, but was free to visit other European countries. In the summer of 1952 he visited Norway, which was more tolerant of homosexuals. Among the various men he met there was one named Kjell Carlson. Kjell intended to visit Turing in the UK, but the authorities intercepted Kjell's postcard detailing his travel arrangements and were able to deport him before the two could meet. It was also during this time that Turing started consulting a psychiatrist, Franz Greenbaum, with whom he got on well and who subsequently became a family friend.
Death
On 8 June 1954, at his house at 43 Adlington Road, Wilmslow, Turing's housekeeper found him dead. A post mortem was held that evening, which determined that he had died the previous day at age 41 with cyanide poisoning cited as the cause of death. When his body was discovered, an apple lay half-eaten beside his bed, and although the apple was not tested for cyanide, it was speculated that this was the means by which Turing had consumed a fatal dose.
Turing's brother, John, identified the body the following day and took the advice given by Franz Greenbaum to accept the verdict of the inquest, as there was little prospect of establishing that the death was accidental. The inquest was held the following day, which determined the cause of death to be suicide. His nephew, writer Dermot Turing, does not believe that his conviction or hormone treatment had anything to do with his suicide. He points out that the conviction ended in 1952 and the treatment the following year. Furthermore, no physiological evidence was found that the treatment had any impact on his uncle's mental health, and he had just made a list of tasks he needed to perform when he got back to his office after a public holiday. McDonald, Sally. Family of acclaimed mathematician Alan Turing on why the story of his complicated life and lasting legacy deserves to be told afresh. The Sunday Post, March 21, 2021. Turing may have inhaled cyanide fumes from an electroplating experiment in his spare room, and he often ate an apple before bed, leaving it half eaten.
Turing's remains were cremated at Woking Crematorium just two days later on 12 June 1954, with just his mother, brother, and Lyn Newman attending, and his ashes were scattered in the gardens of the crematorium, just as his father's had been. Turing's mother was on holiday in Italy at the time of his death and returned home after the inquest. She never accepted the verdict of suicide.
Philosopher Jack Copeland has questioned various aspects of the coroner's historical verdict. He suggested an alternative explanation for the cause of Turing's death: the accidental inhalation of cyanide fumes from an apparatus used to electroplate gold onto spoons. The potassium cyanide was used to dissolve the gold. Turing had such an apparatus set up in his tiny spare room. Copeland noted that the autopsy findings were more consistent with inhalation than with ingestion of the poison. Turing also habitually ate an apple before going to bed, and it was not unusual for the apple to be discarded half-eaten. Furthermore, Turing had reportedly borne his legal setbacks and hormone treatment (which had been discontinued a year previously) "with good humour" and had shown no sign of despondency before his death. He even set down a list of tasks that he intended to complete upon returning to his office after the holiday weekend. Turing's mother believed that the ingestion was accidental, resulting from her son's careless storage of laboratory chemicals. Turing biographer Andrew Hodges theorised that Turing deliberately made his death look accidental in order to shield his mother from the knowledge that he had killed himself.
Doubts on the suicide thesis have also been cast by John W. Dawson Jr. who, in his review of Hodges' book, recalls "Turing's vulnerable position in the Cold War political climate" and points out that Turing was found dead by a maid who discovered him "lying neatly in his bed", hardly what one would expect of "a man fighting for life against the suffocation induced by cyanide poisoning". Turing had given no hint of suicidal inclinations to his friends and had made no effort to put his affairs in order.
Hodges and a later biographer, David Leavitt, have both speculated that Turing was re-enacting a scene from the Walt Disney film Snow White and the Seven Dwarfs (1937), his favourite fairy tale. Both men noted that (in Leavitt's words) he took "an especially keen pleasure in the scene where the Wicked Queen immerses her apple in the poisonous brew".
It has also been suggested that Turing's belief in fortune-telling may have caused his depressed mood. As a youth, Turing had been told by a fortune-teller that he would be a genius. In mid-May 1954, shortly before his death, Turing again decided to consult a fortune-teller during a day-trip to St Annes-on-Sea with the Greenbaum family. According to the Greenbaums' daughter, Barbara, the visit left Turing visibly shaken.
Government apology and pardon
In August 2009, British programmer John Graham-Cumming started a petition urging the British government to apologise for Turing's prosecution as a homosexual. The petition, which was open only to UK citizens, received more than 30,000 signatures. The prime minister, Gordon Brown, acknowledged the petition, releasing a statement on 10 September 2009 apologising and describing the treatment of Turing as "appalling".
In December 2011, William Jones and his member of Parliament, John Leech, created an e-petition requesting that the British government pardon Turing for his conviction of "gross indecency".
The petition gathered over 37,000 signatures and was submitted to Parliament by the Manchester MP John Leech, but the request was declined by Justice Minister Lord McNally, who said that Turing had been properly convicted of what was at the time a criminal offence.
John Leech, the MP for Manchester Withington (2005–15), submitted several bills to Parliament and led a high-profile campaign to secure the pardon. Leech made the case in the House of Commons that Turing's contribution to the war made him a national hero and that it was "ultimately just embarrassing" that the conviction still stood. Leech continued to take the bill through Parliament and campaigned for several years, gaining the public support of numerous leading scientists, including Stephen Hawking.
On 26 July 2012, a bill was introduced in the House of Lords to grant a statutory pardon to Turing for offences under section 11 of the Criminal Law Amendment Act 1885, of which he was convicted on 31 March 1952. Late in the year in a letter to The Daily Telegraph, the physicist Stephen Hawking and 10 other signatories including the Astronomer Royal Lord Rees, President of the Royal Society Sir Paul Nurse, Lady Trumpington (who worked for Turing during the war) and Lord Sharkey (the bill's sponsor) called on Prime Minister David Cameron to act on the pardon request. The government indicated it would support the bill, and it passed its third reading in the House of Lords in October.
At the bill's second reading in the House of Commons on 29 November 2013, Conservative MP Christopher Chope objected to the bill, delaying its passage. The bill was due to return to the House of Commons on 28 February 2014, but before the bill could be debated in the House of Commons, the government elected to proceed under the royal prerogative of mercy. On 24 December 2013, Queen Elizabeth II signed a pardon for Turing's conviction for "gross indecency", with immediate effect. Announcing the pardon, Lord Chancellor Chris Grayling said Turing deserved to be "remembered and recognised for his fantastic contribution to the war effort" and not for his later criminal conviction. The Queen pronounced Turing pardoned in August 2014. It was only the fourth royal pardon granted since the conclusion of the Second World War. Pardons are normally granted only when the person is technically innocent, and a request has been made by the family or other interested party; neither condition was met in regard to Turing's conviction.
In September 2016, the government announced its intention to expand this retroactive exoneration to other men convicted of similar historical indecency offences, in what was described as an "Alan Turing law". The Alan Turing law is now an informal term for the law in the United Kingdom, contained in the Policing and Crime Act 2017, which serves as an amnesty law to retroactively pardon men who were cautioned or convicted under historical legislation that outlawed homosexual acts. The law applies in England and Wales. Due to his repeated attempts to bring attention to the issue, Leech is now regularly described as the "architect" of Turing's pardon and subsequently the Alan Turing Law, which went on to secure pardons for 75,000 other men and women. At the British premiere of a film based on Turing's life, The Imitation Game, the producers thanked Leech for bringing the topic to public attention and securing Turing's pardon.
On 19 July 2023, following an apology to LGBT veterans from the UK Government, Defence Secretary Ben Wallace suggested Turing should be honoured with a permanent statue on the fourth plinth of Trafalgar Square, describing Turing as "probably the greatest war hero, in my book, of the Second World War, [whose] achievements shortened the war, saved thousands of lives, helped defeat the Nazis. And his story is a sad story of a society and how it treated him."
Further reading
Articles
Books
Turing's mother, who survived him by many years, wrote this 157-page biography of her son, glorifying his life. It was published in 1959, and so could not cover his war work. Scarcely 300 copies were sold (Sara Turing to Lyn Newman, 1967, Library of St John's College, Cambridge). The six-page foreword by Lyn Irvine includes reminiscences and is more frequently quoted. It was re-published by Cambridge University Press in 2012, to honour the centenary of his birth, and included a new foreword by Martin Davis, as well as a never-before-published memoir by Turing's older brother John F. Turing.
(originally published in 1959 by W. Heffer & Sons, Ltd)
This 1986 Hugh Whitemore play tells the story of Turing's life and death. In the original West End and Broadway runs, Derek Jacobi played Turing and he recreated the role in a 1997 television film based on the play made jointly by the BBC and WGBH, Boston. The play is published by Amber Lane Press, Oxford, ASIN: B000B7TM0Q
See also
Legacy of Alan Turing
List of things named after Alan Turing
List of suicides of LGBTQ people
List of pioneers in computer science
Notes
References
External links
Alan Turing archive on New Scientist
Alan Turing plaques on openplaques.org
Papers
Alan Turing Papers – University of Manchester Library
Science in the Making – Alan Turing's papers in the Royal Society's archives
The Turing Digital Archive – scans of some unpublished documents and material, King's College, Cambridge
Interviews
Oral history interview with Nicholas C. Metropolis, Charles Babbage Institute, University of Minnesota. Metropolis was the first director of computing services at Los Alamos National Laboratory; topics include the relationship between Turing and John von Neumann
Articles
How Alan Turing Cracked The Enigma Code Imperial War Museums
Websites
AlanTuring.net – Turing Archive for the History of Computing by Jack Copeland
Alan Turing site maintained by Andrew Hodges including a short biography
Events
CiE 2012: Turing Centenary Conference
Alan Turing Year
Sherborne School
Sherborne School Archives – holds papers relating to Turing's time at Sherborne School
Alan Turing and the ‘Nature of Spirit’ (Old Shirburnian Society)
Alan Turing OBE, PhD, FRS (1912-1954) (Old Shirburnian Society)
Andaman Islands
https://en.wikipedia.org/wiki/Andaman_Islands
The Andaman Islands are an archipelago of 200 islands in the northeastern Indian Ocean, lying southwest of the coasts of Myanmar's Ayeyarwady Region. Together with the Nicobar Islands to their south, the Andamans serve as a maritime boundary between the Bay of Bengal to the west and the Andaman Sea to the east. Most of the islands are part of the Andaman and Nicobar Islands, a Union Territory of India, while the Coco Islands and Preparis Island are part of the Yangon Region of Myanmar.
The Andaman Islands are home to the Andamanese, a group of indigenous people made up of a number of tribes, including the Jarawa and Sentinelese. While some of the islands can be visited with permits, entry to others, including North Sentinel Island, is banned by law. The Sentinelese are generally hostile to visitors and have had little contact with any other people. The Indian government and coast guard protect their right to privacy.
History
Etymology
In the 13th century, the name of Andaman appears in Late Middle Chinese as ʔˠanH dɑ mˠan (, pronounced yàntuómán in modern Mandarin Chinese) in the book Zhu Fan Zhi by Zhao Rukuo. In Chapter 38 of the book, Countries in the Sea, Zhao Rukuo specifies that going from Lambri (Sumatra) to Ceylon, an unfavourable wind makes ships drift towards the Andaman Islands. In the 15th century, Andaman was recorded as "Andeman Mountain" (安得蠻山, pronounced āndémán shān in modern Mandarin Chinese) during the voyages of Zheng He in the Mao Kun map of the Wu Bei Zhi.
Early inhabitants
The oldest archaeological evidence for the habitation of the islands dates to the 1st millennium BC. Genetic evidence suggests that the indigenous Andamanese peoples share a common origin, and that the islands were settled sometime after 26,000 years ago, possibly at the end of the Last Glacial Period, when sea levels were much lower, reducing the distance between the Andaman Islands and the Asian mainland; genetic estimates suggest that the two main linguistic groups diverged around 16,000 years ago. Andamanese peoples are a genetically distinct group highly divergent from other Asians.
Image: The Andaman Islands in the Bay of Bengal were said to be inhabited by wolf-headed people, who were depicted in a "book of wonders" produced in Paris in the early 15th century.
Chola empire
The Chola emperor Rajendra I took over the Andaman and Nicobar Islands and used them as a strategic naval base to launch an expedition against the Sriwijaya Empire. The Cholas called the island Ma-Nakkavaram ("great open/naked land"), a name found in the Thanjavur inscription of 1050 CE. The European traveller Marco Polo (13th–14th century) also referred to this island as 'Necuverann', and a corrupted form of the Tamil name Nakkavaram likely led to the modern name Nicobar during the British colonial period.
British colonial era
In 1789, the Bengal Presidency established a naval base and penal colony on Chatham Island in the southeast bay of Great Andaman. The settlement is now known as Port Blair (after the Bombay Marine lieutenant Archibald Blair who founded it). After two years, the colony was moved to the northeast part of Great Andaman and was named Port Cornwallis after Admiral William Cornwallis. However, there was much disease and death in the penal colony and the government ceased operating it in May 1796.
In 1824, Port Cornwallis was the rendezvous of the fleet carrying the army to the First Burmese War. In the 1830s and 1840s, shipwrecked crews who landed on the Andamans were often attacked and killed by the natives and the islands had a reputation for cannibalism. The loss of the Runnymede and the Briton in 1844 during the same storm, while transporting goods and passengers between India and Australia, and the continuous attacks launched by the natives, which the survivors fought off, alarmed the British government. Kingston, W.H.G. (1873) Shipwrecks and Disasters at Sea. George Routledge and Sons, London. In 1855, the government proposed another settlement on the islands, including a convict establishment, but the Indian Rebellion of 1857 forced a delay in its construction. However, because the rebellion led to the British holding a large number of prisoners, it made the new Andaman settlement and prison urgently necessary. Construction began in November 1857 at Port Blair using inmates' labour, avoiding the vicinity of a salt swamp that seemed to have been the source of many of the earlier problems at Port Cornwallis.
The Battle of Aberdeen was fought on 17 May 1859 between the Great Andamanese tribe and the British. Today, a memorial stands in the Andaman water sports complex as a tribute to the people who died in the battle. Fearful of British intentions, and with help from an escaped convict from the penal settlement, the Great Andamanese attacked the British settlement, but they were outnumbered and soon suffered heavy casualties. Later, it was identified that an escaped convict named Dudhnath Tewari had changed sides and informed the British about the tribe's plans.
In 1867, the merchantman Nineveh was wrecked on the reef of North Sentinel Island. The 86 survivors reached the beach in the ship's boats. On the third day, they were attacked with iron-tipped spears by naked islanders. One person from the ship escaped in a boat and the others were later rescued by a British Royal Navy ship.
For some time, sickness and mortality were high, but swamp reclamation and extensive forest clearance continued. The Andaman colony became notorious with the murder of the Viceroy Richard Southwell Bourke, 6th Earl of Mayo, on a visit to the settlement (8 February 1872), by a Pathan from Afghanistan, Sher Ali Afridi. In the same year, the two island groups Andaman and Nicobar, were united under a chief commissioner residing at Port Blair.
From the time of its development in 1858 under the direction of James Pattison Walker, and in response to the mutiny and rebellion of the previous year, the settlement was first and foremost a repository for political prisoners. The Cellular Jail at Port Blair, when completed in 1910, included 698 cells designed for solitary confinement; each cell measured with a single ventilation window above the floor.
The Indians imprisoned here referred to the island and its prison as Kala Pani ("black water"), a reference to the kala pani taboo, the Hindu proscription against travelling across the open sea. Incarceration on the Andamans thus threatened prisoners with the loss of their caste and resultant social exclusion; a 1996 film set on the island took that term as its title, Kaalapani. The number of prisoners who died in this camp is estimated to be in the thousands. Many more died of harsh treatment and the strenuous living and working conditions in this camp.
The Viper Chain Gang Jail on Viper Island was reserved for extraordinarily troublesome prisoners and was also the site of hangings. In the 20th century, it became a convenient place to house prominent members of India's independence movement.
Japanese occupation
The Andaman and Nicobar Islands were occupied by Japan during World War II. The islands were nominally put under the authority of the Arzi Hukumat-e-Azad Hind (Provisional Government of Free India) headed by Subhas Chandra Bose, who visited the islands during the war, and renamed them as Shaheed (Martyr) & Swaraj (Self-rule). On 30 December 1943, during the Japanese occupation, Bose, who was allied with the Japanese, first raised the flag of Indian independence. General Loganathan, of the Indian National Army, was Governor of the Andaman and Nicobar Islands, which had been annexed to the Provisional Government. According to Werner Gruhl: "Before leaving the islands, the Japanese rounded up and executed 750 innocents."Gruhl, Werner (2007) Imperial Japan's World War Two, 1931–1945 , Transaction Publishers. . p. 102.
Post-World War II
At the close of World War II, the British government announced its intention to shut down the penal settlement. The government proposed to employ former inmates in an initiative to develop the island's fisheries, timber, and agricultural resources. In exchange, inmates would be granted return passage to the Indian mainland, or the right to settle on the islands. J H Williams, one of the Bombay Burma Company's senior officials, was dispatched to perform a timber survey of the islands using convict labor. He recorded his findings in 'The Spotted Deer' (published in 1957 by Rupert Hart-Davis).
The penal colony was eventually closed on 15 August 1947 when India gained independence. It has since served as a museum to the independence movement.
Most of the Andaman Islands became part of the Republic of India in 1950 and were declared a union territory of the nation in 1956, while Preparis Island and the Coco Islands became part of the Yangon Region of Myanmar in 1948.
Late 20th century – 21st century
Outside visits
In April 1998, American photographer John S. Callahan organised the first surfing project in the Andamans, starting from Phuket in Thailand with the assistance of Southeast Asia Liveaboards (SEAL), a UK owned dive charter company. With a crew of international professional surfers, they crossed the Andaman Sea on the yacht Crescent and cleared formalities in Port Blair. The group proceeded to Little Andaman Island, where they spent ten days surfing several spots for the first time, including Jarawa Point near Hut Bay and the long right reef point at the southwest tip of the island, named Kumari Point. The resulting article in Surfer Magazine, "Quest for Fire" by journalist Sam George, put the Andaman Islands on the surfing map for the first time. Footage of the waves of the Andaman Islands also appeared in the film Thicker than Water, shot by documentary filmmaker Jack Johnson. Callahan went on to make several more surfing projects in the Andamans, including a trip to the Nicobar Islands in 1999.
In November 2018, John Allen Chau, an American missionary, traveled illegally, with the help of local fishermen, to North Sentinel Island in the Andaman chain on several occasions, despite a travel ban to the island. He is reported to have been killed. Despite some relaxation introduced earlier in 2018 to the stringent visit permit system for the islands, North Sentinel Island was still highly protected from outside contact. Special permission to allow researchers and anthropologists to visit could be sought. Chau had no special clearance and knew that his visit was illegal.
Although a less restrictive system of approval to visit some of the islands now applies, with non-Indian nationals no longer required to obtain pre-approval with a Restricted Area Permit (RAP), foreign visitors must still show their passport at Immigration at Port Blair Airport and Seaport for verification. Citizens of Afghanistan, China and Pakistan, or other foreign nationals whose origin is any of these countries, are still required to obtain a RAP to visit the Andaman and Nicobar Islands. Similarly, citizens of Myanmar who wish to visit Mayabunder or Diglipur must also apply for a RAP. In these cases, the permits must be pre-approved prior to arrival in Port Blair.
Natural disasters
On 26 December 2004, the coast of the Andaman Islands was devastated by a tsunami following the 2004 Indian Ocean earthquake, which had the longest duration of faulting ever recorded, lasting between 500 and 600 seconds. Strong oral tradition in the area warned of the importance of moving inland after a quake and is credited with saving many lives. In the aftermath, more than 2,000 people were confirmed dead and more than 4,000 children were orphaned or had lost one parent. At least 40,000 residents were rendered homeless and were moved to relief camps. On 11 August 2009, a magnitude 7 earthquake struck near the Andaman Islands, causing a tsunami warning to go into effect. On 30 March 2010, a magnitude 6.9 earthquake struck near the Andaman Islands.
Geography and geology
The Andaman Archipelago is an oceanic continuation of the Burmese Arakan Yoma range in the north and of the Indonesian Archipelago in the south. It has 325 islands which cover an area of , with the Andaman Sea to the east between the islands and the coast of Burma. North Andaman Island is south of Burma, although a few smaller Burmese islands are closer, including the three Coco Islands.
The Ten Degree Channel separates the Andamans from the Nicobar Islands to the south. The highest point is located in North Andaman Island (Saddle Peak at ).
The geology of the Andaman islands consists essentially of Late Jurassic to Early Eocene ophiolites and sedimentary rocks (argillaceous and algal limestones), deformed by numerous deep faults and thrusts with ultramafic igneous intrusions. There are at least 11 mud volcanoes on the islands. Chakrabarti, P.; Nag, A.; Dutta, S. B.; Dasgupta, S. and Gupta, N. (2006) S & T Input: Earthquake and Tsunami Effects..., page 43. Chapter 5 in S. M. Ramasamy et al. (eds.), Geomatics in Tsunami, New India Publishing. There are two volcanic islands, Narcondam Island and Barren Island, which have produced basalt and andesite. Barren Island is the only active volcano in the Indian sub-continent; its latest eruption was reported in December 2022, and it has drawn interest as a potential geotourism destination.
Climate
The climate is typical of tropical islands of similar latitude. It is always warm, but with sea breezes. Rainfall is irregular, usually dry during the north-east monsoons, and very wet during the south-west monsoons.
Flora
The Middle Andamans harbour mostly moist deciduous forests. North Andamans is characterised by the wet evergreen type, with plenty of woody climbers.
The natural vegetation of the Andamans is tropical forest, with mangroves on the coast. The rainforests are similar in composition to those of the west coast of Burma. Most of the forests are evergreen, but there are areas of deciduous forest on North Andaman, Middle Andaman, Baratang and parts of South Andaman Island. The South Andaman forests have a profuse growth of epiphytic vegetation, mostly ferns and orchids.
The Andaman forests are largely unspoiled, despite logging and the demands of the fast-growing population driven by immigration from the Indian mainland. There are protected areas on Little Andaman, Narcondam, North Andaman and South Andaman, but these are mainly aimed at preserving the coast and the marine wildlife rather than the rainforests. Threats to wildlife come from introduced species including rats, dogs, cats and the elephants of Interview Island and North Andaman.
Scientists discovered a new species of green alga in the Andaman archipelago, naming it Acetabularia jalakanyakae. "Jalakanyaka" is a Sanskrit word that means "mermaid".
Timber
Andaman forests contain 200 or more timber-producing species of trees, of which about 30 varieties are considered commercial. Major commercial timber species are Gurjan (Dipterocarpus spp.) and Padauk (Pterocarpus dalbergioides). The following ornamental woods are noted for their pronounced grain formation:
Marble wood (Diospyros marmorata)
Andaman Padauk (Pterocarpus dalbergioides)
Silver grey (a special formation of wood in white utkarsh)
Chooi (Sageraea elliptica)
Kokko (Albizzia lebbeck)
Padauk wood is sturdier than teak and is widely used for furniture making.
There are burr wood and buttress root formations in Andaman Padauk. The largest piece of buttress known from Andaman was a dining table of . The largest piece of burr wood was made into a dining table for eight.
The Rudraksha (Elaeocarps sphaericus) and aromatic Dhoop-resin trees also are found here.
Fauna
The Andaman Islands are home to a number of animals, many of them endemic. The Andaman and Nicobar Islands are home to 10% of all Indian fauna species. The islands make up only 0.25% of the country's geographical area but have 11,009 species, according to a publication by the Zoological Survey of India.
Mammals
The island's endemic mammals include
Andaman spiny shrew (Crocidura hispida)
Andaman shrew (Crocidura andamanensis)
Jenkins's shrew (Crocidura jenkinsi)
Andaman horseshoe bat (Rhinolophus cognatus)
Andaman rat (Rattus stoicus)
The banded pig (Sus scrofa vittatus), also known as the Andaman wild boar and once thought to be an endemic subspecies, is protected by the Wildlife Protection Act 1972 (Sch I). The spotted deer (Axis axis), the Indian muntjac (Muntiacus muntjak) and the sambar (Rusa unicolor) were all introduced to the Andaman islands, though the sambar did not survive.
Interview Island (the largest wildlife sanctuary in the territory) in Middle Andaman holds a population of feral elephants, which were brought in for forest work by a timber company and released when the company went bankrupt. This population has been subject to research studies.
Birds
Endemic or near endemic birds include
Spilornis elgini, a serpent-eagle
Rallina canningi, a crake (endemic; data-deficient per IUCN 2000)
Columba palumboides, a wood-pigeon
Macropygia rufipennis, a cuckoo dove
Centropus andamanensis, a subspecies of brown coucal (endemic)
Otus balli, a scops owl
Ninox affinis, a hawk-owl
Rhyticeros narcondami, the Narcondam hornbill
Dryocopus hodgei, a woodpecker
Dicrurus andamanensis, a drongo
Dendrocitta bayleyii, a treepie
Sturnus erythropygius, the white-headed starling
Collocalia affinis, the plume-toed swiftlet
Aerodramus fuciphagus, the edible-nest swiftlet
The islands' many caves, such as those at Chalis Ek, are nesting grounds for the edible-nest swiftlet, whose nests are prized in China for bird's nest soup. Sankaran, R. (1998), The impact of nest collection on the Edible-nest Swiftlet in the Andaman and Nicobar Islands. Sálim Ali Centre for Ornithology and Natural History, Coimbatore, India.
Reptiles and amphibians
The islands also have a number of endemic reptiles, toads and frogs, such as the Andaman cobra (Naja sagittifera), South Andaman krait (Bungarus andamanensis) and Andaman water monitor (Varanus salvator andamanensis).
There is a sanctuary from Havelock Island for saltwater crocodiles. Over the past 25 years there have been 24 crocodile attacks with four fatalities, including the death of American tourist Lauren Failla. The government has been criticised for failing to inform tourists of the crocodile sanctuary and danger, while simultaneously promoting tourism. Crocodiles are not only found within the sanctuary, but throughout the island chain in varying densities. They are habitat restricted, so the population is stable but not large. Populations occur throughout available mangrove habitat on all major islands, including a few creeks on Havelock. The species uses the ocean as a means of travel between different rivers and estuaries, thus they are not as commonly observed in open ocean. It is best to avoid swimming near mangrove areas or the mouths of creeks; swimming in the open ocean should be safe, but it is best to have a spotter around.
Demographics
The population of the Andaman Islands was 343,125, having grown from 50,000 in 1960. The bulk of the population originates from immigrants who have come to the islands since colonial times, mainly of Bengali, Hindustani, Telugu, Tamil, and Malayalam backgrounds.
A small minority of the population are the Andamanese — the aboriginal inhabitants (adivasi) of the islands. When they first came into sustained contact with outside groups in the 1850s, there were an estimated 7,000 Andamanese, divided into the Great Andamanese, Jarawa, Jangil (or Rutland Jarawa), Onge, and the Sentinelese. The Great Andamanese formed 10 tribes of 5,000 people total. As the numbers of settlers from the mainland increased (at first mostly prisoners and involuntary indentured labourers, later purposely recruited farmers), the Andamanese suffered a population decline due to the introduction of outside infectious diseases, land encroachment from settlers and conflict.
The Andaman Islands are home to the Sentinelese people, an uncontacted tribe.
Due to their isolated island location, the Andaman people have mostly avoided contact with the outside world. Their languages reflect this isolation: they are linguistically distinct, with strong morphological features of their own (root words, prefixes, and suffixes) and very little relation to the languages of surrounding geographic regions.
Figures from the end of the 20th century estimate that only approximately 400–450 ethnic Andamanese remain on the islands, with as few as 50 speakers of the indigenous languages. The Jangil are extinct. Most of the Great Andamanese tribes are extinct, and the survivors, now just 52, speak mostly Hindi. The Onge are reduced to fewer than 100 people. Only the Jarawa and Sentinelese still maintain a steadfast independence and refuse most attempts at contact; their numbers are uncertain but estimated to be in the low hundreds.
The indigenous languages are collectively referred to as the Andamanese languages, but they make up at least two independent families, and the dozen or so attested languages are either extinct or endangered.
Religion
Most of the tribal people in the Andaman and Nicobar Islands believe in a religion that can be described as a form of monotheistic animism. The tribal people of these islands believe that Puluga is the only deity and is responsible for everything happening on Earth. The faith of the Andamanese teaches that Puluga resides on the Andaman and Nicobar Islands' Saddle Peak. People try to avoid any action that might displease Puluga. People belonging to this religion believe in the presence of souls, ghosts, and spirits. They place great emphasis on dreams and let dreams decide different courses of action in their lives.
Andamanese mythology held that human males emerged from split bamboo, whereas women were fashioned from clay.Radcliffe-Brown, Alfred Reginald. The Andaman Islanders: A study in social anthropology. 2nd printing (enlarged). Cambridge: Cambridge University Press, 1933 [1906]. p. 192 One version found by Alfred Reginald Radcliffe-Brown held that the first man died and went to heaven, a pleasurable world, but this blissful period ended due to breaking a food taboo, specifically eating the forbidden vegetables in the Puluga's garden.Radcliffe-Brown, Alfred Reginald. The Andaman Islanders: A study in social anthropology. 2nd printing (enlarged). Cambridge: Cambridge University Press, 1933 [1906]. p. 220 Thus catastrophe ensued, and eventually the people grew overpopulated and didn't follow Puluga's laws. Hence, there was a Great Flood that left four survivors, who lost their fire.Radcliffe-Brown, Alfred Reginald. The Andaman Islanders: A study in social anthropology. 2nd printing (enlarged). Cambridge: Cambridge University Press, 1933 [1906]. p. 216Witzel, Michael E.J. (2012). The Origin of The World's Mythologies. Oxford: Oxford University Press. p. 309–312
Other religions practiced in the Andaman and Nicobar Islands are, in order of size, Hinduism, Christianity, Islam, Sikhism, Buddhism, Jainism and Baháʼí Faith.
Government
Port Blair is the chief community on the islands, and the administrative centre of the Union Territory. The Andaman Islands form a single administrative district within the Union Territory, the Andaman district (the Nicobar Islands were separated and established as the new Nicobar district in 1974).
Transportation
The only commercial airport is Veer Savarkar International Airport in Port Blair. The airport is under the control of the Indian Navy. Prior to 2016 only daylight operations were allowed; since 2016 night flights have also operated. A small airstrip, about long, is located near the eastern shore of North Andaman near Diglipur.
There are also passenger ships to Port Blair from Chennai, Visakhapatnam and Kolkata.
Cultural references
Literature
The islands are prominently featured in Arthur Conan Doyle's 1890 Sherlock Holmes mystery The Sign of the Four. The magistrate in Lady Gregory's play Spreading the News had formerly served in the islands.
M. M. Kaye's 1985 novel Death in the Andamans and Marianne Wiggins' 1989 novel John Dollar are set in the islands. The latter begins with an expedition from Burma to celebrate King George's birthday, but turns into a grim survival story after an earthquake and tsunami.
A principal character in the novel Six Suspects by Vikas Swarup is from the Andaman Islands. The main protagonist of William Boyd's 2018 novel Love is Blind spends time in the Andaman Islands at the turn of the 20th century. The Andaman Islands in the period before, during and just after the Second World War are the setting for Uzma Aslam Khan's The Miraculous True History of Nomi Ali.
Film and television
Priyadarshan's 1996 film Kaalapani (Malayalam; Sirai Chaalai in Tamil) depicts the Indian freedom struggle and the lives of prisoners in the Cellular Jail in Port Blair.
In 2023, the Andaman Islands were featured in the Netflix series Kaala Paani, which depicts a fictional disease outbreak on the islands in 2027.
See also
Andaman and Nicobar Islands
List of endemic birds of the Andaman and Nicobar Islands
List of trees of the Andaman Islands
Lists of islands
References
Notes
Sources
History & Culture. The Andaman Islands, with destination guide
External links
Official Andaman and Nicobar Tourism Website
Antimicrobial resistance
https://en.wikipedia.org/wiki/Antimicrobial_resistance
Antimicrobial resistance (AMR or AR) occurs when microbes evolve mechanisms that protect them from antimicrobials, which are drugs used to treat infections. This resistance affects all classes of microbes, including bacteria (antibiotic resistance), viruses (antiviral resistance), parasites (antiparasitic resistance), and fungi (antifungal resistance). Together, these adaptations fall under the AMR umbrella, posing significant challenges to healthcare worldwide. Misuse and improper management of antimicrobials are primary drivers of this resistance, though it can also occur naturally through genetic mutations and the spread of resistant genes.
Antibiotic resistance, a significant AMR subset, enables bacteria to survive antibiotic treatment, complicating infection management and treatment options. Resistance arises through spontaneous mutation, horizontal gene transfer, and increased selective pressure from antibiotic overuse, both in medicine and agriculture, which accelerates resistance development.
The burden of AMR is immense, with nearly 5 million annual deaths associated with resistant infections. Infections from AMR microbes are more challenging to treat and often require costly alternative therapies that may have more severe side effects. Preventive measures, such as using narrow-spectrum antibiotics and improving hygiene practices, aim to reduce the spread of resistance. Microbes resistant to multiple drugs are termed multidrug-resistant (MDR) and are sometimes called superbugs.
The World Health Organization (WHO) claims that AMR is one of the top global public health and development threats, estimating that bacterial AMR was directly responsible for 1.27 million global deaths in 2019 and contributed to 4.95 million deaths. Moreover, the WHO and other international bodies warn that AMR could lead to up to 10 million deaths annually by 2050 unless actions are taken. Global initiatives, such as calls for international AMR treaties, emphasize coordinated efforts to limit misuse, fund research, and provide access to necessary antimicrobials in developing nations. However, the COVID-19 pandemic redirected resources and scientific attention away from AMR, intensifying the challenge.
Definition
Antimicrobial resistance means that a microorganism is resistant to an antimicrobial drug that was once able to treat an infection by that microorganism. A person cannot become resistant to antibiotics. Resistance is a property of the microbe, not a person or other organism infected by a microbe. All types of microbes can develop drug resistance. Thus, there are antibiotic, antifungal, antiviral and antiparasitic resistance.
Antibiotic resistance is a subset of antimicrobial resistance. This more specific resistance is linked to bacteria and is broken down into two further subsets, microbiological and clinical. Microbiological resistance is the most common and arises from genes, mutated or inherited, that allow the bacteria to resist the killing mechanism of certain antibiotics. Clinical resistance is shown through the failure of therapy, where bacteria that are normally susceptible to a treatment become resistant after surviving exposure to it. In both cases of acquired resistance, the bacteria can pass the genetic catalyst for resistance through horizontal gene transfer: conjugation, transduction, or transformation. This allows the resistance to spread across the same species of pathogen or even to similar bacterial pathogens.
Overview
A WHO report released in April 2014 stated, "this serious threat is no longer a prediction for the future, it is happening right now in every region of the world and has the potential to affect anyone, of any age, in any country. Antibiotic resistance—when bacteria change so antibiotics no longer work in people who need them to treat infections—is now a major threat to public health.""WHO's first global report on antibiotic resistance reveals serious, worldwide threat to public health". Retrieved 2 May 2014.
Each year, nearly 5 million deaths are associated with AMR globally. In 2019, global deaths directly attributable to AMR numbered 1.27 million. That same year, AMR may have contributed to 5 million deaths, and one in five of the people who died due to AMR were children under five years old.
In 2018, WHO considered antibiotic resistance to be one of the biggest threats to global health, food security and development. Deaths attributable to AMR vary by area:
Deaths per 100,000 attributable to AMR, by region:
North Africa and Middle East: 11.2
Southeast and East Asia, and Oceania: 11.7
Latin America and Caribbean: 14.4
Central and Eastern Europe and Central Asia: 17.6
South Asia: 21.5
Sub-Saharan Africa: 23.7
The European Centre for Disease Prevention and Control calculated that in 2015 there were 671,689 infections in the EU and European Economic Area caused by antibiotic-resistant bacteria, resulting in 33,110 deaths. Most were acquired in healthcare settings. In 2019 there were 133,000 deaths caused by AMR.
Causes
AMR is driven largely by the misuse and overuse of antimicrobials. Yet, at the same time, many people around the world do not have access to essential antimicrobials. This leads to microbes either evolving a defense against the drugs used to treat them, or to strains with a natural resistance to antimicrobials becoming much more prevalent than those that are easily defeated with medication. While antimicrobial resistance does occur naturally over time, the use of antimicrobial agents in a variety of settings, both within the healthcare industry and outside of it, has led to antimicrobial resistance becoming increasingly prevalent.
Although many microbes develop resistance to antibiotics over time through natural mutation, overprescribing and inappropriate prescription of antibiotics have accelerated the problem. It is possible that as many as 1 in 3 prescriptions written for antibiotics are unnecessary. In the United States, approximately 154 million prescriptions for antibiotics are written every year. Of these, up to 46 million are unnecessary or inappropriate for the condition that the patient has. Microbes may naturally develop resistance through genetic mutations that occur during cell division, and although random mutations are rare, many microbes reproduce frequently and rapidly, increasing the chances of members of the population acquiring a mutation that increases resistance. Many individuals stop taking antibiotics when they begin to feel better. When this occurs, it is possible that the microbes that are less susceptible to treatment still remain in the body. If these microbes are able to continue to reproduce, this can lead to an infection by bacteria that are less susceptible or even resistant to an antibiotic.
Natural occurrence
AMR is a naturally occurring process. Antimicrobial resistance can evolve naturally due to continued exposure to antimicrobials. Natural selection means that organisms that are able to adapt to their environment, survive, and continue to produce offspring. As a result, the types of microorganisms that are able to survive over time with continued attack by certain antimicrobial agents will naturally become more prevalent in the environment, and those without this resistance will become obsolete.
Some contemporary antimicrobial resistance also evolved naturally, before the human clinical use of antimicrobials. For instance, methicillin resistance evolved in a bacterial pathogen of hedgehogs, possibly as a co-evolutionary adaptation of the pathogen to hedgehogs that are infected by a dermatophyte that naturally produces antibiotics. Also, many soil fungi and bacteria are natural competitors, and the original antibiotic penicillin discovered by Alexander Fleming rapidly lost clinical effectiveness in treating humans; furthermore, none of the other natural penicillins (F, K, N, X, O, U1 or U6) are currently in clinical use.
Antimicrobial resistance can be acquired from other microbes through swapping genes in a process termed horizontal gene transfer. This means that once a gene for resistance to an antibiotic appears in a microbial community, it can then spread to other microbes in the community, potentially moving from a non-disease causing microbe to a disease-causing microbe. This process is heavily driven by the natural selection processes that happen during antibiotic use or misuse.
Over time, most of the strains of bacteria and infections present will be of the type resistant to the antimicrobial agent being used to treat them, making this agent ineffective against most of the microbes present. With the increased use of antimicrobial agents, this natural process is accelerated.
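The selection dynamic described in this section can be illustrated with a small simulation. The sketch below is a deliberately simplified toy model in Python; the population size, kill rate, and mutation rate are arbitrary assumptions rather than measured values. It shows how a rare resistance mutation, once it appears, can sweep through a population that is repeatedly exposed to an antibiotic.

```python
import random

random.seed(1)  # make this illustrative run reproducible

def simulate_selection(generations=30, pop_size=100_000,
                       mutation_rate=1e-6, antibiotic_kill=0.99):
    """Toy model of resistance emerging under antibiotic selection pressure.

    All parameter values are illustrative assumptions, not measurements.
    Returns the resistant fraction of the final population.
    """
    susceptible, resistant = pop_size, 0
    for _ in range(generations):
        # The antibiotic kills most susceptible cells but spares resistant ones.
        susceptible = int(susceptible * (1 - antibiotic_kill))
        total = susceptible + resistant
        if total == 0:
            return 0.0  # infection cleared before resistance appeared
        # Survivors regrow to carrying capacity, keeping their proportions.
        growth = pop_size / total
        susceptible = int(susceptible * growth)
        resistant = int(resistant * growth)
        # Rare spontaneous mutations convert a few susceptible cells.
        new_mutants = sum(random.random() < mutation_rate
                          for _ in range(susceptible))
        susceptible -= new_mutants
        resistant += new_mutants
    return resistant / (susceptible + resistant)

print(f"Resistant fraction after 30 generations: {simulate_selection():.1%}")
```

Because mutation is random, repeated runs differ: in many runs no resistant mutant ever appears and the infection stays treatable, but once a single mutant arises it dominates within a few generations, which is the essence of the selection process described above.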
Self-medication
In the vast majority of countries, antibiotics can only be prescribed by a doctor and supplied by a pharmacy. Self-medication by consumers is defined as "the taking of medicines on one's own initiative or on another person's suggestion, who is not a certified medical professional", and it has been identified as one of the primary reasons for the evolution of antimicrobial resistance. Self-medication with antibiotics is an unsuitable way of using them but a common practice in resource-constrained countries. The practice exposes individuals to the risk of bacteria that have developed antimicrobial resistance. Many people resort to this out of necessity, when access to a physician is unavailable, or when patients have a limited amount of time or money to see a doctor. This increased access makes it extremely easy to obtain antimicrobials. An example is India, where in the state of Punjab 73% of the population resorted to treating their minor health issues and chronic illnesses through self-medication.
Self-medication is higher outside the hospital environment, and this is linked to higher use of antibiotics, with the majority of antibiotics being used in the community rather than hospitals. The prevalence of self-medication in low- and middle-income countries (LMICs) ranges from 8.1% to 93%. Accessibility, affordability, and conditions of health facilities, as well as the health-seeking behavior, are factors that influence self-medication in low- and middle-income countries. Two significant issues with self-medication are the lack of knowledge of the public on, firstly, the dangerous effects of certain antimicrobials (for example ciprofloxacin which can cause tendonitis, tendon rupture and aortic dissection) and, secondly, broad microbial resistance and when to seek medical care if the infection is not clearing. In order to determine the public's knowledge and preconceived notions on antibiotic resistance, a screening of 3,537 articles published in Europe, Asia, and North America was done. Of the 55,225 total people surveyed in the articles, 70% had heard of antibiotic resistance previously, but 88% of those people thought it referred to some type of physical change in the human body.
Clinical misuse
Clinical misuse by healthcare professionals is another contributor to increased antimicrobial resistance. Studies done in the US show that the indication for treatment of antibiotics, choice of the agent used, and the duration of therapy was incorrect in up to 50% of the cases studied. In 2010 and 2011 about a third of antibiotic prescriptions in outpatient settings in the United States were not necessary. Another study in an intensive care unit in a major hospital in France has shown that 30% to 60% of prescribed antibiotics were unnecessary. These inappropriate uses of antimicrobial agents promote the evolution of antimicrobial resistance by supporting the bacteria in developing genetic alterations that lead to resistance.
According to research conducted in the US that aimed to evaluate physicians' attitudes and knowledge on antimicrobial resistance in ambulatory settings, only 63% of those surveyed reported antibiotic resistance as a problem in their local practices, while 23% reported the aggressive prescription of antibiotics as necessary to avoid failing to provide adequate care. This demonstrates that many doctors underestimate the impact that their own prescribing habits have on antimicrobial resistance as a whole. It also confirms that some physicians may be overly cautious and prescribe antibiotics for medical or legal reasons, even when clinical indications for use of these medications are not always confirmed. This can lead to unnecessary antimicrobial use, a pattern which may have worsened during the COVID-19 pandemic.
Studies have shown that common misconceptions about the effectiveness and necessity of antibiotics to treat common mild illnesses contribute to their overuse.
The veterinary medical system is also important to the conversation about antibiotic use. Veterinary oversight is required by law for all medically important antibiotics. Veterinarians use the pharmacokinetic/pharmacodynamic (PK/PD) approach to ensure that the correct dose of a drug is delivered to the correct place at the correct time.
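As a hedged illustration of the PK/PD idea, the sketch below computes one commonly used PK/PD index, the ratio of the 24-hour area under the concentration-time curve to the minimum inhibitory concentration (AUC/MIC), for a hypothetical once-daily drug under a one-compartment model at steady state. The dose, bioavailability, clearance and MIC values are invented for the example and do not describe any real product or published dosing guideline.

```python
def auc_over_mic(dose_mg, bioavailability, clearance_l_per_h, mic_mg_per_l):
    """AUC(0-24h)/MIC for a once-daily dose under a one-compartment model.

    At steady state, the area under the concentration-time curve over one
    dosing interval equals (F * dose) / clearance, so a single daily dose
    gives AUC(0-24h) directly. All inputs here are hypothetical.
    """
    auc_mg_h_per_l = (bioavailability * dose_mg) / clearance_l_per_h
    return auc_mg_h_per_l / mic_mg_per_l

# Hypothetical example: 500 mg once daily, F = 0.9, CL = 4 L/h, MIC = 1 mg/L
ratio = auc_over_mic(dose_mg=500, bioavailability=0.9,
                     clearance_l_per_h=4.0, mic_mg_per_l=1.0)
print(f"AUC(0-24h)/MIC = {ratio:.0f}")  # prints 112; target ratios vary by drug and pathogen
```

A regimen whose index falls below the target associated with a given drug class risks sub-lethal exposure, which is exactly the condition under which resistant subpopulations are selected.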
Pandemics, disinfectants and healthcare systems
Increased antibiotic use during the early waves of the COVID-19 pandemic may exacerbate this global health challenge. Moreover, pandemic burdens on some healthcare systems may contribute to antibiotic-resistant infections. The use of disinfectants such as alcohol-based hand sanitizers, and antiseptic hand wash may also have the potential to increase antimicrobial resistance. Extensive use of disinfectants can lead to mutations that induce antimicrobial resistance. On the other hand, "increased hand hygiene, decreased international travel, and decreased elective hospital procedures may have reduced AMR pathogen selection and spread in the short term" during the COVID-19 pandemic.
A 2024 United Nations High-Level Meeting on AMR has pledged to reduce deaths associated with bacterial AMR by 10% over the next six years. In their first major declaration on the issue since 2016, global leaders also committed to raising $100 million to update and implement AMR action plans. However, the final draft of the declaration omitted an earlier target to reduce antibiotic use in animals by 30% by 2030, due to opposition from meat-producing countries and the farming industry. Critics argue this omission is a major weakness, as livestock accounts for around 73% of global sales of antimicrobial agents, including antibiotics, antivirals, and antiparasitics.
Environmental pollution
Considering the complex interactions between humans, animals and the environment, it is also important to consider the environmental aspects of and contributors to antimicrobial resistance. Although there are still some knowledge gaps in understanding the mechanisms and transmission pathways, environmental pollution is considered a significant contributor to antimicrobial resistance. Important contributing factors include antibiotic residues, industrial effluents, agricultural runoff, heavy metals, biocides and pesticides, and sewage and wastewater, which create reservoirs of resistant genes and bacteria and facilitate their transfer to human pathogens. Unused or expired antibiotics, if not disposed of properly, can enter water systems and soil. Discharge from pharmaceutical manufacturing and other industrial companies can also introduce antibiotics and other chemicals into the environment. These factors create selective pressure for resistant bacteria. Antibiotics used in livestock and aquaculture can contaminate soil and water, which promotes resistance in environmental microbes. Heavy metals such as zinc, copper and mercury, as well as biocides and pesticides, can co-select for antibiotic resistance, accelerating its spread. Inadequate treatment of sewage and wastewater allows resistant bacteria and genes to spread through water systems.
Food production
Livestock
The antimicrobial resistance crisis also extends to the food industry, specifically to food-producing animals. With an ever-increasing human population, there is constant pressure to intensify productivity in many agricultural sectors, including the production of meat as a source of protein. Antibiotics are fed to livestock to act as growth supplements and as a preventive measure to decrease the likelihood of infections.
Farmers typically use antibiotics in animal feed to improve growth rates and prevent infections. However, this preventive use is contested, since antibiotics are designed to treat infections rather than to prevent them. 80% of antibiotic use in the U.S. is for agricultural purposes, and about 70% of these antibiotics are medically important. Overusing antibiotics gives the bacteria time to adapt, so that higher doses or even stronger antibiotics are needed to combat the infection. Though antibiotics for growth promotion were banned throughout the EU in 2006, 40 countries worldwide still use antibiotics to promote growth.
This can result in the transfer of resistant bacterial strains into the food that humans eat, causing potentially fatal transfer of disease. While the practice of using antibiotics as growth promoters does result in better yields and meat products, it is a major issue and needs to be decreased in order to prevent antimicrobial resistance. Though the evidence linking antimicrobial usage in livestock to antimicrobial resistance is limited, the World Health Organization Advisory Group on Integrated Surveillance of Antimicrobial Resistance strongly recommended the reduction of use of medically important antimicrobials in livestock. Additionally, the Advisory Group stated that such antimicrobials should be expressly prohibited for both growth promotion and disease prevention in food producing animals.
By mapping antimicrobial consumption in livestock globally, it was predicted that in 228 countries there would be a total 67% increase in consumption of antibiotics by livestock by 2030. In some countries such as Brazil, Russia, India, China, and South Africa it is predicted that a 99% increase will occur. Several countries have restricted the use of antibiotics in livestock, including Canada, China, Japan, and the US. These restrictions are sometimes associated with a reduction of the prevalence of antimicrobial resistance in humans.
In the United States, the Veterinary Feed Directive came into practice in 2017, dictating that all medically important antibiotics to be used in feed or water for food-animal species require a veterinary feed directive (VFD) or a prescription.
Pesticides
Most pesticides protect crops against insects and weeds, but in some cases antimicrobial pesticides are used to protect against various microorganisms such as bacteria, viruses, fungi, algae, and protozoa. The overuse of many pesticides in an effort to obtain a higher yield of crops has resulted in many of these microbes evolving a tolerance of these antimicrobial agents. Currently there are over 4,000 antimicrobial pesticides registered with the US Environmental Protection Agency (EPA) and sold to market, showing the widespread use of these agents. It is estimated that for every single meal a person consumes, 0.3 g of pesticides is used, as 90% of all pesticide use is in agriculture. A majority of these products are used to help defend against the spread of infectious diseases, and hopefully protect public health. But out of the large amount of pesticides used, it is estimated that less than 0.1% of those antimicrobial agents actually reach their targets. That leaves over 99% of all pesticides used available to contaminate other resources. In soil, air, and water these antimicrobial agents are able to spread, coming in contact with more microorganisms and leading to these microbes evolving mechanisms to tolerate and further resist pesticides. The use of antifungal azole pesticides that drive environmental azole resistance has been linked to azole resistance cases in the clinical setting. The same issues confront the novel antifungal classes (e.g. orotomides), which are again being used in both the clinic and agriculture.
Wild birds
Wildlife, including wild and migratory birds, serves as a reservoir for zoonotic disease and antimicrobial-resistant organisms. Birds are a key link in the transmission of zoonotic diseases to human populations. By the same token, increased contact between wild birds and human populations (including domesticated animals) has increased antimicrobial resistance (AMR) in bird populations. The introduction of AMR into wild birds correlates positively with human pollution and increased human contact. Additionally, wild birds can participate in horizontal gene transfer with bacteria, leading to the transmission of antibiotic-resistance genes (ARGs).
For simplicity, wild bird populations can be divided into two major categories: sedentary birds and migrating birds. Sedentary birds are exposed to AMR through increased contact with densely populated areas, human waste, domestic animals, and domestic animal and livestock waste. Migrating birds interact with sedentary birds in different environments along their migration routes, which increases the rate and diversity of AMR across ecosystems.
Neglect of wildlife in global discussions of health security and AMR creates large barriers to true AMR surveillance. Surveillance of antimicrobial-resistant organisms in wild birds is a potential metric for the rate of AMR in the environment. Such surveillance also allows further investigation of transmission routes between different ecosystems and human populations (including domesticated animals and livestock). Information gathered from wild bird biomes can help identify patterns of disease transmission and better target interventions, which in turn can inform the use of antimicrobial agents and reduce the persistence of multidrug-resistant organisms.
Gene transfer from ancient microorganisms
Permafrost is ground that remains frozen for two years or more; the oldest known examples have been continuously frozen for around 700,000 years. In recent decades, permafrost has been rapidly thawing due to climate change.Fox-Kemper, B., H.T. Hewitt, C. Xiao, G. Aðalgeirsdóttir, S.S. Drijfhout, T.L. Edwards, N.R. Golledge, M. Hemer, R.E. Kopp, G. Krinner, A. Mix, D. Notz, S. Nowicki, I.S. Nurhati, L. Ruiz, J.-B. Sallée, A.B.A. Slangen, and Y. Yu, 2021: Chapter 9: Ocean, Cryosphere and Sea Level Change. In Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change [Masson-Delmotte, V., P. Zhai, A. Pirani, S.L. Connors, C. Péan, S. Berger, N. Caud, Y. Chen, L. Goldfarb, M.I. Gomis, M. Huang, K. Leitzell, E. Lonnoy, J.B.R. Matthews, T.K. Maycock, T. Waterfield, O. Yelekçi, R. Yu, and B. Zhou (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, pp. 1211–1362, doi:10.1017/9781009157896.011. The cold preserves any organic matter inside the permafrost, and it is possible for microorganisms to resume their life functions once it thaws. While some common pathogens such as influenza, smallpox or the bacteria associated with pneumonia have failed to survive intentional attempts to revive them, more cold-adapted microorganisms such as anthrax, or several ancient plant and amoeba viruses, have successfully survived prolonged thaw.
Some scientists have argued that the inability of known causative agents of contagious diseases to survive being frozen and thawed makes this threat unlikely. Instead, there have been suggestions that when modern pathogenic bacteria interact with the ancient ones, they may, through horizontal gene transfer, pick up genetic sequences which are associated with antimicrobial resistance, exacerbating an already difficult issue. Antibiotics to which permafrost bacteria have displayed at least some resistance include chloramphenicol, streptomycin, kanamycin, gentamicin, tetracycline, spectinomycin and neomycin. However, other studies show that resistance levels in ancient bacteria to modern antibiotics remain lower than in the contemporary bacteria from the active layer of thawed ground above them, which may mean that this risk is "no greater" than from any other soil.
Prevention
There have been increasing public calls for global collective action to address the threat, including a proposal for an international treaty on antimicrobial resistance. Further detail and attention is still needed in order to recognize and measure trends in resistance on the international level; the idea of a global tracking system has been suggested but implementation has yet to occur. A system of this nature would provide insight to areas of high resistance as well as information necessary for evaluating programs, introducing interventions and other changes made to fight or reverse antibiotic resistance.
Duration of antimicrobials
Delaying or minimizing the use of antibiotics for certain conditions may help safely reduce their use. The duration of antimicrobial treatment should be based on the infection and on other health problems a person may have. For many infections, once a person has improved, there is little evidence that stopping treatment causes more resistance. Some, therefore, feel that stopping early may be reasonable in some cases. Other infections, however, do require long courses regardless of whether a person feels better.
For example, delaying antibiotics for ailments such as sore throat and otitis media may result in no difference in the rate of complications compared with immediate antibiotics. When treating respiratory tract infections, clinical judgement is required as to the appropriate treatment (delayed or immediate antibiotic use).
Monitoring and mapping
There are multiple national and international monitoring programs for drug-resistant threats, including methicillin-resistant Staphylococcus aureus (MRSA), vancomycin-resistant S. aureus (VRSA), extended spectrum beta-lactamase (ESBL) producing Enterobacterales, vancomycin-resistant Enterococcus (VRE), and multidrug-resistant Acinetobacter baumannii (MRAB).
ResistanceOpen is an online global map of antimicrobial resistance developed by HealthMap which displays aggregated data on antimicrobial resistance from publicly available and user submitted data. The website can display data for a radius from a location. Users may submit data from antibiograms for individual hospitals or laboratories. European data is from the EARS-Net (European Antimicrobial Resistance Surveillance Network), part of the ECDC. ResistanceMap is a website by the Center for Disease Dynamics, Economics & Policy and provides data on antimicrobial resistance on a global level.
The WHO's AMR global action plan also recommends antimicrobial resistance surveillance in animals. Initial steps in the EU for establishing the veterinary counterpart EARS-Vet (EARS-Net for veterinary medicine) have been made. AMR data from pets in particular is scarce, but needed to support antibiotic stewardship in veterinary medicine.
By comparison there is a lack of national and international monitoring programs for antifungal resistance.
Limiting antimicrobial use in humans
Antimicrobial stewardship programmes appear useful in reducing rates of antimicrobial resistance. Such programmes also equip pharmacists to educate patients, for example by explaining that antibiotics will not work against a viral infection.
Excessive antimicrobial use has become one of the top contributors to the evolution of antimicrobial resistance. Since the beginning of the antimicrobial era, antimicrobials have been used to treat a wide range of infectious diseases. Overuse of antimicrobials has become the primary cause of rising levels of antimicrobial resistance. The main problem is that doctors are willing to prescribe antimicrobials to ill-informed individuals who believe that antimicrobials can cure nearly all illnesses, including viral infections like the common cold. In an analysis of drug prescriptions, 36% of individuals with a cold or an upper respiratory infection (both usually viral in origin) were given prescriptions for antibiotics. These prescriptions accomplished nothing other than increasing the risk of further evolution of antibiotic resistant bacteria. Using antimicrobials without prescription is another driving force leading to the overuse of antibiotics to self-treat diseases like the common cold, cough, fever, and dysentery resulting in an epidemic of antibiotic resistance in countries like Bangladesh, risking its spread around the globe. Introducing strict antibiotic stewardship in the outpatient setting to reduce inappropriate prescribing of antibiotics may reduce the emerging bacterial resistance.
The WHO AWaRe (Access, Watch, Reserve) guidance and antibiotic book has been introduced to guide antibiotic choice for the 30 most common infections in adults and children to reduce inappropriate prescribing in primary care and hospitals. Narrow-spectrum antibiotics are preferred due to their lower resistance potential, and broad-spectrum antibiotics are only recommended for people with more severe symptoms. Some antibiotics are more likely to confer resistance, so are kept as reserve antibiotics in the AWaRe book.
Various diagnostic strategies have been employed to prevent the overuse of antifungal therapy in the clinic, proving to be a safe alternative to empirical antifungal therapy and thus underpinning antifungal stewardship schemes.
At the hospital level
Antimicrobial stewardship teams in hospitals encourage optimal use of antimicrobials. The goals of antimicrobial stewardship are to help practitioners pick the right drug at the right dose and duration of therapy while preventing misuse and minimizing the development of resistance. Stewardship interventions may reduce the length of stay by an average of slightly over one day while not increasing the risk of death. Dispensing to discharged inpatients only the exact number of antibiotic doses needed to complete an ongoing course can also reduce antibiotic leftovers in the community, since standard community pharmacy packaging often contains more doses than a course requires.
A 2025 cross-sectional study of 125 pharmacists at a UK NHS Foundation Trust examined knowledge, attitudes, and perceptions regarding antimicrobial stewardship following the COVID-19 pandemic. The study found that 85.2% of pharmacists recognized antimicrobial resistance as a public health concern, while 85.6% supported antimicrobial stewardship for prudent antibiotic use. However, the pandemic created challenges, with 80% reporting that COVID-19 patient conditions influenced antibiotic prescribing and 79.2% noting that time pressure affected antibiotic decision-making. The research highlighted the critical role of communication, with 79.2% of respondents valuing enhanced communication with microbiologists and stewardship teams during the pandemic period.
At the primary care level
Given the volume of care provided in primary care (general practice), recent strategies have focused on reducing unnecessary antimicrobial prescribing in this setting. Simple interventions, such as written information explaining when taking antibiotics is not necessary, for example in common infections of the upper respiratory tract, have been shown to reduce antibiotic prescribing. Various tools are also available to help professionals decide if prescribing antimicrobials is necessary.
Parental expectations, driven by worry for their children's health, can influence how often children are prescribed antibiotics. Parents often rely on their clinician for advice and reassurance. However, a lack of plain-language information and inadequate consultation time negatively affect this relationship, and parents often base their expectations on past experience rather than on the clinician's reassurance. Adequate consultation time and plain-language information can help parents make informed decisions and avoid unnecessary antibiotic use.
The prescriber should closely adhere to the five rights of drug administration: the right patient, the right drug, the right dose, the right route, and the right time. Microbiological samples should be taken for culture and sensitivity testing before treatment when indicated, and treatment potentially changed based on the susceptibility report. Health workers and pharmacists can help tackle antibiotic resistance by enhancing infection prevention and control, prescribing and dispensing antibiotics only when they are truly needed, and prescribing and dispensing the right antibiotic(s) to treat the illness. A unit-dose system implemented in community pharmacies can also reduce antibiotic leftovers in households. Despite these measures, written guideline interventions for prescribers on history-taking and the provision of advice, and the knowledge of pharmacists and non-pharmacists, may not reduce the sales of non-prescription antimicrobial drugs in community pharmacies, drugstores, and other medicine outlets.
At the individual level
People can help tackle resistance by using antibiotics only when prescribed by a doctor for a bacterial infection, by completing the full prescription even when feeling better, and by never sharing antibiotics with others or using leftover prescriptions. Taking antibiotics when they are not needed does not help the user; instead it gives bacteria the opportunity to adapt and leaves the user with the side effects associated with that antibiotic. The CDC recommends these behaviors to avoid such side effects and to protect the community from the spread of drug-resistant bacteria. Practicing basic infection-prevention measures, such as good hygiene, also helps to prevent the spread of antibiotic-resistant bacteria.
Country examples
The Netherlands has the lowest rate of antibiotic prescribing in the OECD, at a rate of 11.4 defined daily doses (DDD) per 1,000 people per day in 2011. The defined daily dose (DDD) is a statistical measure of drug consumption, defined by the World Health Organization (WHO).
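As an illustration of how this metric works, the following sketch converts a country's total consumption of one antibiotic into DDD per 1,000 inhabitants per day. The drug, quantities, and population are assumptions chosen for illustration, not actual national data.

```python
# Illustrative calculation of antibiotic consumption expressed as
# defined daily doses (DDD) per 1,000 inhabitants per day.
# All consumption figures below are hypothetical, not real national data.

def ddd_per_1000_per_day(total_grams_used: float,
                         who_ddd_grams: float,
                         population: int,
                         days: int = 365) -> float:
    """Convert total drug consumption into DDD per 1,000 inhabitants per day."""
    total_ddd = total_grams_used / who_ddd_grams       # number of defined daily doses consumed
    return total_ddd * 1000 / (population * days)      # normalize to population and time

# Example: a country of 17 million people using 93,000 kg of an oral antibiotic
# in one year, assuming a WHO-assigned DDD of 1.5 g for that drug.
rate = ddd_per_1000_per_day(total_grams_used=93_000_000,   # 93,000 kg in grams
                            who_ddd_grams=1.5,
                            population=17_000_000)
print(f"{rate:.1f} DDD per 1,000 inhabitants per day")     # ≈ 10.0
```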
Germany and Sweden also have lower prescribing rates, with Sweden's rate having been declining since 2007.
Greece, France and Belgium have high prescribing rates for antibiotics of more than 28 DDD.
Water, sanitation, hygiene
Infectious disease control through improved water, sanitation and hygiene (WASH) infrastructure needs to be included in the antimicrobial resistance (AMR) agenda. The "Interagency Coordination Group on Antimicrobial Resistance" stated in 2018 that "the spread of pathogens through unsafe water results in a high burden of gastrointestinal disease, increasing even further the need for antibiotic treatment."IACG (2018) Reduce unintentional exposure and the need for antimicrobials, and optimize their use IACG Discussion Paper , Interagency Coordination Group on Antimicrobial Resistance, public consultation process at WHO, Geneva, Switzerland This is particularly a problem in developing countries where the spread of infectious diseases caused by inadequate WASH standards is a major driver of antibiotic demand. Growing usage of antibiotics together with persistent infectious disease levels have led to a dangerous cycle in which reliance on antimicrobials increases while the efficacy of drugs diminishes. The proper use of infrastructure for water, sanitation and hygiene (WASH) can result in a 47–72 percent decrease of diarrhea cases treated with antibiotics depending on the type of intervention and its effectiveness. A reduction of the diarrhea disease burden through improved infrastructure would result in large decreases in the number of diarrhea cases treated with antibiotics. This was estimated as ranging from 5 million in Brazil to up to 590 million in India by the year 2030. The strong link between increased consumption and resistance indicates that this will directly mitigate the accelerating spread of AMR. Sanitation and water for all by 2030 is Goal Number 6 of the Sustainable Development Goals.
An increase in hand washing compliance by hospital staff results in decreased rates of resistant organisms.
Water supply and sanitation infrastructure in health facilities offer significant co-benefits for combatting AMR, and investment should be increased. There is much room for improvement: WHO and UNICEF estimated in 2015 that globally 38% of health facilities did not have a source of water, nearly 19% had no toilets and 35% had no water and soap or alcohol-based hand rub for handwashing.WHO, UNICEF (2015). Water, sanitation and hygiene in health care facilities – Status in low and middle income countries and way forward . World Health Organization (WHO), Geneva, Switzerland,
Industrial wastewater treatment
Manufacturers of antimicrobials need to improve the treatment of their wastewater (by using industrial wastewater treatment processes) to reduce the release of residues into the environment.
Limiting antimicrobial use in animals and farming
It is established that the use of antibiotics in animal husbandry can give rise to antimicrobial resistance in bacteria found in food animals to the antibiotics being administered (through injections or medicated feeds). For this reason, only antimicrobials that are deemed "not clinically relevant" are used in these practices.
Unlike resistance to antibacterials, antifungal resistance can be driven by arable farming; currently there is no regulation of the use of similar antifungal classes in agriculture and the clinic.
Recent studies have shown that the prophylactic use of "non-priority" or "non-clinically relevant" antimicrobials in feeds can potentially, under certain conditions, lead to co-selection of environmental AMR bacteria with resistance to medically important antibiotics. The possibility of such co-selection of resistance in the food chain may have far-reaching implications for human health.
Country examples
Europe
In 1997, European Union health ministers voted to ban avoparcin, and in 1999 they banned four additional antibiotics used to promote animal growth. In 2006 a ban on the use of antibiotics in European feed, with the exception of two antibiotics in poultry feeds, became effective. In Scandinavia, there is evidence that the ban has led to a lower prevalence of antibiotic resistance in (nonhazardous) animal bacterial populations. As of 2004, several European countries had established a decline of antimicrobial resistance in humans by limiting the use of antimicrobials in agriculture and the food industries, without jeopardizing animal health or economic cost.
United States
The United States Department of Agriculture (USDA) and the Food and Drug Administration (FDA) collect data on antibiotic use in humans and in a more limited fashion in animals. About 80% of antibiotic use in the U.S. is for agriculture purposes, and about 70% of these are medically important. This gives reason for concern about the antibiotic resistance crisis in the U.S. and more reason to monitor it. The FDA first determined in 1977 that there is evidence of emergence of antibiotic-resistant bacterial strains in livestock. The long-established practice of permitting OTC sales of antibiotics (including penicillin and other drugs) to lay animal owners for administration to their own animals nonetheless continued in all states.
In 2000, the FDA announced their intention to revoke approval of fluoroquinolone use in poultry production because of substantial evidence linking it to the emergence of fluoroquinolone-resistant Campylobacter infections in humans. Legal challenges from the food animal and pharmaceutical industries delayed the final decision to do so until 2006. Fluoroquinolones have been banned from extra-label use in food animals in the USA since 2007. However, they remain widely used in companion and exotic animals.
Global action plans and awareness
At the 79th United Nations General Assembly High-Level Meeting on AMR on 26 September 2024, world leaders approved a political declaration committing to a clear set of targets and actions, including reducing the estimated 4.95 million human deaths associated with bacterial AMR annually by 10% by 2030.
The increasing interconnectedness of the world and the fact that new classes of antibiotics have not been developed and approved for more than 25 years highlight the extent to which antimicrobial resistance is a global health challenge. A global action plan to tackle the growing problem of resistance to antibiotics and other antimicrobial medicines was endorsed at the Sixty-eighth World Health Assembly in May 2015. One of the key objectives of the plan is to improve awareness and understanding of antimicrobial resistance through effective communication, education and training. This global action plan developed by the World Health Organization was created to combat the issue of antimicrobial resistance and was guided by the advice of countries and key stakeholders. The WHO's global action plan is composed of five key objectives that can be targeted through different means, and represents countries coming together to solve a major problem that can have future health consequences. These objectives are as follows:
improve awareness and understanding of antimicrobial resistance through effective communication, education and training.
strengthen the knowledge and evidence base through surveillance and research.
reduce the incidence of infection through effective sanitation, hygiene and infection prevention measures.
optimize the use of antimicrobial medicines in human and animal health.
develop the economic case for sustainable investment that takes account of the needs of all countries and to increase investment in new medicines, diagnostic tools, vaccines and other interventions.
Steps towards progress
React, based in Sweden, has produced informative material on AMR for the general public.
Videos are being produced for the general public to generate interest and awareness.
The Irish Department of Health published a National Action Plan on Antimicrobial Resistance in October 2017. The Strategy for the Control of Antimicrobial Resistance in Ireland (SARI), launched in 2001, developed Guidelines for Antimicrobial Stewardship in Hospitals in Ireland in conjunction with the Health Protection Surveillance Centre; these were published in 2009. Following their publication, a public information campaign, 'Action on Antibiotics', was launched to highlight the need for a change in antibiotic prescribing. Despite this, antibiotic prescribing remains high, with variance in adherence to guidelines.
The United Kingdom published a 20-year vision for antimicrobial resistance that sets out the goal of containing and controlling AMR by 2040. The vision is supplemented by a 5-year action plan running from 2019 to 2024, building on the previous action plan (2013–2018).
The World Health Organization has published the 2024 Bacterial Priority Pathogens List, which covers 15 families of antibiotic-resistant bacterial pathogens. Notable among these are gram-negative bacteria resistant to last-resort antibiotics, drug-resistant Mycobacterium tuberculosis, and other high-burden resistant pathogens such as Salmonella, Shigella, Neisseria gonorrhoeae, Pseudomonas aeruginosa, and Staphylococcus aureus. The inclusion of these pathogens in the list underscores their global impact in terms of burden, as well as issues related to transmissibility, treatability, and prevention options. It also reflects the R&D pipeline of new treatments and emerging resistance trends.
Antibiotic Awareness Week
The World Health Organization promoted the first World Antibiotic Awareness Week, held from 16 to 22 November 2015. The aim of the week is to increase global awareness of antibiotic resistance and to promote the correct use of antibiotics across all fields in order to prevent further instances of antibiotic resistance.
World Antibiotic Awareness Week has been held every November since 2015. For 2017, the Food and Agriculture Organization of the United Nations (FAO), the World Health Organization (WHO) and the World Organisation for Animal Health (OIE) together called for responsible use of antibiotics in humans and animals to reduce the emergence of antibiotic resistance.
United Nations
In 2016 the Secretary-General of the United Nations convened the Interagency Coordination Group (IACG) on Antimicrobial Resistance. The IACG worked with international organizations and experts in human, animal, and plant health to create a plan to fight antimicrobial resistance. Their report released in April 2019 highlights the seriousness of antimicrobial resistance and the threat it poses to world health. It suggests five recommendations for member states to follow in order to tackle this increasing threat. The IACG recommendations are as follows:
Accelerate progress in countries
Innovate to secure the future
Collaborate for more effective action
Invest for a sustainable response
Strengthen accountability and global governance
One Health Approach
The One Health approach recognizes that human, animal, and environmental health are interconnected in the development and spread of antimicrobial resistance (AMR). Key strategies include:
Integrated Surveillance
Monitoring antibiotic use and resistance trends across human medicine, agriculture, and environmental sectors.
For example, 73% of the world's antibiotics are used in livestock, often for non-therapeutic purposes like growth promotion.
Policy Interventions
Banning non-therapeutic antibiotics in agriculture (e.g., European Union's 2006 growth promoter ban).
Incentivizing development of new antibiotics and alternatives (e.g., vaccines, bacteriophages).
Environmental Mitigation
Reducing pharmaceutical waste in water systems and soil through improved waste management.
Addressing resistance genes in wastewater from hospitals, farms, and drug manufacturing sites.
Mechanisms and organisms
Bacteria
The five main mechanisms by which bacteria exhibit resistance to antibiotics are:
Drug inactivation or modification: for example, enzymatic deactivation of penicillin G in some penicillin-resistant bacteria through the production of β-lactamases. Drugs may also be chemically modified through the addition of functional groups by transferase enzymes; for example, acetylation, phosphorylation, or adenylation are common resistance mechanisms to aminoglycosides. Acetylation is the most widely used mechanism and can affect a number of drug classes.
Alteration of target- or binding site: for example, alteration of PBP—the binding target site of penicillins—in MRSA and other penicillin-resistant bacteria. Another protective mechanism found among bacterial species is ribosomal protection proteins. These proteins protect the bacterial cell from antibiotics that target the cell's ribosomes to inhibit protein synthesis. The mechanism involves the binding of the ribosomal protection proteins to the ribosomes of the bacterial cell, which in turn changes its conformational shape. This allows the ribosomes to continue synthesizing proteins essential to the cell while preventing antibiotics from binding to the ribosome to inhibit protein synthesis.
Alteration of metabolic pathway: for example, some sulfonamide-resistant bacteria do not require para-aminobenzoic acid (PABA), an important precursor for the synthesis of folic acid and nucleic acids in bacteria inhibited by sulfonamides, instead, like mammalian cells, they turn to using preformed folic acid.
Reduced drug accumulation: by decreasing drug permeability or increasing active efflux (pumping out) of the drugs across the cell surface. These multidrug efflux pumps within the cellular membrane of certain bacterial species are used to pump antibiotics out of the cell before they are able to do any damage. They are often activated by a specific substrate associated with an antibiotic, as in fluoroquinolone resistance.
Ribosome splitting and recycling: for example, drug-mediated stalling of the ribosome by lincomycin and erythromycin is relieved by a heat shock protein found in Listeria monocytogenes, a homologue of HflX from other bacteria. Liberation of the ribosome from the drug allows further translation and consequent resistance to the drug.
Several types of bacteria have developed resistance over time.
The six pathogens causing most deaths associated with resistance are Escherichia coli, Staphylococcus aureus, Klebsiella pneumoniae, Streptococcus pneumoniae, Acinetobacter baumannii, and Pseudomonas aeruginosa. They were responsible for 929,000 deaths attributable to resistance and 3.57 million deaths associated with resistance in 2019.
Penicillinase-producing Neisseria gonorrhoeae developed resistance to penicillin in 1976. Another example is azithromycin-resistant Neisseria gonorrhoeae, which developed resistance to azithromycin in 2011.
In gram-negative bacteria, plasmid-mediated resistance genes produce proteins that can bind to DNA gyrase, protecting it from the action of quinolones. Finally, mutations at key sites in DNA gyrase or topoisomerase IV can decrease their binding affinity to quinolones, decreasing the drug's effectiveness.
Some bacteria are naturally resistant to certain antibiotics; for example, gram-negative bacteria are resistant to most β-lactam antibiotics due to the presence of β-lactamase. Antibiotic resistance can also be acquired as a result of either genetic mutation or horizontal gene transfer. Although mutations are rare, with spontaneous mutations in the pathogen genome occurring at a rate of about 1 in 10⁵ to 1 in 10⁸ per chromosomal replication, the fact that bacteria reproduce at a high rate allows the effect to be significant. Given that lifespans and production of new generations can be on a timescale of mere hours, a new (de novo) mutation in a parent cell can quickly become an inherited mutation of widespread prevalence, resulting in the microevolution of a fully resistant colony. However, chromosomal mutations also confer a fitness cost. For example, a ribosomal mutation may protect a bacterial cell by changing the binding site of an antibiotic but may result in a slower growth rate. Moreover, some adaptive mutations can propagate not only through inheritance but also through horizontal gene transfer. The most common mechanism of horizontal gene transfer is the transfer of plasmids carrying antibiotic resistance genes between bacteria of the same or different species via conjugation. However, bacteria can also acquire resistance through transformation, as in the uptake by Streptococcus pneumoniae of naked fragments of extracellular DNA containing antibiotic resistance genes to streptomycin; through transduction, as in the bacteriophage-mediated transfer of tetracycline resistance genes between strains of S. pyogenes; or through gene transfer agents, which are particles produced by the host cell that resemble bacteriophage structures and are capable of transferring DNA.
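A rough back-of-envelope calculation shows why such low per-replication mutation rates still produce resistant cells quickly. The sketch below multiplies the quoted rates by an assumed population size; the population figure is an assumption chosen purely for illustration.

```python
# Back-of-envelope illustration of why rare resistance mutations still arise
# readily in bacterial populations. Rates and population size are illustrative.

mutation_rate_low, mutation_rate_high = 1e-8, 1e-5   # resistance mutations per chromosomal replication
population = 1e9                                      # assumed number of cells at an infection site

expected_low = population * mutation_rate_low
expected_high = population * mutation_rate_high
print(f"Expected new resistant mutants per generation: {expected_low:.0f} to {expected_high:.0f}")
# With ~10^9 dividing cells, even the lowest rate yields roughly 10 new resistant
# cells every generation; under antibiotic selection these can rapidly dominate.
```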
Antibiotic resistance can be introduced artificially into a microorganism through laboratory protocols, sometimes used as a selectable marker to examine the mechanisms of gene transfer or to identify individuals that absorbed a piece of DNA that included the resistance gene and another gene of interest.
Recent findings show no necessity of large populations of bacteria for the appearance of antibiotic resistance. Small populations of Escherichia coli in an antibiotic gradient can become resistant. Any heterogeneous environment with respect to nutrient and antibiotic gradients may facilitate antibiotic resistance in small bacterial populations. Researchers hypothesize that the mechanism of resistance evolution is based on four SNP mutations in the genome of E. coli produced by the gradient of antibiotic.
In one study, which has implications for space microbiology, a non-pathogenic strain E. coli MG1655 was exposed to trace levels of the broad spectrum antibiotic chloramphenicol, under simulated microgravity (LSMMG, or Low Shear Modeled Microgravity) over 1000 generations. The adapted strain acquired resistance to not only chloramphenicol, but also cross-resistance to other antibiotics; this was in contrast to the observation on the same strain, which was adapted to over 1000 generations under LSMMG, but without any antibiotic exposure; the strain in this case did not acquire any such resistance. Thus, irrespective of where they are used, the use of an antibiotic would likely result in persistent resistance to that antibiotic, as well as cross-resistance to other antimicrobials.
In recent years, the emergence and spread of β-lactamases called carbapenemases has become a major health crisis. One such carbapenemase is New Delhi metallo-beta-lactamase 1 (NDM-1), an enzyme that makes bacteria resistant to a broad range of beta-lactam antibiotics. The most common bacteria that make this enzyme are gram-negative such as E. coli and Klebsiella pneumoniae, but the gene for NDM-1 can spread from one strain of bacteria to another by horizontal gene transfer.
Viruses
Specific antiviral drugs are used to treat some viral infections. These drugs prevent viruses from reproducing by inhibiting essential stages of the virus's replication cycle in infected cells. Antivirals are used to treat HIV, hepatitis B, hepatitis C, influenza, herpes viruses including varicella zoster virus, cytomegalovirus and Epstein–Barr virus. With each virus, some strains have become resistant to the administered drugs.
Antiviral drugs typically target key components of viral reproduction; for example, oseltamivir targets influenza neuraminidase, while guanosine analogs inhibit viral DNA polymerase. Resistance to antivirals is thus acquired through mutations in the genes that encode the protein targets of the drugs.
Resistance to HIV antivirals is problematic, and even multi-drug resistant strains have evolved. One source of resistance is that many current HIV drugs, including NRTIs and NNRTIs, target reverse transcriptase; however, HIV-1 reverse transcriptase is highly error prone and thus mutations conferring resistance arise rapidly. Resistant strains of the HIV virus emerge rapidly if only one antiviral drug is used. Using three or more drugs together, termed combination therapy, has helped to control this problem, but new drugs are needed because of the continuing emergence of drug-resistant HIV strains.
Fungi
Infections by fungi are a cause of high morbidity and mortality in immunocompromised persons, such as those with HIV/AIDS, tuberculosis or receiving chemotherapy. The fungi Candida, Cryptococcus neoformans and Aspergillus fumigatus cause most of these infections and antifungal resistance occurs in all of them. Multidrug resistance in fungi is increasing because of the widespread use of antifungal drugs to treat infections in immunocompromised individuals and the use of some agricultural antifungals. Antifungal resistant disease is associated with increased mortality.
Some fungi exhibit intrinsic resistance to certain antifungal drugs or classes (for example, Candida krusei is intrinsically resistant to fluconazole), whereas other species develop antifungal resistance in response to external pressures. Antifungal resistance is a One Health concern, driven by multiple extrinsic factors, including extensive fungicide use, overuse of clinical antifungals, environmental change and host factors.
In the USA fluconazole-resistant Candida species and azole resistance in Aspergillus fumigatus have been highlighted as a growing threat.
More than 20 species of Candida can cause candidiasis infection, the most common of which is Candida albicans. Candida yeasts normally inhabit the skin and mucous membranes without causing infection. However, overgrowth of Candida can lead to candidiasis. Some Candida species (e.g. Candida glabrata) are becoming resistant to first-line and second-line antifungal agents such as echinocandins and azoles.
The emergence of Candida auris as a potential human pathogen that sometimes exhibits multi-class antifungal drug resistance is concerning and has been associated with several outbreaks globally. The WHO has released a priority fungal pathogen list, including pathogens with antifungal resistance.
The identification of antifungal resistance is undermined by the limited classical diagnosis of infection, where a lack of culture prevents susceptibility testing. National and international surveillance schemes for fungal disease and antifungal resistance are limited, hampering understanding of the disease burden and associated resistance. The application of molecular testing to identify genetic markers associated with resistance may improve the identification of antifungal resistance, but the diversity of mutations associated with resistance is increasing across the fungal species causing infection. In addition, a number of resistance mechanisms depend on the up-regulation of selected genes (for instance, efflux pumps) rather than on defined mutations that are amenable to molecular detection.
Due to the limited number of antifungals in clinical use and the increasing global incidence of antifungal resistance, using the existing antifungals in combination might be beneficial in some cases but further research is needed. Similarly, other approaches that might help to combat the emergence of antifungal resistance could rely on the development of host-directed therapies such as immunotherapy or vaccines.
Parasites
The protozoan parasites that cause the diseases malaria, trypanosomiasis, toxoplasmosis, cryptosporidiosis and leishmaniasis are important human pathogens.
Malaria parasites that are resistant to the drugs currently available to treat the infection are common, and this has led to increased efforts to develop new drugs. Resistance to recently developed drugs such as artemisinin has also been reported. The problem of drug resistance in malaria has driven efforts to develop vaccines.
Trypanosomes are parasitic protozoa that cause African trypanosomiasis and Chagas disease (American trypanosomiasis). There are no vaccines to prevent these infections so drugs such as pentamidine and suramin, benznidazole and nifurtimox are used to treat infections. These drugs are effective but infections caused by resistant parasites have been reported.
Leishmaniasis is caused by protozoa and is an important public health problem worldwide, especially in sub-tropical and tropical countries. Drug resistance has "become a major concern".
Global and genomic data
In 2022, genomic epidemiologists reported results from a global survey of antimicrobial resistance via genomic wastewater-based epidemiology, finding large regional variations, providing maps, and suggesting resistance genes are also passed on between microbial species that are not closely related. The WHO provides the Global Antimicrobial Resistance and Use Surveillance System (GLASS) reports which summarize annual (e.g. 2020's) data on international AMR, also including an interactive dashboard.
Epidemiology
United Kingdom
Public Health England reported that the total number of antibiotic resistant infections in England rose by 9% from 55,812 in 2017 to 60,788 in 2018, but antibiotic consumption had fallen by 9% from 20.0 to 18.2 defined daily doses per 1,000 inhabitants per day between 2014 and 2018.
United States
The Centers for Disease Control and Prevention has reported that more than 2.8 million antibiotic-resistant infections occur in the United States each year. In 2019, however, overall deaths from antibiotic-resistant infections decreased by 18% and deaths in hospitals decreased by 30%.
The COVID pandemic caused a reversal of much of the progress made on attenuating the effects of antibiotic resistance, resulting in more antibiotic use, more resistant infections, and less data on preventive action. Hospital-onset infections and deaths both increased by 15% in 2020, and significantly higher rates of infections were reported for 4 out of 6 types of healthcare associated infections.
India
Owing to rampant misuse, over-prescription, and unregulated over-the-counter access to antibiotics, the growth of AMR superbugs has proliferated in India. Pharmacies surveyed by the Karnataka government revealed that 80% of drugs were being sold without a prescription.
A National Centre for Disease Control survey revealed that more than half of inpatients were using "antibiotics from the ‘Watch’ category of WHO’s AWaRe classification, which should be reserved for severe infections."
In 2019, India reported 300,000 deaths from infections related to AMR. India also reports the highest number of antibiotic-resistant tuberculosis cases.
History
The 1950s to 1970s represented the golden age of antibiotic discovery, when numerous new classes of antibiotics were discovered to treat previously incurable diseases such as tuberculosis and syphilis. Since that time, however, the discovery of new classes of antibiotics has been almost nonexistent, a situation that is especially problematic given the resilience bacteria have shown over time and the continued misuse and overuse of antibiotics in treatment.
As early as 1940, in their letter to the editor of Nature, Abraham and Chain identified the enzyme penicillinase as responsible for the deactivation of penicillin in penicillin-resistant bacteria. This discovery was the first step in understanding the mechanisms of microbial resistance to β-lactam antibiotics. The phenomenon of antimicrobial resistance caused by overuse of antibiotics was predicted as early as 1945 by Alexander Fleming, who said "The time may come when penicillin can be bought by anyone in the shops. Then there is the danger that the ignorant man may easily under-dose himself and by exposing his microbes to nonlethal quantities of the drug make them resistant." Without the creation of new and stronger antibiotics, an era in which common infections and minor injuries can kill, and in which complex procedures such as surgery and chemotherapy become too risky, is a very real possibility. Antimicrobial resistance can lead to epidemics of enormous proportions if preventive actions are not taken. Today, antimicrobial resistance leads to longer hospital stays, higher medical costs, and increased mortality.
Society and culture
Innovation policy
Since the mid-1980s pharmaceutical companies have invested in medications for cancer or chronic disease that have greater potential to make money and have "de-emphasized or dropped development of antibiotics". On 20 January 2016 at the World Economic Forum in Davos, Switzerland, more than "80 pharmaceutical and diagnostic companies" from around the world called for "transformational commercial models" at a global level to spur research and development on antibiotics and on the "enhanced use of diagnostic tests that can rapidly identify the infecting organism". A number of countries are considering or implementing delinked payment models for new antimicrobials whereby payment is based on value rather than volume of drug sales. This offers the opportunity to pay for valuable new drugs even if they are reserved for use in relatively rare drug resistant infections.
Legal frameworks
Some global health scholars have argued that a global, legal framework is needed to prevent and control antimicrobial resistance. For instance, binding global policies could be used to create antimicrobial use standards, regulate antibiotic marketing, and strengthen global surveillance systems. Ensuring compliance of involved parties is a challenge. Global antimicrobial resistance policies could take lessons from the environmental sector by adopting strategies that have made international environmental agreements successful in the past such as: sanctions for non-compliance, assistance for implementation, majority vote decision-making rules, an independent scientific panel, and specific commitments.
United States
For the United States 2016 budget, U.S. president Barack Obama proposed to nearly double the amount of federal funding to "combat and prevent" antibiotic resistance to more than $1.2 billion.President's 2016 Budget Proposes Historic Investment to Combat Antibiotic-Resistant Bacteria to Protect Public Health The White House, Office of the Press Secretary, 27 January 2015 Many international funding agencies like USAID, DFID, Sida and Gates Foundation have pledged money for developing strategies to counter antimicrobial resistance.
On 27 March 2015, the White House released a comprehensive plan to address the increasing need for agencies to combat the rise of antibiotic-resistant bacteria. The Task Force for Combating Antibiotic-Resistant Bacteria developed The National Action Plan for Combating Antibiotic-Resistant Bacteria with the intent of providing a roadmap to guide the US in the antibiotic resistance challenge and with hopes of saving many lives. This plan outlines steps taken by the Federal government over the next five years needed in order to prevent and contain outbreaks of antibiotic-resistant infections; maintain the efficacy of antibiotics already on the market; and to help to develop future diagnostics, antibiotics, and vaccines.
The Action Plan was developed around five goals, focusing on strengthening health care, public health, veterinary medicine, agriculture, food safety, research, and manufacturing. These goals, as listed by the White House, are as follows:
Slow the Emergence of Resistant Bacteria and Prevent the Spread of Resistant Infections
Strengthen National One-Health Surveillance Efforts to Combat Resistance
Advance Development and use of Rapid and Innovative Diagnostic Tests for Identification and Characterization of Resistant Bacteria
Accelerate Basic and Applied Research and Development for New Antibiotics, Other Therapeutics, and Vaccines
Improve International Collaboration and Capacities for Antibiotic Resistance Prevention, Surveillance, Control and Antibiotic Research and Development
The following goals were set to be met by 2020:
Establishment of antimicrobial programs within acute care hospital settings
Reduction of inappropriate antibiotic prescription and use by at least 50% in outpatient settings and by 20% in inpatient settings
Establishment of State Antibiotic Resistance (AR) Prevention Programs in all 50 states
Elimination of the use of medically important antibiotics for growth promotion in food-producing animals.
Current Status of AMR in the U.S.
As of 2023, antimicrobial resistance (AMR) remains a significant public health threat in the United States. According to the Centers for Disease Control and Prevention's 2023 Report on Antibiotic Resistance Threats, over 2.8 million antibiotic-resistant infections occur in the U.S. each year, leading to at least 35,000 deaths annually. Among the most concerning resistant pathogens are Carbapenem-resistant Enterobacteriaceae (CRE), Methicillin-resistant Staphylococcus aureus (MRSA), and Clostridioides difficile (C. diff), all of which continue to be responsible for severe healthcare-associated infections (HAIs).
The COVID-19 pandemic led to a significant disruption in healthcare, with an increase in the use of antibiotics during the treatment of viral infections. This rise in antibiotic prescribing, coupled with overwhelmed healthcare systems, contributed to a resurgence in AMR during the pandemic years. A 2021 CDC report identified a sharp increase in HAIs caused by resistant pathogens in COVID-19 patients, a trend that has persisted into 2023. Recent data suggest that although antibiotic use has decreased since the pandemic, some resistant pathogens remain prevalent in healthcare settings.
The CDC has also expanded its Get Ahead of Sepsis campaign in 2023, focusing on raising awareness of AMR's role in sepsis and promoting the judicious use of antibiotics in both healthcare and community settings. This initiative has reached millions through social media, healthcare facilities, and public health outreach, aiming to educate the public on the importance of preventing infections and reducing antibiotic misuse.
Under the administration of Donald J. Trump, severe federal funding cuts in 2025 at the National Institutes of Health (NIH) and many other research entities in the United States are expected to impede progress in combating antimicrobial resistance. Public health experts believe these budget reductions will adversely affect the development of new antibiotics as well as advances in diagnostics and treatment strategies, thus undermining efforts currently underway to address AMR. The Infectious Diseases Society of America (IDSA) is but one organization that has expressed grave concern regarding the negative consequences of such extreme budget measures on antimicrobial stewardship, and, as a result, on global health security overall.
Policies
According to the World Health Organization, policymakers can help tackle resistance by strengthening resistance-tracking and laboratory capacity and by regulating and promoting the appropriate use of medicines. Policymakers and industry can also help by fostering innovation and research and development of new tools, and by promoting cooperation and information sharing among all stakeholders.
The U.S. government continues to prioritize AMR mitigation through policy and legislation. In 2023, the National Action Plan for Combating Antibiotic-Resistant Bacteria (CARB) 2023-2028 was released, outlining strategic objectives for reducing antibiotic-resistant infections, advancing infection prevention, and accelerating research on new antibiotics. The plan also emphasizes the importance of improving antibiotic stewardship across healthcare, agriculture, and veterinary settings.
Furthermore, the PASTEUR Act (Pioneering Antimicrobial Subscriptions to End Upsurging Resistance) has gained momentum in Congress. If passed, the bill would create a subscription-based payment model to incentivize the development of new antimicrobial drugs, while supporting antimicrobial stewardship programs to reduce the misuse of existing antibiotics. This legislation is considered a critical step toward addressing the economic barriers to developing new antimicrobials.
Policy evaluation
Measuring the costs and benefits of strategies to combat AMR is difficult and policies may only have effects in the distant future. In other infectious diseases this problem has been addressed by using mathematical models. More research is needed to understand how AMR develops and spreads so that mathematical modelling can be used to anticipate the likely effects of different policies.
Further research
Rapid testing and diagnostics
Distinguishing infections requiring antibiotics from self-limiting ones is clinically challenging. In order to guide appropriate use of antibiotics and prevent the evolution and spread of antimicrobial resistance, diagnostic tests that provide clinicians with timely, actionable results are needed.
Acute febrile illness is a common reason for seeking medical care worldwide and a major cause of morbidity and mortality. In areas with decreasing malaria incidence, many febrile patients are inappropriately treated for malaria, and in the absence of a simple diagnostic test to identify alternative causes of fever, clinicians presume that a non-malarial febrile illness is most likely a bacterial infection, leading to inappropriate use of antibiotics. Multiple studies have shown that the use of malaria rapid diagnostic tests without reliable tools to distinguish other fever causes has resulted in increased antibiotic use.
Antimicrobial susceptibility testing (AST) can facilitate a precision medicine approach to treatment by helping clinicians to prescribe more effective and targeted antimicrobial therapy. However, with traditional phenotypic AST it can take 12 to 48 hours to obtain a result because of the time needed for organisms to grow on or in culture media. Rapid testing, made possible by molecular diagnostics innovations, is defined as "being feasible within an 8-h working shift". Several commercial Food and Drug Administration-approved assays are available that can detect AMR genes from a variety of specimen types. Progress has been slow for a range of reasons, including cost and regulation. Genotypic AMR characterisation methods are, however, being increasingly used in combination with machine learning algorithms in research to help better predict phenotypic AMR from organism genotype.
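As a hedged illustration of this genotype-to-phenotype approach, the sketch below trains a classifier on a randomly generated gene presence/absence matrix and evaluates it on held-out isolates. The data, gene count, and choice of a random-forest model are assumptions for demonstration only and do not represent any particular published pipeline.

```python
# Minimal sketch of genotype-based AMR prediction: a classifier learns to map
# resistance-gene presence/absence to a susceptible/resistant phenotype label.
# The data below are randomly generated placeholders, not real surveillance data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_isolates, n_genes = 500, 200
X = rng.integers(0, 2, size=(n_isolates, n_genes))       # gene presence/absence matrix
# Synthetic "ground truth": resistance driven mainly by two marker genes plus noise.
y = ((X[:, 0] | X[:, 1]) & (rng.random(n_isolates) > 0.1)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Held-out ROC AUC: {auc:.2f}")
```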
Optical techniques such as phase contrast microscopy in combination with single-cell analysis are another powerful method for monitoring bacterial growth. In 2017, scientists from Uppsala University in Sweden published a method that applies principles of microfluidics and cell tracking to monitor bacterial response to antibiotics in less than 30 minutes of overall manipulation time. This invention was awarded the £8 million Longitude Prize on AMR in 2024. More recently, the platform has been advanced by coupling the microfluidic chip with optical tweezers in order to isolate bacteria with an altered phenotype directly from the analytical matrix.
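The principle behind such rapid single-cell readouts can be illustrated with a simple growth-rate comparison. The sketch below fits exponential growth rates to simulated cell-size trajectories with and without drug exposure; the rates, time points, and decision rule are assumed values, not measurements or parameters from the cited platform.

```python
# Illustrative sketch of the principle behind rapid single-cell AST: compare the
# exponential growth rate of tracked cells with and without antibiotic exposure.
# The time series below are simulated placeholders, not microscopy data.
import numpy as np

def growth_rate(times_min, areas_um2):
    """Least-squares slope of log(area) vs time, i.e. the exponential growth rate."""
    return np.polyfit(times_min, np.log(areas_um2), 1)[0]

t = np.arange(0, 30, 3)                 # 30-minute observation window, one frame every 3 min
untreated = 1.0 * np.exp(0.02 * t)      # cell area grows with a doubling time of ~35 min
treated = 1.0 * np.exp(0.002 * t)       # growth nearly arrested under the drug

r_untreated, r_treated = growth_rate(t, untreated), growth_rate(t, treated)
print(f"Growth rate without drug: {r_untreated:.3f}/min, with drug: {r_treated:.3f}/min")
# A large drop in growth rate under exposure suggests the isolate is susceptible;
# little or no drop suggests resistance.
```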
Rapid diagnostic methods have also been trialled as antimicrobial stewardship interventions to influence the healthcare drivers of AMR. Serum procalcitonin measurement has been shown to reduce mortality rate, antimicrobial consumption and antimicrobial-related side-effects in patients with respiratory infections, but impact on AMR has not yet been demonstrated. Similarly, point of care serum testing of the inflammatory biomarker C-reactive protein has been shown to influence antimicrobial prescribing rates in this patient cohort, but further research is required to demonstrate an effect on rates of AMR. Clinical investigation to rule out bacterial infections are often done for patients with pediatric acute respiratory infections. Currently it is unclear if rapid viral testing affects antibiotic use in children.
Vaccines
Vaccines are an essential part of the response to reduce AMR as they prevent infections, reduce the use and overuse of antimicrobials, and slow the emergence and spread of drug-resistant pathogens.
Microorganisms usually do not develop resistance to vaccines because vaccines reduce the spread of the infection and target the pathogen in multiple ways in the same host and possibly in different ways between different hosts. Furthermore, if the use of vaccines increases, there is evidence that antibiotic resistant strains of pathogens will decrease; the need for antibiotics will naturally decrease as vaccines prevent infection before it occurs. A 2024 report by WHO finds that vaccines against 24 pathogens could reduce the number of antibiotics needed by 22% or 2.5 billion defined daily doses globally every year. If vaccines could be rolled out against all the evaluated pathogens, they could save a third of the hospital costs associated with AMR. Vaccinated people have fewer infections and are protected against potential complications from secondary infections that may need antimicrobial medicines or require admission to hospital. However, there are well documented cases of vaccine resistance, although these are usually much less of a problem than antimicrobial resistance.
While theoretically promising, antistaphylococcal vaccines have shown limited efficacy, because of immunological variation between Staphylococcus species, and the limited duration of effectiveness of the antibodies produced. Development and testing of more effective vaccines is underway.
Two registrational trials have evaluated vaccine candidates in active immunization strategies against S. aureus infection. In a phase II trial, a bivalent vaccine of capsular proteins 5 and 8 was tested in 1804 hemodialysis patients with a primary fistula or synthetic graft vascular access. A protective effect against S. aureus bacteremia was seen at 40 weeks following vaccination, but not at 54 weeks. Based on these results, a second trial was conducted, which failed to show efficacy.
Merck tested V710, a vaccine targeting IsdB, in a blinded randomized trial in patients undergoing median sternotomy. The trial was terminated after a higher rate of multiorgan system failure–related deaths was found in the V710 recipients. Vaccine recipients who developed S. aureus infection were five times more likely to die than control recipients who developed S. aureus infection.
Numerous investigators have suggested that a multiple-antigen vaccine would be more effective, but a lack of biomarkers defining human protective immunity keeps these proposals in the logical, but strictly hypothetical, arena.
Antibody therapy
Antibodies are promising against antimicrobial resistance. Monoclonal antibodies (mAbs) target bacterial virulence factors, aiding in bacterial destruction through various mechanisms. Three FDA-approved antibodies target B. anthracis and C. difficile toxins.
Alternating therapy
Alternating therapy is a proposed method in which two or three antibiotics are taken in a rotation versus taking just one antibiotic such that bacteria resistant to one antibiotic are killed when the next antibiotic is taken. Studies have found that this method reduces the rate at which antibiotic resistant bacteria emerge in vitro relative to a single drug for the entire duration.
Studies have found that bacteria that evolve resistance towards one group of antibiotics may become more sensitive to others. This phenomenon can be used to select against resistant bacteria using an approach termed collateral sensitivity cycling, which has recently been found to be relevant in developing treatment strategies for chronic infections caused by Pseudomonas aeruginosa. Despite its promise, large-scale clinical and experimental studies have revealed only limited evidence that cycling restores antibiotic susceptibility across various pathogens.
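The intuition behind drug rotation can be made concrete with a deliberately crude simulation. The Python sketch below is a toy model only: the growth factor, kill fraction, mutation rate, rotation interval and starting population are invented for illustration, carry no clinical meaning, and real resistance dynamics are far more complex.

```python
import random

# Toy parameters -- invented for illustration only, not clinical values.
GROWTH = 1.5      # per-step growth factor for surviving bacteria
KILL = 0.95       # fraction of non-resistant cells killed by the active drug each step
MUTATION = 1e-6   # per-step fraction of susceptible cells gaining resistance to one drug
STEPS = 40

def simulate(schedule, seed=0):
    """Run a coarse-grained infection model under a drug schedule.

    `schedule(t)` returns which drug ("A" or "B") is given at step t.
    Population sizes are tracked by resistance profile.
    """
    rng = random.Random(seed)
    pop = {"none": 1e6, "A": 0.0, "B": 0.0}   # resistant to neither drug, to A only, or to B only
    for t in range(STEPS):
        drug = schedule(t)
        # Mutation: a tiny fraction of fully susceptible cells gain resistance.
        new_a = pop["none"] * MUTATION
        new_b = pop["none"] * MUTATION
        pop["none"] -= new_a + new_b
        pop["A"] += new_a
        pop["B"] += new_b
        # Killing: the active drug removes cells that are not resistant to it.
        for profile in pop:
            if drug not in profile:
                pop[profile] *= (1.0 - KILL)
        # Regrowth of the survivors, with a little random noise.
        for profile in pop:
            pop[profile] *= GROWTH * rng.uniform(0.9, 1.1)
    return pop

monotherapy = simulate(lambda t: "A")                              # the same drug at every step
rotation = simulate(lambda t: "A" if (t // 5) % 2 == 0 else "B")   # switch drug every 5 steps
print("monotherapy:", {k: f"{v:.2e}" for k, v in monotherapy.items()})
print("rotation:   ", {k: f"{v:.2e}" for k, v in rotation.items()})
```

In this toy setting, cells resistant only to drug A flourish under continuous use of A but are periodically culled whenever drug B is in use, which is the qualitative effect the in vitro studies describe.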
Development of new drugs
Since the discovery of antibiotics, research and development (R&D) efforts have provided new drugs in time to treat bacteria that became resistant to older antibiotics, but in the 2000s there has been concern that development has slowed enough that seriously ill people may run out of treatment options. Another concern is that practitioners may become reluctant to perform routine surgeries because of the increased risk of harmful infection. Backup treatments can have serious side-effects; for example, antibiotics like aminoglycosides (such as amikacin, gentamicin, kanamycin, streptomycin, etc.) used for the treatment of drug-resistant tuberculosis and cystic fibrosis can cause respiratory disorders, deafness and kidney failure.
The potential crisis at hand is the result of a marked decrease in industry research and development. Poor financial investment in antibiotic research has exacerbated the situation. The pharmaceutical industry has little incentive to invest in antibiotics because of the high risk and because the potential financial returns are less likely to cover the cost of development than for other pharmaceuticals. In 2011, Pfizer, one of the last major pharmaceutical companies developing new antibiotics, shut down its primary research effort, citing poor shareholder returns relative to drugs for chronic illnesses. However, small and medium-sized pharmaceutical companies are still active in antibiotic drug research. In particular, apart from classical synthetic chemistry methodologies, researchers have developed a combinatorial synthetic biology platform at the single-cell level for high-throughput screening to diversify novel lanthipeptides.
In the 5–10 years since 2010, there has been a significant change in the ways new antimicrobial agents are discovered and developed – principally via the formation of public-private funding initiatives. These include CARB-X, which focuses on nonclinical and early phase development of novel antibiotics, vaccines, rapid diagnostics; Novel Gram Negative Antibiotic (GNA-NOW), which is part of the EU's Innovative Medicines Initiative; and Replenishing and Enabling the Pipeline for Anti-infective Resistance Impact Fund (REPAIR). Later stage clinical development is supported by the AMR Action Fund, which in turn is supported by multiple investors with the aim of developing 2–4 new antimicrobial agents by 2030. The delivery of these trials is facilitated by national and international networks supported by the Clinical Research Network of the National Institute for Health and Care Research (NIHR), European Clinical Research Alliance in Infectious Diseases (ECRAID) and the recently formed ADVANCE-ID, which is a clinical research network based in Asia. The Global Antibiotic Research and Development Partnership (GARDP) is generating new evidence for global AMR threats such as neonatal sepsis, treatment of serious bacterial infections and sexually transmitted infections as well as addressing global access to new and strategically important antibacterial drugs.
The discovery and development of new antimicrobial agents has been facilitated by regulatory advances, principally led by the European Medicines Agency (EMA) and the Food and Drug Administration (FDA). These processes are increasingly aligned, although important differences remain and drug developers must still prepare separate documents for each agency. New regulatory pathways, such as the Limited Population Pathway for Antibacterial and Antifungal Drugs (LPAD), have been created to support the approval of antimicrobial agents that address unmet needs. These pathways are required because of the difficulty of conducting large, definitive phase III clinical trials in a timely way.
Some of the economic impediments to the development of new antimicrobial agents have been addressed by innovative reimbursement schemes that delink payment for antimicrobials from volume-based sales. In the UK, a market entry reward scheme has been pioneered by the National Institute for Health and Care Excellence (NICE), whereby an annual subscription fee is paid for use of strategically valuable antimicrobial agents. Cefiderocol and ceftazidime-avibactam are the first agents to be used in this manner, and the scheme is a potential blueprint for comparable programs in other countries.
The available classes of antifungal drugs are still limited but as of 2021 novel classes of antifungals are being developed and are undergoing various stages of clinical trials to assess performance.
Scientists have started using advanced computational approaches with supercomputers for the development of new antibiotic derivatives to deal with antimicrobial resistance.
Biomaterials
Using antibiotic-free alternatives in bone infection treatment may help decrease the use of antibiotics and thus antimicrobial resistance. The bone regeneration material bioactive glass S53P4 has been shown to effectively inhibit the growth of up to 50 clinically relevant bacteria, including MRSA and MRSE.
Nanomaterials
During the last decades, copper and silver nanomaterials have demonstrated appealing features for the development of a new family of antimicrobial agents. Nanoparticles (1–100 nm) show unique properties and promise as antimicrobial agents against resistant bacteria. Silver (AgNPs) and gold nanoparticles (AuNPs) are extensively studied, disrupting bacterial cell membranes and interfering with protein synthesis. Zinc oxide (ZnO NPs), copper (CuNPs), and silica (SiNPs) nanoparticles also exhibit antimicrobial properties. However, high synthesis costs, potential toxicity, and instability pose challenges. To overcome these, biological synthesis methods and combination therapies with other antimicrobials are explored.
Rediscovery of ancient treatments
Similar to the situation in malaria therapy, where successful treatments based on ancient recipes have been found, there has already been some success in finding and testing ancient drugs and other treatments that are effective against AMR bacteria.
Computational community surveillance
One of the key tools identified by the WHO and others for the fight against rising antimicrobial resistance is improved surveillance of the spread and movement of AMR genes through different communities and regions. Recent advances in high-throughput DNA sequencing as a result of the Human Genome Project have resulted in the ability to determine the individual microbial genes in a sample. Along with the availability of databases of known antimicrobial resistance genes, such as the Comprehensive Antimicrobial Resistance Database (CARD) and ResFinder, this allows the identification of all the antimicrobial resistance genes within the sample – the so-called "resistome". In doing so, a profile of these genes within a community or environment can be determined, providing insights into how antimicrobial resistance is spreading through a population and allowing for the identification of resistance that is of concern.
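The resistome idea can be illustrated with a highly simplified sketch. The Python below is not how CARD, ResFinder or any real pipeline works: the gene names and sequences are invented placeholders, and production tools use full curated databases together with sequence alignment (for example BLAST) or indexed k-mer searches. It only shows the shape of the computation, namely comparing the sequences in a sample against a catalogue of known resistance-gene fragments and reporting which appear.

```python
# Minimal, illustrative resistome scan. The "database" entries below are invented
# placeholders, not real resistance genes; real pipelines query databases such as
# CARD or ResFinder with proper alignment tools rather than exact k-mer matching.

RESISTANCE_GENES = {
    "hypothetical_blaX": "ATGGCGTTAGC",   # stand-in for a beta-lactamase gene fragment
    "hypothetical_tetY": "TTGACCGGATA",   # stand-in for a tetracycline-resistance fragment
}

def kmers(sequence, k):
    """Yield all overlapping substrings of length k."""
    for i in range(len(sequence) - k + 1):
        yield sequence[i:i + k]

def profile_resistome(reads, genes=RESISTANCE_GENES, k=11):
    """Return the catalogue genes whose k-mer signature appears in the sample reads."""
    read_kmers = set()
    for read in reads:
        read_kmers.update(kmers(read, k))
    return {name for name, fragment in genes.items()
            if any(kmer in read_kmers for kmer in kmers(fragment, k))}

sample_reads = ["CCATGGCGTTAGCAA", "GGGTTTAAACCC"]   # invented sequencing reads
print(profile_resistome(sample_reads))               # -> {'hypothetical_blaX'}
```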
Phage therapy
Phage therapy is the therapeutic use of bacteriophages to treat pathogenic bacterial infections. Phage therapy has many potential applications in human medicine as well as dentistry, veterinary science, and agriculture.
Phage therapy relies on the use of naturally occurring bacteriophages to infect and lyse bacteria at the site of infection in a host. Due to current advances in genetics and biotechnology, these bacteriophages can potentially be engineered to treat specific infections. Phages can be bioengineered to target multidrug-resistant bacterial infections, and their use has the added benefit of sparing the beneficial bacteria in the human body. Phages destroy bacterial cell walls and membranes through the use of lytic proteins, which kill bacteria by making many holes from the inside out. Bacteriophages can even digest the biofilm that many bacteria develop to protect themselves from antibiotics, allowing the phages to infect and kill the bacteria effectively. Bioengineering can play a role in creating successful bacteriophages.
Understanding the mutual interactions and evolutions of bacterial and phage populations in the environment of a human or animal body is essential for rational phage therapy.
Bacteriophages are used against antibiotic-resistant bacteria in Georgia (George Eliava Institute) and in one institute in Wrocław, Poland. Bacteriophage cocktails are common drugs sold over the counter in pharmacies in eastern countries. In Belgium, four patients with severe musculoskeletal infections received bacteriophage therapy with concomitant antibiotics. After a single course of phage therapy, no recurrence of infection occurred and no severe side-effects related to the therapy were detected.
External links
WHO fact sheet on antimicrobial resistance
Animation of Antibiotic Resistance
Bracing for Superbugs: Strengthening environmental action in the One Health response to antimicrobial resistance UNEP, 2023.
CDC Guideline "Management of Multidrug-Resistant Organisms in Healthcare Settings, 2006"
History of atomic theory
Atomic theory is the scientific theory that matter is composed of particles called atoms. The definition of the word "atom" has changed over the years in response to scientific discoveries. Initially, it referred to a hypothetical concept of there being some fundamental particle of matter, too small to be seen by the naked eye, that could not be divided. Then the definition was refined to being the basic particles of the chemical elements, when chemists observed that elements seemed to combine with each other in ratios of small whole numbers. Then physicists discovered that these particles had an internal structure of their own and therefore perhaps did not deserve to be called "atoms", but renaming atoms would have been impractical by that point.
Atomic theory is one of the most important scientific developments in history, crucial to all the physical sciences. At the start of The Feynman Lectures on Physics, physicist and Nobel laureate Richard Feynman offers the atomic hypothesis as the single most prolific scientific concept. "If, in some cataclysm, all scientific knowledge were to be destroyed [save] one sentence [...] what statement would contain the most information in the fewest words? I believe it is [...] that all things are made up of atoms – little particles that move around in perpetual motion, attracting each other when they are a little distance apart, but repelling upon being squeezed into one another ..."
Philosophical atomism
The notion that matter is made up of tiny indivisible particles is an old idea that appeared in many ancient cultures. For example, the Vaisheshika philosophy developed by Kaṇāda was based on a 'paramanu' (lit. the smallest particle of matter), an atom that was eternal and indivisible, forming the basis of the entire physical world. The word "atom" derives from the ancient Greek atomos, a combination of the negative prefix "a-" and "τομή", the term for "cut", and means "uncuttable". Despite these similarities, the Vaisheshika and Greek concepts of atoms differed in many ways. For example, the Vaisheshika atom came in four types, named for air, water, fire and earth, each with different properties, but none was inherently active, while the single type in the Greek view had inherent motion.
These ancient ideas were based in philosophical reasoning rather than scientific reasoning. Modern atomic theory is not based on these old concepts.Melsen (1952). From Atomos to Atom, pp. 18–19
Pre-atomic chemistry
Working in the late 17th century, Robert Boyle developed the concept of a chemical element as a substance distinct from a compound.
Near the end of the 18th century, a number of important developments in chemistry emerged without referring to the notion of an atomic theory. The first was Antoine Lavoisier who showed that compounds consist of elements in constant proportion, redefining an element as a substance which scientists could not decompose into simpler substances by experimentation. This brought an end to the ancient idea of the elements of matter being fire, earth, air, and water, which had no experimental support. Lavoisier showed that water can be decomposed into hydrogen and oxygen, which in turn he could not decompose into anything simpler, thereby proving these are elements.Pullman (1998). The Atom in the History of Human Thought. p. 197 Lavoisier also defined the law of conservation of mass, which states that in a chemical reaction, matter does not appear nor disappear into thin air; the total mass remains the same even if the substances involved were transformed. Finally, there was the law of definite proportions, established by the French chemist Joseph Proust in 1797, which states that if a compound is broken down into its constituent chemical elements, then the masses of those constituents will always have the same proportions by weight, regardless of the quantity or source of the original compound. This definition distinguished compounds from mixtures.
Dalton's law of multiple proportions
In the early 19th century, the scientist John Dalton noticed that chemical substances seemed to combine with each other by discrete and consistent units of weight, and he decided to use the word atom to refer to these units.Pullman (1998). The Atom in the History of Human Thought, p. 201 John Dalton studied data gathered by himself and by other scientists. He noticed a pattern that later came to be known as the law of multiple proportions: in compounds which contain two particular elements, the amount of Element A per measure of Element B will differ across these compounds by ratios of small whole numbers. This suggested that each element combines with other elements in multiples of a basic quantity.Pullman (1998). The Atom in the History of Human Thought, p. 199: "The constant ratios, expressible in terms of integers, of the weights of the constituents in composite bodies could be construed as evidence on a macroscopic scale of interactions at the microscopic level between basic units with fixed weights. For Dalton, this agreement strongly suggested a corpuscular structure of matter, even though it did not constitute definite proof."
In 1804, Dalton explained his atomic theory to his friend and fellow chemist Thomas Thomson, who published the first full explanation of chemical atomic theory in his book A System of Chemistry in 1807. According to Thomson, Dalton's idea first occurred to him when experimenting with "olefiant gas" (ethylene) and "carburetted hydrogen gas" (methane). Dalton found that "carburetted hydrogen gas" contains twice as much hydrogen per measure of carbon as "olefiant gas", and concluded that a molecule of "olefiant gas" is one carbon atom and one hydrogen atom, and a molecule of "carburetted hydrogen gas" is one carbon atom and two hydrogen atoms.Thomas Thomson (1831). A History of Chemistry, Volume 2. p. 291 In reality, an ethylene molecule has two carbon atoms and four hydrogen atoms (C2H4), and a methane molecule has one carbon atom and four hydrogen atoms (CH4). In this particular case, Dalton was mistaken about the formulas of these compounds, but he got the oxides of tin, iron, and nitrogen correct.Millington (1906). John Dalton, p. 113
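Using the modern formulas quoted above and modern atomic masses (C ≈ 12, H ≈ 1), neither of which Dalton had, the ratio he noticed can be checked directly:

$$\mathrm{C_2H_4}:\ \frac{4\ \mathrm{g\ H}}{24\ \mathrm{g\ C}} = \frac{1}{6}, \qquad \mathrm{CH_4}:\ \frac{4\ \mathrm{g\ H}}{12\ \mathrm{g\ C}} = \frac{1}{3},$$

so per fixed mass of carbon, methane carries exactly twice the hydrogen of ethylene, a ratio of small whole numbers as the law of multiple proportions requires.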
Dalton defined an atom as being the "ultimate particle" of a chemical substance, and he used the term "compound atom" to refer to "ultimate particles" which contain two or more elements. This is inconsistent with the modern definition, wherein an atom is the basic particle of a chemical element and a molecule is an agglomeration of atoms. The term "compound atom" was confusing to some of Dalton's contemporaries as the word "atom" implies indivisibility, but he responded that if a carbon dioxide "atom" is divided, it ceases to be carbon dioxide. The carbon dioxide "atom" is indivisible in the sense that it cannot be divided into smaller carbon dioxide particles.Dalton, quoted in Freund (1904). The Study of Chemical Composition. p. 288: "I have chosen the word atom to signify these ultimate particles in preference to particle, molecule, or any other diminiutive term, because I conceive it is much more expressive; it includes in itself the notion of indivisible, which the other terms do not. It may, perhaps, be said that I extend the application of it too far when I speak of compound atoms; for instance, I call an ultimate particle of carbonic acid a compound atom. Now, though this atom may be divided, yet it ceases to become carbonic acid, being resolved by such division into charcoal and oxygen. Hence I conceive there is no inconsistency in speaking of compound atoms and that my meaning cannot be misunderstood."
Dalton made the following assumptions on how "elementary atoms" combined to form "compound atoms" (what we today refer to as molecules). When two elements can only form one compound, he assumed it was one atom of each, which he called a "binary compound". If two elements can form two compounds, the first compound is a binary compound and the second is a "ternary compound" consisting of one atom of the first element and two of the second. If two elements can form three compounds between them, then the third compound is a "quaternary" compound containing one atom of the first element and three of the second.Dalton (1817). A New System of Chemical Philosophy vol. 1, pp. 213–214 Dalton thought that water was a "binary compound", i.e. one hydrogen atom and one oxygen atom. Dalton did not know that in their natural gaseous state, the ultimate particles of oxygen, nitrogen, and hydrogen exist in pairs (O2, N2, and H2). Nor was he aware of valencies. These properties of atoms were discovered later in the 19th century.
Because atoms were too small to be directly weighed using the methods of the 19th century, Dalton instead expressed the weights of the myriad atoms as multiples of the hydrogen atom's weight, which Dalton knew was the lightest element. By his measurements, 7 grams of oxygen will combine with 1 gram of hydrogen to make 8 grams of water with nothing left over, and assuming a water molecule to be one oxygen atom and one hydrogen atom, he concluded that oxygen's atomic weight is 7. In reality it is 16. Aside from the crudity of early 19th century measurement tools, the main reason for this error was that Dalton didn't know that the water molecule in fact has two hydrogen atoms, not one. Had he known, he would have doubled his estimate to a more accurate 14. This error was corrected in 1811 by Amedeo Avogadro. Avogadro proposed that equal volumes of any two gases, at equal temperature and pressure, contain equal numbers of molecules (in other words, the mass of a gas's particles does not affect the volume that it occupies). Avogadro's hypothesis, now usually called Avogadro's law, provided a method for deducing the relative weights of the molecules of gaseous elements, for if the hypothesis is correct relative gas densities directly indicate the relative weights of the particles that compose the gases. This way of thinking led directly to a second hypothesis: the particles of certain elemental gases were pairs of atoms, and when reacting chemically these molecules often split in two. For instance, the fact that two liters of hydrogen will react with just one liter of oxygen to produce two liters of water vapor (at constant pressure and temperature) suggested that a single oxygen molecule splits in two in order to form two molecules of water. The formula of water is H2O, not HO. Avogadro measured oxygen's atomic weight to be 15.074. English translation
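The dependence of the inferred atomic weight on the assumed formula can be written out explicitly from the 7 : 1 oxygen-to-hydrogen mass ratio quoted above:

$$\mathrm{HO}:\ \frac{m_\mathrm{O}}{m_\mathrm{H}} = 7, \qquad \mathrm{H_2O}:\ \frac{m_\mathrm{O}}{2\,m_\mathrm{H}} = 7 \ \Rightarrow\ m_\mathrm{O} = 14\,m_\mathrm{H},$$

so the same laboratory measurement gives an atomic weight of 7 or 14 for oxygen depending on the assumed formula; the remaining gap to the modern value of 16 reflects the crudity of the mass measurement itself.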
Opposition to atomic theory
Dalton's atomic theory attracted widespread interest but not everyone accepted it at first. The law of multiple proportions was shown not to be a universal law when it came to organic substances, whose molecules can be quite large. For instance, in oleic acid there is 34 g of hydrogen for every 216 g of carbon, and in methane there is 72 g of hydrogen for every 216 g of carbon. 34 and 72 form a ratio of 17:36, which is not a ratio of small whole numbers. We know now that carbon-based substances can have very large molecules, larger than any that the other elements can form. Oleic acid's formula is C18H34O2 and methane's is CH4.Trusted (1999). The Mystery of Matter, p. 73 The law of multiple proportions by itself was not complete proof, and atomic theory was not universally accepted until the end of the 19th century.
One problem was the lack of uniform nomenclature. The word "atom" implied indivisibility, but Dalton defined an atom as being the ultimate particle of any chemical substance, not just the elements or even matter per se. This meant that "compound atoms" such as carbon dioxide could be divided, as opposed to "elementary atoms". Dalton disliked the word "molecule", regarding it as "diminutive".Freund (1904). The Study of Chemical Composition. p. 288 Amedeo Avogadro did the opposite: he exclusively used the word "molecule" in his writings, eschewing the word "atom", instead using the term "elementary molecule".Pullman (1998). The Atom in the History of Human Thought, p. 202 Jöns Jacob Berzelius used the term "organic atoms" to refer to particles containing three or more elements, because he thought this only existed in organic compounds. Jean-Baptiste Dumas used the terms "physical atoms" and "chemical atoms"; a "physical atom" was a particle that cannot be divided by physical means such as temperature and pressure, and a "chemical atom" was a particle that could not be divided by chemical reactions.Jean-Baptiste Dumas (1836). Leçons sur la philosophie chimique [Lessons on Chemical Philosophy]. 285–287
The modern definitions of atom and molecule—an atom being the basic particle of an element, and a molecule being an agglomeration of atoms—were established in the latter half of the 19th century. A key event was the Karlsruhe Congress in Germany in 1860. As the first international congress of chemists, its goal was to establish some standards in the community. A major proponent of the modern distinction between atoms and molecules was Stanislao Cannizzaro.
Cannizzaro criticized past chemists such as Berzelius for not accepting that the particles of certain gaseous elements are actually pairs of atoms, which led to mistakes in their formulation of certain compounds. Berzelius believed that hydrogen gas and chlorine gas particles are solitary atoms. But he observed that when one liter of hydrogen reacts with one liter of chlorine, they form two liters of hydrogen chloride instead of one. Berzelius decided that Avogadro's law does not apply to compounds. Cannizzaro preached that if scientists just accepted the existence of single-element molecules, such discrepancies in their findings would be easily resolved. But Berzelius did not even have a word for that. Berzelius used the term "elementary atom" for a gas particle which contained just one element and "compound atom" for particles which contained two or more elements, but there was nothing to distinguish H2 from H since Berzelius did not believe in H2. So Cannizzaro called for a redefinition so that scientists could understand that a hydrogen molecule can split into two hydrogen atoms in the course of a chemical reaction.
A second objection to atomic theory was philosophical. Scientists in the 19th century had no way of directly observing atoms. They inferred the existence of atoms through indirect observations, such as Dalton's law of multiple proportions. Some scientists adopted positions aligned with the philosophy of positivism, arguing that scientists should not attempt to deduce the deeper reality of the universe, but only systemize what patterns they could directly observe.
This generation of anti-atomists can be grouped into two camps.
The "equivalentists", like Marcellin Berthelot, believed the theory of equivalent weights was adequate for scientific purposes. This generalization of Proust's law of definite proportions summarized observations. For example, 1 gram of hydrogen will combine with 8 grams of oxygen to form 9 grams of water, therefore the "equivalent weight" of oxygen is 8 grams. The "energeticist", like Ernst Mach and Wilhelm Ostwald, were philosophically opposed to hypothesis about reality altogether. In their view, only energy as part of thermodynamics should be the basis of physical models.
These positions were eventually quashed by two important advancements that happened later in the 19th century: the development of the periodic table and the discovery that molecules have an internal architecture that determines their properties.Pullman (1998). The Atom in the History of Human Thought, p. 226: "The first development is the establishment of the periodic classification of the elements, marking the successful climax of concerted efforts to arrange the chemical properties of elements according to their atomic weight. The second is the emergence of structural chemistry, which ousted what was a simple and primitive verbal description of the elemental composition, be it atomic or equivalentist, of substances and replaced it with a systematic determination of their internal architecture."
Isomerism
Scientists discovered some substances have the exact same chemical content but different properties. For instance, in 1827, Friedrich Wöhler discovered that silver fulminate and silver cyanate are both 107 parts silver, 12 parts carbon, 14 parts nitrogen, and 16 parts oxygen (we now know their formulas as both AgCNO). In 1830 Jöns Jacob Berzelius introduced the term isomerism to describe the phenomenon. In 1860, Louis Pasteur hypothesized that the molecules of isomers might have the same set of atoms but in different arrangements.Pullman (1998). The Atom in the History of Human Thought, p. 230
In 1874, Jacobus Henricus van 't Hoff proposed that the carbon atom bonds to other atoms in a tetrahedral arrangement. Working from this, he explained the structures of organic molecules in such a way that he could predict how many isomers a compound could have. Consider, for example, pentane (C5H12). In van 't Hoff's way of modelling molecules, there are three possible configurations for pentane, and scientists did go on to discover three and only three isomers of pentane.Melsen (1952). From Atomos to Atom, pp. 147–148Henry Enfield Roscoe, Carl Schorlemmer (1895). A Treatise on Chemistry, Volume 3, Part 1, pp. 121–122
Isomerism was not something that could be fully explained by alternative theories to atomic theory, such as radical theory and the theory of types.Henry Enfield Roscoe, Carl Schorlemmer (1895). A Treatise on Chemistry, Volume 3, Part 1, pp. 121: "The radical theory and the theory of types are capable of explaining many cases of isomerism, but it was not until the doctrine of the linking of atoms was established that a clear light was thrown on this subject."Adolphe Wurtz (1880). The Atomic Theory, p. 291: "It is in this manner that the theory of atomicity predicts, interprets, and limits the number of isomers; it has furnished the elements of one of the greatest advances which science has accomplished in the last twenty years. [...] The theory of atomicity has successfully attacked the problem by introducing into the discussion exact data, which have been in a great number of cases confirmed by experiment."
Mendeleev's periodic table
Dmitrii Mendeleev noticed that when he arranged the elements in a row according to their atomic weights, there was a certain periodicity to them. For instance, the second element, lithium, had similar properties to the ninth element, sodium, and the sixteenth element, potassium — a period of seven. Likewise, beryllium, magnesium, and calcium were similar and all were seven places apart from each other on Mendeleev's table. Using these patterns, Mendeleev predicted the existence and properties of new elements, which were later discovered in nature: scandium, gallium, and germanium. Moreover, the periodic table could predict how many atoms of other elements that an atom could bond with — e.g., germanium and carbon are in the same group on the table and their atoms both combine with two oxygen atoms each (GeO2 and CO2). Mendeleev found these patterns validated atomic theory because it showed that the elements could be categorized by their atomic weight. Inserting a new element into the middle of a period would break the parallel between that period and the next, and would also violate Dalton's law of multiple proportions.
The elements on the periodic table were originally arranged in order of increasing atomic weight. However, in a number of places chemists chose to swap the positions of certain adjacent elements so that they appeared in a group with other elements with similar properties. For instance, tellurium is placed before iodine even though tellurium is heavier (127.6 vs 126.9) so that iodine can be in the same column as the other halogens. The modern periodic table is based on atomic number, which is equivalent to the nuclear charge, a change that had to wait for the discovery of the nucleus.
In addition, an entire row of the table was not shown because the noble gases had not been discovered when Mendeleev devised his table.
Kinetic theory of gases
In 1738, Swiss physicist and mathematician Daniel Bernoulli postulated that the pressure of gases and heat were both caused by the underlying motion of molecules. Using his model he could predict the ideal gas law at constant temperature and suggested that the temperature was proportional to the velocity of the particles. This success was not followed up, in part because the then new tools of calculus allowed more progress using continuous models for gases.
James Clerk Maxwell, a vocal proponent of atomism, revived the kinetic theory in 1860 and 1867. His key insight was that the velocity of particles in a gas would vary around an average value, introducing the concept of a distribution function.See:
Maxwell, J.C. (1860) "Illustrations of the dynamical theory of gases. Part I. On the motions and collisions of perfectly elastic spheres," Philosophical Magazine, 4th series, 19 : 19–32.
Maxwell, J.C. (1860) "Illustrations of the dynamical theory of gases. Part II. On the process of diffusion of two or more kinds of moving particles among one another," Philosophical Magazine, 4th series, 20 : 21–37. In the late 1800s, Ludwig Boltzmann used atomic models to apply kinetic theory to thermodynamics especially the second law relating to entropy. Boltzmann defended the atomistic hypothesis against major detractors from the time like Ernst Mach or energeticists like Wilhelm Ostwald, who considered that energy was the elementary quantity of reality. However an atomic model was not essential for the development of theory of thermodynamics. This became clear when Josiah Willard Gibbs introduced statistical mechanics in his 1902 book Elementary Principles in Statistical Mechanics. His logical and formal development of a new approach specifically avoided requiring an atomic hypothesis. Albert Einstein independently developed an approach similar to Gibbs, but with a completely different aim: Einstein set out to find a way to verify the atomic hypothesis through the kinetic theory. He would eventually succeed with a paper on Brownian motion.
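In modern notation, Maxwell's distribution function for the speed v of a molecule of mass m in a gas at absolute temperature T is

$$f(v) = 4\pi \left(\frac{m}{2\pi k_\mathrm{B} T}\right)^{3/2} v^{2} \exp\!\left(-\frac{m v^{2}}{2 k_\mathrm{B} T}\right),$$

where k_B is the Boltzmann constant; the spread of this curve about its peak is precisely the variation of particle velocities around an average value that Maxwell introduced.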
Brownian motion
In 1827, the British botanist Robert Brown observed that dust particles inside pollen grains floating in water constantly jiggled about for no apparent reason. In 1905, Einstein theorized that this motion was caused by the water molecules continuously knocking the grains about, and developed a mathematical model to describe it. This model was validated experimentally in 1908 by French physicist Jean Perrin, who used Einstein's equations to measure the size of atoms.
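In modern notation, Einstein's result is that the mean squared displacement of a suspended grain grows linearly with time,

$$\langle x^{2} \rangle = 2Dt, \qquad D = \frac{RT}{6\pi \eta r N_\mathrm{A}},$$

where η is the viscosity of the liquid, r the radius of the grain, R the gas constant, T the absolute temperature and N_A the Avogadro constant. Since everything in these equations except N_A can be measured, Perrin could extract the number, and hence the size, of molecules from the observed jiggling.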
Kinetic diameters of simple molecules (Perrin (1909). Brownian Movement and Molecular Reality, p. 50; modern values for comparison)

| Molecule | Perrin's 1909 measurement | Modern measurement |
| --- | --- | --- |
| Helium | 1.7 × 10−10 m | 2.6 × 10−10 m |
| Argon | 2.7 × 10−10 m | 3.4 × 10−10 m |
| Mercury | 2.8 × 10−10 m | 3 × 10−10 m |
| Hydrogen | 2 × 10−10 m | 2.89 × 10−10 m |
| Oxygen | 2.6 × 10−10 m | 3.46 × 10−10 m |
| Nitrogen | 2.7 × 10−10 m | 3.64 × 10−10 m |
| Chlorine | 4 × 10−10 m | 3.20 × 10−10 m |
Plum pudding model
Atoms were thought to be the smallest possible division of matter until 1899 when J. J. Thomson discovered the electron through his work on cathode rays.
A Crookes tube is a sealed glass container in which two electrodes are separated by a vacuum. When a voltage is applied across the electrodes, cathode rays are generated, creating a glowing patch where they strike the glass at the opposite end of the tube. Through experimentation, Thomson discovered that the rays could be deflected by electric fields and magnetic fields, which meant that these rays were not a form of light but were composed of very light charged particles, and their charge was negative. Thomson called these particles "corpuscles". He measured their mass-to-charge ratio to be several orders of magnitude smaller than that of the hydrogen atom, the smallest atom. This ratio was the same regardless of what the electrodes were made of and what the trace gas in the tube was."From these determinations we see that the value of m/e is independent of the nature of the gas, and that its value 10−7 is very small compared with the value 10−4, which is the smallest value of this quantity previously known, and which is the value for the hydrogen ion in electrolysis."
In contrast to those corpuscles, positive ions created by electrolysis or X-ray radiation had mass-to-charge ratios that varied depending on the material of the electrodes and the type of gas in the reaction chamber, indicating they were different kinds of particles.
In 1898, Thomson measured the charge on ions to be roughly 6 × 10−10 electrostatic units (2 × 10−19 Coulombs). In 1899, he showed that negative electricity created by ultraviolet light landing on a metal (known now as the photoelectric effect) has the same mass-to-charge ratio as cathode rays; then he applied his previous method for determining the charge on ions to the negative electric particles created by ultraviolet light. By this combination he showed that electron's mass was 0.0014 times that of hydrogen ions."...the magnitude of this negative charge is about 6 × 10−10 electrostatic units, and is equal to the positive charge carried by the hydrogen atom in the electrolysis of solutions. [...] In gases at low pressures these units of negative electric charge are always associated with carriers of a definite mass. This mass is exceedingly small, being only about 1.4 × 10−3 of that of the hydrogen ion, the smallest mass hitherto recognized as capable of a separate existence. The production of negative electrification thus involves the splitting up of an atom, as from a collection of atoms something is detached whose mass is less than that of a single atom." These "corpuscles" were so light yet carried so much charge that Thomson concluded they must be the basic particles of electricity, and for that reason other scientists decided that these "corpuscles" should instead be called electrons following an 1894 suggestion by George Johnstone Stoney for naming the basic unit of electrical charge.
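The arithmetic behind that figure follows directly from the measured ratios quoted above: with the same unit of charge on both particles, the corpuscle-to-hydrogen mass ratio equals the ratio of the two mass-to-charge values,

$$\frac{m_{e}}{m_\mathrm{H}} = \frac{(m/e)_{\text{corpuscle}}}{(m/e)_\mathrm{H}} \approx \frac{10^{-7}}{10^{-4}} = 10^{-3},$$

of the same order as Thomson's more careful figure of 1.4 × 10−3.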
In 1904, Thomson published a paper describing a new model of the atom. Electrons reside within atoms, and they transplant themselves from one atom to the next in a chain in the action of an electrical current. When electrons do not flow, their negative charge logically must be balanced out by some source of positive charge within the atom so as to render the atom electrically neutral. Having no clue as to the source of this positive charge, Thomson tentatively proposed that the positive charge was everywhere in the atom, the atom being shaped like a sphere—this was the mathematically simplest model to fit the available evidence (or lack of it).J. J. Thomson (1907). The Corpuscular Theory of Matter, p. 103: "In default of exact knowledge of the nature of the way in which positive electricity occurs in the atom, we shall consider a case in which the positive electricity is distributed in the way most amenable to mathematical calculation, i.e., when it occurs as a sphere of uniform density, throughout which the corpuscles are distributed." The balance of electrostatic forces would distribute the electrons throughout this sphere in a more or less even manner. Thomson further explained that ions are atoms that have a surplus or shortage of electrons.J. J. Thomson (1907). On the Corpuscular Theory of Matter, p. 26: "The simplest interpretation of these results is that the positive ions are the atoms or groups of atoms of various elements from which one or more corpuscles have been removed. That, in fact, the corpuscles are the vehicles by which electricity is carried from one body to another, a positively electrified body different from the same body when unelectrified in having lost some of its corpuscles while the negative electrified body is one with more corpuscles than the unelectrified one."
Thomson's model is popularly known as the plum pudding model, based on the idea that the electrons are distributed throughout the sphere of positive charge with the same density as raisins in a plum pudding. Neither Thomson nor his colleagues ever used this analogy. It seems to have been a conceit of popular science writers. The analogy suggests that the positive sphere is like a solid, but Thomson likened it to a jelly, as he proposed that the electrons moved around in it in patterns governed by the electrostatic forces.J. J. Thomson, in a letter to Oliver Lodge dated 11 April 1904, quoted in Davis & Falconer (1997):
"With regard to positive electrification I have been in the habit of using the crude analogy of a liquid with a certain amount of cohesion, enough to keep it from flying to bits under its own repulsion. I have however always tried to keep the physical conception of the positive electricity in the background because I have always had hopes (not yet realised) of being able to do without positive electrification as a separate entity and to replace it by some property of the corpuscles." The positive electrification in Thomson's model was a temporary concept, which he hoped would ultimately be explained by some phenomena of the electrons. Like all atomic models of that time, Thomson's model was incomplete, it could not predict any of the known properties of the atom such as emission spectra.
In 1910, Robert A. Millikan and Harvey Fletcher reported the results of their oil drop experiment, in which they isolated and measured the charge of an electron. Careful measurements over several years gave the charge as -4.774 × 10−10 esu.
Planetary models
In the late 1800s speculations on the possible structure of the atom included planetary models with orbiting charged electrons.Helge Kragh (Oct. 2010). Before Bohr: Theories of atomic structure 1850-1913. RePoSS: Research Publications on Science Studies 10. Aarhus: Centre for Science Studies, University of Aarhus.
These models faced a significant constraint.
In 1897, Joseph Larmor showed that an accelerating charge would radiate power according to classical electrodynamics, a result known as the Larmor formula. Since electrons forced to remain in orbit are continuously accelerating, they would radiate away their energy and the orbits would be unstable. Larmor noted that the electromagnetic effects of multiple electrons, suitably arranged, would cancel each other. Thus subsequent atomic models based on classical electrodynamics needed to adopt such special multi-electron arrangements.
Haas atomic model
In 1910, Arthur Erich Haas proposed a model of the hydrogen atom with an electron circulating on the surface of a sphere of positive charge. The model resembled Thomson's plum pudding model, but Haas added a radical new twist: he constrained the electron's potential energy, e²/a, on a sphere of radius a to equal the frequency, ν, of the electron's orbit on the sphere times the Planck constant:

$$\frac{e^{2}}{a} = h\nu$$

where e represents the charge on the electron and the sphere. Haas combined this constraint with the balance-of-forces equation. The attractive force between the electron and the sphere balances the centrifugal force:

$$\frac{e^{2}}{a^{2}} = 4\pi^{2}\nu^{2} m a$$

where m is the mass of the electron. Eliminating ν between the two equations relates the radius of the sphere to the Planck constant:

$$h = 2\pi e \sqrt{m a}$$
Haas solved for the Planck constant using the then-current value for the radius of the hydrogen atom.
Three years later, Bohr would use similar equations with a different interpretation. Bohr took the Planck constant as a given value and used the equations to predict a₀, the radius of the electron's orbit in the ground state of the hydrogen atom. This value is now called the Bohr radius.
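Carrying the algebra one step further shows what Bohr's reinterpretation yields: solving the relation above for the radius, with h, m and e treated as known constants (Gaussian units), gives

$$a = \frac{h^{2}}{4\pi^{2} m e^{2}} \approx 0.53 \times 10^{-10}\ \mathrm{m},$$

the modern value of the Bohr radius.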
Nicholson atom theory
In 1911 John William Nicholson published a model of the atom similar to J. J. Thomson's, but with the radius of the positive charge shrunk from atomic dimensions to below the radius of his electron orbits. Nicholson developed his model from an analysis of astrophysical spectroscopy. He connected the observed spectral line frequencies with the orbits of electrons in his atoms. The connection he adopted associated the orbital angular momentum of the atomic electrons with the Planck constant.
Whereas Planck focused on a quantum of energy, Nicholson's angular momentum quantum relates to orbital frequency.
This new concept gave the Planck constant an atomic meaning for the first time. In his 1913 paper Bohr cited Nicholson as having found quantized angular momentum to be important for the atom.
Nicholson's model was based on classical electrodynamics along the lines of J. J. Thomson's plum pudding model, but with his negative electrons orbiting a positive nucleus rather than circulating in a sphere. To avoid the immediate collapse of this system he required that electrons come in pairs so that the rotational acceleration of each electron was matched across the orbit. By 1913 Bohr had already shown, from the analysis of alpha particle energy loss, that hydrogen had only a single electron, not a matched pair. Bohr's atomic model would abandon classical electrodynamics.
Nicholson's model of radiation was quantum but was attached to the orbits of the electrons. Bohr quantization would associate it with differences in energy levels of his model of hydrogen rather than the orbital frequency.
Discovery of the nucleus
Thomson's plum pudding model was challenged in 1911 by one of his former students, Ernest Rutherford, who presented a new model to explain new experimental data. The new model proposed a concentrated center of charge and mass that was later dubbed the atomic nucleus.
Ernest Rutherford and his colleagues Hans Geiger and Ernest Marsden came to have doubts about the Thomson model after they encountered difficulties when they tried to build an instrument to measure the charge-to-mass ratio of alpha particles (these are positively-charged particles emitted by certain radioactive substances such as radium). The alpha particles were being scattered by the air in the detection chamber, which made the measurements unreliable. Thomson had encountered a similar problem in his work on cathode rays, which he solved by creating a near-perfect vacuum in his instruments. Rutherford didn't think he'd run into this same problem because alpha particles usually have much more momentum than electrons. According to Thomson's model of the atom, the positive charge in the atom is not concentrated enough to produce an electric field strong enough to deflect an alpha particle. Yet there was scattering, so Rutherford and his colleagues decided to investigate this scattering carefully.Heilbron (2003). Ernest Rutherford and the Explosion of Atoms, pp. 64–68
Between 1908 and 1913, Rutherford and his colleagues performed a series of experiments in which they bombarded thin foils of metal with a beam of alpha particles. They spotted alpha particles being deflected by angles greater than 90°. According to Thomson's model, all of the alpha particles should have passed through with negligible deflection. Rutherford deduced that the positive charge of the atom is not distributed throughout the atom's volume as Thomson believed, but is concentrated in a tiny nucleus at the center. This nucleus also carries most of the atom's mass. Only such an intense concentration of charge, anchored by its high mass, could produce an electric field strong enough to deflect the alpha particles as observed. Rutherford's model, being supported primarily by scattering data unfamiliar to many scientists, did not catch on until Niels Bohr joined Rutherford's lab and developed a new model for the electrons.
Rutherford's model predicted that the scattering of alpha particles would be proportional to the square of the atomic charge. Geiger and Marsden based their analysis on setting the charge to half of the atomic weight of the foil's material (gold, aluminium, etc.). The amateur physicist Antonius van den Broek noted that there was a more precise relation between the charge and the element's position in the sequence of atomic weights. This sequence number came to be called the atomic number, and it replaced atomic weight in organizing the periodic table.
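In modern form, that prediction is expressed by Rutherford's scattering formula: for a projectile of charge Z₁e and kinetic energy E scattered through an angle θ by a nucleus of charge Ze, the differential cross-section is (in Gaussian units)

$$\frac{d\sigma}{d\Omega} = \left(\frac{Z_{1} Z e^{2}}{4E}\right)^{2} \frac{1}{\sin^{4}(\theta/2)},$$

with Z₁ = 2 for an alpha particle. The Z² dependence is what allowed the foil experiments to be read as a measurement of the nuclear charge.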
Discovery of isotopes
Concurrent with the work of Rutherford, Geiger, and Marsden, the radiochemist Frederick Soddy at the University of Glasgow was studying chemistry-related problems on radioactive materials. Soddy had worked with Rutherford on radioactivity at McGill University. By 1910, about 40 different radioactive elements, referred to as radioelements, had been identified between uranium and lead, although the periodic table only allowed for 11 elements. Every attempt to chemically isolate the radioelements mesothorium or thorium X from radium failed. Soddy concluded that these elements were chemically the same element. At the suggestion of Margaret Todd, Soddy called these chemically identical elements isotopes. In 1913, Soddy and theorist Kazimierz Fajans independently found the displacement law, that an element undergoing alpha decay will produce an element two places to the left in the periodic system and an element undergoing beta decay will produce an element one place to the right in the periodic system. For his study of radioactivity and the discovery of isotopes, Soddy was awarded the 1921 Nobel Prize in Chemistry.
Prior to 1919 only atomic weights averaged over a very large number of atoms were available. In that year, Francis Aston built the first mass spectrograph, an improved form of a device built by J. J. Thomson to measure the deflection of positively charged atoms by electric and magnetic fields. Aston was then able to separate the isotopes of many light elements, including the two isotopes of neon, neon-20 and neon-22. Aston discovered the isotopes matched William Prout's whole number rule: the mass of every isotope is a whole number multiple of the mass of the hydrogen atom.Aston, Francis William. Mass spectra and isotopes. London: Edward Arnold, 1942.
Significantly, the one exception to this whole number rule was hydrogen itself, which had a mass value of 1.008. The excess mass was small, but well outside the limits of experimental uncertainty. Aston and others realized this difference was due to the binding energy of atoms. When a number of hydrogen atoms are bound into a heavier atom, that atom's energy must be less than the sum of the energies of the separate hydrogen atoms. That lost energy, according to the mass-energy equivalence principle, means the atomic mass will be slightly less than the sum of the masses of its components. Aston's work on isotopes won him the 1922 Nobel Prize in Chemistry for the discovery of isotopes in a large number of non-radioactive elements, and for his enunciation of the whole number rule.
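A worked example with modern values (not Aston's own figures) shows the size of the effect: four hydrogen atoms have a combined mass of about 4 × 1.008 = 4.032 u, while a helium-4 atom has a mass of about 4.003 u, so

$$\Delta m \approx 4.032\ \mathrm{u} - 4.003\ \mathrm{u} = 0.029\ \mathrm{u}, \qquad E = \Delta m\,c^{2} \approx 0.029 \times 931.5\ \mathrm{MeV} \approx 27\ \mathrm{MeV},$$

the binding energy that would be released in assembling the helium atom, and the reason its mass falls slightly short of four times the mass of hydrogen.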
Atomic number
Before 1913, chemists adhered to Mendeleev's principle that chemical properties derived from atomic weight. However, several places in the periodic table were inconsistent with this concept; for example, cobalt and nickel seemed to be in reverse order. There were also attempts to understand the relationship between the atomic mass and the nuclear charge. Rutherford knew from experiments in his lab that helium must have a nuclear charge of 2 and a mass of 4; this 1:2 ratio was expected to hold for all elements. In 1913 Antonius van den Broek hypothesized that the periodic table should be organized by nuclear charge, denoted by Z, rather than by atomic mass, and that Z was not exactly half of the atomic weight for all elements. This solved the cobalt-nickel issue: placing cobalt (Z = 27, atomic weight 58.97) before the slightly lighter nickel (Z = 28, atomic weight 58.68) gave the ordering expected from their chemical behavior.
In 1913–1914 Moseley tested van den Broek's hypothesis experimentally by using X-ray spectroscopy. He found that the most intense short-wavelength line in the X-ray spectrum of a particular element, known as the K-alpha line, was related to the element's nuclear charge, its atomic number Z. Across a large number of elements, Moseley found that the frequencies of this radiation were related in a simple way to the atomic number.
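In modern notation, Moseley's empirical result for the K-alpha line is well approximated by

$$\nu_{K\alpha} \approx \tfrac{3}{4} R_{\infty} c\,(Z-1)^{2},$$

where R∞ is the Rydberg constant and c the speed of light, so that a plot of the square root of the frequency against atomic number falls on a straight line, allowing Z to be read off directly for each element.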
Bohr model
Rutherford deduced the existence of the atomic nucleus through his experiments but he had nothing to say about how the electrons were arranged around it. In 1912, Niels Bohr joined Rutherford's lab and began his work on a quantum model of the atom.
Max Planck in 1900 and Albert Einstein in 1905 had postulated that light energy is emitted or absorbed in discrete amounts known as quanta (singular, quantum). This led to a series of atomic models with some quantum aspects, such as that of Arthur Erich Haas in 1910 and the 1912 John William Nicholson atomic model with angular momentum quantized as h/2π.J. W. Nicholson, Month. Not. Roy. Astr. Soc. lxxii. pp. 49,130, 677, 693, 729 (1912).The Atomic Theory of John William Nicholson, Russell McCormmach, Archive for History of Exact Sciences, Vol. 3, No. 2 (25.8.1966), pp. 160–184 (25 pages), Springer. The dynamical structure of these models was still classical.
In 1913, Bohr abandoned the classical approach. He started his Bohr model of the atom with a quantum hypothesis: an electron could only orbit the nucleus in particular circular orbits with fixed angular momentum and energy, its distance from the nucleus (i.e., its radius) being determined by its energy. Under this model an electron could not lose energy in a continuous manner; instead, it could only make instantaneous "quantum leaps" between the fixed energy levels. When this occurred, light was emitted or absorbed at a frequency proportional to the change in energy (hence the absorption and emission of light in discrete spectra).
In comparing his work to Nicholson's, Bohr came to understand the spectral data and their value. When he then learned from a friend about Balmer's compact formula for the spectral line data, Bohr quickly realized his model would match it in detail.
In a trilogy of papers Bohr described and applied his model to derive the Balmer series of lines in the atomic spectrum of hydrogen and the related spectrum of He+. He also used the model to describe the structure of the periodic table and aspects of chemical bonding. Together these results led to Bohr's model being widely accepted by the end of 1915.
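In modern notation, the quantitative content of the model is compact: the allowed energies of the hydrogen electron are E_n = −2π²me⁴/(h²n²) (Gaussian units), and a jump between levels n₂ and n₁ emits or absorbs light of frequency

$$\nu = \frac{2\pi^{2} m e^{4}}{h^{3}}\left(\frac{1}{n_{1}^{2}} - \frac{1}{n_{2}^{2}}\right),$$

which for n₁ = 2 reproduces the Balmer series and expresses the empirical Rydberg constant in terms of e, m and h.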
Bohr's model was not perfect. It could only predict the spectral lines of hydrogen, not those of multielectron atoms. Worse still, it could not even account for all features of the hydrogen spectrum: as spectrographic technology improved, it was discovered that applying a magnetic field caused spectral lines to multiply in a way that Bohr's model couldn't explain. In 1916, Arnold Sommerfeld added elliptical orbits to the Bohr model to explain the extra emission lines, but this made the model very difficult to use, and it still couldn't explain more complex atoms.
Discovery of the proton
Back in 1815, William Prout observed that the atomic weights of the known elements were multiples of hydrogen's atomic weight, so he hypothesized that all atoms are agglomerations of hydrogen, a particle which he dubbed "the protyle". Prout's hypothesis was put into doubt when some elements were found to deviate from this pattern—e.g. chlorine atoms on average weigh 35.45 daltons—but when isotopes were discovered in 1913, Prout's observation gained renewed attention.
In 1917 Rutherford bombarded nitrogen gas with alpha particles and observed hydrogen ions being emitted from the gas. Rutherford concluded that the alpha particles struck the nuclei of the nitrogen atoms, causing hydrogen ions to split off.The Development of the Theory of Atomic Structure (Rutherford 1936). Reprinted in Background to Modern Science: Ten Lectures at Cambridge arranged by the History of Science Committee 1936:"In 1919 I showed that when light atoms were bombarded by α-particles they could be broken up with the emission of a proton, or hydrogen nucleus. We therefore presumed that a proton must be one of the units of which the nuclei of other atoms were composed..."
These observations led Rutherford to conclude that the hydrogen nucleus was a singular particle with a positive charge equal to that of the electron's negative charge. The name "proton" was suggested by Rutherford at an informal meeting of fellow physicists in Cardiff in 1920.Footnote by Ernest Rutherford: 'At the time of writing this paper in Australia, Professor Orme Masson was not aware that the name "proton" had already been suggested as a suitable name for the unit of mass nearly 1, in terms of oxygen 16, that appears to enter into the nuclear structure of atoms. The question of a suitable name for this unit was discussed at an informal meeting of a number of members of Section A of the British Association [for the Advancement of Science] at Cardiff this year. The name "baron" suggested by Professor Masson was mentioned, but was considered unsuitable on account of the existing variety of meanings. Finally the name "proton" met with general approval, particularly as it suggests the original term "protyle" given by Prout in his well-known hypothesis that all atoms are built up of hydrogen. The need of a special name for the nuclear unit of mass 1 was drawn attention to by Sir Oliver Lodge at the Sectional meeting, and the writer then suggested the name "proton."'
The charge number of an atomic nucleus was found to be equal to the element's ordinal position on the periodic table. The nuclear charge number thus provided a simple and clear-cut way of distinguishing the chemical elements from each other, as opposed to Lavoisier's classic definition of a chemical element being a substance that cannot be broken down into simpler substances by chemical reactions. The charge number or proton number was thereafter referred to as the atomic number of the element. In 1923, the International Committee on Chemical Elements officially declared the atomic number to be the distinguishing quality of a chemical element.
During the 1920s, some writers defined the atomic number as being the number of "excess protons" in a nucleus. Before the discovery of the neutron, scientists believed that the atomic nucleus contained a number of "nuclear electrons" which cancelled out the positive charge of some of its protons. This explained why the atomic weights of most atoms were higher than their atomic numbers. Helium, for instance, was thought to have four protons and two nuclear electrons in the nucleus, leaving two excess protons and a net nuclear charge of 2+. After the neutron was discovered, scientists realized the helium nucleus in fact contained two protons and two neutrons.
Quantum mechanical models
In 1924, Louis de Broglie proposed that all particles—particularly subatomic particles such as electrons—have an associated wave. Erwin Schrödinger, fascinated by this idea, developed an equation that describes an electron as a wave function instead of a point. This approach predicted many of the spectral phenomena that Bohr's model failed to explain, but it was difficult to visualize, and faced opposition. One of its critics, Max Born, proposed instead that Schrödinger's wave function did not describe the physical extent of an electron (like a charge distribution in classical electromagnetism), but rather gave the probability that an electron would, when measured, be found at a particular point. This reconciled the ideas of wave-like and particle-like electrons: the behavior of an electron, or of any other subatomic entity, has both wave-like and particle-like aspects, and whether one aspect or the other is observed depends upon the experiment.
Schrödinger's wave model for hydrogen replaced Bohr's model, with its neat, clearly defined circular orbits. The modern model of the atom describes the positions of electrons in an atom in terms of probabilities. An electron can potentially be found at any distance from the nucleus, but, depending on its energy level and angular momentum, exists more frequently in certain regions around the nucleus than others; this pattern is referred to as its atomic orbital. The orbitals come in a variety of shapes—sphere, dumbbell, torus, etc.—with the nucleus in the middle. The shapes of atomic orbitals are found by solving the Schrödinger equation. Analytic solutions of the Schrödinger equation are known for very few relatively simple model Hamiltonians including the hydrogen atom and the hydrogen molecular ion. Beginning with the helium atom—which contains just two electrons—numerical methods are used to solve the Schrödinger equation.
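Because the hydrogen atom is one of the few cases with an analytic solution, Born's probabilistic reading of the wave function is easy to illustrate numerically. The following Python sketch (an illustration only, not drawn from any source cited here) evaluates the 1s orbital in atomic units and its radial probability density, whose peak falls at one Bohr radius even though the wave function itself is largest at the nucleus.

```python
import numpy as np

a0 = 1.0  # Bohr radius in atomic units

def psi_1s(r):
    """Normalized 1s wave function of hydrogen (atomic units)."""
    return np.exp(-r / a0) / np.sqrt(np.pi * a0**3)

def radial_probability_density(r):
    """P(r) = 4*pi*r^2*|psi|^2, the probability per unit radius."""
    return 4.0 * np.pi * r**2 * psi_1s(r)**2

r = np.linspace(0.0, 10.0 * a0, 10001)
P = radial_probability_density(r)

# The radial density peaks at r = a0, recovering the characteristic
# length of Bohr's older model as the *most probable* radius.
print("most probable radius:", r[np.argmax(P)])

# The density integrates to ~1 over 0..10*a0 (normalization check).
print("total probability:", np.sum(P) * (r[1] - r[0]))
```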
Qualitatively, the shapes of the atomic orbitals of multi-electron atoms resemble the states of the hydrogen atom. The Pauli principle requires the distribution of these electrons within the atomic orbitals such that no more than two electrons are assigned to any one orbital; this requirement profoundly affects the atomic properties and ultimately the bonding of atoms into molecules.Karplus, Martin, and Richard Needham Porter. Atoms and Molecules: An Introduction for Students of Physical Chemistry (1970).
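As a rough illustration of how the Pauli limit shapes electron arrangements, the sketch below fills subshells in the conventional Madelung (n + l) order, a rule of thumb that is not stated in the text above and from which real atoms such as chromium and copper deviate; each subshell of angular momentum l holds at most 2(2l + 1) electrons, i.e. two per orbital.

```python
def electron_configuration(n_electrons):
    """Fill subshells in Madelung (n + l, then n) order, respecting the
    Pauli limit of 2*(2l + 1) electrons per subshell (two per orbital)."""
    letters = "spdfghi"
    # Generate candidate subshells (n, l) and sort by the Madelung rule.
    subshells = sorted(
        ((n, l) for n in range(1, 8) for l in range(n)),
        key=lambda nl: (nl[0] + nl[1], nl[0]),
    )
    config, remaining = [], n_electrons
    for n, l in subshells:
        if remaining <= 0:
            break
        occ = min(remaining, 2 * (2 * l + 1))
        config.append(f"{n}{letters[l]}{occ}")
        remaining -= occ
    return " ".join(config)

print(electron_configuration(8))   # oxygen: 1s2 2s2 2p4
print(electron_configuration(26))  # iron, idealized order: ... 4s2 3d6
```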
Discovery of the neutron
Physicists in the 1920s believed that the atomic nucleus contained protons plus a number of "nuclear electrons" that reduced the overall charge. These "nuclear electrons" were distinct from the electrons that orbited the nucleus. This incorrect hypothesis appeared to explain why the atomic numbers of the elements were less than their atomic weights, and why radioactive elements emit electrons (beta radiation) in the process of nuclear decay. Rutherford even hypothesized that a proton and an electron could bind tightly together into a "neutral doublet". Rutherford wrote that the existence of such "neutral doublets" moving freely through space would provide a more plausible explanation for how the heavier elements could have formed in the genesis of the Universe, given that it is hard for a lone proton to fuse with a large atomic nucleus because of the repulsive electric field.: "Under some conditions, however, it may be possible for an electron to combine much more closely with the H nucleus, forming a kind of neutral doublet. [...] The existence of such atoms seems almost necessary to explain the building up of the nuclei of heavy elements; for unless we suppose the production of charged particles of very high velocities it is difficult to see how any positively charged particle can reach the nucleus of a heavy atom against its intense repulsive field."
In 1928, Walter Bothe observed that beryllium emitted a highly penetrating, electrically neutral radiation when bombarded with alpha particles. It was later discovered that this radiation could knock hydrogen atoms out of paraffin wax. Initially it was thought to be high-energy gamma radiation, since gamma radiation had a similar effect on electrons in metals, but James Chadwick found that the ionization effect was too strong for it to be due to electromagnetic radiation, so long as energy and momentum were conserved in the interaction. In 1932, Chadwick exposed various elements, such as hydrogen and nitrogen, to the mysterious "beryllium radiation", and by measuring the energies of the recoiling charged particles, he deduced that the radiation was actually composed of electrically neutral particles which could not be massless like the gamma ray, but instead were required to have a mass similar to that of a proton. Chadwick called this new particle "the neutron" and believed it to be a proton and electron fused together because the neutron had about the same mass as a proton and an electron's mass is negligible by comparison. Neutrons are not in fact a fusion of a proton and an electron.
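Chadwick's mass estimate can be reproduced with elementary collision kinematics: a neutral particle of mass m and speed v striking a stationary nucleus of mass M head-on and elastically gives it a maximum recoil speed u = 2mv/(m + M), so comparing the recoil speeds of hydrogen and nitrogen eliminates the unknown v and yields m. The Python sketch below uses round, made-up numbers rather than Chadwick's published data, purely to show that the algebra recovers the assumed mass.

```python
def recoil_speed(m, v, M):
    """Maximum recoil speed of a nucleus of mass M struck elastically
    head-on by a particle of mass m moving at speed v."""
    return 2.0 * m * v / (m + M)

def infer_projectile_mass(u_H, u_N, M_H=1.0, M_N=14.0):
    """Solve the pair of recoil equations for the projectile mass m,
    eliminating the unknown projectile speed v."""
    return (u_N * M_N - u_H * M_H) / (u_H - u_N)

# Illustrative numbers (not Chadwick's measured values): assume the
# neutral particle has a mass of 1.1 u and some arbitrary speed.
m_true, v = 1.1, 3.0e7  # mass in atomic mass units, speed in m/s
u_hydrogen = recoil_speed(m_true, v, M=1.0)   # recoiling protons from paraffin
u_nitrogen = recoil_speed(m_true, v, M=14.0)  # recoiling nitrogen nuclei

print("inferred projectile mass:", infer_projectile_mass(u_hydrogen, u_nitrogen))
# prints ~1.1, recovering the assumed mass from the two recoil speeds alone
```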
See also
Spectroscopy
Alchemy
Atom
History of molecular theory
Discovery of chemical elements
Introduction to quantum mechanics
Kinetic theory of gases
Footnotes
Bibliography
Further reading
Charles Adolphe Wurtz (1881) The Atomic Theory, D. Appleton and Company, New York.
External links
Atomism by S. Mark Cohen.
Atomic Theory – detailed information on atomic theory with respect to electrons and electricity.
The Feynman Lectures on Physics Vol. I Ch. 1: Atoms in Motion
Baltic Sea
https://en.wikipedia.org/wiki/Baltic_Sea
The Baltic Sea is an arm of the Atlantic Ocean that is enclosed by Denmark, Estonia, Finland, Germany, Latvia, Lithuania, Poland, Russia and Sweden, and by the North and Central European Plain. It is the world's largest brackish water basin.
The sea stretches from 53°N to 66°N latitude and from 10°E to 30°E longitude. It is a shelf sea and marginal sea of the Atlantic with limited water exchange between the two, making it an inland sea. The Baltic Sea drains through the Danish straits into the Kattegat by way of the Øresund, Great Belt and Little Belt. It includes the Gulf of Bothnia (divided into the Bothnian Bay and the Bothnian Sea), the Gulf of Finland, the Gulf of Riga and the Bay of Gdańsk.
The "Baltic Proper" is bordered on its northern edge, at latitude 60°N, by Åland and the Gulf of Bothnia, on its northeastern edge by the Gulf of Finland, on its eastern edge by the Gulf of Riga, and in the west by the Swedish part of the southern Scandinavian Peninsula.
The Baltic Sea is connected by artificial waterways to the White Sea via the White Sea–Baltic Canal and to the German Bight of the North Sea via the Kiel Canal.
Definitions
Administration
The Helsinki Convention on the Protection of the Marine Environment of the Baltic Sea Area includes the Baltic Sea and the Kattegat, without calling the Kattegat a part of the Baltic Sea: "For the purposes of this Convention the 'Baltic Sea Area' shall be the Baltic Sea and the Entrance to the Baltic Sea, bounded by the parallel of the Skaw in the Skagerrak at 57°44.43'N."
Traffic history
Historically, the Kingdom of Denmark collected Sound Dues from ships at the border between the ocean and the land-locked Baltic Sea at three stations: in the Øresund at Kronborg castle near Helsingør; in the Great Belt at Nyborg; and in the Little Belt at its narrowest part, and later at Fredericia once that stronghold was built. The narrowest part of the Little Belt is the "Middelfart Sund" near Middelfart.
Oceanography
Geographers widely agree that the preferred physical border between the Baltic and North Seas is the Langelandsbælt (the southern part of the Great Belt strait near Langeland) and the Drogden-Sill strait. The Drogden Sill is situated north of Køge Bugt and connects Dragør in the south of Copenhagen to Malmö; it is used by the Øresund Bridge, including the Drogden Tunnel. By this definition, the Danish straits are part of the entrance, but the Bay of Mecklenburg and the Bay of Kiel are parts of the Baltic Sea.
Another usual border is the line between Falsterbo, Sweden, and Stevns Klint, Denmark, as this is the southern border of Øresund. It is also the border between the shallow southern Øresund (with a typical depth of 5–10 meters only) and notably deeper water.
Hydrography and biology
The Drogden Sill (depth of ) sets a limit to the Øresund, and the Darss Sill (depth of ) sets a limit to the Belt Sea. The shallow sills are obstacles to the flow of heavy salt water from the Kattegat into the basins around Bornholm and Gotland.
The Kattegat and the southwestern Baltic Sea are well oxygenated and have a rich biology. The remainder of the Sea is brackish, poor in oxygen, and in species. Thus, statistically, the more of the entrance that is included in its definition, the healthier the Baltic appears; conversely, the more narrowly it is defined, the more endangered its biology appears.
Etymology and nomenclature
The precise origin of the name is unknown. Tacitus called it the Suebic Sea, Latin: after the Germanic people of the Suebi,Tacitus, Germania(online text ): Ergo iam dextro Suebici maris litore Aestiorum gentes adluuntur, quibus ritus habitusque Sueborum, lingua Britannicae propior. – "Upon the right of the Suevian Sea the Æstyan nations reside, who use the same customs and attire with the Suevians; their language more resembles that of Britain." (English text online ) and Ptolemy called it the Sarmatian Ocean after the Sarmatians,Ptolemy, Geography III, chapter 5: "Sarmatia in Europe is bounded on the north by the Sarmatian ocean at the Venedic gulf" (online text ). but the first to name it the Baltic Sea () was the eleventh-century German chronicler Adam of Bremen. The name might be connected to the Germanic word belt, a name used for two of the Danish straits, the Belts, while others claim it to be directly derived from the source of the Germanic word, Latin balteus "belt". Balteus in Nordisk familjebok. Adam of Bremen himself compared the sea with a belt, stating that it is so named because it stretches through the land as a belt (Balticus, eo quod in modum baltei longo tractu per Scithicas regiones tendatur usque in Greciam).
He might also have been influenced by the name of a legendary island mentioned in the Natural History of Pliny the Elder. Pliny mentions an island named Baltia (or Balcia) with reference to accounts of Pytheas and Xenophon. It is possible that Pliny refers to an island named Basilia ("the royal") in On the Ocean by Pytheas. Baltia also might be derived from "belt", and therein mean "near belt of sea, strait".
Others have suggested that the name of the island originates from the Proto-Indo-European root meaning "white, fair", which may echo the naming of seas after colours relating to the cardinal points (as per Black Sea and Red Sea). This root and basic meaning were retained in Lithuanian (as ), Latvian (as ) and Slavic (as ). On this basis, a related hypothesis holds that the name originated from this Indo-European root via a Baltic language such as Lithuanian. Another explanation is that, while derived from the aforementioned root, the name of the sea is related to names for various forms of water and related substances in several European languages, that might have been originally associated with colors found in swamps (compare Proto-Slavic "swamp"). Yet another explanation is that the name originally meant "enclosed sea, bay" as opposed to open sea.
In the Middle Ages the sea was known by a variety of names. The name Baltic Sea became dominant after 1600. Usage of Baltic and similar terms to denote the region east of the sea started only in the 19th century.
Name in other languages
The Baltic Sea was known in ancient Latin language sources as or even .Cfr. Hartmann Schedel's 1493 (map), where the Baltic Sea is called Mare Germanicum, whereas the Northern Sea is called Oceanus Germanicus. Older native names in languages that used to be spoken on the shores of the sea or near it usually indicate the geographical location of the sea (in Germanic languages), or its size in relation to smaller gulfs (in Old Latvian), or tribes associated with it (in Old Russian the sea was known as the Varangian Sea). In modern languages, it is known by the equivalents of "East Sea", "West Sea", or "Baltic Sea" in different languages:
"Baltic Sea" is used in Modern English; in the Baltic languages Latvian (; in Old Latvian it was referred to as "the Big Sea", while the present day Gulf of Riga was referred to as "the Little Sea") and Lithuanian (); in Latin () and the Romance languages French (), Italian (), Portuguese (), Romanian () and Spanish (); in Greek ( ); in Albanian (); in Welsh (); in the Slavic languages Polish ( or ), Czech ( or ), Slovene (), Bulgarian ( ), Kashubian (), Macedonian ( ), Ukrainian ( ), Belarusian ( ), Russian ( ) and Serbo-Croatian ( / ); in Hungarian ().
In Germanic languages, except English, "East Sea" is used, as in Afrikaans (), Danish ( ), Dutch (), German (), Low German (), Icelandic and Faroese (), Norwegian (Bokmål: ; Nynorsk: ), and Swedish (). In Old English it was known as ,The Old English Orosius which does not however mean 'east sea' and may be related to a people known in the same work as the Osti.Portham, 1880, p61 Also in Hungarian the former name was ("East-sea", due to German influence). In addition, Finnish, a Finnic language, uses the term "East Sea", possibly a calque from a Germanic language. As the Baltic is not particularly eastward in relation to Finland, the use of this term may be a leftover from the period of Swedish rule.
In another Finnic language, Estonian, it is called the "West Sea" (), with the correct geography relative to Estonia (the sea being to its west). In South Estonian, it has the meaning of both "West Sea" and "Evening Sea" (). In the endangered Livonian language of Latvia, it (and sometimes the Irbe Strait as well) is called the "Large Sea" ( or ).
History
Classical world
At the time of the Roman Empire, the Baltic Sea was known as the or Mare Sarmaticum. Tacitus in his AD 98 Agricola and Germania described the Mare Suebicum, named for the Suebi tribe, during the spring months, as a brackish sea where the ice broke apart and chunks floated about. The Suebi eventually migrated southwest to temporarily reside in the Rhineland area of modern Germany, where their name survives in the historic region known as Swabia. Jordanes called it the Germanic Sea in his work, the Getica.
Middle Ages
In the early Middle Ages, Norse (Scandinavian) merchants built a trade empire all around the Baltic. Later, the Norse fought for control of the Baltic against Wendish tribes dwelling on the southern shore. The Norse also used the rivers of Russia for trade routes, finding their way eventually to the Black Sea and southern Russia. This Norse-dominated period is referred to as the Viking Age.
Since the Viking Age, the Scandinavians have referred to the Baltic Sea as Austmarr ("Eastern Sea"). "Eastern Sea" appears in the Heimskringla, and Eystra salt appears in Sörla þáttr. Saxo Grammaticus recorded in Gesta Danorum an older name, Gandvik, -vik being Old Norse for "bay", which implies that the Vikings correctly regarded it as an inlet of the sea. Another form of the name, "Grandvik", attested in at least one English translation of Gesta Danorum, is likely to be a misspelling.
In addition to fish the sea also provides amber, especially from its southern shores within today's borders of Poland, Russia and Lithuania. First mentions of amber deposits on the South Coast of the Baltic Sea date back to the 12th century."The History of Russian Amber, Part 1: The Beginning" , Leta.st The bordering countries have also traditionally exported lumber, wood tar, flax, hemp and furs by ship across the Baltic. Sweden had from early medieval times exported iron and silver mined there, while Poland had and still has extensive salt mines. Thus, the Baltic Sea has long been crossed by much merchant shipping.
The lands on the Baltic's eastern shore were among the last in Europe to be converted to Christianity. This finally happened during the Northern Crusades: Finland in the twelfth century by Swedes, and what are now Estonia and Latvia in the early thirteenth century by Danes and Germans (Livonian Brothers of the Sword). The Teutonic Order gained control over parts of the southern and eastern shore of the Baltic Sea, where they set up their monastic state. Lithuania was the last European state to convert to Christianity.
An arena of conflict
In the period between the 8th and 14th centuries, there was much piracy in the Baltic from the coasts of Pomerania and Prussia, and the Victual Brothers held Gotland.
Starting in the 11th century, the southern and eastern shores of the Baltic were settled by migrants mainly from Germany, a movement called the Ostsiedlung ("east settling"). Other settlers were from the Netherlands, Denmark, and Scotland. The Polabian Slavs were gradually assimilated by the Germans.Wend – West Wend . Britannica. Retrieved on 23 June 2011. Denmark gradually gained control over most of the Baltic coast, until she lost much of her possessions after being defeated in the 1227 Battle of Bornhöved.
In the 13th to 16th centuries, the strongest economic force in Northern Europe was the Hanseatic League, a federation of merchant cities around the Baltic Sea and the North Sea. In the sixteenth and early seventeenth centuries, Poland, Denmark, and Sweden fought wars for Dominium maris baltici ("Lordship over the Baltic Sea"). Eventually, it was Sweden that virtually encompassed the Baltic Sea. In Sweden, the sea was then referred to as Mare Nostrum Balticum ("Our Baltic Sea"). The goal of Swedish warfare during the 17th century was to make the Baltic Sea an all-Swedish sea (Ett Svenskt innanhav), something that was accomplished except the part between Riga in Latvia and Stettin in Pomerania. However, the Dutch dominated the Baltic trade in the seventeenth century.
In the eighteenth century, Russia and Prussia became the leading powers over the sea. Sweden's defeat in the Great Northern War brought Russia to the eastern coast. Russia became and remained a dominating power in the Baltic. Russia's Peter the Great saw the strategic importance of the Baltic and decided to found his new capital, Saint Petersburg, at the mouth of the Neva river at the east end of the Gulf of Finland. There was much trading not just within the Baltic region but also with the North Sea region, especially eastern England and the Netherlands: their fleets needed the Baltic timber, tar, flax, and hemp.
During the Crimean War, a joint British and French fleet attacked the Russian fortresses in the Baltic; the case is also known as the Åland War. They bombarded Sveaborg, which guards Helsinki; and Kronstadt, which guards Saint Petersburg; and they destroyed Bomarsund in Åland. After the unification of Germany in 1871, the whole southern coast became German. World War I was partly fought in the Baltic Sea. After 1920 Poland was granted access to the Baltic Sea at the expense of Germany by the Polish Corridor and enlarged the port of Gdynia in rivalry with the port of the Free City of Danzig.
After the Nazis' rise to power, Germany reclaimed the Memelland and after the outbreak of the Eastern Front (World War II) occupied the Baltic states. In 1945, the Baltic Sea became a mass grave for retreating soldiers and refugees on torpedoed troop transports. The sinking of the Wilhelm Gustloff remains the worst maritime disaster in history, killing (very roughly) 9,000 people. In 2005, a Russian group of scientists found over five thousand airplane wrecks, sunken warships, and other material, mainly from World War II, on the bottom of the sea.
Since World War II
Ammunition dumping
Since the end of World War II, various nations, including the Soviet Union, the United Kingdom and the United States, have disposed of chemical weapons in the Baltic Sea, raising concerns of environmental contamination.Chemical Weapon Time Bomb Ticks in the Baltic Sea Deutsche Welle, 1 February 2008. Today, fishermen occasionally find some of these materials: the most recent available report from the Helsinki Commission notes that four small-scale catches of chemical munitions representing approximately of material were reported in 2005. This is a reduction from the 25 incidents representing of material in 2003.Activities 2006: Overview Baltic Sea Environment Proceedings No. 112. Helsinki Commission. To date, the U.S. Government has refused to disclose the exact coordinates of the wreck sites. Deteriorating bottles leak mustard gas and other substances, thus slowly poisoning a substantial part of the Baltic Sea.
In addition to chemicals, tons of German ammunition were dumped into the Baltic after the war at the behest of the Allies, who wanted to ensure that Germany could not start another war. By 2025, these were leaking contaminants into the water and the German government was piloting solutions.
Territorial changes
After 1945, the German population was expelled from all areas east of the Oder-Neisse line, making room for new Polish and Russian settlement. Poland gained most of the southern shore. The Soviet Union gained another access to the Baltic with the Kaliningrad Oblast, that had been part of German-settled East Prussia. The Baltic states on the eastern shore were annexed by the Soviet Union. The Baltic then separated opposing military blocs: NATO and the Warsaw Pact. Neutral Sweden developed incident weapons to defend its territorial waters after the Swedish submarine incidents. This border status restricted trade and travel. It ended only after the collapse of the Communist regimes in Central and Eastern Europe in the late 1980s.
Finland and Sweden joined NATO in 2023 and 2024, respectively, making the Baltic Sea almost entirely surrounded by the alliance's members, leading some commentators to label the sea a "NATO lake". However, the legal status of the sea has not changed and it is still open to all nations. Such an arrangement has also existed for the European Union (EU) since May 2004 following the accession of the Baltic states and Poland. The remaining non-NATO and non-EU shore areas are Russian: the Saint Petersburg area and the Kaliningrad Oblast exclave.
Infrastructure
The Baltic Sea today is of significant economic and security importance due to its dense network of submarine cables, energy pipelines, ports and offshore energy platforms. In recent years, there have been a number of incidents of sabotage in the Baltic Sea, resulting in damage to critical infrastructures.Storgard, J. et al. (2025) Scenarios for the development of maritime safety and security in the Baltic Sea region. Turku: Centre for Maritime Studies, University of Turku. Available at: https://mc.nato.int/media-centre/news/2025/nato-baltic-sentry-steps-up-patrols-in-the-baltic-sea-to-safeguard-critical-undersea-infrastructure.aspx. The most notable incidents include the Nord Stream pipelines sabotage in 2022, where a series of underwater explosions destroyed both Nord Stream 1 and 2. In 2023, there was another incident involving the Balticconnector gas pipeline and a nearby data cable, which were damaged by the Hong Kong-flagged container ship NewNew Polar Bear.
Other significant incidents include the recent damage to several undersea communication cables. The most recent incident of relevance was the rupture of the Estlink 2 cable in late 2024. It is suspected that the oil tanker Eagle S, believed to be part of a Russian shadow fleet, is responsible.
These events prompted a series of responses from both NATO and the EU. NATO Baltic Sea states have increased their naval presence in the Baltic Sea, and the NATO operation Baltic Sentry was established. Simultaneously, the EU has implemented a series of measures designed to enhance the protection of critical maritime infrastructure. The EU has also underscored its commitment to strengthening cooperation with NATO.
Storms and storm floods
Winter storms begin arriving in the region during October. These have caused numerous shipwrecks, and contributed to the extreme difficulties of rescuing passengers of the ferry MS Estonia en route from Tallinn, Estonia, to Stockholm, Sweden, in September 1994, which claimed the lives of 852 people. Older, wood-based shipwrecks such as the Vasa tend to remain well-preserved, as the Baltic's cold and brackish water does not suit the shipworm.
Storm surge floods are generally taken to occur when the water level is more than one metre above normal. In Warnemünde about 110 floods occurred from 1950 to 2000, an average of just over two per year.
Historic flood events were the All Saints' Flood of 1304 and other floods in the years 1320, 1449, 1625, 1694, 1784 and 1825. Little is known of their extent. From 1872, there exist regular and reliable records of water levels in the Baltic Sea. The highest was the flood of 1872 when the water was an average of above sea level at Warnemünde and a maximum of above sea level in Warnemünde. In the last very heavy floods the average water levels reached above sea level in 1904, in 1913, in January 1954, on 2–4 November 1995 and on 21 February 2002.
Geography
Geophysical data
An arm of the North Atlantic Ocean, the Baltic Sea is enclosed by Sweden and Denmark to the west, Finland to the northeast, and the Baltic countries to the southeast.
It is about long, an average of wide, and an average of deep. The maximum depth is which is on the Swedish side of the center. The surface area is about and the volume is about . The periphery amounts to about of coastline. at envir.ee. (archived) (21 April 2006). Retrieved on 23 June 2011.
The Baltic Sea is one of the largest brackish inland seas by area, and occupies a basin (a Zungenbecken) formed by glacial erosion during the last few ice ages.
Physical characteristics of the Baltic Sea, its main sub-regions, and the transition zone to the Skagerrak/North Sea area are tabulated by sub-area (Baltic proper, Gulf of Bothnia, Gulf of Finland, Gulf of Riga, Belt Sea/Kattegat, and the total Baltic Sea), giving for each the area (km2, sq mi), volume (km3, cu mi), maximum depth (m, ft) and average depth (m, ft).
Extent
The International Hydrographic Organization defines the limits of the Baltic Sea as follows:
Bordered by the coasts of Germany, Denmark, Poland, Sweden, Finland, Russia, Estonia, Latvia, and Lithuania, it extends north-eastward of the following limits:
In the Little Belt. A line joining Falshöft () and Vejsnæs Nakke (Ærø: ).
In the Great Belt. A line joining Gulstav (South extreme of Langeland Island) and Kappel Kirke () on Island of Lolland.
In the Guldborg Sound. A line joining Flinthorne-Rev and Skjelby ().
In the Sound. A line joining Stevns Lighthouse () and Falsterbo Point ().
Subdivisions
The northern part of the Baltic Sea is known as the Gulf of Bothnia, of which the northernmost part is the Bay of Bothnia or Bothnian Bay. The more rounded southern basin of the gulf is called Bothnian Sea and immediately to the south of it lies the Sea of Åland. The Gulf of Finland connects the Baltic Sea with Saint Petersburg. The Gulf of Riga lies between the Latvian capital city of Riga and the Estonian island of Saaremaa.
The Northern Baltic Sea lies between the Stockholm area, southwestern Finland, and Estonia. The Western and Eastern Gotland basins form the major parts of the Central Baltic Sea or Baltic proper. The Bornholm Basin is the area east of Bornholm, and the shallower Arkona Basin extends from Bornholm to the Danish isles of Falster and Zealand.
In the south, the Bay of Gdańsk lies east of the Hel Peninsula on the Polish coast and west of the Sambia Peninsula in Kaliningrad Oblast. The Bay of Pomerania lies north of the islands of Usedom/Uznam and Wolin, east of Rügen. Between Falster and the German coast lie the Bay of Mecklenburg and Bay of Lübeck. The westernmost part of the Baltic Sea is the Bay of Kiel. The three Danish straits, the Great Belt, the Little Belt and The Sound (Öresund/Øresund), connect the Baltic Sea with the Kattegat and Skagerrak strait in the North Sea.
Temperature and ice
The water temperature of the Baltic Sea varies significantly depending on exact location, season and depth. At the Bornholm Basin, which is located directly east of the island of the same name, the surface temperature typically falls to during the peak of the winter and rises to during the peak of the summer, with an annual average of around . A similar pattern can be seen in the Gotland Basin, which is located between the island of Gotland and Latvia. In the deep of these basins the temperature variations are smaller. At the bottom of the Bornholm Basin, deeper than , the temperature typically is , and at the bottom of the Gotland Basin, at depths greater than , the temperature typically is . Generally, offshore locations, lower latitudes and islands maintain maritime climates, but adjacent to the water continental climates are common, especially on the Gulf of Finland. In the northern tributaries the climates transition from moderate continental to subarctic on the northernmost coastlines.
On the long-term average, the Baltic Sea is ice-covered at the annual maximum for about 45% of its surface area. The ice-covered area during such a typical winter includes the Gulf of Bothnia, the Gulf of Finland, the Gulf of Riga, the archipelago west of Estonia, the Stockholm archipelago, and the Archipelago Sea southwest of Finland. The remainder of the Baltic does not freeze during a normal winter, except sheltered bays and shallow lagoons such as the Curonian Lagoon. The ice reaches its maximum extent in February or March; typical ice thickness in the northernmost areas in the Bothnian Bay, the northern basin of the Gulf of Bothnia, is about for landfast sea ice. The thickness decreases farther south.
Freezing begins in the northern extremities of the Gulf of Bothnia typically in the middle of November, reaching the open waters of the Bothnian Bay in early January. The Bothnian Sea, the basin south of Kvarken, freezes on average in late February. The Gulf of Finland and the Gulf of Riga freeze typically in late January. In 2011, the Gulf of Finland was completely frozen on 15 February.Helsingin Sanomat, 16 February 2011, p. A8.
The ice extent depends on whether the winter is mild, moderate, or severe. In severe winters ice can form around southern Sweden and even in the Danish straits. According to the 18th-century natural historian William Derham, during the severe winters of 1703 and 1708, the ice cover reached as far as the Danish straits.Derham, William Physico-Theology: Or, A Demonstration of the Being and Attributes of God from His Works of Creation (London, 1713). This description meant that the whole of the Baltic Sea was covered with ice. Frequently, parts of the Gulf of Bothnia and the Gulf of Finland are frozen, in addition to coastal fringes in more southerly locations such as the Gulf of Riga.
Since 1720, the Baltic Sea has frozen over entirely 20 times, most recently in early 1987, which was the most severe winter in Scandinavia since 1720. The ice then covered . During the winter of 2010–11, which was quite severe compared to those of the last decades, the maximum ice cover was , which was reached on 25 February 2011. The ice then extended from the north down to the northern tip of Gotland, with small ice-free areas on either side, and the east coast of the Baltic Sea was covered by an ice sheet about wide all the way to Gdańsk. This was brought about by a stagnant high-pressure area that lingered over central and northern Scandinavia from around 10 to 24 February. After this, strong southern winds pushed the ice further into the north, and much of the waters north of Gotland were again free of ice, which had then packed against the shores of southern Finland.Helsingin Sanomat, 10 February 2011, p. A4; 25 February 2011, p. A5; 11 June 2011, p. A12. The effects of the aforementioned high-pressure area did not reach the southern parts of the Baltic Sea, and thus the entire sea did not freeze over. However, floating ice was additionally observed near Świnoujście harbor in January 2010.
In recent years before 2011, the Bothnian Bay and the Bothnian Sea were frozen with solid ice near the Baltic coast and dense floating ice far from it. In 2008, almost no ice formed except for a short period in March.Sea Ice Survey Space Science and Engineering Center, University of Wisconsin.
During winter, fast ice, which is attached to the shoreline, develops first, rendering ports unusable without the services of icebreakers. Level ice, ice sludge, pancake ice, and rafter ice form in the more open regions. The gleaming expanse of ice is similar to the Arctic, with wind-driven pack ice and ridges up to . Offshore of the landfast ice, the ice remains very dynamic all year, and it is relatively easily moved around by winds and therefore forms pack ice, made up of large piles and ridges pushed against the landfast ice and shores.
In spring, the Gulf of Finland and the Gulf of Bothnia normally thaw in late April, with some ice ridges persisting until May in the eastern extremities of the Gulf of Finland. In the northernmost reaches of the Bothnian Bay, ice usually stays until late May; by early June it is practically always gone. However, in the famine year of 1867 remnants of ice were observed as late as 17 July near Uddskär. Even as far south as Øresund, remnants of ice have been observed in May on several occasions; near Taarbaek on 15 May 1942 and near Copenhagen on 11 May 1771. Drift ice was also observed on 11 May 1799.
The ice cover is the main habitat for two large mammals, the grey seal (Halichoerus grypus) and the Baltic ringed seal (Pusa hispida botnica), both of which feed underneath the ice and breed on its surface. Of these two seals, only the Baltic ringed seal suffers when there is not adequate ice in the Baltic Sea, as it feeds its young only while on ice. The grey seal is adapted to reproducing also with no ice in the sea. The sea ice also harbors several species of algae that live in the bottom and inside unfrozen brine pockets in the ice.
Due to the often fluctuating winter temperatures between above and below freezing, the saltwater ice of the Baltic Sea can be treacherous and hazardous to walk on, in particular in comparison to the more stable fresh water-ice sheets in the interior lakes.
Hydrography
The Baltic Sea flows out through the Danish straits; however, the flow is complex. A surface layer of brackish water discharges per year into the North Sea. Due to the difference in salinity (by the principle of salinity permeation), a sub-surface layer of more saline water moving in the opposite direction brings in per year. It mixes very slowly with the upper waters, resulting in a salinity gradient from top to bottom, with most of the saltwater remaining below deep. The general circulation is anti-clockwise: northwards along its eastern boundary, and south along the western one.Alhonen, p. 88
The difference between the outflow and the inflow comes entirely from fresh water. More than 250 streams drain a basin of about , contributing a volume of per year to the Baltic. They include the major rivers of north Europe, such as the Oder, the Vistula, the Neman, the Daugava and the Neva. Additional fresh water comes from the difference of precipitation less evaporation, which is positive.
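The exchange described above can be summarized as a simple long-term volume budget: the brackish surface outflow must balance the saline deep inflow plus river runoff and net precipitation. The Python sketch below uses round placeholder figures chosen only to illustrate the bookkeeping; they are not the measured values omitted from the text above.

```python
def surface_outflow(deep_inflow, river_runoff, net_precipitation):
    """Long-term volume budget of a two-layer estuarine exchange (km^3/year):
    whatever enters as saline inflow, river water and net precipitation
    must leave again in the brackish surface layer."""
    return deep_inflow + river_runoff + net_precipitation

# Placeholder values in km^3 per year, for illustration only.
deep_inflow = 450.0        # saline sub-surface inflow from the Kattegat
river_runoff = 450.0       # combined discharge of the >250 rivers
net_precipitation = 50.0   # precipitation minus evaporation over the sea

print(surface_outflow(deep_inflow, river_runoff, net_precipitation), "km^3/year")
# With these numbers the brackish surface outflow is about twice the
# saline inflow, which is why the mixture leaving through the Danish
# straits is much fresher than the water coming in.
```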
An important source of salty water is infrequent inflows (also known as major Baltic inflows or MBIs) of North Sea water into the Baltic. Such inflows, important to the Baltic ecosystem because of the oxygen they transport into the Baltic deeps, happen on average once per year, but large pulses that can replace the anoxic deep water in the Gotland Deep occur about once in ten years. Previously, it was believed that the frequency of MBIs had declined since 1980, but recent studies have challenged this view, finding no clear change in the frequency or intensity of saline inflows. Instead, a decadal variability in the intensity of MBIs is observed, with a main period of approximately 30 years.
The water level is generally far more dependent on the regional wind situation than on tidal effects. However, tidal currents occur in narrow passages in the western parts of the Baltic Sea. Tides can reach in the Gulf of Finland.
The significant wave height is generally much lower than that of the North Sea. Quite violent, sudden storms sweep the surface ten or more times a year, due to large transient temperature differences and a long reach of the wind. Seasonal winds also cause small changes in sea level, of the order of . According to the media, during a storm in January 2017, an extreme wave above was measured, and a significant wave height of around was measured by the FMI. A numerical study has shown the presence of events with significant wave heights. Those extreme wave events can play an important role in the coastal zone on erosion and sea dynamics.
Salinity
The Baltic Sea is the world's largest brackish sea. Only two other brackish waters are larger according to some measurements: The Black Sea is larger in both surface area and water volume, but most of it is located outside the continental shelf (only a small fraction is inland). The Caspian Sea is larger in water volume, but—despite its name—it is a lake rather than a sea.
The Baltic Sea's salinity is much lower than that of ocean water (which averages 3.5%), as a result of abundant freshwater runoff from the surrounding land (rivers, streams and the like), combined with the shallowness of the sea itself; runoff contributes roughly one-fortieth of its total volume per year, as the volume of the basin is about and yearly runoff is about .
The open surface waters of the Baltic Sea "proper" generally have a salinity of 0.3 to 0.9%, which is border-line freshwater. The flow of freshwater into the sea from approximately two hundred rivers and the introduction of salt from the southwest builds up a gradient of salinity in the Baltic Sea. The highest surface salinities, generally 0.7–0.9%, are in the southwesternmost part of the Baltic, in the Arkona and Bornholm basins (the former located roughly between southeast Zealand and Bornholm, and the latter directly east of Bornholm). Salinity gradually falls further east and north, reaching the lowest values in the Bothnian Bay at around 0.3%. Drinking the surface water of the Baltic as a means of survival would actually hydrate the body instead of dehydrating it, as is the case with ocean water.A healthy serum concentration of sodium is around 0.8–0.85%, and healthy kidneys can concentrate salt in urine to at least 1.4%.
As saltwater is denser than freshwater, the bottom of the Baltic Sea is saltier than the surface. This creates a vertical stratification of the water column, a halocline, that represents a barrier to the exchange of oxygen and nutrients, and fosters completely separate maritime environments. , Jan Thulin and Andris Andrushaitis, Religion, Science and the Environment Symposium V on the Baltic Sea (2003). The difference between the bottom and surface salinities varies depending on location. Overall it follows the same southwest to east and north pattern as the surface. At the bottom of the Arkona Basin (equaling depths greater than ) and Bornholm Basin (depths greater than ) it is typically 1.4–1.8%. Further east and north the salinity at the bottom is consistently lower, being the lowest in Bothnian Bay (depths greater than ) where it is slightly below 0.4%, or only marginally higher than the surface in the same region.
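The strength of the halocline can be illustrated with a crude linearized equation of state in which density rises by roughly 0.8 kg/m3 for every extra gram of salt per kilogram of water; the coefficient and the salinity values in the Python sketch below are approximate, illustrative choices (within the ranges quoted above) rather than figures taken from any measurement.

```python
def density_from_salinity(salinity_g_per_kg, rho_fresh=1000.0, beta=0.8):
    """Very rough linearized equation of state: density in kg/m^3 as a
    function of salinity in g/kg, ignoring temperature and pressure."""
    return rho_fresh + beta * salinity_g_per_kg

surface = density_from_salinity(7.0)   # ~0.7% surface salinity, Baltic proper
bottom = density_from_salinity(15.0)   # saltier water below the halocline

print("density difference across the halocline: %.1f kg/m^3" % (bottom - surface))
# Even a few kg/m^3 is enough to keep the layers from mixing freely,
# which is why oxygen supplied at the surface struggles to reach the deeps.
```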
In contrast, the salinity of the Danish straits, which connect the Baltic Sea and Kattegat, tends to be significantly higher, but with major variations from year to year. For example, the surface and bottom salinity in the Great Belt is typically around 2.0% and 2.8% respectively, which is only somewhat below that of the Kattegat. The water surplus caused by the continuous inflow of rivers and streams to the Baltic Sea means that there generally is a flow of brackish water out through the Danish straits to the Kattegat (and eventually the Atlantic). Significant flows in the opposite direction, salt water from the Kattegat through the Danish straits to the Baltic Sea, are less regular and are known as major Baltic inflows (MBIs).
Major tributaries
The ranking of mean discharges differs from the ranking of hydrological lengths (from the most distant source to the sea) and the ranking of nominal lengths. Göta älv, a tributary of the Kattegat, is not listed, as, due to the northward low-salinity surface flow in the sea, its water hardly reaches the Baltic proper:
For each tributary, the table gives the mean discharge, length (km/mi), basin area, the states sharing the basin, and the longest watercourse:
Neva (nominal and hydrological): Russia, Finland (Ladoga-affluent Vuoksi); longest watercourse Suna → Lake Onega → Svir → Lake Ladoga → Neva
Vistula: Poland, with tributaries in Belarus, Ukraine and Slovakia; longest watercourse Bug → Narew → Vistula
Daugava: Russia (source), Belarus, Latvia
Neman: Belarus (source), Lithuania, Russia
Kemijoki (main river and river system): Finland, Norway (source of the Ounasjoki); longer tributary Kitinen
Oder: Czech Republic (source), Poland, Germany; longest watercourse Warta → Oder
Lule älv: Sweden
Narva (nominal and hydrological): Russia (source of the Velikaya), Estonia; longest watercourse Velikaya → Lake Peipus → Narva
Torne älv (nominal and hydrological): Norway (source), Sweden, Finland; longest watercourse Válfojohka → Kamajåkka → Abiskojaure → Abiskojokk → Torneträsk → Torne älv
Islands and archipelagoes
Åland (Finland, autonomous)
Archipelago Sea (Finland)
Pargas
Nagu
Korpo
Houtskär
Kustavi
Kimito
Blekinge archipelago (Sweden)
Bornholm, including Christiansø (Denmark)
Falster (Denmark)
Gotland (Sweden)
Hailuoto (Finland)
Kotlin (Russia)
Lolland (Denmark)
Kvarken archipelago, including Valsörarna (Finland)
Møn (Denmark)
Öland (Sweden)
Rügen (Germany)
Stockholm archipelago (Sweden)
Värmdön (Sweden)
Usedom or Uznam (split between Germany and Poland)
West Estonian archipelago (Estonia):
Hiiumaa
Muhu
Saaremaa
Vormsi
Wolin (Poland)
Zealand (Denmark)
Coastal countries
Countries that border the sea: Denmark, Estonia, Finland, Germany, Latvia, Lithuania, Poland, Russia, Sweden.
Countries with land in the outer drainage basin: Belarus, Czech Republic, Norway, Slovakia, Ukraine.
The Baltic Sea drainage basin is roughly four times the surface area of the sea itself. About 48% of the region is forested, with Sweden and Finland containing the majority of the forest, especially around the Gulfs of Bothnia and Finland.
About 20% of the land is used for agriculture and pasture, mainly in Poland and around the edge of the Baltic Proper, in Germany, Denmark, and Sweden. About 17% of the basin is unused open land with another 8% of wetlands. Most of the latter are in the Gulfs of Bothnia and Finland.
The rest of the land is heavily populated. About 85 million people live in the Baltic drainage basin, 15 million within of the coast and 29 million within of the coast. Around 22 million live in population centers of over 250,000. 90% of these are concentrated in the band around the coast. Of the nations containing all or part of the basin, Poland includes 45% of the 85 million, Russia 12%, Sweden 10% and the others less than 6% each.
Cities
The biggest coastal cities (by population):
Saint Petersburg (Russia) 5,392,992 (metropolitan area 6,000,000)
Stockholm (Sweden) 962,154 (metropolitan area 2,315,612)
Helsinki (Finland) 665,558 (metropolitan area 1,559,558)
Riga (Latvia) 614,618 (metropolitan area 1,070,000)
Gdańsk (Poland) 462,700 (metropolitan area 1,041,000)
Tallinn (Estonia) 458,398 (metropolitan area 542,983)
Kaliningrad (Russia) 431,500
Szczecin (Poland) 413,600 (metropolitan area 778,000)
Espoo (Finland) 306,792 (part of Helsinki metropolitan area)
Gdynia (Poland) 255,600 (metropolitan area 1,041,000)
Kiel (Germany) 247,000Statistische Kurzinformation (in German). Landeshauptstadt Kiel. Amt für Kommunikation, Standortmarketing und Wirtschaftsfragen Abteilung Statistik. Retrieved on 11 October 2012.
Lübeck (Germany) 216,100
Rostock (Germany) 212,700
Klaipėda (Lithuania) 194,400
Oulu (Finland) 191,050
Turku (Finland) 180,350
Other important ports:
Estonia:
Pärnu 44,568
Maardu 16,570
Sillamäe 16,567
Finland:
Pori 83,272
Kotka 54,887
Kokkola 46,809
Port of Naantali 18,789
Mariehamn 11,372
Hanko 9,270
Germany:
Flensburg 94,000
Stralsund 58,000
Greifswald 55,000
Wismar 44,000
Eckernförde 22,000
Neustadt in Holstein 16,000
Wolgast 12,000
Sassnitz 10,000
Latvia:
Liepāja 85,000
Ventspils 44,000
Lithuania:
Palanga 17,000
Poland:
Kołobrzeg 44,800
Świnoujście 41,500
Police 34,284
Władysławowo 15,000
Darłowo 14,000
Russia:
Vyborg 79,962
Baltiysk 34,000
Sweden:
Norrköping 144,932
Gävle 103,619
Trelleborg 30,818
Karlshamn 19,000
Oxelösund 11,000
Geology
The Baltic Sea somewhat resembles a riverbed, with two tributaries, the Gulf of Finland and Gulf of Bothnia. Geological surveys show that before the Pleistocene, instead of the Baltic Sea, there was a wide plain around a great river that paleontologists call the Eridanos. Several Pleistocene glacial episodes scooped out the river bed into the sea basin. By the time of the last, or Eemian Stage (MIS 5e), the Eemian Sea was in place. Sometimes the Baltic Sea is considered a very large estuary, with freshwater outflow from numerous rivers.
From that time the waters underwent a geologic history summarized under the names listed below. Many of the stages are named after marine animals (e.g. the Littorina mollusk) that are clear markers of changing water temperatures and salinity.
The factors that determined the sea's characteristics were the submergence or emergence of the region due to the weight of ice and subsequent isostatic readjustment, and the connecting channels it found to the North Sea-Atlantic, either through the straits of Denmark or at what are now the large lakes of Sweden, and the White Sea-Arctic Sea. There are a number of named and dated stages in the evolution of the Baltic Sea:
Eemian Sea, about 130,000–115,000 years BP
Baltic Ice Lake, 16,000–11,700 years BP
Yoldia Sea, 11,700–10,700 years cal. BP
Ancylus Lake, 10,700–9,800 years cal. BP
Mastogloia Sea, 9,800–8,500 years cal. BP
Littorina Sea, 8,500–4,000 years cal. BP
Post-Littorina Sea, 4,000–present
The land is still emerging isostatically from its depressed state, which was caused by the weight of ice during the last glaciation. The phenomenon is known as post-glacial rebound. Consequently, the surface area and the depth of the sea are diminishing. The uplift is about eight millimeters per year on the Finnish coast of the northernmost Gulf of Bothnia. In the area, the former seabed is only gently sloping, leading to large areas of land being reclaimed in what are, geologically speaking, relatively short periods (decades and centuries).
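A back-of-the-envelope calculation shows why a gently sloping former seabed turns a modest uplift rate into noticeable land gain: dividing the vertical uplift accumulated over a period by the bottom slope gives the horizontal distance the waterline retreats. In the Python sketch below, only the eight millimetres per year uplift rate comes from the text above; the slope is a hypothetical value chosen for illustration.

```python
def shoreline_advance(uplift_mm_per_year, years, seabed_slope):
    """Horizontal distance (in metres) by which the waterline retreats,
    given a vertical uplift rate and a dimensionless seabed slope
    (metres of depth gained per metre of horizontal distance)."""
    uplift_m = uplift_mm_per_year / 1000.0 * years
    return uplift_m / seabed_slope

# 8 mm/year uplift (northern Gulf of Bothnia) over one century,
# with an assumed very gentle slope of 1 m of depth per 1000 m offshore.
print(shoreline_advance(8.0, 100, seabed_slope=0.001), "metres of new land")
# => 800 metres of emerged seabed per century under these assumptions.
```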
Biology
Fauna and flora
The fauna of the Baltic Sea is a mixture of marine and freshwater species. Among marine fishes are Atlantic cod, Atlantic herring, European hake, European plaice, European flounder, shorthorn sculpin and turbot, and examples of freshwater species include European perch, northern pike, whitefish and common roach. Freshwater species may occur at outflows of rivers or streams in all coastal sections of the Baltic Sea. Otherwise, marine species dominate in most sections of the Baltic, at least as far north as Gävle, where less than one-tenth are freshwater species. Further north the pattern is inverted. In the Bothnian Bay, roughly two-thirds of the species are freshwater. In the far north of this bay, saltwater species are almost entirely absent. For example, the common starfish and shore crab, two species that are very widespread along European coasts, are both unable to cope with the significantly lower salinity. Their range limit is west of Bornholm, meaning that they are absent from the vast majority of the Baltic Sea. Some marine species, like the Atlantic cod and European flounder, can survive at relatively low salinities but need higher salinities to breed, which therefore occurs in deeper parts of the Baltic Sea. The common blue mussel is the dominating animal species, and makes up more than 90% of the total animal biomass in the sea.
There is a decrease in species richness from the Danish belts to the Gulf of Bothnia. The decreasing salinity along this path causes restrictions in both physiology and habitats. At more than 600 species of invertebrates, fish, aquatic mammals, aquatic birds and macrophytes, the Arkona Basin (roughly between southeast Zealand and Bornholm) is far richer than other more eastern and northern basins in the Baltic Sea, which all have less than 400 species from these groups, with the exception of the Gulf of Finland with more than 750 species. However, even the most diverse sections of the Baltic Sea have far fewer species than the almost-full saltwater Kattegat, which is home to more than 1600 species from these groups. The lack of tides has affected the marine species as compared with the Atlantic.
Since the Baltic Sea is so young there are only two or three known endemic species: the brown alga Fucus radicans and the flounder Platichthys solemdali. Both appear to have evolved in the Baltic basin and were only recognized as species in 2005 and 2018 respectively, having formerly been confused with more widespread relatives. The tiny Copenhagen cockle (Parvicardium hauniense), a rare mussel, is sometimes considered endemic, but has now been recorded in the Mediterranean.Red List Benthic Invertebrate Expert Group (2013) . HELCOM. Accessed 27 July 2018. However, some consider non-Baltic records to be misidentifications of juvenile lagoon cockles (Cerastoderma glaucum). Several widespread marine species have distinctive subpopulations in the Baltic Sea adapted to the low salinity, such as the Baltic Sea forms of the Atlantic herring and lumpsucker, which are smaller than the widespread forms in the North Atlantic.
A peculiar feature of the fauna is that it contains a number of glacial relict species, isolated populations of arctic species which have remained in the Baltic Sea since the last glaciation, such as the large isopod Saduria entomon, the Baltic subspecies of ringed seal, and the fourhorn sculpin. Some of these relicts are derived from glacial lakes, such as Monoporeia affinis, which is a main element in the benthic fauna of the low-salinity Bothnian Bay.
Cetaceans in the Baltic Sea are monitored by the countries bordering the sea and data compiled by various intergovernmental bodies, such as ASCOBANS. A critically endangered population of harbor porpoise inhabits the Baltic proper, whereas the species is abundant in the outer Baltic (Western Baltic and Danish straits), and occasionally oceanic and out-of-range species such as minke whales,Minke whale (Balaenoptera acutorostrata) – MarLIN, The Marine Life Information Network bottlenose dolphins, beluga whales,About the beluga – Russian Geographical Society orcas, and beaked whales visit the waters. In recent years, fin whalesJansson N.. 2007. "Vi såg valen i viken" . Aftonbladet. Retrieved on 7 September 2017. and humpback whales have migrated into the Baltic Sea in very small but increasing numbers, including a mother and calf pair. The now-extinct Atlantic grey whale (remains found from Gräsö along the Bothnian Sea/southern Gulf of BothniaJones L.M..Swartz L.S.. Leatherwood S.. The Gray Whale: Eschrichtius Robustus . "Eastern Atlantic Specimens". pp. 41–44. Academic Press. Retrieved on 5 September 2017 and YstadGlobal Biodiversity Information Facility. Occurrence Detail 1322462463 . Retrieved on 21 September 2017) and the eastern population of the North Atlantic right whale, which is facing functional extinction, once migrated into the Baltic Sea.
Other notable megafauna include the basking shark.
Environmental status
Satellite images taken in July 2010 revealed a massive algal bloom covering in the Baltic Sea. The area of the bloom extended from Germany and Poland to Finland. Researchers of the phenomenon have found that algal blooms have occurred every summer for decades. Fertilizer runoff from surrounding agricultural land has exacerbated the problem and led to increased eutrophication.
Approximately of the Baltic's seafloor (a quarter of its total area) is a variable dead zone. The more saline (and therefore denser) water remains on the bottom, isolating it from surface waters and the atmosphere. This leads to decreased oxygen concentrations within the zone. It is mainly bacteria that grow in it, digesting organic material and releasing hydrogen sulfide. Because of this large anaerobic zone, the seafloor ecology differs from that of the neighboring Atlantic.
Plans to artificially oxygenate areas of the Baltic that have experienced eutrophication have been proposed by the University of Gothenburg and Inocean AB. The proposal intends to use wind-driven pumps to pump oxygen-rich surface water to a depth of around 130 m.
After World War II, Germany had to be disarmed, and large quantities of ammunition stockpiles were disposed directly into the Baltic Sea and the North Sea. Environmental experts and marine biologists warn that these ammunition dumps pose an environmental threat, with potentially life-threatening consequences to the health and safety of humans on the coastlines of these seas.
Future change
Climate change, and pollution from agriculture and forestry, impose such strong effects on the ecosystems of the Baltic Sea that there is a concern the sea will turn from a carbon sink to a source of carbon dioxide and methane. Modelling climate change and the impact of well characterised factors such as post-glacial rebound before the year 2050 is complicated by the unique properties of the Baltic Sea area compared to, say, the adjacent North Sea, and by controversy as to the relative contributions of socio-economic factors such as land use to any warming component. These properties include its current brackish water, the southern subbasin tendency to have a vertical stratification of the halocline, and the northern subbasin seasonal sea ice cover. High confidence future projections include: air temperature warming, more heavy precipitation episodes, less snow with less permafrost and glacial ice mass in northern catchment areas, milder winters, raised mean water temperature with more marine heatwaves, intensified seasonal thermoclines without change in the thermohaline circulation, and sea level rise. There are many more projections but these have lower confidence.All future projections have limits and make assumptions. The cause of the Younger Dryas which impacted on the Baltic area is unknown and such an event is not considered in most Baltic Sea future modelling.
Economy
Construction of the Great Belt Bridge in Denmark (completed 1997) and the Øresund Bridge-Tunnel (completed 1999), linking Denmark with Sweden, provided a highway and railroad connection between Sweden and the Danish mainland (via Zealand to the Jutland Peninsula). The undersea tunnel of the Øresund Bridge-Tunnel provides for navigation of large ships into and out of the Baltic Sea. The Baltic Sea is the main trade route for the export of Russian petroleum. Countries neighboring the Baltic Sea have expressed concerns about this since a major oil leak in a seagoing tanker would be especially disastrous for the Baltic given the slow exchange of water in the ecosystem. The tourism industry surrounding the Baltic Sea is naturally concerned about oil pollution.
Much shipbuilding is carried out in the shipyards around the Baltic Sea. The largest shipyards are at Gdańsk, Gdynia, and Szczecin, Poland; Kiel, Germany; Karlskrona and Malmö, Sweden; Rauma, Turku, and Helsinki, Finland; Riga, Ventspils, and Liepāja, Latvia; Klaipėda, Lithuania; and Saint Petersburg, Russia.
Construction of the Fehmarn Belt Fixed Link between Denmark and Germany is due to finish in 2029. It will be a three-bore tunnel carrying four motorway lanes and two rail tracks.
Through the development of offshore wind power the Baltic Sea is expected to become a major source of energy for countries in the region. According to the Marienborg Declaration, signed in 2022, all EU Baltic Sea states have announced their intentions to have 19.6 gigawatts of offshore wind in operation by 2030.
Ferries
There are several cargo and passenger ferries that operate on the Baltic Sea, such as
Birka Gotland (cruises from Stockholm to Gotland and Åland Islands)
Destination Gotland (Gotland-mainland Sweden)
Eckerö Line (Estonia-Finland)
Eckerö Linjen (Sweden-Åland Islands)
Finnlines (Finland-Germany, Finland-Sweden, Germany-Sweden, Poland-Sweden)
Polferries (Poland-Sweden, Poland-Denmark)
Scandlines (Denmark-Germany)
Stena Line (Denmark-Sweden, Germany-Sweden, Latvia-Sweden, Poland-Sweden)
Tallink and Tallink Silja (Estonia-Finland, Estonia-Sweden, Finland-Sweden)
TT-Line (Germany-Lithuania, Germany-Sweden, Lithuania-Sweden, Poland-Sweden)
Unity Line (Poland-Sweden)
Viking Line (Estonia-Finland, Finland-Sweden)
Wasaline (Finland-Sweden)
Tourism
Piers
Ahlbeck (Usedom), Germany
Bansin, Germany
Binz, Germany
Heiligendamm, Germany
Kühlungsborn, Germany
Sellin, Germany
Liepāja, Latvia
Šventoji, Lithuania
Klaipėda, Lithuania
Gdańsk, Poland
Gdynia, Poland
Kołobrzeg, Poland
Międzyzdroje, Poland
Sopot, Poland
Resort towns
Haapsalu, Estonia
Kuressaare, Estonia
Narva-Jõesuu, Estonia
Pärnu, Estonia
Hanko, Finland
Mariehamn, Finland
Binz, Germany
Heiligendamm, Germany
Heringsdorf, Germany
Travemünde, Germany
Sellin, Germany
Ueckermünde, Germany
Jūrmala, Latvia
Nida, Lithuania
Palanga, Lithuania
Šventoji, Lithuania
Juodkrantė, Lithuania
Pervalka, Lithuania
Karklė, Lithuania
Kamień Pomorski, Poland
Kołobrzeg, Poland
Sopot, Poland
Świnoujście, Poland
Ustka, Poland
Svetlogorsk, Russia
Critical Maritime Infrastructure (CMI)
Critical maritime infrastructure (CMI) includes pipelines, ports, undersea cables and energy installations. Following a series of incidents between 2022 and 2025, critical infrastructure in the Baltic Sea has drawn growing political attention. In September 2022, both Nord Stream 1 and Nord Stream 2 were damaged by explosives close to Bornholm in Denmark. In October 2023, the Balticconnector gas pipeline was damaged by the anchor of the Chinese container vessel Newnew Polar Bear. In November 2024, telecom cables were damaged in another case of suspected sabotage, this time involving a Chinese bulk carrier that had departed from a Russian port. In December 2024, a ship registered in the Cook Islands and thought to be part of a Russian shadow fleet was suspected of having damaged the Estlink 2 power cable and internet cables. These incidents have led to responses from NATO, the European Union and national governments. NATO has increased its air and naval presence and agreed to establish the Maritime Centre for the Security of Critical Undersea Infrastructure within NATO's Allied Maritime Command (MARCOM), among other cooperation efforts. The EU has updated its Maritime Security Strategy and launched an action plan and a coordination group for infrastructure protection, while national governments have strengthened surveillance, legal tools, and seabed defence capabilities. Aside from technical standards, political decisions influence what is deemed "critical" infrastructure: such infrastructure is considered critical because modern economies depend on it, and it therefore requires an extra layer of protection, whether through security policies or military protection.
Hybrid warfare
In the current geopolitical climate, CMI faces challenges posed by hybrid warfare. Hybrid threats in the Baltic Sea are often associated with Russian actions and operate below the official threshold of war, in a grey area between peace and open violence, which poses a political challenge. The sabotage of the Nord Stream pipelines demonstrated this problem through the exploitation of legal ambiguities, the complexity of attribution, and the disruption of alliance cohesion; the incident highlighted the vulnerability of critical infrastructure and the absence of coherent political responses. Responding to hybrid threats requires sustained and coordinated efforts between civilian and military actors. However, the maritime domain presents unique difficulties, including jurisdictional overlaps, fragmented responsibilities, and the challenge of adapting land-based security frameworks to the sea. Hybrid tactics, such as uncrewed aerial vehicle (drone) surveillance, covert sabotage, and information manipulation, aim not only to damage infrastructure but also to undermine public trust and create strategic instability in the region. Swistek, G. and Paul, M. (2023). Geopolitics in the Baltic Sea region: The "Zeitenwende" in the context of critical maritime infrastructure, escalation threats and the German willingness to lead, SWP Comment 9/2023. https://doi.org/10.18449/2023C09.
Geopolitical Dimensions of Critical Maritime Infrastructure
In the post-Cold War era, the Baltic Sea region was long regarded as an area of little geopolitical tension. With the Soviet presence in the south, American influence through the NATO members Denmark and Germany, and the neutral states Sweden and Finland, an equilibrium existed that is often referred to as the "Nordic balance". This balance persisted after the steady integration of the region into Western institutions.
In recent years, however, this geopolitical reality has increasingly been challenged by the neo-imperial ambitions of Russia, manifested in its aggression against Ukraine. Russia has also pursued a strategy of regional dominance in the Baltic Sea, designating the sea as a zone of strategic influence in its naval doctrine published in 2022. Yet such positioning has been significantly complicated by the accession of Sweden and Finland to NATO.
Against this geopolitical background, it becomes clear why many recent infrastructure projects in the Baltic Sea have been the subject of intense political debate. Projects such as the Balticconnector, which links the Finnish and Estonian gas markets and has been described by the European Commission as an expression of European solidarity; the Gas Interconnection Poland–Lithuania (GIPL), which connects the Polish and Lithuanian gas networks; and the development of multiple LNG terminals have all played a significant role in reducing European reliance on Russian energy supplies. These initiatives form part of broader efforts to enhance regional integration and bring the Baltic Sea states into closer alignment with the European Union. In contrast, the Nord Stream pipelines, particularly Nord Stream 2, became a source of political controversy. Critics argued that the project would increase European dependence on Russian gas, bypass transit countries such as Ukraine and Poland, and undermine EU energy solidarity by strengthening Russia's leverage over countries like Germany. Nord Stream 2 experienced prolonged delays and was ultimately suspended following the imposition of international sanctions against Russia after the invasion of Ukraine in 2022. In addition to political and economic controversies, the Nord Stream pipelines also became the subject of security-related concerns regarding their potential strategic implications in the Baltic Sea region. Prior to the 2022 sabotage of the pipelines, security experts and several Eastern European states had warned that such infrastructure could be exploited by Russia for intelligence gathering and military purposes in the Baltic Sea. These concerns gained renewed attention after the sabotage, which highlighted shortcomings in the legal and regulatory frameworks governing the protection of critical infrastructure. The difficulty of conclusively attributing the attack also drew attention to the limitations of existing mechanisms for responding to hybrid threats in the maritime domain.
Helsinki Convention
1974 Convention
For the first time ever, all the sources of pollution around an entire sea were made subject to a single convention, signed in 1974 by the then seven Baltic coastal states. The 1974 Convention entered into force on 3 May 1980.
1992 Convention
In the light of political changes and developments in international environmental and maritime law, a new convention, the Convention on the Protection of the Marine Environment of the Baltic Sea Area, 1992, was signed by all the states bordering the Baltic Sea and by the European Community. After ratification, the Convention entered into force on 17 January 2000. It covers the whole of the Baltic Sea area, including inland waters, the water of the sea itself and the seabed; measures are also taken in the whole catchment area of the Baltic Sea to reduce land-based pollution.
The governing body of the convention is the Helsinki Commission (HELCOM), also known as the Baltic Marine Environment Protection Commission (Helcom: Welcome. Helcom.fi. Retrieved 23 June 2011). The present contracting parties are Denmark, Estonia, the European Community, Finland, Germany, Latvia, Lithuania, Poland, Russia, and Sweden.
The ratification instruments were deposited by the European Community, Germany, Latvia and Sweden in 1994, by Estonia and Finland in 1995, by Denmark in 1996, by Lithuania in 1997, and by Poland and Russia in November 1999.
Coordination in the Baltic Sea region
European Union
The European Union (EU) is one core framework shaping regional security coordination in the Baltic Sea region. The EU has recognised this area as one of thirteen designated zones for territorial cooperation. Following the accession of the Baltic States in 2004, the Baltic Sea is now considered an EU internal sea. The following initiatives form the basis of the EU's engagement in Maritime Domain Awareness (MDA) and Maritime Situational Awareness (MSA) in the Baltic Sea:
2006: Maritime Surveillance Network (MARSUR), a project aiming at facilitating communication between maritime information systems in Europe, that is undertaken by the European Defence Agency (EDA).
2009: Sea Surveillance Co-Operation Baltic Sea (SUCBAS), a Maritime Situational Awareness (MSA) cooperation between Baltic Sea countries with the objective of sharing information effectively.
2009: EU Strategy for the Baltic Sea Region (EUSBSR), a macro-regional strategy that involves EU member states bordering the Baltic Sea and the EU Commission. The strategy is centred on four core pillars: the environment, prosperity, accessibility and maritime security.
2021–2027: Interreg Baltic Sea Region, an EU co-funded transnational cooperation network.
NATO
The North Atlantic Treaty Organisation (NATO) is the region's primary provider of collective defence. Following the accession of Finland in 2023 and Sweden in 2024, the majority of the states bordering the Baltic Sea are members of NATO, simplifying the organisational geography of the region.
The following NATO initiatives and bodies are particularly relevant for the Baltic Sea region:
2023: Critical Undersea Infrastructure Coordination Cell, a centre aiming at connecting military and civilian stakeholders.
2023: Maritime Centre for the Security of Critical Undersea Infrastructure (NMCSCUI), a centre aiming at protecting the allies' critical undersea infrastructure.
2023: EU-NATO Task Force on Resilience of Critical Infrastructure, a cooperation on increasing the resilience of critical infrastructure, supply chains and technology.
2025: Baltic Sentry, a NATO military operation aiming at increasing the military presence in the Baltic Sea to improve the safety of critical infrastructure.
Baltic Operations (BALTOPS), a multinational naval manoeuvre that is held annually in the Baltic Sea.
Nordic Defence Cooperation
The Nordic Defence Cooperation (NORDEFCO) is a military alliance comprising the Nordic countries of Denmark, Finland, Iceland, Norway and Sweden. It was established in 2009. The objectives of this cooperation structure include improving the national defense of each country, identifying shared strategic interests, and promoting the development of coordinated, effective responses. The strategy paper 'Vision 2025' outlines plans to enhance collaboration with the Baltic states and transatlantic allies.
Council of the Baltic Sea States
The Council of the Baltic Sea States (CBSS) is an intergovernmental political organisation that focuses on regional cooperation. It was established in 1992. CBSS comprises ten European states and the European Union. The organisation serves as a forum for political dialogue in the region and follows three main objectives: Regional Identity, Safe & Secure Region, and Sustainable & Prosperous Region. CBSS holds annual regional and international meetings.
See also
Baltic (disambiguation)
Baltic region
Baltic Sea Action Group (BSAG)
Council of the Baltic Sea States
List of cities and towns around the Baltic Sea
List of rivers of the Baltic Sea
Nord Stream 1
Nord Stream 2
Northern Europe
Ports of the Baltic Sea
Scandinavia
Black Sea
https://en.wikipedia.org/wiki/Black_Sea
The Black Sea is a marginal sea lying between Europe and Asia, east of the Balkans, south of the East European Plain, west of the Caucasus, and north of Anatolia. It is bounded by Bulgaria, Georgia, Romania, Russia, Turkey, and Ukraine. The Black Sea is supplied by major rivers, principally the Danube, Dnieper and Dniester. Consequently, while six countries have a coastline on the sea, its drainage basin includes parts of 24 countries in Europe. (Living Black Sea)
The Black Sea, not including the Sea of Azov, covers , has a maximum depth of , and a volume of .
Most of its coasts ascend rapidly.
These rises are the Pontic Mountains to the south, bar the southwest-facing peninsulas, the Caucasus Mountains to the east, and the Crimean Mountains to the mid-north.
In the west, the coast is generally small floodplains below foothills such as the Strandzha; Cape Emine, a dwindling of the east end of the Balkan Mountains; and the Dobruja Plateau considerably farther north. The longest east–west extent is about . Important cities along the coast include (clockwise from the Bosporus) the northern suburbs of Istanbul, Burgas, Varna, Constanța, Odesa, Yalta, Kerch, Yevpatoria, Sevastopol, Novorossiysk, Sochi, Poti, Batumi, Rize, Trabzon, Ordu, Simferopol, Samsun and Zonguldak.
The Black Sea has a positive water balance, with a net outflow of per year through the Bosporus and the Dardanelles into the Aegean Sea. While the net flow of water through the Bosporus and Dardanelles (known collectively as the Turkish Straits) is out of the Black Sea, water generally flows in both directions simultaneously: denser, more saline water from the Aegean flows into the Black Sea underneath the less dense, fresher water that flows out of the Black Sea. This creates a significant and permanent layer of deep water that does not drain or mix and is therefore anoxic. This anoxic layer is responsible for the preservation of the ancient shipwrecks that have been found in the Black Sea. The Black Sea ultimately drains into the Atlantic Ocean via the Turkish Straits and the Aegean Sea into the Mediterranean Sea, and from there into the Atlantic proper through the Strait of Gibraltar. The Bosporus strait connects it to the small Sea of Marmara, which in turn is connected to the Aegean Sea via the strait of the Dardanelles. To the north, the Black Sea is connected to the Sea of Azov by the Kerch Strait.
The water level has varied significantly over geological time. Due to these variations in the water level in the basin, the surrounding shelf and associated aprons have sometimes been dry land. At certain critical water levels, connections with surrounding water bodies can become established. It is through the most active of these connective routes, the Turkish Straits, that the Black Sea joins the World Ocean. During geological periods when this hydrological link was not present, the Black Sea was an endorheic basin, operating independently of the global ocean system (similar to the Caspian Sea today). Currently, the Black Sea water level is relatively high; thus, water is being exchanged with the Mediterranean. The Black Sea undersea river is a current of particularly saline water flowing through the Bosporus Strait and along the seabed of the Black Sea, the first of its kind discovered.
Name
Modern names
Current names of the sea are usually equivalents of the English name "Black Sea", including these given in the countries bordering the sea:
,
,
,
,
,
,
,
Laz and , , or simply , , , "Sea"
,
,
,
,
Such names have not yet been shown conclusively to predate the 13th century.
In Greece, the historical name "Euxine Sea", which holds a different literal meaning (see below), is still widely used:
, ; the name , , is used, but is much less common.
The Black Sea is one of four seas named in English after common color terms – the others being the Red Sea, the White Sea and the Yellow Sea.
Historical names and etymology
The earliest known name of the Black Sea is the Sea of Zalpa, so called by both the Hattians (The Journal of Indo-European Studies, p. 79, 1985) and their conquerors, the Hittites. The Hattic city of Zalpa was "situated probably at or near the estuary of the Marrassantiya River, the modern Kızıl Irmak, on the Black Sea coast" (Burney, Charles. Historical Dictionary of the Hittites, p. 333. Rowman & Littlefield Publishers, 2018).
The principal Greek name Póntos Áxeinos is generally accepted to be a rendering of the Iranian word ("dark colored"). Ancient Greek voyagers adopted the name as , identified with the Greek word (inhospitable). The name (Inhospitable Sea), first attested in Pindar (), was considered an ill omen and was euphemized to its opposite, (Hospitable Sea), also first attested in Pindar. This became the commonly used designation in Greek, although in mythological contexts the "true" name remained favored.
Strabo's Geographica (1.2.10) reports that in antiquity, the Black Sea was often simply called "the Sea" ( ). He thought that the sea was called the "Inhospitable Sea" by the inhabitants of the Pontus region of the southern shoreline before Greek colonization due to its difficult navigation and hostile barbarian natives (7.3.6), and that the name was changed to "hospitable" after the Milesians colonized the region, bringing it into the Greek world.
Popular supposition derives "Black Sea" from the dark color of the water or climatic conditions. Some scholars understand the name to be derived from a system of color symbolism representing the cardinal directions, with black or dark for north, red for south, white for west, and green or light blue for east. Hence, "Black Sea" meant "Northern Sea". According to this scheme, the name could only have originated with a people living between the northern (black) and southern (red) seas: this points to the Achaemenids (550–330 BC). This interpretation has been labeled a folk etymology (Karatay, Osman (2011). "On the origins of the name for the 'Black Sea'". Journal of Historical Geography 37(1): 1–11) and may reflect a primitive historical understanding of the "P/Bla" phoneme originally associated with the name (Beekes, R. S. P. (2002). "The Origin of the Etruscans". Koninklijke Nederlandse Akademie van Wetenschappen, Amsterdam).
In the Greater Bundahishn, a Middle Persian Zoroastrian scripture, the Black Sea is called . In the tenth-century Persian geography book , the Black Sea is called the Georgian Sea () ("§ 42. Discourse on the Country of Rūm, its Provinces and Towns", Hudud al-'Alam). The Georgian Chronicles use the name (Sea of Speri) after the Kartvelian tribe of Speris or Saspers (Georgian Chronicles, Part II, line of ed.: 14). Other modern names such as and (both meaning Black Sea) originated during the 13th century. A 1570 map from Abraham Ortelius's labels the sea (Great Sea), compare Latin .
English writers of the 18th century often used Euxine Sea ( or ). During the Ottoman Empire, it was called either (Perso-Arabic) or (Ottoman Turkish), both meaning "Black Sea".
Geography
The International Hydrographic Organization defines the limits of the Black Sea as follows:
The area surrounding the Black Sea is commonly referred to as the Black Sea Region. Its northern part lies within the Chernozem belt (black soil belt), which runs from eastern Croatia (Slavonia), along the Danube (northern Serbia, northern Bulgaria (Danubian Plain) and southern Romania (Wallachian Plain)), to northeast Ukraine and further across the Central Black Earth Region and southern Russia into Siberia.
The littoral zone of the Black Sea is often referred to as the Pontic littoral or Pontic zone.
The largest bays of the Black Sea are Karkinit Bay in Ukraine; the Gulf of Burgas in Bulgaria; Dnieprovska Gulf and Dniestrovsky Liman, both in Ukraine; and Sinop Bay and Samsun Bay, both in Turkey.
Coastline and exclusive economic zones
Coastline length and area of exclusive economic zones, by country:
1,329 km of coastline and 172,484 km² of exclusive economic zone
2,782 km of coastline and 132,414 km² of exclusive economic zone
800 km of coastline and 67,351 km² of exclusive economic zone
354 km of coastline and 35,132 km² of exclusive economic zone
310 km of coastline (100 km excluding Abkhazia) and 22,947 km² of exclusive economic zone
225 km of coastline and 29,756 km² of exclusive economic zone
210 km of coastline (no exclusive economic zone listed)
Total: 5,800 km of coastline and 460,084 km² of exclusive economic zone
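The totals above can be cross-checked with a few lines of arithmetic. The Python sketch below, using only the figures listed above, confirms that the exclusive economic zone areas sum to 460,084 km² and that the 5,800 km coastline total corresponds to the first six rows, i.e. it excludes the final 210 km entry.

# Cross-check of the totals in the coastline/EEZ listing above.
coastline_km = [1329, 2782, 800, 354, 310, 225, 210]     # per-row coastline lengths
eez_km2 = [172484, 132414, 67351, 35132, 22947, 29756]   # rows that list an EEZ area

print(sum(eez_km2))             # 460084, matching the stated EEZ total
print(sum(coastline_km[:-1]))   # 5800, matching the stated coastline total
print(sum(coastline_km))        # 6010, so the last row is not counted in the total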
Drainage basin
The largest rivers flowing into the Black Sea are:
Danube
Dnieper
Don
Dniester
Kızılırmak
Kuban
Sakarya
Southern Bug
Çoruh/Chorokhi
Yeşilırmak
Rioni
Yeya
Mius
Kamchiya
Enguri/Egry
Kalmius
Molochna
Tylihul
Velykyi Kuialnyk
Veleka
Rezovo
Kodori/Kwydry
Bzyb/Bzipi
Supsa
Mzymta
These rivers and their tributaries comprise a Black Sea drainage basin that covers wholly or partially 24 countries:
Unrecognized states:
Islands
Some islands in the Black Sea belong to Bulgaria, Romania, Turkey, and Ukraine:
St. Thomas Island – Bulgaria
St. Anastasia Island – Bulgaria
St. Cyricus Island – Bulgaria
St. Ivan Island – Bulgaria
St. Peter Island – Bulgaria
Sacalinu Mare Island – Romania
Sacalinu Mic Island – Romania
K Island – Romania and Ukraine
Utrish Island
Krupinin Island
Sudiuk Island
Kefken Island
Oreke Island
Giresun Island – Turkey
Dzharylhach Island – Ukraine
Zmiinyi (Snake) Island – Ukraine
Climate
Short-term climatic variation in the Black Sea region is significantly influenced by the operation of the North Atlantic Oscillation, the climatic mechanism resulting from the interaction between North Atlantic and mid-latitude air masses. While the exact mechanisms causing the North Atlantic Oscillation remain unclear, it is thought that the climate conditions established in western Europe mediate the heat and precipitation fluxes reaching Central Europe and Eurasia, regulating the formation of winter cyclones, which are largely responsible for regional precipitation inputs and influence Mediterranean sea surface temperatures (SSTs).
The relative strength of these systems also limits the amount of cold air arriving from northern regions during winter. Other influencing factors include the regional topography, as depressions and storm systems arriving from the Mediterranean are funneled through the low land around the Bosporus, with the Pontic and Caucasus mountain ranges acting as waveguides, limiting the speed and paths of cyclones passing through the region.Brody, L. R., Nestor, M.J.R. (1980). Regional Forecasting Aids for the Mediterranean Basin . Handbook for Forecasters in the Mediterranean, Naval Research Laboratory. Part 2.
Geology and bathymetry
The Black Sea is divided into two depositional basins—the Western Black Sea and Eastern Black Sea—separated by the Mid-Black Sea High, which includes the Andrusov Ridge, Tetyaev High, and Archangelsky High, extending south from the Crimean Peninsula. The basin includes two distinct relict back-arc basins which were initiated by the splitting of an Albian volcanic arc and the subduction of both the Paleo- and Neo-Tethys oceans, but the timings of these events remain uncertain. Arc volcanism and extension occurred as the Neo-Tethys Ocean subducted under the southern margin of Laurasia during the Mesozoic. Uplift and compressional deformation took place as the Neotethys continued to close. Seismic surveys indicate that rifting began in the Western Black Sea in the Barremian and Aptian, followed by the formation of oceanic crust 20 million years later in the Santonian. Since its initiation, compressional tectonic environments led to subsidence in the basin, interspersed with extensional phases resulting in large-scale volcanism and numerous orogenies, causing the uplift of the Greater Caucasus, Pontides, southern Crimean Peninsula and Balkanides mountain ranges.
During the Messinian salinity crisis in the neighboring Mediterranean Sea, water levels fell but without drying up the sea. The collision between the Eurasian and African plates and the westward escape of the Anatolian block along the North Anatolian and East Anatolian faults dictates the current tectonic regime, which features enhanced subsidence in the Black Sea basin and significant volcanic activity in the Anatolian region. These geological mechanisms, in the long term, have caused the periodic isolations of the Black Sea from the rest of the global ocean system.
The large shelf to the north of the basin is up to wide and features a shallow apron with gradients between 1:40 and 1:1000. The southern edge around Turkey and the eastern edge around Georgia, however, are typified by a narrow shelf that rarely exceeds in width and a steep apron that is typically 1:40 gradient with numerous submarine canyons and channel extensions. The Euxine abyssal plain in the center of the Black Sea reaches a maximum depth of just south of Yalta on the Crimean Peninsula.
Chronostratigraphy
The Paleo-Euxinian is described by the accumulation of eolian silt deposits (related to the Riss glaciation) and the lowering of sea levels (MIS 6, 8 and 10). The Karangat marine transgression occurred during the Eemian Interglacial (MIS 5e). This may have been the highest sea levels reached in the late Pleistocene. Based on this some scholars have suggested that the Crimean Peninsula was isolated from the mainland by a shallow strait during the Eemian Interglacial.
The Neoeuxinian transgression began with an inflow of waters from the Caspian Sea. Neoeuxinian deposits are found in the Black Sea below water depth in three layers. The upper layers correspond with the peak of the Khvalinian transgression, on the shelf shallow-water sands and coquina mixed with silty sands and brackish-water fauna, and inside the Black Sea Depression hydrotroilite silts. The middle layers on the shelf are sands with brackish-water mollusc shells. Of continental origin, the lower level on the shelf is mostly alluvial sands with pebbles, mixed with less common lacustrine silts and freshwater mollusc shells. Inside the Black Sea Depression they are terrigenous non-carbonate silts, and at the foot of the continental slope turbidite sediments.
Hydrology
The Black Sea is the world's largest body of water with a meromictic basin. The deep waters do not mix with the upper layers of water that receive oxygen from the atmosphere. As a result, over 90% of the deeper Black Sea volume is anoxic water. The Black Sea's circulation patterns are primarily controlled by basin topography and fluvial inputs, which result in a strongly stratified vertical structure. Because of the extreme stratification, it is classified as a salt wedge estuary.
Inflow from the Mediterranean Sea through the Dardanelles and Bosporus has a higher salinity and density than the outflow, creating the classic estuarine circulation. This means that the inflow of dense water from the Mediterranean occurs at the bottom of the basin while the outflow of fresher Black Sea surface-water into the Sea of Marmara occurs near the surface. The outflow is or around , and the inflow is or around .Gregg, M. C., and E. Özsoy (2002), "Flow, water mass changes, and hydraulics in the Bosporus", Journal of Geophysical Research 107(C3), 3016,
The following water budget can be estimated:
Water in:
Total river discharge:
Precipitation:
Inflow via Bosporus:
Water out:
Evaporation: (reduced greatly since the 1970s)
Outflow via Bosporus:
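As a minimal illustration of how such a budget balances, the sketch below uses round placeholder values only (they are not the measured terms): for a roughly steady sea level, river discharge, precipitation and Bosporus inflow must be offset by evaporation plus Bosporus outflow.

# Illustrative Black Sea water budget. All figures are placeholders chosen only to
# show the structure of the balance; they are not the measured budget terms.
rivers_km3 = 350.0          # total river discharge per year (placeholder)
precipitation_km3 = 300.0   # precipitation over the sea per year (placeholder)
bosporus_in_km3 = 300.0     # dense Mediterranean inflow at depth per year (placeholder)
evaporation_km3 = 350.0     # evaporation per year (placeholder)

water_in = rivers_km3 + precipitation_km3 + bosporus_in_km3
# With a roughly constant sea level, surface outflow through the Bosporus closes the budget:
bosporus_out_km3 = water_in - evaporation_km3
print(bosporus_out_km3)     # 600.0 km^3 per year with these placeholder numbers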
The southern sill of the Bosporus is located at below present sea level (the deepest spot of the shallowest cross-section in the Bosporus, located in front of Dolmabahçe Palace) and has a wet section of around . Inflow and outflow current speeds are averaged around , but much higher speeds are found locally, inducing significant turbulence and vertical shear. This allows for turbulent mixing of the two layers (Descriptive Physical Oceanography; Talley, Pickard, Emery, Swift). Surface water leaves the Black Sea with a salinity of 17 practical salinity units (PSU) and reaches the Mediterranean with a salinity of 34 PSU. Likewise, an inflow of the Mediterranean with salinity 38.5 PSU experiences a decrease to about 34 PSU.
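The salinity figures quoted above allow a simple two-endmember mixing estimate: treating the water that arrives in the Mediterranean at about 34 PSU as a mixture of Black Sea surface outflow (about 17 PSU) and entrained Aegean-origin water (about 38.5 PSU), a salt balance gives the fraction of entrained water. The sketch below works through that arithmetic; it is an illustration of the mixing calculation only, not a published estimate of strait entrainment.

# Two-endmember mixing: what fraction f of the outflow arriving in the Mediterranean
# at ~34 PSU is entrained saline water (~38.5 PSU), the remainder being Black Sea
# surface water (~17 PSU)?
s_black_sea = 17.0   # PSU, Black Sea surface outflow
s_aegean = 38.5      # PSU, Mediterranean/Aegean inflow
s_mixture = 34.0     # PSU, outflow salinity on arrival in the Mediterranean

# Salt balance: f * s_aegean + (1 - f) * s_black_sea = s_mixture
f = (s_mixture - s_black_sea) / (s_aegean - s_black_sea)
print(round(f, 2))   # about 0.79, i.e. roughly four-fifths entrained water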
Mean surface circulation is cyclonic; waters around the perimeter of the Black Sea circulate in a basin-wide shelfbreak gyre known as the Rim Current. The Rim Current has a maximum velocity of about . Within this feature, two smaller cyclonic gyres operate, occupying the eastern and western sectors of the basin. The Eastern and Western Gyres are well-organized systems in the winter but dissipate into a series of interconnected eddies in the summer and autumn. Mesoscale activity in the peripheral flow becomes more pronounced during these warmer seasons and is subject to interannual variability.
Outside of the Rim Current, numerous quasi-permanent coastal eddies are formed as a result of upwelling around the coastal apron and "wind curl" mechanisms. The intra-annual strength of these features is controlled by seasonal atmospheric and fluvial variations. During the spring, the Batumi eddy forms in the southeastern corner of the sea.
Beneath the surface waters—from about —there exists a halocline that stops at the Cold Intermediate Layer (CIL). This layer is composed of cool, salty surface waters, which are the result of localized atmospheric cooling and decreased fluvial input during the winter months. It is the remnant of the winter surface mixed layer. The base of the CIL is marked by a major pycnocline at about , and this density disparity is the major mechanism for isolation of the deep water.
Below the pycnocline is the Deep Water mass, where salinity increases to 22.3 PSU and temperatures rise to around . The hydrochemical environment shifts from oxygenated to anoxic, as bacterial decomposition of sunken biomass utilizes all of the free oxygen. Weak geothermal heating and long residence time create a very thick convective bottom layer.
The Black Sea undersea river is a current of particularly saline water flowing through the Bosporus Strait and along the seabed of the Black Sea. The discovery of the river, announced on 1 August 2010, was made by scientists at the University of Leeds and is the first of its kind to be identified. The undersea river stems from salty water spilling through the Bosporus Strait from the Mediterranean Sea into the Black Sea, where the water has a lower salt content.
Hydrochemistry
Because of the anoxic water at depth, organic matter, including anthropogenic artifacts such as boat hulls, is well preserved. During periods of high surface productivity, short-lived algal blooms form organic-rich layers known as sapropels. Scientists have reported an annual phytoplankton bloom that can be seen in many NASA images of the region (Black Sea Becomes Turquoise, earthobservatory.nasa.gov; retrieved 2 December 2006). As a result of these characteristics, the Black Sea has gained interest from the field of marine archaeology, as ancient shipwrecks in excellent states of preservation have been discovered, such as the Byzantine wreck Sinop D, located in the anoxic layer off the coast of Sinop, Turkey.
Modelling shows that, in the event of an asteroid impact on the Black Sea, the release of hydrogen sulfide clouds would pose a threat to health—and perhaps even life—for people living on the Black Sea coast.
There have been isolated reports of flares on the Black Sea occurring during thunderstorms, possibly caused by lightning igniting combustible gas seeping up from the sea depths.
Ecology
Marine
The Black Sea supports an active and dynamic marine ecosystem, dominated by species suited to the brackish, nutrient-rich, conditions. As with all marine food webs, the Black Sea features a range of trophic groups, with autotrophic algae, including diatoms and dinoflagellates, acting as primary producers. The fluvial systems draining Eurasia and central Europe introduce large volumes of sediment and dissolved nutrients into the Black Sea, but the distribution of these nutrients is controlled by the degree of physiochemical stratification, which is, in turn, dictated by seasonal physiographic development.
During winter, strong winds promote convective overturning and upwelling of nutrients, while high summer temperatures result in marked vertical stratification and a warm, shallow mixed layer. Day length and insolation intensity also control the extent of the photic zone. Subsurface productivity is limited by nutrient availability, as the anoxic bottom waters act as a sink for reduced nitrate, in the form of ammonia. The benthic zone also plays an important role in Black Sea nutrient cycling, as chemosynthetic organisms and anoxic geochemical pathways recycle nutrients which can be upwelled to the photic zone, enhancing productivity.
In total, the Black Sea's biodiversity contains around one-third of the Mediterranean's, and the sea is experiencing natural and artificial invasions, or "Mediterranizations" (Mechanisms impeding the natural Mediterranization process of Black Sea fauna; Selifonova, P. J. (2011), Ships' Ballast as a Primary Factor for 'Mediterranization' of Pelagic Copepod Fauna (Copepoda) in the Northeastern Black Sea).
Phytoplankton
The main phytoplankton groups present in the Black Sea are dinoflagellates, diatoms, coccolithophores and cyanobacteria. Generally, the annual cycle of phytoplankton development comprises significant diatom and dinoflagellate-dominated spring production, followed by a weaker mixed assemblage of community development below the seasonal thermocline during summer months, and surface-intensified autumn production. This pattern of productivity is augmented by an Emiliania huxleyi bloom during the late spring and summer months.
Dinoflagellates
Annual dinoflagellate distribution is defined by an extended bloom period in subsurface waters during the late spring and summer. In November, subsurface plankton production is combined with surface production, due to vertical mixing of water masses and nutrients such as nitrite. The major bloom-forming dinoflagellate species in the Black Sea is Gymnodinium sp. Estimates of dinoflagellate diversity in the Black Sea range from 193 (Krakhmalny, A. F. (1994). "Dinophyta of the Black Sea (Brief history of investigations and species diversity)." Algologiya 4: 99–107) to 267 species. This level of species richness is relatively low in comparison to the Mediterranean Sea, which is attributable to the brackish conditions, low water transparency and presence of anoxic bottom waters. It is also possible that the low winter temperatures below of the Black Sea prevent thermophilous species from becoming established. The relatively high organic matter content of Black Sea surface water favors the development of heterotrophic dinoflagellates (organisms that use organic carbon for growth) and mixotrophic species (able to exploit different trophic pathways), relative to autotrophs. Despite its unique hydrographic setting, there are no confirmed endemic dinoflagellate species in the Black Sea.
Diatoms
The Black Sea is populated by many species of the marine diatom, which commonly exist as colonies of unicellular, non-motile auto- and heterotrophic algae. The life-cycle of most diatoms can be described as 'boom and bust' and the Black Sea is no exception, with diatom blooms occurring in surface waters throughout the year, most reliably during March. In simple terms, the phase of rapid population growth in diatoms is caused by the in-wash of silicon-bearing terrestrial sediments, and when the supply of silicon is exhausted, the diatoms begin to sink out of the photic zone and produce resting cysts. Additional factors such as predation by zooplankton and ammonium-based regenerated production also have a role to play in the annual diatom cycle. Typically, blooms during spring and blooms during the autumn.
Coccolithophores
Coccolithophores are a type of motile, autotrophic phytoplankton that produce CaCO3 plates, known as coccoliths, as part of their life cycle. In the Black Sea, the main period of coccolithophore growth occurs after the bulk of the dinoflagellate growth has taken place. In May, the dinoflagellates move below the seasonal thermocline into deeper waters, where more nutrients are available. This permits coccolithophores to utilize the nutrients in the upper waters, and by the end of May, with favorable light and temperature conditions, growth rates reach their highest. The major bloom-forming species is Emiliania huxleyi, which is also responsible for the release of dimethyl sulfide into the atmosphere. Overall, coccolithophore diversity is low in the Black Sea, and although recent sediments are dominated by and , Holocene sediments have been shown to also contain Helicopondosphaera and Discolithina species.
Cyanobacteria
Cyanobacteria are a phylum of picoplanktonic (plankton ranging in size from 0.2 to 2.0 μm) bacteria that obtain their energy via photosynthesis, and are present throughout the world's oceans. They exhibit a range of morphologies, including filamentous colonies and biofilms. In the Black Sea, several species are present, and as an example, Synechococcus spp. can be found throughout the photic zone, although concentration decreases with increasing depth. Other factors which exert an influence on distribution include nutrient availability, predation, and salinity.
Animal species
Zebra mussel
The Black Sea along with the Caspian Sea is part of the zebra mussel's native range. The mussel has been accidentally introduced around the world and become an invasive species where it has been introduced.
Common carp
The common carp's native range extends to the Black Sea along with the Caspian Sea and Aral Sea. Like the zebra mussel, the common carp is an invasive species when introduced to other habitats.
Round goby
The round goby is another native fish that is also found in the Caspian Sea. It preys upon zebra mussels. Like the mussels and common carp, it has become invasive when introduced to other environments, such as the Great Lakes in North America.
Marine mammals present within the basin include subspecies of two species of dolphin (common and bottlenose) and the harbour porpoise, although all of these are endangered due to pressures and impacts from human activities. All three have been classified as subspecies distinct from those in the Mediterranean and the Atlantic, are endemic to the Black and Azov seas, and are more active at night in the Turkish Straits (First stranding record of a Risso's Dolphin (Grampus griseus) in the Marmara Sea, Turkey). Construction of the Crimean Bridge has caused increases in nutrients and plankton in the waters, attracting large numbers of fish and more than 1,000 bottlenose dolphins (Goldman, E. (2017). Crimean bridge construction boosts dolphin population in Kerch Strait. Russia Beyond the Headlines). Others, however, claim that the construction may cause devastating damage to the ecosystem, including to dolphins (Reznikova, E. (2017). Крымские стройки убивают все живое на дне моря [Crimean construction projects are killing everything living on the seabed]. Примечания. Новости Севастополя и Крыма).
Mediterranean monk seals, now a vulnerable species, were historically abundant in the Black Sea and are regarded as having become extinct in the basin in 1997. Monk seals were present at Snake Island, near the Danube Delta, until the 1950s, and several locations such as the and Doğankent were the last of the seals' hauling-out sites after 1990. Very few animals still thrive in the Sea of Marmara.
Ongoing Mediterranizations may or may not boost cetacean diversity in the Turkish Straits and hence in the Black and Azov basins.
Various species of pinnipeds, sea otters, and beluga whales (Anderson, R. (1992). Black Sea Whale Aided By Activists. Chicago Tribune) were introduced into the Black Sea by humans and later escaped through accidental or deliberate releases. Of these, grey seals and beluga whales have been recorded with successful, long-term occurrences.
Great white sharks are known to reach into the Sea of Marmara and the Bosporus Strait, and basking sharks into the Dardanelles, although it is unclear whether these sharks may reach into the Black and Azov basins (Cuma (2009). Çanakkale'de 10 metrelik köpekbalığı! [A 10-metre shark in Çanakkale!]).
Ecological effects of pollution
Since the 1960s, rapid industrial expansion along the Black Sea coastline and the construction of a major dam on the Danube have significantly increased annual variability in the N:P:Si ratio in the basin. Coastal areas, accordingly, have seen an increase in the frequency of monospecific phytoplankton blooms, with diatom-bloom frequency increasing by a factor of 2.5 and non-diatom bloom frequency increasing by a factor of 6. The non-diatoms, such as the prymnesiophytes (coccolithophore), sp., and the Euglenophyte , can out-compete diatom species because of the limited availability of silicon, a necessary constituent of diatom frustules. As a consequence of these blooms, benthic macrophyte populations were deprived of light, while anoxia caused mass mortality in marine animals.
Overfishing during the 1970s further compounded the decline in macrophytes. The invasive ctenophore Mnemiopsis leidyi (the warty comb jelly), an alien species that established itself in the basin and exploded from a few individuals to an estimated biomass of one billion metric tons, reduced the biomass of copepods and other zooplankton in the late 1980s.
Pollution-reduction and regulation efforts led to a partial recovery of the Black Sea ecosystem during the 1990s, and an EU monitoring exercise, 'EROS21', revealed decreased nitrogen and phosphorus values relative to the 1989 peak. Recently, scientists have noted signs of ecological recovery, in part due to the construction of new sewage-treatment plants in Slovakia, Hungary, Romania, and Bulgaria in connection with those countries' membership of the European Union. Mnemiopsis populations have been checked by the arrival of another alien species that feeds on them (Woodard, Colin, "The Black Sea's Cautionary Tale", Congressional Quarterly Global Researcher, October 2007, pp. 244–245). However, other sources say that there was ecological decline in the early 21st century.
History
Mediterranean connection during the Holocene
The Black Sea is connected to the World Ocean by a chain of two shallow straits, the Dardanelles and the Bosporus. The Dardanelles is deep, and the Bosporus is as shallow as . By comparison, at the height of the last ice age, sea levels were more than lower than they are now.
There is evidence that water levels in the Black Sea were considerably lower at some point during the post-glacial period. Some researchers theorize that the Black Sea had been a landlocked freshwater lake (at least in upper layers) during the last glaciation and for some time after.
In the aftermath of the last glacial period, water levels in the Black Sea and the Aegean Sea rose independently until they were high enough to exchange water. The exact timeline of this development is still subject to debate. One possibility is that the Black Sea filled first, with excess freshwater flowing over the Bosporus sill and eventually into the Mediterranean Sea. There are also catastrophic scenarios, such as the "Black Sea deluge hypothesis" put forward by William Ryan, Walter Pitman and Petko Dimitrov.
Deluge hypothesis
The Black Sea deluge is a hypothesized catastrophic rise in the level of the Black Sea due to waters from the Mediterranean Sea breaching a sill in the Bosporus Strait. The hypothesis was headlined when The New York Times published it in December 1996, shortly before it was published in an academic journal. While it is agreed that the sequence of events described did occur, there is debate over the suddenness, dating, and magnitude of the events. Relevant to the hypothesis is that its description has led some to connect this catastrophe with prehistoric flood myths (Dimitrov, P., and D. Dimitrov (2004). The Black Sea, the Flood and the Ancient Myths. "Slavena", Varna, ISBN 954-579-335-X, 91 pp., DOI: 10.13140/RG.2.2.18954.16327).
Archaeology
The Black Sea was sailed by Hittites, Carians, Colchians, Armenians, Thracians, Greeks, Persians, Cimmerians, Scythians, Romans, Byzantines, Goths, Huns, Avars, Slavs, Varangians, Crusaders, Venetians, Genoese, Georgians, Bulgarians, Tatars and Ottomans.
The concentration of historical powers, combined with the preservative qualities of the deep anoxic waters of the Black Sea, has attracted increased interest from marine archaeologists who have begun to discover a large number of ancient ships and organic remains in a high state of preservation.
Recorded history
The Black Sea was a busy waterway on the crossroads of the ancient world: the Balkans to the west, the Eurasian steppes to the north, the Caucasus and Central Asia to the east, Asia Minor and Mesopotamia to the south, and Greece to the southwest.
The land at the eastern end of the Black Sea, Colchis (in present-day Georgia), marked for the ancient Greeks the edge of the known world.
The Pontic–Caspian steppe to the north of the Black Sea is seen by several researchers as the pre-historic original homeland () of the speakers of the Proto-Indo-European language (PIE).
Greek presence in the Black Sea began at least as early as the 9th century BC with colonies scattered along the Black Sea's southern coast, attracting traders and colonists due to the grain grown in the Black Sea hinterland.
By 500 BC, permanent Greek communities existed all around the Black Sea, and a lucrative trade network connected the entirety of the Black Sea to the wider Mediterranean. While Greek colonies generally maintained very close cultural ties to their founding polis, Greek colonies in the Black Sea began to develop their own Black Sea Greek culture, known today as Pontic. The coastal communities of Black Sea Greeks remained a prominent part of the Greek world for centuries, and the realms of Mithridates of Pontus, Rome and Constantinople spanned the Black Sea to include Crimean territories.
The Black Sea became a virtual Ottoman Navy lake within five years of the Republic of Genoa losing control of the Crimean Peninsula in 1479, after which the only Western merchant vessels to sail its waters were those of Venice's old rival Ragusa. The Black Sea became a trade route of slaves between Crimea and Ottoman Anatolia via the Crimean–Nogai slave raids in Eastern Europe.
Imperial Russia became a significant Black Sea power in the late 18th century, occupying the littoral of Novorossiya in 1764 and of Crimea in 1783. Ottoman restrictions on Black Sea navigation were challenged by the Black Sea Fleet (founded in 1783) of the Imperial Russian Navy, and the Ottomans relaxed export controls after the outbreak of the French Revolution in 1789.
Modern history
The Crimean War, fought between 1853 and 1856, saw naval engagements between the French and British allies and the forces of Nicholas I of Russia. On 2 March 1855, after the death of Nicholas I, Alexander II became Tsar. On 15 January 1856, the new tsar took Russia out of the war on the very unfavorable terms of the Treaty of Paris (1856), which included the loss of a naval fleet on the Black Sea, and the provision that the Black Sea was to be a demilitarized zone similar to a contemporaneous region of the Baltic Sea.
World Wars
The Black Sea was a significant naval theatre of World War I (1914–1918) and saw both naval and land battles between 1941 and 1945 during World War II. For example, Sevastopol was obliterated by the German Wehrmacht, assisted by the railway gun Schwerer Gustav, in the Siege of Sevastopol (1941–1942). The Soviet naval base was one of the strongest fortifications in the world. Its site, on a deeply eroded, bare limestone promontory at the southwestern tip of the Crimea, made an approach by land forces exceedingly difficult. The high cliffs overlooking Severnaya Bay protected the anchorage, making an amphibious landing just as dangerous. The Soviet Navy had built upon these natural defenses by modernizing the port and installing heavy coastal batteries of repurposed 180 mm and 305 mm battleship guns capable of firing inland as well as out to sea. The artillery emplacements were protected by reinforced concrete fortifications and 9.8-inch-thick armored turrets.
21st century
During the Russian invasion of Ukraine, Snake Island was a source of contention. On 24 February 2022, two Russian navy warships attacked and captured Snake Island. It was subsequently bombarded heavily by Ukraine. On 30 June 2022, Ukraine announced that it had driven Russian forces off the island.
On 14 April 2022, the flagship of the Black Sea Fleet, Russian cruiser Moskva was sunk by Ukrainian missiles.
As early as 29 April 2022 submarines of the Black Sea Fleet were used by Russia to bombard Ukrainian cities with Kalibr SLCMs. The Kalibr missile was so successful that on 10 March 2023 Defense Minister Sergey Shoigu announced plans to broaden the type of ship which carried it, to include the corvette Steregushchiy and the nuclear-powered cruiser Admiral Nakhimov.
On the morning of 14 March 2023, a Russian Su-27 fighter jet intercepted and damaged an American MQ-9 Reaper drone, causing the latter to crash into the Black Sea. At 13:20 on 5 May 2023, a Russian Su-35 fighter jet intercepted and threatened the safety of a Polish L-410 Turbolet on a "routine Frontex patrol mission" and performed "aggressive and dangerous" manoeuvres. The incident, which occurred in international airspace over the Black Sea about 60 km east of Romanian airspace, caused the crew of five Polish border guards to lose control of the plane and lose altitude.
As of January 2025, neither Ukraine nor Russia controls the Black Sea, making it contested, according to Estonian Navy Commander Ivo Värk: "The entire Black Sea can currently be seen as a contested maritime area where both sides have some room for action." This room is greater near the coasts, where both sides benefit from air defense systems, sea mines, and various land-based weaponry; ships only move out for specific operations, to avoid excessive risk. He also noted that both parties operate gas drilling platforms equipped with surveillance devices to monitor the situation, but that those platforms often change hands.
Economy and politics
According to NATO, the Black Sea is a strategic corridor that provides smuggling channels for moving legal and illegal goods including drugs, radioactive materials, and counterfeit goods that can be used to finance terrorism.
Navigation
According to an International Transport Workers' Federation 2013 study, there were at least 30 operating merchant seaports in the Black Sea (including at least 12 in Ukraine). There were also around 2,400 commercial vessels operating in the Black Sea.
Fishing
The Turkish commercial fishing fleet catches around 300,000 tons of anchovies per year. The fishery is carried out mainly in winter, and the highest portion of the stock is caught in November and December."Turkish Black Sea Acoustic Surveys: Winter distribution of anchovy along the Turkish coast". Serdar Sakinan. Middle East Technical University – Institute of Marine Sciences. .
Hydrocarbon exploration
The Black Sea contains oil and natural gas resources, but exploration in the sea is incomplete; only about 20 wells have been drilled. Throughout much of its existence, the Black Sea has had significant oil- and gas-forming potential because of large inflows of sediment and nutrient-rich waters, although this varies geographically. Prospects are poorer off the coast of Bulgaria, for example, because the large influx of sediment from the Danube obscured sunlight and diluted organic-rich sediments. Most of the discoveries to date have been made offshore of Romania in the Western Black Sea, and only a few discoveries have been made in the Eastern Black Sea.
During the Eocene, the Paratethys Sea was partially isolated and sea levels fell. During this time sand shed off the rising Balkanide, Pontide and Caucasus mountains trapped organic material in the Maykop Suite of rocks through the Oligocene and early Miocene. Natural gas appears in rocks deposited in the Miocene and Pliocene by the paleo-Dnieper and paleo-Dniester rivers, or in deep-water Oligocene-age rocks. Serious exploration began in 1999 with two deep-water wells, Limanköy-1 and Limanköy-2, drilled in Turkish waters. Next, the HPX (Hopa)-1 deepwater well targeted late Miocene sandstone units in the Achara-Trialeti fold belt (also known as the Gurian fold belt) along the Georgia–Turkey maritime border. Although geologists inferred that these rocks might contain hydrocarbons that had migrated from the Maykop Suite, the well was unsuccessful, and no further drilling took place for five years. In 2010, Sinop-1 targeted carbonate reservoirs potentially charged from the nearby Maykop Suite on the Andrusov Ridge, but the well struck only Cretaceous volcanic rocks. Yassihöyük-1 encountered similar problems.
Other Turkish wells, Sürmene-1 and Sile-1, drilled in the Eastern Black Sea in 2011 and 2015 respectively, tested four-way closures above Cretaceous volcanoes, with no results in either case. A different Turkish well, Kastamonu-1, drilled in 2011, did successfully find thermogenic gas in Pliocene and Miocene shale-cored anticlines in the Western Black Sea. A year later, in 2012, Romania drilled Domino-1, which struck gas and prompted the drilling of other wells in the Neptun Deep. In 2016, the Bulgarian well Polshkov-1 targeted Maykop Suite sandstones in the Polshkov High, and as of 2018 Russia was drilling Jurassic carbonates on the Shatsky Ridge.
In August 2020, Turkey announced its biggest-ever natural gas discovery in the Black Sea and hoped to begin production from the Sakarya gas field by 2023. The sector is near where Romania has also found gas reserves.
Urban areas
Most populous urban areas along the Black Sea (city, country, region/county, urban population):
Istanbul, Turkey (Istanbul) – 15,340,111
Odesa, Ukraine (Odesa) – 1,003,705
Samsun, Turkey (Samsun) – 639,930
Varna, Bulgaria (Varna) – 500,076
Constanța, Romania (Constanța) – 491,498
Sevastopol, disputed between Ukraine (de jure) and Russia (de facto) (city with special status / federal city) – 379,200
Sochi, Russia (Krasnodar Krai) – 343,334
Trabzon, Turkey (Trabzon) – 293,661
Novorossiysk, Russia (Krasnodar Krai) – 241,952
Burgas, Bulgaria (Burgas) – 223,902
Batumi, Georgia (Adjara) – 204,156
Ordu, Turkey (Ordu) – 190,425
Kerch, disputed between Ukraine (de jure) and Russia (de facto) (Autonomous Republic of Crimea / Republic of Crimea) – 149,566
Tourism
In the years following the end of the Cold War, the popularity of the Black Sea as a tourist destination steadily increased. Tourism at Black Sea resorts became one of the region's growth industries.
The following is a list of notable Black Sea resort towns:
2 Mai (Romania)
Agigea (Romania)
Ahtopol (Bulgaria)
Amasra (Turkey)
Anaklia (Georgia)
Anapa (Russia)
Albena (Bulgaria)
Alupka (Crimea, Ukraine/Russia (disputed))
Alushta (Crimea, Ukraine/Russia (disputed))
Balchik (Bulgaria)
Batumi (Georgia)
Burgas (Bulgaria)
Byala (Bulgaria)
Cap Aurora (Romania)
Chakvi (Georgia)
Constanța (Romania)
Constantine and Helena (Bulgaria)
Corbu (Romania)
Costinești (Romania)
Eforie (Romania)
Emona (Bulgaria)
Feodosia (Crimea, Ukraine/Russia (disputed))
Foros (Crimea, Ukraine/Russia (disputed))
Gagra (Abkhazia/Georgia)
Gelendzhik (Russia)
Giresun (Turkey)
Golden Sands (Bulgaria)
Gonio (Georgia)
Gudauta and Gudauta Bay (Abkhazia/Georgia)
Gurzuf (Crimea, Ukraine/Russia (disputed))
Hopa (Artvin, Turkey)
Jupiter (Romania)
Kamchia (Bulgaria)
Kavarna (Bulgaria)
Kiten (Bulgaria)
Kobuleti (Georgia)
Koktebel (Crimea, Ukraine/Russia (disputed))
Lozenetz (Bulgaria)
Mamaia (Romania)
Mangalia (Romania)
Năvodari (Romania)
Neptun (Romania)
Nesebar (Bulgaria)
Novorossiysk (Russia)
Obzor (Bulgaria)
Odesa (Ukraine)
Olimp (Romania)
Ordu (Turkey)
Pitsunda/Bichvinta (Abkhazia/Georgia)
Pomorie (Bulgaria)
Primorsko (Bulgaria)
Rize (Turkey)
Rusalka (Bulgaria)
Samsun (Turkey)
Saturn (Romania)
Şile (Turkey)
Sinop (Turkey)
Skadovsk (Ukraine)
Sochi (Russia)
Sozopol (Bulgaria)
Sudak (Crimea, Ukraine/Russia (disputed))
Sulina (Romania)
Sunny Beach (Bulgaria)
Sveti Vlas (Bulgaria)
Trabzon (Turkey)
Tsikhisdziri (Georgia)
Tuapse (Russia)
Ureki (Georgia)
Vama Veche (Romania)
Varna (Bulgaria)
Venus (Romania)
Yalta (Crimea, Ukraine/Russia (disputed))
Yevpatoria (Crimea, Ukraine/Russia (disputed))
Zonguldak (Turkey)
Modern military use
The 1936 Montreux Convention provides for free passage of civilian ships between the international waters of the Black and Mediterranean seas. However, a single country, Turkey, has complete control over the straits connecting the two seas. Warships are treated separately from civilian vessels and may pass through the straits freely only if they belong to a Black Sea country. Other warships have the right to pass through the straits if they are not at war with Turkey and if they stay in the Black Sea basin for a limited time. The 1982 amendments to the Montreux Convention allow Turkey to close the straits at its discretion in both wartime and peacetime.
The Montreux Convention governs the passage of vessels between the Black, Mediterranean, and Aegean seas and the presence of military vessels belonging to non-littoral states in the Black Sea waters.
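Because the passage regime described above reduces to a few conditional rules, it can be summarised as a simple decision function. The sketch below is purely illustrative: the Ship type, its field names, and the stay limit are hypothetical placeholders, and the logic encodes only the simplified rules stated in this section, not the full text of the convention.

# Illustrative sketch only: encodes the simplified Montreux passage rules described
# above (civilian ships pass freely; warships of Black Sea states may pass; other
# warships may pass if not at war with Turkey and only for a limited stay; Turkey
# may close the straits at its discretion). All names and the stay limit are
# hypothetical placeholders, not taken from the treaty text.
from dataclasses import dataclass


@dataclass
class Ship:
    military: bool              # warship or civilian vessel
    black_sea_state: bool       # flag state borders the Black Sea
    at_war_with_turkey: bool
    stay_days: int              # intended time in the Black Sea basin


def may_transit(ship: Ship, straits_closed_by_turkey: bool, max_stay_days: int = 21) -> bool:
    """Simplified pass/no-pass decision; max_stay_days is a placeholder value."""
    if straits_closed_by_turkey:
        return False            # Turkey may close the straits at its discretion
    if not ship.military:
        return True             # free passage for civilian shipping
    if ship.black_sea_state:
        return True             # Black Sea navies may pass
    return (not ship.at_war_with_turkey) and ship.stay_days <= max_stay_days


# Example: a non-littoral warship planning a 14-day deployment
print(may_transit(Ship(military=True, black_sea_state=False,
                       at_war_with_turkey=False, stay_days=14),
                  straits_closed_by_turkey=False))  # True under this sketch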
The Russian Black Sea Fleet has its official primary headquarters and facilities in the city of Sevastopol (Sevastopol Naval Base).
The Soviet hospital ship Armenia was sunk on 7 November 1941 by German aircraft while evacuating civilians and wounded soldiers from Crimea. An estimated 5,000 to 7,000 people were killed in the sinking, making it one of the deadliest maritime disasters in history; there were only eight survivors.
In December 2018, the Kerch Strait incident occurred, in which the Russian navy and coast guard took control of three Ukrainian vessels as the ships were trying to transit from the Black Sea into the Sea of Azov.
In April 2022, during the Russian invasion of Ukraine, the Russian cruiser Moskva was sunk in the western Black Sea by sea-skimming Neptune missiles of the Ukrainian Armed Forces while the Russians claimed that an onboard fire had caused munitions to explode and damage the ship extensively. She was the largest ship to be lost in naval combat in Europe since World War II.
In late 2023, Russia announced plans to build a naval base on the Black Sea coast of Abkhazia ("Russia plans naval base in Abkhazia, triggering criticism from Georgia", Reuters, 5 October 2023; "Russia to build naval base in breakaway Georgia region", Politico, 5 October 2023; "Russia's new Black Sea naval base alarms Georgia", BBC News, 12 December 2023; all retrieved 12 January 2024).
See also
1927 Crimean earthquakes
Kerch Strait
Regions of Europe
Sea of Azov
Laz people
Lazistan Sanjak
Lazistan
Notes and references
Informational notes
Citations
External links
Space Monitoring of the Black Sea Coastline and Waters
Pictures of the Black Sea coast along the Crimean peninsula
Black Sea Environmental Internet Node
Black Sea-Mediterranean Corridor during the last 30 ky: UNESCO IGCP 521 WG12
Category:Seas of the Mediterranean Sea
Category:Anoxic waters
Category:Back-arc basins
Category:European seas
Category:Seas of Russia
Category:Seas of Turkey
Category:Seas of Ukraine
Category:Bodies of water of Bulgaria
Category:Bodies of water of Georgia (country)
Category:Bodies of water of Romania
Category:Bodies of water of Crimea
Category:Bulgaria–Romania border
Category:Bulgaria–Turkey border
Category:Georgia (country)–Russia border
Category:Georgia (country)–Turkey border
Category:Romania–Ukraine border
Category:Russia–Ukraine border
Category:Seas of the Atlantic Ocean
Category:Seas of Asia
Category:Geography of West Asia
geography | 7,539
3469 | British Virgin Islands | https://en.wikipedia.org/wiki/British_Virgin_Islands
The British Virgin Islands (BVI), officially the Virgin Islands (according to the Virgin Islands Constitution Order, 2007, the territory's official name is simply "Virgin Islands"), are a British Overseas Territory in the Caribbean, to the east of Puerto Rico and the US Virgin Islands and north-west of Anguilla. The islands are geographically part of the Virgin Islands archipelago, are located in the Leeward Islands of the Lesser Antilles, and form part of the West Indies.
The British Virgin Islands consist of the main islands of Tortola, Virgin Gorda, Anegada and Jost Van Dyke, along with more than 50 other smaller islands and cays. About 16 of the islands are inhabited. The capital, Road Town, is on Tortola, the largest island. The islands had a population of 28,054 at the 2010 Census, of whom 23,491 lived on Tortola; current estimates put the population at 35,802 (July 2018).
The economy of the territory is overwhelmingly dominated by tourism and financial services. In terms of financial services, the territory is known as a leading hub for tax evasion and concealment of assets.
British Virgin Islanders are British Overseas Territories citizens and, since 2002, also British citizens.
Etymology
The islands were named "Santa Úrsula y las Once Mil Vírgenes" by Christopher Columbus in 1493 after the legend of Saint Ursula and the 11,000 virgins. The name was later shortened to "the Virgin Islands".
The official name of the territory is still simply the "Virgin Islands", but the prefix "British" is often used. This is commonly believed to distinguish it from the neighbouring American territory which changed its name from the "Danish West Indies" to "Virgin Islands of the United States" in 1917. However, local historians have disputed this, pointing to a variety of publications and public records dating from between 21 February 1857 and 12 September 1919 where the territory is referred to as the British Virgin Islands. British Virgin Islands government publications continue to begin with the name "The territory of the Virgin Islands", and the territory's passports simply refer to the "Virgin Islands", and all laws begin with the words "Virgin Islands". Moreover, the territory's Constitutional Commission has expressed the view that "every effort should be made" to encourage the use of the name "Virgin Islands". But various public and quasi-public bodies continue to use the name "British Virgin Islands" or "BVI", including BVI Finance, BVI Electricity Corporation, BVI Tourist Board, BVI Athletic Association, BVI Bar Association and others.
In 1968 the British Government issued a memorandum requiring that the postage stamps in the territory should say "British Virgin Islands" (whereas previously they had simply stated "Virgin Islands"), a practice which is still followed today. This was likely to prevent confusion following on from the adoption of US currency in the territory in 1959, and the references to US currency on the stamps of the territory.
History
It is generally thought that the Virgin Islands were first settled by the Arawak from South America around 100 BC to AD 200, though there is some evidence of Amerindian presence on the islands as far back as 1500 BC (Wilson, Samuel M., ed., The Indigenous People of the Caribbean, Gainesville: University Press of Florida, 1997). The Arawaks inhabited the islands until the 15th century, when they were displaced by the Kalinago (Island Caribs), a tribe from the Lesser Antilles islands.
The first European sighting of the Virgin Islands was by the Spanish expedition of Christopher Columbus in 1493 on his second voyage to the Americas, who gave the islands their modern name.
The Spanish Empire claimed the islands by discovery in the early 16th century, but never settled them, and subsequent years saw the English, Dutch, French, Spanish, and Danish all jostling for control of the region, which became a notorious haunt for pirates. There is no record of any native Amerindian population in the British Virgin Islands during this period; it is thought that they either fled to safer islands or were killed.
The Dutch established a permanent settlement on the island of Tortola by 1648, frequently clashing with the Spanish who were based on nearby Puerto Rico. In 1672, the English captured Tortola from the Dutch, and the English annexation of Anegada and Virgin Gorda followed in 1680. Meanwhile, over the period 1672–1733, the Danish gained control of the nearby islands of Saint Thomas, Saint John and Saint Croix (i.e. the modern US Virgin Islands).
The British islands were considered principally a strategic possession. The British introduced sugar cane, which was to become the main crop and source of foreign trade, and large numbers of slaves were forcibly brought from Africa to work on the sugar cane plantations. The islands prospered economically until the middle of the nineteenth century, when a combination of the abolition of slavery in the British Empire in 1834, a series of disastrous hurricanes, and the growth of the sugar beet crop in Europe and the United States (in the United Kingdom, a major market for sugar from the territory, the Sugar Duties Act 1846 also put considerable downward pressure on the price of Caribbean sugar cane) significantly reduced sugar cane production and led to a period of economic decline.
In 1917, the United States purchased the Danish Virgin Islands for US$25 million, renaming them the United States Virgin Islands. Economic linkages with the US islands prompted the British Virgin Islands to adopt the US dollar as its currency in 1959.
The British Virgin Islands were administered variously as part of the British Leeward Islands or with St. Kitts and Nevis, with an administrator representing the British Government on the islands. The islands gained separate colony status in 1960 and became autonomous in 1967 under the new post of Chief Minister. Since the 1960s, the islands have diversified away from their traditionally agriculture-based economy towards tourism and financial services, becoming one of the wealthiest areas in the Caribbean. The constitution of the islands was amended in 1977, 2004 and 2007, giving them greater local autonomy.
In 2017 Hurricane Irma struck the islands, causing four deaths and immense damage.
Geography
The British Virgin Islands comprise around 60 tropical Caribbean islands, ranging in size from the largest, Tortola, down to tiny uninhabited islets. They are located in the Virgin Islands archipelago, a few miles east of the US Virgin Islands and east of the Puerto Rican mainland; Anguilla lies to the east-south-east. The North Atlantic Ocean lies to the east of the islands, and the Caribbean Sea lies to the west. Most of the islands are volcanic in origin and have a hilly, rugged terrain; the highest point is Mount Sage on Tortola, at 521 m. Anegada is geologically distinct from the rest of the group, being a flat island composed of limestone and coral. The British Virgin Islands contain the Leeward Islands moist forests and Leeward Islands xeric scrub terrestrial ecoregions. Forest cover is around 24% of the total land area, equivalent to 3,620 hectares in 2020, down from 3,710 hectares in 1990.
Climate
The British Virgin Islands have a tropical savanna climate, moderated by trade winds. Temperatures vary little throughout the year, and typical daily maxima and minima in the capital, Road Town, are only slightly higher in summer than in winter. Rainfall is higher in the hills and lower on the coast, and can be quite variable; the wettest months on average are September to November and the driest months on average are February and March.
Hurricanes
Hurricanes occasionally hit the islands, with the Atlantic hurricane season running from June to November.
Hurricane Irma
On 6 September 2017, Hurricane Irma struck the islands, causing extensive damage, especially on Tortola, and killing four people. The Caribbean Disaster Emergency Management Agency declared a state of emergency. Visiting Tortola on 13 September 2017, UK Foreign Secretary Boris Johnson said that he was reminded of photos of Hiroshima after it had been hit by the atom bomb.
By 8 September, the UK government had sent troops with medical supplies and other aid. More troops were expected to arrive a day or two later, but HMS Ocean, carrying more extensive assistance, was not expected to reach the islands for another two weeks.
Entrepreneur Richard Branson, a resident of Necker Island, called on the UK government to develop a massive disaster recovery plan to include "both through short-term aid and long-term infrastructure spending". Premier Orlando Smith also called for a comprehensive aid package to rebuild the territory. On 10 September UK Prime Minister Theresa May pledged £32 million to the Caribbean for a hurricane relief fund and promised that the UK government would match donations from the public to the British Red Cross appeal. Specifics were not provided to the news media as to the amount that would be allocated to the Virgin Islands. Boris Johnson's visit to Tortola on 13 September 2017 during his Caribbean tour was intended to confirm the UK's commitment to helping restore British islands but he provided no additional comments on the aid package. He did confirm that HMS Ocean had departed for the BVI carrying items like timber, buckets, bottled water, food, baby milk, bedding and clothing, as well as ten pick-up trucks, building materials and hardware.
The UK offered to underwrite rebuilding loans of up to US$400m as long as there was accountability as to how the money was spent. Successive NDP and VIP governments declined, even though a Recovery & Development Authority had been created, led by highly skilled infrastructure personnel, many of them ex-military with decades of rebuilding expertise from war zones and natural disaster sites. Many wealthy residents also proposed a large rebuilding plan, starting with key infrastructure such as the high school. Nearly five years later, there was no sign of any such rebuilding of the high school or certain other key infrastructure.
Politics
The territory operates as a parliamentary democracy. Ultimate executive authority in the British Virgin Islands is vested in the King, , and is exercised on his behalf by the Governor of the British Virgin Islands. The governor is appointed by the King on the advice of the British Government. Defence and most foreign affairs remain the responsibility of the United Kingdom.
The most recent constitution was adopted in 2007 (the Virgin Islands Constitution Order, 2007) and came into force when the Legislative Council was dissolved for the 2007 general election. The head of government under the constitution is the Premier (before the new constitution the office was referred to as Chief Minister), who is elected in a general election along with the other members of the ruling government as well as the members of the opposition. Elections are held roughly every four years. A cabinet is nominated by the Premier and appointed and chaired by the Governor. The Legislature consists of the King (represented by the Governor) and a unicameral House of Assembly made up of 13 elected members plus the Speaker and the Attorney General.
The current Governor is Daniel Pruce (since 29 January 2024). The current Premier is Natalio Wheatley (since 5 May 2022), who is leader of the Virgin Islands Party.
On 8 June 2022, subordinate UK legislation was made allowing for direct rule of the islands (the Virgin Islands Constitution (Interim Amendment) Order 2022, No. 627). However, the British Government decided on that date not to implement direct rule.
Subdivisions
The British Virgin Islands is a unitary territory. The territory is divided into nine electoral districts, and each voter is registered in one of those districts. Eight of the nine districts are partly or wholly on Tortola, and encompass nearby neighbouring islands. Only the ninth district (Virgin Gorda and Anegada) does not include any part of Tortola. At elections, in addition to voting their local representative, voters also cast votes for four candidates who are elected upon an at-large territory-wide basis.
Law and criminal justice
Crime in the British Virgin Islands is comparatively low by Caribbean standards. While statistics and hard data are relatively rare, and are not regularly published by governmental sources in the British Virgin Islands, the then-Premier did announce that in 2013 there was a 14% decline in recorded crime compared to 2012. Homicides are rare, with just one incident recorded in 2013.
The Virgin Islands Prison Service operates a single facility, His Majesty's Prison in East End, Tortola.
The British and US Virgin Islands sit at the axis of a major drugs transshipment point between Latin America and the continental United States. The American Drug Enforcement Administration regards the adjacent US territories of Puerto Rico and the US Virgin Islands as a "High Intensity Drug Trafficking Area".
Military
As a British Overseas territory, defence of the islands is the responsibility of the United Kingdom.
Economy
The twin pillars of the economy are financial services (60%) and tourism (roughly 40–45% of GDP).
Economically, however, financial services associated with the territory's status as an offshore financial centre are by far the more important of the two. 51.8% of the Government's revenue comes directly from licence fees for offshore companies, and considerable further sums are raised directly or indirectly from payroll taxes relating to salaries paid within the trust industry sector (which tend to be higher on average than those paid in the tourism sector). According to Transparency International, the British Virgin Islands is one of the top incorporation hubs for anonymous companies used to conceal assets and stolen funds.
The official currency of the British Virgin Islands has been the United States dollar (US$) since 1959, the currency also used by the United States Virgin Islands.
The British Virgin Islands enjoys one of the more prosperous economies of the Caribbean region, with a per capita average income of around $47,000 (2022 est., United Nations Statistics Division, accessed 18 June 2024).
Although it is common to hear criticism in the British Virgin Islands' press about income inequality, no serious attempt has been made by economists to calculate a Gini coefficient or similar measure of income equality for the territory. A report from 2000 suggested that, despite the popular perception, income inequality was actually lower in the British Virgin Islands than in any other OECS state, although in global terms income inequality is higher in the Caribbean than in many other regions.
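For context, a Gini coefficient summarises how evenly incomes are spread, ranging from 0 (perfect equality) to 1 (one person holds all income). The snippet below is a minimal illustrative sketch using made-up income figures; it does not use or imply any actual BVI data.

# Minimal Gini coefficient sketch on hypothetical incomes (illustrative only;
# no actual BVI income data are used or implied).
def gini(incomes: list[float]) -> float:
    """Mean absolute difference between all income pairs, normalised by twice the mean."""
    n = len(incomes)
    mean = sum(incomes) / n
    total_abs_diff = sum(abs(x - y) for x in incomes for y in incomes)
    return total_abs_diff / (2 * n * n * mean)

# A made-up five-person sample; the result, roughly 0.32, would indicate moderate inequality.
print(round(gini([20_000, 35_000, 47_000, 60_000, 120_000]), 2))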
Tourism
Tourism accounts for approximately 45% of national income. The islands are a popular destination for US citizens. Tourists frequent the numerous white sand beaches, visit The Baths on Virgin Gorda, snorkel the coral reefs near Anegada, or experience the well-known bars of Jost Van Dyke. The BVI are known as one of the world's greatest sailing destinations, and charter sailboats are a very popular way to visit less accessible islands. The BVI hosts the BVI Spring Regatta and Sailing Festival, established in 1972. A substantial number of the tourists who visit the BVI are cruise ship passengers; although they produce far lower revenue per head than charter boat and hotel-based tourists, they are nonetheless important to the substantial, and politically important, taxi-driving community.
Financial services
Financial services account for over half of the income of the territory. The majority of this revenue is generated by the licensing of offshore companies and related services. The British Virgin Islands is a significant global player in the offshore financial services industry. Since 2001, financial services in the British Virgin Islands have been regulated by the independent Financial Services Commission.
The BVI is relied upon for its sophisticated Commercial Court division of the Eastern Caribbean Supreme Court, as well as the more recent BVI Arbitration Centre. Caribbean KCs and British KCs preside over the majority of important cases and the laws of the Virgin Islands are based on English laws, meaning the jurisdiction provides clarity and consistency should parties require commercial disputes to be resolved. Owing to the international nature of BVI companies' operations and asset holdings, the BVI Commercial Court routinely hears highly sophisticated matters at the cutting edge of cross-border litigation and enforcement, where billions of dollars are at issue.
Citco, also known as the Citco Group of Companies and the Curaçao International Trust Co., is a privately owned global hedge fund administrator headquartered in the British Virgin Islands, founded in 1948 (Halah Touryalai, "Protection Racket", Forbes, 6 April 2011). It is the world's largest hedge fund administrator, with over $1 trillion in assets under administration.
In May 2022, the banking sector of the British Virgin Islands comprised only seven commercial banks and one restricted bank, 12 authorised custodians, two licensed money services businesses and one licensed financing service provider.
The British Virgin Islands is frequently referred to as a "tax haven" by campaigners and NGOs, including Oxfam. Successive governments in the British Virgin Islands have implemented tax-information exchange agreements and verified beneficial ownership information for companies following the 2013 G8 summit, putting their governance and regulatory regimes far ahead of many "onshore" jurisdictions.
On 10 September 2013, British Prime Minister David Cameron said: "I do not think it is fair any longer to refer to any of the Overseas Territories or Crown Dependencies as tax havens. They have taken action to make sure that they have fair and open tax systems. It is very important that our focus should now shift to those territories and countries that really are tax havens." Yet the journalist and author Nicholas Shaxson, who has written for The Economist, argues in his 2016 edition of Treasure Islands: Tax Havens and the Men Who Stole the World: "...Britain sits, spider-like, at the centre of a vast international web of tax havens, which hoover up trillions of dollars' worth of business and capital from around the globe and funnel it up to the City of London. The British Crown Dependencies and Overseas Territories – ...the British Virgin Islands... are some of the biggest players in the offshore world." (pp. vii–viii). Shaxson points out that, despite having fewer than 25,000 inhabitants, the BVI hosts over 800,000 companies.
In the April 2016 Panama Papers leak, while all of the wrongdoing by Mossack Fonseca personnel occurred in Panama and the US, the British Virgin Islands was by far the most commonly used jurisdiction by clients of Mossack Fonseca.
In 2022, the verified nature of beneficial ownership registers of the British Overseas Territories and Crown Dependencies were a crucial tool in giving effect to sanctions against Russia and Belarus, enabling the efficient identification and seizure of yachts, real estate and businesses.
Foreign Account Tax Compliance Act
On 30 June 2014, the British Virgin Islands was deemed to have an intergovernmental agreement (IGA) with the United States of America with respect to the US "Foreign Account Tax Compliance Act" (FATCA).
The Model 1 Agreement (14 pages) recognises that the Government of the United Kingdom of Great Britain and Northern Ireland provided a copy of the Letter of Entrustment, which had been sent to the Government of the British Virgin Islands, to the Government of the United States of America "via diplomatic note of 28 May 2014".
The Letter of Entrustment, dated 14 July 2010, was originally provided to the Government of the British Virgin Islands and authorised it "to negotiate and conclude Agreements relating to taxation that provide for exchange of information on tax matters to the OECD standard" (paragraph 2 of the FATCA Agreement). Via an "Entrustment Letter" dated 24 March 2014, the Government of the United Kingdom authorised the Government of the BVI to sign an agreement on information exchange to facilitate the implementation of the Foreign Account Tax Compliance Act. On 27 March 2017, the US Treasury website disclosed that the Model 1 agreement and the related agreement had been "In Force" since 13 July 2015.
Sanctions and Anti-Money Laundering Act
Under the UK Sanctions and Anti-Money Laundering Act 2018, beneficial ownership of companies in British Overseas Territories such as the British Virgin Islands must be publicly registered for disclosure by 31 December 2020. The Government of the British Virgin Islands has not formally challenged this law, but it has criticised it, arguing that it violates the constitutional sovereignty granted to the islands and would in practice be relatively ineffective against money laundering and terrorism financing, while raising serious privacy and human rights issues. It further argues that the requirement would put the British Virgin Islands at a severe disadvantage, because other international finance centres do not have such registers in place and, in the case of the US and the UK, there is very little near-term prospect of the same.
In late 2022, both the US and the EU appeared to endorse the British Overseas Territories' beneficial ownership register regimes. In a judgment dated 22 November 2022, the European Court of Justice (ECJ) decided that open public access to the beneficial ownership registers of EU member state companies was no longer valid, as it contravenes articles 7 and 8 of the Charter of Fundamental Rights of the European Union. The US appears to have come to a similar conclusion about balancing confidentiality and legitimate privacy with the anti-money laundering advantages of verified beneficial ownership registers, with the apparent goal of bringing the US in line with the current Cayman and BVI regimes. The UK's Crown Dependencies have already stated that they will not implement public registers without first receiving fresh legal advice on the matter, and it is thought that the Overseas Territories would take a similar position. The UK is yet to come out in support of the BOTs and CDs and their current gold-standard regulatory positions.
Agriculture and industry
Agriculture and industry account for only a small proportion of the islands' GDP. Agricultural produce includes fruit, vegetables, sugar cane, livestock and poultry, and industries include rum distillation, construction and boat building. Commercial fishing is also practised in the islands' waters.
Workforce
The British Virgin Islands is heavily dependent on migrant workers; over 50% of all workers on the islands are of foreign descent, and only 37% of the entire population were born in the territory. The national labour force is estimated at 12,770, of whom approximately 59.4% work in the service sector and less than 0.6% in agriculture, with the balance working in industry. The territory has met challenges in recruiting sufficient workers in recent years, having been affected by hurricanes Irma and Maria and having continued to lag behind other jurisdictions in providing a reliable permanent residence regime. This has had a knock-on effect, limiting schooling and amenities compared with international finance centres such as the Cayman Islands, the UAE, Singapore, and Hong Kong.
CARICOM status and the CARICOM Single Market Economy
As of 2 July 1991, the British Virgin Islands holds associate member status in CARICOM (the Caribbean Community), whose single market initiative is the CARICOM Single Market and Economy (CSME).
In recognition of the CARICOM (Free Movement) Skilled Persons Act, which came into effect in July 1997 in some CARICOM countries such as Jamaica and has since been adopted in others such as Trinidad and Tobago, CARICOM nationals who hold a Certificate of Recognition of Caribbean Community Skilled Person may be allowed to work in the BVI under normal working conditions.
Transport
The territory is served by a network of roads. The main airport, Terrance B. Lettsome International Airport, also known as Beef Island Airport, is located on Beef Island, which lies off the eastern tip of Tortola and is accessible by the Queen Elizabeth II Bridge. Cape Air and Air Sunshine are among the airlines offering scheduled service. Virgin Gorda and Anegada have their own smaller airports. Private air charter services operated by Island Birds Air Charter fly directly to all three islands from any major airport in the Caribbean. Helicopters are used to reach islands with no runway facilities; Antilles Helicopter Services is the only helicopter service based in the territory.
The main harbour is in Road Town. There are also ferries that operate within the British Virgin Islands and to the neighbouring United States Virgin Islands. Cars in the British Virgin Islands drive on the left just as they do in the United Kingdom and the United States Virgin Islands. However, most cars are left hand drive, because they are imported from the United States. The roads are often quite steep, narrow and winding, and ruts, mudslides and rockfall can be a problem when it rains.
Demographics
As of the 2010 Census, the population of the territory was 28,054. Estimates put the population at 35,800 in July 2018, though by 2022 it was thought to be well under 30,000 in the wake of Hurricane Irma and the departure of residents during COVID-19 lockdowns, when unemployment rose in the tourism industry. The majority of the population (76.9%) are Afro-Caribbean, descended from slaves brought to the islands by the British. Other large ethnic groups include Latinos (5.6%), those of European ancestry (5.4%), people of mixed ancestry (5.4%) and Indians (2.1%).
The 2010 Census reports:
76.9% African
5.6% Hispanic
5.4% European/Caucasian
5.4% Mixed
2.1% East Indian
4.6% Others*
The 2010 Census reports the main places of origin of residents as follows:
39.1% local born (though many locals go to St. Thomas or the United States for maternity services)
7.2% Guyana
7.0% St. Vincent and the Grenadines
6.0% Jamaica
5.5% United States
5.4% Dominican Republic
5.3% United States Virgin Islands
The islands are heavily dependent upon migrant labour; in 2004, migrant workers accounted for 50% of the total population. 32% of workers employed in the British Virgin Islands work for the government. The first overseas Filipino workers arrived in the British Virgin Islands in the late 2000s, and by 2020 the territory's Filipino population was about 800.
Unusually, the territory has one of the highest drowning mortality rates in the world, higher than in other high-risk countries such as China and India. 20% of deaths in the British Virgin Islands during 2012 were recorded as drownings (The BVI Beacon, 15 August 2013, "Report: Passports up, marriages down last year", citing the Annual Report of the Civil Registry and Passport Office for 2012, which states: "For the 20 per cent that represented drowning, all were tourists who died from snorkelling or diving in the VI waters in and around caves at Norman Island, as well as near Virgin Gorda... The Virgin Islands should, therefore put safety measures in place such as the dissemination of information to hotels, dive shops and marinas."). The same report confirms that the deaths of 86 persons were recorded in the territory during 2012, with all of the drowning victims being tourists. Despite this, the territory's most popular beach still has no lifeguard presence.
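As a quick arithmetic check on the figures above (both numbers are taken directly from the cited report; only the product is inferred here):

0.20 × 86 ≈ 17 recorded drowning deaths in 2012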
Religion
Over 90% of the population who indicated a religious affiliation at the 2010 Census were Christian (The BVI Beacon, "Portrait of a population: 2010 Census published", p. 6, 20 November 2014), with the largest individual Christian denominations being Methodist (17.6%), Church of God (10.4%), Anglican (9.5%), Seventh-day Adventist (9.0%) and Roman Catholic (8.9%). The largest non-Christian faiths in 2010 were Hinduism (1.9%) and Islam (0.9%), although according to the World Religion Database (2005) Hindus and Muslims each constitute approximately 1.2% of the population.
The Constitution of the British Virgin Islands commences with a professed national belief in God. (The second paragraph of the recitals, appearing between Article 1 and Article 2, contains the words: "[T]he society of the Virgin Islands is based upon certain moral, spiritual and democratic values including a belief in God".)
Religion by % of population (national censuses: 2010, 2001, 1991)
Methodist: 17.6, 22.7, 32.9
Church of God: 10.4, 11.4, 9.2
Anglican: 9.5, 11.6, 16.7
Seventh-day Adventist: 9.0, 8.4, 6.3
Roman Catholic: 8.9, 9.5, 10.5
Pentecostal: 8.2, 9.1, 4.1
None: 7.9, 6.4, 3.6
Baptist: 7.4, 8.2, 4.7
Other: 4.1, 3.4, 4.4
Jehovah's Witnesses: 2.5, 2.2, 2.1
Not stated: 2.4, 2.7, 1.1
Hindu: 1.9, 2.0, 2.2
Muslim: 0.9, 0.9, 0.6
Evangelical: 0.7, 0.5, –
Rastafarian: 0.6, 0.4, 0.2
Moravian: 0.3, 0.5, 0.6
Presbyterian: 0.2, 0.4, 0.7
Buddhist: 0.2, –, –
Jewish: 0.04, –, –
Bahai: 0.04, 0.03, 0.00
Brethren: –, 0.03, 0.04
Salvation Army: –, 0.03, 0.04
Education
The British Virgin Islands operates several government schools as well as private schools, together with a community college, H. Lavity Stoutt Community College, located on the eastern end of Tortola and named after Lavity Stoutt, the first Chief Minister of the British Virgin Islands (British Virgin Islands Schools, BVI Government website). There remains segregation in the school system: while BVIslander and Belonger children make up a significant proportion of pupils in private schools, Non-Belongers are prohibited from attending government schools. It is extremely common for students from the British Virgin Islands to travel overseas for secondary and tertiary education, either to the University of the West Indies or to colleges and universities in the United Kingdom, United States or Canada. Coaching in certain sports, such as athletics, squash and football, is of a high level.
The literacy rate in the British Virgin Islands is high at 98%.
There is a University of the West Indies Open campus in the territory and a Marine Science educational facility.
Culture
Language
The primary language is English, although there is a local dialect. Spanish is spoken by Puerto Rican, Dominican and other Hispanic immigrants.
Music
The traditional music of the British Virgin Islands is called fungi after the local cornmeal dish of the same name, often made with okra. The special sound of fungi is due to a unique local fusion between African and European music. The fungi bands, also called "scratch bands", use instruments ranging from calabash, washboard, bongos and ukulele to more traditional western instruments like keyboard, banjo, guitar, bass, triangle and saxophone. Apart from being a form of festive dance music, fungi often contains humorous social commentary, as well as BVI oral history (Penn, Dexter J.A., "Music of the British Virgin Islands: Fungi", retrieved 13 January 2008).
Sport
Because of its location and climate, the British Virgin Islands has long been a haven for sailing enthusiasts. Sailing is regarded as one of the foremost sports in all of the BVI. Calm waters and steady breezes provide some of the best sailing conditions in the Caribbean.
Many sailing events are held in the waters of this country, the largest of which is a week-long series of races called the Spring Regatta, the premier sailing event of the Caribbean, with several races hosted each day. Boats include everything from full-size mono-hull yachts to dinghies. Captains and their crews come from all around the world to attend these races. The Spring Regatta is part race, part party, part festival. The Spring Regatta is normally held during the first week of April.
Since 2009, the BVI have made a name for themselves as a host of international basketball events. The BVI hosted three of the last four events of the Caribbean Basketball Championship (FIBA CBC Championship).
See also
List of British Virgin Islanders
Outline of the British Virgin Islands
Notes
References
External links
Directories
British Virgin Islands from UCB Libraries GovPubs
Official websites and overviews
Government of the British Virgin Islands official website
British Virgin Islands – London Office
Old Government House Museum, British Virgin Islands
British Virgin Islands Tourist Board
National Parks Trust of the British Virgin Islands—Official site
British Virgin Islands Financial Services Commission—Official site
The British Virgin Islands Ports Authority—Official site
British Virgin Islands. The World Factbook. Central Intelligence Agency.
Category:Dependent territories in the Caribbean
Category:Virgin Islands
Category:British Leeward Islands
Category:British West Indies
Category:Countries and territories where English is an official language
Category:Former Dutch colonies
Category:Member states of the Organisation of Eastern Caribbean States
Category:Small Island Developing States
Category:States and territories established in 1672
Category:1672 establishments in the British Empire
Category:1672 establishments in North America
Category:1670s establishments in the Caribbean
geography | 5,333
3793 | Battle of Bosworth Field | https://en.wikipedia.org/wiki/Battle_of_Bosworth_Field
The Battle of Bosworth or Bosworth Field was the last significant battle of the Wars of the Roses, the civil war between the houses of Lancaster and York that extended across England and Wales in the latter half of the 15th century. Fought on 22 August 1485, the battle was won by an alliance of Lancastrians and disaffected Yorkists. Their leader, Henry Tudor, Earl of Richmond, became the first monarch of the Welsh-descended Tudor dynasty through his victory and subsequent marriage to the de facto Yorkist heiress, Elizabeth of York. His opponent Richard III, the last king of the House of York, was killed during the battle, the last English monarch to fall in battle. Historians consider Bosworth Field to mark the end of the Plantagenet dynasty, making it one of the defining moments of English history.
Richard's reign began in 1483 when he ascended the throne after his twelve-year-old nephew, Edward V, was declared illegitimate, likely at Richard's instigation. The boy and his younger brother Richard soon disappeared, and their fate remains a mystery. Across the English Channel Henry Tudor, a descendant of the greatly diminished House of Lancaster, seized on Richard's difficulties and laid claim to the throne. Henry's first attempt to invade England in 1483 foundered in a storm, but his second arrived unopposed on 7 August 1485 on the south-west coast of Wales. Marching inland, Henry gathered support as he made for London. Richard hurriedly mustered his troops and intercepted Henry's army near Ambion Hill, south of the town of Market Bosworth in Leicestershire. Lord Stanley and Sir William Stanley also brought a force to the battlefield but held back while they decided which side it would be most advantageous to support, initially lending only four knights to Henry's cause: Sir Robert Tunstall, Sir John Savage (nephew of Lord Stanley), Sir Hugh Persall and Sir Humphrey Stanley (The Ballad of Bosworth Fielde, in Bishop Percy's Folio Manuscript: Ballads and Romances, ed. J.W. Hales and F.J. Furnivall, 3 vols., London, 1868, III, pp. 233–259). Sir John Savage was placed in command of the left flank of Henry's army.
Richard divided his army, which outnumbered Henry's, into three groups (or "battles"). One was assigned to the Duke of Norfolk and another to the Earl of Northumberland. Henry kept most of his force together and placed it under the command of the experienced Earl of Oxford. Richard's vanguard, commanded by Norfolk, attacked but struggled against Oxford's men, and some of Norfolk's troops fled the field. Northumberland took no action when signalled to assist his king, so Richard gambled everything on a charge across the battlefield to kill Henry and end the fight. Seeing the king's knights separated from his army, the Stanleys intervened; Sir William led his men to Henry's aid, surrounding and killing Richard. After the battle, Henry was crowned king.
Henry hired chroniclers to portray his reign favourably; the Battle of Bosworth Field was popularised to represent his Tudor dynasty as the start of a new age, marking the end of the Middle Ages for England. From the 15th to the 18th centuries the battle was glamourised as a victory of good over evil, and features as the climax of William Shakespeare's play Richard III. The exact site of the battle is disputed because of the lack of conclusive data, and memorials have been erected at different locations. The Bosworth Battlefield Heritage Centre was built in 1974, on a site that has since been challenged by several scholars and historians. In October 2009, a team of researchers who had performed geological surveys and archaeological digs in the area since 2003 suggested a location south-west of Ambion Hill.
Background
During the 15th century, civil war raged across England as the Houses of York and Lancaster fought each other for the English throne. In 1471 the Yorkists defeated their rivals in the battles of Barnet and Tewkesbury. The Lancastrian King Henry VI and his only son, Edward of Westminster, died in the aftermath of the Battle of Tewkesbury. Their deaths left the House of Lancaster with no direct claimants to the throne. The Yorkist king, Edward IV, was in complete control of England. He attainted those who refused to submit to his rule, such as Jasper Tudor and his nephew Henry, naming them traitors and confiscating their lands. The Tudors tried to flee to France but strong winds forced them to land in Brittany, which was a semi-independent duchy, where they were taken into the custody of Duke Francis II. Henry's mother, Lady Margaret Beaufort, was a great-granddaughter of John of Gaunt, uncle of King Richard II and father of King Henry IV. The Beauforts were originally bastards, but Richard II legitimised them through an Act of Parliament, a decision quickly modified by a royal decree of Henry IV ordering that their descendants were not eligible to inherit the throne. Henry Tudor, the only remaining Lancastrian noble with a trace of the royal bloodline, had a weak claim to the throne, and Edward regarded him as "a nobody". The Duke of Brittany, however, viewed Henry as a valuable tool to bargain for England's aid in conflicts with France, and kept the Tudors under his protection.
Edward IV died 12 years after Tewkesbury in April 1483. His 12-year-old elder son succeeded him as King Edward V; the younger son, nine-year-old Richard of Shrewsbury, Duke of York, was next in line to the throne. Edward V was too young to rule and a Royal Council was established to rule the country until the king's coming of age. Some among the council were worried when it became apparent that the relatives of Edward V's mother, Elizabeth Woodville, were plotting to use their control of the young king to dominate the council. Having offended many in their quest for wealth and power, the Woodville family was not popular. To frustrate the Woodvilles' ambitions, Lord Hastings and other members of the council turned to the new king's uncle—Richard, Duke of Gloucester, brother of Edward IV. The courtiers urged Gloucester to assume the role of Protector quickly, as had been previously requested by his now dead brother. On 29 April Gloucester, accompanied by a contingent of guards and Henry Stafford, 2nd Duke of Buckingham, took Edward V into custody and arrested several prominent members of the Woodville family. After bringing the young king to London, Gloucester had the Queen's brother Anthony Woodville, 2nd Earl Rivers, and her son by her first marriage Richard Grey executed, without trial, on charges of treason.
On 13 June, Gloucester accused Hastings of plotting with the Woodvilles and had him beheaded. Nine days later the Three Estates of the Realm, an informal Parliament, declared the marriage between Edward IV and Elizabeth illegal, rendering their children illegitimate and disqualifying them from the throne. With his brother's children out of the way, Gloucester was next in the line of succession and was proclaimed King Richard III on 26 June. The timing and extrajudicial nature of the deeds done to obtain the throne for Richard won him no popularity, and rumours that spoke ill of the new king spread throughout England. After they were declared bastards, the two princes were confined in the Tower of London and never seen in public again.
In October 1483, a conspiracy emerged to displace Richard from the throne. The rebels were mostly loyalists to Edward IV who saw Richard as a usurper. Their plans were coordinated by a Lancastrian, Henry's mother Lady Margaret, who was promoting her son as a candidate for the throne. The highest-ranking conspirator was Buckingham. No chronicles tell of the duke's motive in joining the plot, although historian Charles Ross proposes that Buckingham was trying to distance himself from a king who was becoming increasingly unpopular with the people. Michael Jones and Malcolm Underwood suggest that Margaret deceived Buckingham into thinking the rebels supported him to be king.
The plan was to stage uprisings within a short time in southern and western England, overwhelming Richard's forces. Buckingham would support the rebels by invading from Wales, while Henry came in by sea. Bad timing and weather wrecked the plot. An uprising in Kent started 10 days prematurely, alerting Richard, who mustered the royal army and took steps to put down the insurrections. Richard's spies informed him of Buckingham's activities, and the king's men captured and destroyed the bridges across the River Severn. When Buckingham and his army reached the river, they found it swollen and impossible to cross because of a violent storm that broke on 15 October. Buckingham was trapped and had no safe place to retreat; his Welsh enemies seized his home castle after he had set forth with his army. The duke abandoned his plans and fled to Wem, where he was betrayed by his servant and arrested by Richard's men (p. 450). On 2 November he was executed. Henry had attempted a landing on 10 October (or 19 October), but his fleet was scattered by a storm. He reached the coast of England (at either Plymouth or Poole), where a group of soldiers hailed him to come ashore. They were, in fact, Richard's men, prepared to capture Henry once he set foot on English soil. Henry was not deceived and returned to Brittany, abandoning the invasion. Without Buckingham or Henry, the rebellion was easily crushed by Richard.
The survivors of the failed uprisings fled to Brittany, where they openly supported Henry's claim to the throne. At Christmas, Henry Tudor swore an oath in Rennes Cathedral to marry Edward IV's daughter, Elizabeth of York, to unite the warring houses of York and Lancaster. Henry's rising prominence made him a great threat to Richard, and the Yorkist king made several overtures to the Duke of Brittany to surrender the young Lancastrian. Francis refused, holding out for the possibility of better terms from Richard. In mid-1484 Francis was incapacitated by illness and, while he was recuperating, his treasurer Pierre Landais took over the reins of government. Landais reached an agreement with Richard to send back Henry and his uncle in exchange for military and financial aid. John Morton, a bishop then in Flanders, learned of the scheme and warned the Tudors, who fled to France. The French court allowed them to stay; the Tudors were useful pawns to ensure that Richard's England did not interfere with French plans to annex Brittany. On 16 March 1485 Richard's queen, Anne Neville, died, and rumours spread across the country that she had been murdered to pave the way for Richard to marry his niece, Elizabeth. Later findings, though, showed that Richard had entered into negotiations to marry Joanna of Portugal and to marry off Elizabeth to Manuel, Duke of Beja. The gossip must have upset Henry across the English Channel. The loss of Elizabeth's hand in marriage could unravel the alliance between Henry's supporters who were Lancastrians and those who were loyalists to Edward IV. Anxious to secure Elizabeth as his bride, Henry recruited mercenaries formerly in French service to supplement his following of exiles and set sail from France on 1 August.
Factions
By the 15th century, English chivalric ideas of selfless service to the king had been corrupted. Armed forces were raised mostly through musters in individual estates; every able-bodied man had to respond to his lord's call to arms, and each noble had authority over his militia. Although a king could raise personal militia from his lands, he could muster a large army only through the support of his nobles. Richard, like his predecessors, had to win over these men by granting gifts and maintaining cordial relationships. Powerful nobles could demand greater incentives to remain on the liege's side or else they might turn against him. Three groups, each with its own agenda, stood on Bosworth Field: Richard III and his Yorkist army; his challenger, Henry Tudor, who championed the Lancastrian cause; and the fence-sitting Stanleys.
Yorkist
Small and slender, Richard III did not have the robust physique associated with many of his Plantagenet predecessors. However, he enjoyed very rough sports and activities that were considered manly. His performances on the battlefield impressed his brother greatly, and he became Edward's right-hand man. During the 1480s Richard defended the northern borders of England. In 1482, Edward charged him to lead an army into Scotland with the aim of replacing King James III with the Duke of Albany. Richard's army broke through the Scottish defences and occupied the capital, Edinburgh, but Albany decided to give up his claim to the throne in return for the post of Lieutenant General of Scotland. As well as obtaining a guarantee that the Scottish government would concede territories and diplomatic benefits to the English crown, Richard's campaign retook the town of Berwick-upon-Tweed, which the Scots had conquered in 1460. Edward was not satisfied by these gains, which, according to Ross, could have been greater if Richard had been resolute enough to capitalise on the situation while in control of Edinburgh. In her analysis of Richard's character, Christine Carpenter sees him as a soldier who was more used to taking orders than giving them. However, he was not averse to displaying his militaristic streak; on ascending the throne he made known his desire to lead a crusade against "not only the Turks, but all [his] foes".
Richard's most loyal subject was John Howard, 1st Duke of Norfolk. The duke had served Richard's brother for many years and had been one of Edward IV's closer confidants. He was a military veteran, having fought in the Battle of Towton in 1461 and served as Hastings' deputy at Calais in 1471. Ross speculates that he bore a grudge against Edward for depriving him of a fortune. Norfolk was due to inherit a share of the wealthy Mowbray estate on the death of eight-year-old Anne de Mowbray, the last of her family. However, Edward convinced Parliament to circumvent the law of inheritance and transfer the estate to his younger son, who was married to Anne. Consequently, Howard supported Richard III in deposing Edward's sons, for which he received the dukedom of Norfolk and his original share of the Mowbray estate.
Henry Percy, 4th Earl of Northumberland, also supported Richard's ascension to the throne of England. The Percys were loyal Lancastrians, but Edward IV eventually won the earl's allegiance. Northumberland had been captured and imprisoned by the Yorkists in 1461, losing his titles and estates; however, Edward released him eight years later and restored his earldom. From that time Northumberland served the Yorkist crown, helping to defend northern England and maintain its peace. Initially the earl had issues with Richard III as Edward groomed his brother to be the leading power of the north. Northumberland was mollified when he was promised he would be the Warden of the East March, a position that was formerly hereditary for the Percys. He served under Richard during the 1482 invasion of Scotland, and the allure of being in a position to dominate the north of England if Richard went south to assume the crown was his likely motivation for supporting Richard's bid for kingship. However, after becoming king, Richard began moulding his nephew, John de la Pole, 1st Earl of Lincoln, to manage the north, passing over Northumberland for the position. According to Carpenter, although the earl was amply compensated, he despaired of any possibility of advancement under Richard.
Lancastrians
Henry Tudor was unfamiliar with the arts of war and was a stranger to the land he was trying to conquer. He spent the first fourteen years of his life in Wales and the next fourteen in Brittany and France. Slender but strong and decisive, Henry lacked a penchant for battle and was not much of a warrior; chroniclers such as Polydore Vergil and ambassadors like Pedro de Ayala found him more interested in commerce and finance. Having not fought in any battles, Henry recruited several experienced veterans to command his armies.
John de Vere, 13th Earl of Oxford, was Henry's principal military commander. He was adept in the arts of war. At the Battle of Barnet, he commanded the Lancastrian right wing and routed the division opposing him. However, as a result of confusion over identities, Oxford's group came under friendly fire from the Lancastrian main force and retreated from the field. The earl fled abroad and continued his fight against the Yorkists, raiding shipping and eventually capturing the island fort of St Michael's Mount in 1473. He surrendered after receiving no aid or reinforcement, but in 1484 escaped from prison and joined Henry's court in France, bringing along his erstwhile gaoler Sir James Blount. Oxford's presence raised morale in Henry's camp and troubled Richard III.
Stanleys
In the early stages of the Wars of the Roses, the Stanleys of Cheshire had been predominantly Lancastrians. Sir William Stanley, however, was a staunch Yorkist supporter, fighting in the Battle of Blore Heath in 1459 and helping Hastings to put down uprisings against Edward IV in 1471. When Richard took the crown, Sir William showed no inclination to turn against the new king, refraining from joining Buckingham's rebellion, for which he was amply rewarded. Sir William's elder brother, Thomas Stanley, 2nd Baron Stanley, was not as steadfast. By 1485, he had served three kings, namely Henry VI, Edward IV and Richard III. Lord Stanley's skilled political manoeuvrings—vacillating between opposing sides until it was clear who would be the winner—gained him high positions; he was Henry's chamberlain and Edward's steward. His non-committal stance, until the crucial point of a battle, earned him the loyalty of his men, who felt he would not needlessly send them to their deaths.
Lord Stanley's relations with the king's brother, the eventual Richard III, were not cordial. The two had conflicts that erupted into violence around March 1470. Furthermore, having taken Lady Margaret as his second wife in June 1472, Stanley was Henry Tudor's stepfather, a relationship which did nothing to win him Richard's favour. Despite these differences, Stanley did not join Buckingham's revolt in 1483. When Richard executed those conspirators who had been unable to flee England, he spared Lady Margaret. However, he declared her titles forfeit and transferred her estates to Stanley's name, to be held in trust for the Yorkist crown. Richard's act of mercy was calculated to reconcile him with Stanley, but it may have been to no avail—Carpenter has identified a further cause of friction in Richard's intention to reopen an old land dispute that involved Thomas Stanley and the Harrington family. Edward IV had ruled the case in favour of Stanley in 1473, but Richard planned to overturn his brother's ruling and give the wealthy estate to the Harringtons. Immediately before the Battle of Bosworth, being wary of Stanley, Richard took his son, Lord Strange, as hostage to discourage him from joining Henry.
Crossing the English Channel and through Wales
Henry's initial force consisted of the English and Welsh exiles who had gathered around Henry, combined with a contingent of mercenaries put at his disposal by Charles VIII of France. The history of Scottish author John Major (published in 1521) claims that Charles had granted Henry 5,000 men, of whom 1,000 were Scots, headed by Sir Alexander Bruce. No mention of Scottish soldiers was made by subsequent English historians.
Henry's crossing of the English Channel in 1485 was without incident. Thirty ships sailed from Harfleur on 1 August and, with fair winds behind them, landed in his native Wales, at Mill Bay (near Dale) on the north side of Milford Haven on 7 August, easily capturing nearby Dale Castle. Henry received a muted response from the local population. No joyous welcome awaited him on shore, and at first few individual Welshmen joined his army as it marched inland. Historian Geoffrey Elton suggests only Henry's ardent supporters felt pride over his Welsh blood. His arrival had been hailed by contemporary Welsh bards such as Dafydd Ddu and Gruffydd ap Dafydd as the true prince and "the youth of Brittany defeating the Saxons" in order to bring their country back to glory. When Henry moved to Haverfordwest, the county town of Pembrokeshire, Richard's lieutenant in South Wales, Sir Walter Herbert, failed to move against Henry, and two of his officers, Richard Griffith and Evan Morgan, deserted to Henry with their men.
The most important defector to Henry in this early stage of the campaign was probably Rhys ap Thomas, who was the leading figure in West Wales. Richard had appointed Rhys Lieutenant in West Wales for his refusal to join Buckingham's rebellion, asking that he surrender his son Gruffydd ap Rhys ap Thomas as surety, although by some accounts Rhys had managed to evade this condition. However, Henry successfully courted Rhys, offering the lieutenancy of all Wales in exchange for his fealty. Henry marched via Aberystwyth while Rhys followed a more southerly route, recruiting a force of Welshmen en route, variously estimated at 500 or 2,000 men, to swell Henry's army when they reunited at Cefn Digoll, Welshpool. By 15 or 16 August, Henry and his men had crossed the English border, making for the town of Shrewsbury.
Shrewsbury: the gateway to England
Since 22 June Richard had been aware of Henry's impending invasion, and had ordered his lords to maintain a high level of readiness. News of Henry's landing reached Richard on 11 August, but it took three to four days for his messengers to notify his lords of their king's mobilisation. On 16 August, the Yorkist army started to gather; Norfolk set off for Leicester, the assembly point, that night. The city of York, a historical stronghold of Richard's family, asked the king for instructions, and receiving a reply three days later sent 80 men to join the king. Simultaneously Northumberland, whose northern territory was the most distant from the capital, had gathered his men and ridden to Leicester.
Although London was his goal, Henry did not move directly towards the city. After resting in Shrewsbury, his forces went eastwards and picked up Sir Gilbert Talbot and other English allies, including deserters from Richard's forces. Although its size had increased substantially since the landing, Henry's army was still considerably outnumbered by Richard's forces. Henry's pace through Staffordshire was slow, delaying the confrontation with Richard so that he could gather more recruits to his cause. Henry had been communicating on friendly terms with the Stanleys for some time before setting foot in England, and the Stanleys had mobilised their forces on hearing of Henry's landing. They ranged themselves ahead of Henry's march through the English countryside, meeting twice in secret with Henry as he moved through Staffordshire. At the second of these, at Atherstone in Warwickshire, they conferred "in what sort to arraign battle with King Richard, whom they heard to be not far off". On 21 August, the Stanleys were making camp on the slopes of a hill north of Dadlington, while Henry encamped his army at White Moors to the north-west of their camp.
On 20 August, Richard rode from Nottingham to Leicester, joining Norfolk. He spent the night at the Blue Boar inn (demolished 1836). Northumberland arrived the following day. The royal army proceeded westwards to intercept Henry's march on London. Passing Sutton Cheney, Richard moved his army towards Ambion Hill—which he thought would be of tactical value—and made camp on it. Richard's sleep was not peaceful and, according to the Croyland Chronicle, in the morning his face was "more livid and ghastly than usual".
Engagement
The Yorkist army, variously estimated at between 7,500 and 12,000 men, deployed on the hilltop along the ridgeline from west to east. Norfolk's force (or "battle" in the parlance of the time) of spearmen stood on the right flank, protecting the cannon and about 1,200 archers. Richard's group, comprising 3,000 infantry, formed the centre. Northumberland's men guarded the left flank; he had approximately 4,000 men, many of them mounted. Standing on the hilltop, Richard had a wide, unobstructed view of the area. He could see the Stanleys and their 4,000–6,000 men holding positions on and around Dadlington Hill, while to the south-west was Henry's army.
Henry's force has been variously estimated at between 5,000 and 8,000 men, his original landing force of exiles and mercenaries having been augmented by the recruits gathered in Wales and the English border counties (in the latter area probably mustered chiefly by the Talbot interest), and by deserters from Richard's army. Historian John Mackie believes that 1,800 French mercenaries, led by Philibert de Chandée, formed the core of Henry's army. John Mair, writing thirty-five years after the battle, claimed that this force contained a significant Scottish component, and this claim is accepted by some modern writers, but Mackie argues that the French would not have released their elite Scottish knights and archers, and concludes that there were probably few Scottish troops in the army, although he accepts the presence of captains like Bernard Stewart, Lord of Aubigny.
In their interpretations of the vague mentions of the battle in the old text, historians placed areas near the foot of Ambion Hill as likely regions where the two armies clashed, and thought up possible scenarios of the engagement. In their recreations of the battle, Henry started by moving his army towards Ambion Hill where Richard and his men stood. As Henry's army advanced past the marsh at the south-western foot of the hill, Richard sent a message to Stanley, threatening to execute his son, Lord Strange, if Stanley did not join the attack on Henry immediately. Stanley replied that he had other sons. Incensed, Richard gave the order to behead Strange but his officers temporised, saying that battle was imminent, and it would be more convenient to carry out the execution afterwards. Henry had also sent messengers to Stanley asking him to declare his allegiance. The reply was evasive—the Stanleys would "naturally" come, after Henry had given orders to his army and arranged them for battle. Henry had no choice but to confront Richard's forces alone.
Well aware of his own military inexperience, Henry handed command of his army to Oxford and retired to the rear with his bodyguards. Oxford, seeing the vast line of Richard's army strung along the ridgeline, decided to keep his men together instead of splitting them into the traditional three battles: vanguard, centre, and rearguard. He ordered the troops to stray no further than from their banners, fearing that they would become enveloped. Individual groups clumped together, forming a single large mass flanked by horsemen on the wings.
The Lancastrians were harassed by Richard's cannon as they manoeuvred around the marsh, seeking firmer ground. Once Oxford and his men were clear of the marsh, Norfolk's battle and several contingents of Richard's group, under the command of Sir Robert Brackenbury, started to advance. Hails of arrows showered both sides as they closed. Oxford's men proved the steadier in the ensuing hand-to-hand combat; they held their ground and several of Norfolk's men fled the field. Norfolk lost one of his senior officers, Walter Devereux, in this early clash.
Recognising that his force was at a disadvantage, Richard signalled for Northumberland to assist but Northumberland's group showed no signs of movement. Historians, such as Horrox and Pugh, believe Northumberland chose not to aid his king for personal reasons.; Pugh (1992). p. 49. Ross doubts the aspersions cast on Northumberland's loyalty, suggesting instead that Ambion Hill's narrow ridge hindered him from joining the battle. The earl would have had to either go through his allies or execute a wide flanking move—near impossible to perform given the standard of drill at the time—to engage Oxford's men.
At this juncture Richard saw Henry at some distance behind his main force. Seeing this, Richard decided to end the fight quickly by killing the enemy commander. He led a charge of mounted men around the melee and tore into Henry's group; several accounts state that Richard's force numbered 800–1000 knights, but Ross says it was more likely that Richard was accompanied only by his household men and closest friends. Richard killed Henry's standard-bearer Sir William Brandon in the initial charge and unhorsed burly John Cheyne, Edward IV's former standard-bearer, with a blow to the head from his broken lance. French mercenaries in Henry's retinue related how the attack had caught them off guard and that Henry sought protection by dismounting and concealing himself among them to present less of a target. Henry made no attempt to engage in combat himself.
Oxford had left a small reserve of pike-equipped men with Henry. They slowed the pace of Richard's mounted charge, and bought Tudor some critical time. The remainder of Henry's bodyguards surrounded their master, and succeeded in keeping him away from the Yorkist king. Meanwhile, seeing Richard embroiled with Henry's men and separated from his main force, William Stanley made his move and rode to the aid of Henry. Now outnumbered, Richard's group was surrounded and gradually pressed back. Richard's force was driven several hundred yards away from Tudor, near to the edge of a marsh, into which the king's horse toppled. Richard, now unhorsed, gathered himself and rallied his dwindling followers, supposedly refusing to retreat: "God forbid that I retreat one step. I will either win the battle as a king, or die as one." In the fighting Richard's banner man—Sir Percival Thirlwall—lost his legs, but held the Yorkist banner aloft until he was killed. It is likely that James Harrington also died in the charge. The king's trusted advisor Richard Ratcliffe was also slain.
Polydore Vergil, Henry Tudor's official historian, recorded that "King Richard, alone, was killed fighting manfully in the thickest press of his enemies".Kendall, p. 368. Richard had come within a sword's length of Henry Tudor before being surrounded by William Stanley's men and killed. The Burgundian chronicler Jean Molinet says that a Welshman struck the death-blow with a halberd while Richard's horse was stuck in the marshy ground.Ralph Griffith (1993). Sir Rhys ap Thomas and his family: a study in the Wars of the Roses and early Tudor politics, University of Wales Press, p. 43, . It was said that the blows were so violent that the king's helmet was driven into his skull.Thomas Penn (2011). Winter King: Henry VII and The Dawn of Tudor England, Simon & Schuster, p. 9, The contemporary Welsh poet Guto'r Glyn implies the leading Welsh Lancastrian Rhys ap Thomas, or one of his men, killed the king, writing that he ("Killed the boar, shaved his head").E. A. Rees (2008). A Life of Guto'r Glyn, Y Lolfa, p. 211, . The original Welsh is "Lladd y baedd, eilliodd ei ben". The usual meaning of eilliodd is "shaved", which might mean "chopped off" or "sliced". Analysis of King Richard's skeletal remains found 11 wounds, nine of them to the head; a blade consistent with a halberd had sliced off part of the rear of Richard's skull, suggesting he had lost his helmet.
Richard's forces disintegrated as news of his death spread. Northumberland and his men fled north on seeing the king's fate, and Norfolk was killed by the knight Sir John Savage in single combat according to the Ballad of Lady Bessy.Brereton, H. The most pleasant song of Lady Bessy: the eldest daughter of King Edward the Fourth, and how she married King Henry the Seventh of the House of Lancaster p.46 (Text taken from the Ballad of Lady Bessy a contemporary primary source)
After the battle
Although he claimed fourth-generation maternal Lancastrian descendancy, Henry seized the crown by right of conquest. After the battle, Richard's circlet is said to have been found and brought to Henry, who was proclaimed king at the top of Crown Hill, near the village of Stoke Golding. According to Vergil, Henry's official historian, Lord Stanley found the circlet. Historians Stanley Chrimes and Sydney Anglo dismiss the legend of the circlet's finding in a hawthorn bush; none of the contemporary sources reported such an event. Ross, however, does not ignore the legend. He argues that the hawthorn bush would not be part of Henry's coat of arms if it did not have a strong relationship to his ascendance. Baldwin points out that a hawthorn bush motif was already used by the House of Lancaster, and Henry merely added the crown.
In Vergil's chronicle, 100 of Henry's men, compared to 1,000 of Richard's, died in this battle—a ratio Chrimes believes to be an exaggeration. The bodies of the fallen were brought to St James Church at Dadlington for burial. However, Henry denied any immediate rest for Richard; instead the last Yorkist king's corpse was stripped naked and strapped across a horse. His body was brought to Leicester and openly exhibited to prove that he was dead. Early accounts suggest that this was in the major Lancastrian collegiate foundation, the Church of the Annunciation of Our Lady of the Newarke. After two days, the corpse was interred in a plain tomb, within the church of the Greyfriars. The church was demolished following the friary's dissolution in 1538, and the location of Richard's tomb was long uncertain.
On 12 September 2012, archaeologists announced the discovery of a buried skeleton with spinal abnormalities and head injuries under a car park in Leicester, and their suspicions that it was Richard III. On 4 February 2013, it was announced that DNA testing had convinced Leicester University scientists and researchers "beyond reasonable doubt" that the remains were those of King Richard. On 26 March 2015, these remains were ceremonially buried in Leicester Cathedral. Richard's tomb was unveiled on the following day.
Henry dismissed the mercenaries in his force, retaining only a small core of local soldiers to form a "Yeomen of his Garde", and proceeded to establish his rule of England. Parliament reversed his attainder and recorded Richard's kingship as illegal, although the Yorkist king's reign remained officially in the annals of England's history. The proclamation of Edward IV's children as illegitimate was also reversed, restoring Elizabeth's status as a royal princess. The marriage of Elizabeth, the heiress to the House of York, to Henry, the master of the House of Lancaster, marked the end of the feud between the two houses and the start of the Tudor dynasty. The royal matrimony, however, was delayed until Henry was crowned king and had established his claim on the throne firmly enough to preclude that of Elizabeth and her kin. Henry further convinced Parliament to backdate his reign to the day before the battle, enabling him retrospectively to declare as traitors those who had fought against him at Bosworth Field. Northumberland, who had remained inactive during the battle, was imprisoned but later released and reinstated to pacify the north in Henry's name. Henry proved prepared to accept those who submitted to him regardless of their former allegiances.
Of his supporters, Henry rewarded the Stanleys the most generously. Aside from making William his chamberlain, he bestowed the earldom of Derby upon Lord Stanley along with grants and offices in other estates. Henry rewarded Oxford by restoring to him the lands and titles confiscated by the Yorkists and appointing him as Constable of the Tower and admiral of England, Ireland, and Aquitaine. For his kin, Henry created Jasper Tudor the Duke of Bedford. He returned to his mother the lands and grants stripped from her by Richard, and proved to be a filial son, granting her a place of honour in the palace and faithfully attending to her throughout his reign. Parliament's declaration of Margaret as femme sole effectively empowered her; she no longer needed to manage her estates through Stanley. Elton points out that despite his initial largesse, Henry's supporters at Bosworth would enjoy his special favour for only the short term; in later years, he would instead promote those who best served his interests.
Like the kings before him, Henry faced dissenters. The first open revolt occurred two years after Bosworth Field; Lambert Simnel claimed to be Edward Plantagenet, 17th Earl of Warwick, who was Edward IV's nephew. The Earl of Lincoln backed him for the throne and led rebel forces in the name of the House of York. The rebel army fended off several attacks by Northumberland's forces, before engaging Henry's army at the Battle of Stoke Field on 16 June 1487. Oxford and Bedford led Henry's men, including several former supporters of Richard III. Henry won this battle easily, but other malcontents and conspiracies would follow. A rebellion in 1489 started with Northumberland's murder; military historian Michael C. C. Adams says that the author of a note, which was left next to Northumberland's body, blamed the earl for Richard's death.
Legacy and historical significance
Contemporary accounts of the Battle of Bosworth can be found in four main sources, one of which is the English Croyland Chronicle, written by a senior Yorkist chronicler who relied on second-hand information from nobles and soldiers. The other accounts were written by foreigners—Vergil, Jean Molinet, and Diego de Valera. Whereas Molinet was sympathetic to Richard, Vergil was in Henry's service and drew information from the king and his subjects to portray them in a good light. Diego de Valera, whose information Ross regards as unreliable, compiled his work from letters of Spanish merchants. However, other historians have used Valera's work to deduce possibly valuable insights not readily evident in other sources. Ross finds the poem, The Ballad of Bosworth Field, a useful source to ascertain certain details of the battle. The multitude of different accounts, mostly based on second- or third-hand information, has proved an obstacle to historians as they try to reconstruct the battle. Their common complaint is that, except for its outcome, very few details of the battle are found in the chronicles. According to historian Michael Hicks, the Battle of Bosworth is one of the worst-recorded clashes of the Wars of the Roses.
Historical depictions and interpretations
Henry tried to present his victory as a new beginning for the country; he hired chroniclers to portray his reign as a "modern age" with its dawn in 1485. Hicks states that the works of Vergil and the blind historian Bernard André, promoted by subsequent Tudor administrations, became the authoritative sources for writers for the next four hundred years. As such, Tudor literature paints a flattering picture of Henry's reign, depicting the Battle of Bosworth as the final clash of the civil war and downplaying the subsequent uprisings. For England the Middle Ages ended in 1485, and English Heritage claims that other than William the Conqueror's successful invasion of 1066, no other year holds more significance in English history. By portraying Richard as a hunchbacked tyrant who usurped the throne by killing his nephews, the Tudor historians attached a sense of myth to the battle: it became an epic clash between good and evil with a satisfying moral outcome. According to Reader Colin Burrow, André was so overwhelmed by the historic significance of the battle that he represented it with a blank page in his Henry VII (1502). For Professor Peter Saccio, the battle was indeed a unique clash in the annals of English history, because "the victory was determined, not by those who fought, but by those who delayed fighting until they were sure of being on the winning side."
Historians such as Adams and Horrox believe that Richard lost the battle not for any mythic reasons, but because of morale and loyalty problems in his army. Most of the common soldiers found it difficult to fight for a liege whom they distrusted, and some lords believed that their situation might improve if Richard were dethroned. According to Adams, against such duplicities Richard's desperate charge was the only knightly behaviour on the field. As fellow historian Michael Bennet puts it, the attack was "the swan-song of [mediaeval] English chivalry". Adams believes this view was shared at the time by the printer William Caxton, who enjoyed sponsorship from Edward IV and Richard III. Nine days after the battle, Caxton published Thomas Malory's story about chivalry and death by betrayal—Le Morte d'Arthur—seemingly as a response to the circumstances of Richard's death.
Elton does not believe Bosworth Field has any true significance, pointing out that the 20th-century English public largely ignored the battle until its quincentennial celebration. In his view, the dearth of specific information about the battle—no-one even knows exactly where it took place—demonstrates its insignificance to English society. Elton considers the battle as just one part of Henry's struggles to establish his reign, underscoring his point by noting that the young king had to spend ten more years pacifying factions and rebellions to secure his throne.
Mackie asserts that, in hindsight, Bosworth Field is notable as the decisive battle that established a dynasty which would rule unchallenged over England for more than a hundred years. Mackie notes that contemporary historians of that time, wary of the three royal successions during the long Wars of the Roses, considered Bosworth Field just another in a lengthy series of such battles. It was through the works and efforts of Francis Bacon and his successors that the public started to believe the battle had decided their futures by bringing about "the fall of a tyrant".
Shakespearean dramatisation
William Shakespeare gives prominence to the Battle of Bosworth in his play, Richard III. It is the "one big battle"; no other fighting scene distracts the audience from this action, represented by a one-on-one sword fight between Henry Tudor and Richard III. Shakespeare uses their duel to bring a climactic end to the play and the Wars of the Roses; he also uses it to champion morality, portraying the "unequivocal triumph of good over evil". Richard, the villainous lead character, has been built up in the battles of Shakespeare's earlier play, Henry VI, Part 3, as a "formidable swordsman and a courageous military leader"—in contrast to the dastardly means by which he becomes king in Richard III. Although the Battle of Bosworth has only five sentences to direct it, three scenes and more than four hundred lines precede the action, developing the background and motivations for the characters in anticipation of the battle.
Shakespeare's account of the battle was mostly based on chroniclers Edward Hall's and Raphael Holinshed's dramatic versions of history, which were sourced from Vergil's chronicle. However, Shakespeare's attitude towards Richard was shaped by scholar Thomas More, whose writings displayed extreme bias against the Yorkist king. The result of these influences is a script that vilifies the king, and Shakespeare had few qualms about departing from history to incite drama. Margaret of Anjou died in 1482, but Shakespeare had her speak to Richard's mother before the battle to foreshadow Richard's fate and fulfill the prophecy she had given in Henry VI. Shakespeare exaggerated the cause of Richard's restless night before the battle, imagining it as a haunting by the ghosts of those whom the king had murdered, including Buckingham. Richard is portrayed as suffering a pang of conscience, but as he speaks he regains his confidence and asserts that he will be evil, if such needed to retain his crown.
The fight between the two armies is simulated by rowdy noises made off-stage (alarums or alarms) while actors walk on-stage, deliver their lines, and exit. To build anticipation for the duel, Shakespeare requests more alarums after Richard's councillor, William Catesby, announces that the king is "[enacting] more wonders than a man". Richard punctuates his entrance with the classic line, "A horse, a horse! My kingdom for a horse!" He refuses to withdraw, continuing to seek to slay Henry's doubles until he has killed his nemesis. There is no documentary evidence that Henry had five decoys at Bosworth Field; the idea was Shakespeare's invention. He drew inspiration from Henry IV's use of them at the Battle of Shrewsbury (1403) to amplify the perception of Richard's courage on the battlefield. Similarly, the single combat between Henry and Richard is Shakespeare's creation. The True Tragedy of Richard III, by an unknown playwright, earlier than Shakespeare's, has no signs of staging such an encounter: its stage directions give no hint of visible combat.
Despite the dramatic licences taken, Shakespeare's version of the Battle of Bosworth was the model of the event for English textbooks for many years during the 18th and 19th centuries. This glamorised version of history, promulgated in books and paintings and played out on stages across the country, perturbed humorist Gilbert Abbott à Beckett. He voiced his criticism in the form of a poem, equating the romantic view of the battle to watching a "fifth-rate production of Richard III": shabbily costumed actors fight the Battle of Bosworth on-stage while those with lesser roles lounge at the back, showing no interest in the proceedings.
In Laurence Olivier's 1955 film adaptation of Richard III, the Battle of Bosworth is represented not by a single duel but by a general melee that became the film's most recognised scene and a regular screening at Bosworth Battlefield Heritage Centre. The film depicts the clash between the Yorkist and Lancastrian armies on an open field, focusing on individual characters amidst the savagery of hand-to-hand fighting, and received accolades for the realism portrayed. One reviewer for The Manchester Guardian newspaper, however, was not impressed, finding the number of combatants too sparse for the wide plains and a lack of subtlety in Richard's death scene. The means by which Richard is shown to prepare his army for the battle also earned acclaim. As Richard speaks to his men and draws his plans in the sand using his sword, his units appear on-screen, arraying themselves according to the lines that Richard had drawn. Intimately woven together, the combination of pictorial and narrative elements effectively turns Richard into a storyteller, who acts out the plot he has constructed. Shakespearian critic Herbert Coursen extends that imagery: Richard sets himself up as a creator of men, but dies amongst the savagery of his creations. Coursen finds the depiction a contrast to that of Henry V and his "band of brothers".
The adaptation of the setting for Richard III to a 1930s fascist England in Ian McKellen's 1995 film, however, did not sit well with historians. Adams posits that the original Shakespearian setting for Richard's fate at Bosworth teaches the moral of facing one's fate, no matter how unjust it is, "nobly and with dignity". By overshadowing the dramatic teaching with special effects, McKellen's film reduces its version of the battle to a pyrotechnic spectacle about the death of a one-dimensional villain. Coursen agrees that, in this version, the battle and Richard's end are trite and underwhelming.
Battlefield location
The site of the battle is deemed by Leicestershire County Council to be in the vicinity of the town of Market Bosworth. The council engaged historian Daniel Williams to research the battle, and in 1974 his findings were used to build the Bosworth Battlefield Heritage Centre and the presentation it houses. Williams's interpretation, however, has since been questioned. Sparked by the battle's quincentenary celebration in 1985, a dispute among historians has led many to doubt the accuracy of Williams's theory. In particular, geological surveys conducted from 2003 to 2009 by the Battlefields Trust, a charitable organisation that protects and studies old English battlefields, show that the southern and eastern flanks of Ambion Hill were solid ground in the 15th century, contrary to Williams's claim that it was a large area of marshland. Landscape archaeologist Glenn Foard, leader of the survey, said the collected soil samples and finds of medieval military equipment suggest that the battle took place south-west of Ambion Hill (52°34′41″N 1°26′02″W), contrary to the popular belief that it was fought near the foot of the hill.
Historians' theories
English Heritage argues that the battle was named after Market Bosworth because the town was then the nearest significant settlement to the battlefield. As explored by Professor Philip Morgan, a battle might initially not be named specifically at all. As time passes, writers of administrative and historical records find it necessary to identify a notable battle, ascribing it a name that is usually toponymical in nature and sourced from combatants or observers. This name then becomes accepted by society and without question. Early records associated the Battle of Bosworth with "Brownehethe", "bellum Miravallenses", "Sandeford" and "Dadlyngton field". The earliest record, a municipal memorandum of 23 August 1485 from York, locates the battle "on the field of Redemore". This is corroborated by a 1485–86 letter that mentions "Redesmore" as its site. According to the historian, Peter Foss, records did not associate the battle with "Bosworth" until 1510.
Foss is named by English Heritage as the principal advocate for "Redemore" as the battle site. He suggests the name is derived from "Hreod Mor", an Anglo-Saxon phrase that means "reedy marshland". Basing his opinion on 13th- and 16th-century church records, he believes "Redemore" was an area of wetland that lay between Ambion Hill and the village of Dadlington, and was close to the Fenn Lanes, a Roman road running east to west across the region. Foard believes this road to be the most probable route that both armies took to reach the battlefield. Williams dismisses the notion of "Redmore" as a specific location, saying that the term refers to a large area of reddish soil; Foss argues that Williams's sources are local stories and flawed interpretations of records. Moreover, he proposes that Williams was influenced by William Hutton's 1788 The Battle of Bosworth-Field, which Foss blames for introducing the notion that the battle was fought west of Ambion Hill on the north side of the River Sence. Hutton, as Foss suggests, misinterpreted a passage from his source, Raphael Holinshed's 1577 Chronicle. Holinshed wrote, "King Richard pitched his field on a hill called Anne Beame, refreshed his soldiers and took his rest." Foss believes that Hutton mistook "field" to mean "field of battle", thus creating the idea that the fight took place on Anne Beame (Ambion) Hill. To "[pitch] his field", as Foss clarifies, was a period expression for setting up a camp.
Foss brings further evidence for his "Redemore" theory by quoting Edward Hall's 1550 Chronicle. Hall stated that Richard's army stepped onto a plain after breaking camp the next day. Furthermore, historian William Burton, author of Description of Leicestershire (1622), wrote that the battle was "fought in a large, flat, plaine, and spacious ground, distant from [Bosworth], between the Towne of Shenton, Sutton [Cheney], Dadlington and Stoke [Golding]". In Foss's opinion both sources are describing an area of flat ground north of Dadlington.
Physical site
English Heritage, responsible for managing England's historic sites, used both theories to designate the site for Bosworth Field. Without preference for either theory, they constructed a single continuous battlefield boundary that encompasses the locations proposed by both Williams and Foss. The region has experienced extensive changes over the years, starting after the battle. Holinshed stated in his chronicle that he found firm ground where he expected the marsh to be, and Burton confirmed that by the end of the 16th century, areas of the battlefield were enclosed and had been improved to make them agriculturally productive. Trees were planted on the south side of Ambion Hill, forming Ambion Wood. In the 18th and 19th centuries, the Ashby Canal carved through the land west and south-west of Ambion Hill. Winding alongside the canal at a distance, the Ashby and Nuneaton Joint Railway crossed the area on an embankment. The changes to the landscape were so extensive that when Hutton revisited the region in 1807 after an earlier 1788 visit, he could not readily find his way around.
Bosworth Battlefield Heritage Centre was built on Ambion Hill, near Richard's Well. According to legend, Richard III drank from one of the several springs in the region on the day of the battle. In 1788, a local pointed out one of the springs to Hutton as the one mentioned in the legend. A stone structure was later built over the location and now bears a commemorative inscription.
North-west of Ambion Hill, just across the northern tributary of the River Sence, a flag and memorial stone mark Richard's Field. Erected in 1973, the site was selected on the basis of Williams's theory. St James's Church at Dadlington is the only structure in the area that is reliably associated with the Battle of Bosworth; the bodies of those killed in the battle were buried there.
Rediscovered battlefield and possible battle scenario
The very extensive survey carried out (2005–2009) by the Battlefields Trust headed by Glenn Foard eventually led to the discovery of the real location of the core battlefield.Glenn Foard & Anne Curry (2013). Bosworth 1485: A Battlefield Rediscovered. Oxford: Oxbow Books. pp. 195–198. This lies about a kilometre further west of the location suggested by Peter Foss. It is in what was at the time of the battle an area of marginal land at the meeting of several township boundaries. There was a cluster of field names suggesting the presence of marshland and heath. Thirty-four lead round shot were discovered as a result of systematic metal detecting (more than the total found previously on all other 15th-century European battlefields), as well as other significant finds (Bosworth: all potential battlefield finds, Battlefields Trust), including a small silver-gilt badge depicting a boar. Experts believe that the boar badge could indicate the actual site of Richard III's death, since this high-status badge depicting his personal emblem was probably worn by a member of his close retinue.
A new interpretation of the battle now integrates the historic accounts with the battlefield finds and landscape history. The new site lies either side of the Fenn Lanes Roman road, close to Fenn Lane Farm and is some three kilometres to the south-west of Ambion Hill.
Based on the round shot scatter, the likely size of Richard III's army, and the topography, Glenn Foard and Anne Curry think that Richard may have lined up his forces on a slight ridge which lies just east of Fox Covert Lane and behind a postulated medieval marsh.Bosworth Battlefield: Conjectural terrain reconstruction with two options for the Royal army deployment, Battlefields Trust; Deployments, Battlefields Trust. Richard's vanguard commanded by the Duke of Norfolk was on the right (north) side of Richard's battle line, with the Earl of Northumberland on Richard's left (south) side.
Tudor's forces approached along the line of the Roman road and lined up to the west of the present day Fenn Lane Farm, having marched from the vicinity of Merevale in Warwickshire.
Historic England have re-defined the boundaries of the registered Bosworth Battlefield to incorporate the newly identified site. There are hopes that public access to the site will be possible in the future.
References
Citations
General sources
Books
Jones, Michael. Bosworth 1485: Psychology of a Battle (2014)
Periodicals
Online sources
External links
Bosworth Battlefield Heritage Centre and Country Park: website for the museum, contains information and photos about the current state of the battlefield
Richard III Society : history society, which contains photos and articles that present several competing theories about the location of the battle
Bosworth Field – The Battle of 1485: on website The History Notes
Category:1485 in England
Bosworth 1485
Category:Conflicts in 1485
Category:Military history of Leicestershire
Category:Registered historic battlefields in England
Category:Tourist attractions in Leicestershire
Category:Richard III of England
Category:Henry VII of England
|
wars_military
| 9,216
|
4436
|
Brownian motion
|
https://en.wikipedia.org/wiki/Brownian_motion
|
Brownian motion is the random motion of particles suspended in a medium (a liquid or a gas). The traditional mathematical formulation of Brownian motion is that of the Wiener process, which is often called Brownian motion, even in mathematical sources.
This motion pattern typically consists of random fluctuations in a particle's position inside a fluid sub-domain, followed by a relocation to another sub-domain. Each relocation is followed by more fluctuations within the new closed volume. This pattern describes a fluid at thermal equilibrium, defined by a given temperature. Within such a fluid, there exists no preferential direction of flow (as in transport phenomena). More specifically, the fluid's overall linear and angular momenta remain null over time. The kinetic energies of the molecular Brownian motions, together with those of molecular rotations and vibrations, sum up to the caloric component of a fluid's internal energy (the equipartition theorem).Pathria, RK (1972). Statistical Mechanics. Pergamon Press. pp. 43–48, 73–74. ISBN 0-08-016747-0.
This motion is named after the Scottish botanist Robert Brown, who first described the phenomenon in 1827, while looking through a microscope at pollen of the plant Clarkia pulchella immersed in water. In 1900, the French mathematician Louis Bachelier modeled the stochastic process now called Brownian motion in his doctoral thesis, The Theory of Speculation (Théorie de la spéculation), prepared under the supervision of Henri Poincaré. Then, in 1905, theoretical physicist Albert Einstein published a paper in which he modelled the motion of the pollen particles as being moved by individual water molecules, making one of his first major scientific contributions.
The direction of the force of atomic bombardment is constantly changing, and at different times the particle is hit more on one side than another, leading to the seemingly random nature of the motion. This explanation of Brownian motion served as convincing evidence that atoms and molecules exist and was further verified experimentally by Jean Perrin in 1908. Perrin was awarded the Nobel Prize in Physics in 1926 "for his work on the discontinuous structure of matter".
The many-body interactions that yield the Brownian pattern cannot be solved by a model accounting for every molecule involved. Consequently, only probabilistic models applied to molecular populations can be employed to describe it. Two such models from statistical mechanics, due to Einstein and Smoluchowski, are presented below. Another, purely probabilistic, class of models is the class of stochastic process models. There exist sequences of both simpler and more complicated stochastic processes which converge (in the limit) to Brownian motion (see random walk and Donsker's theorem).
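As an illustration of the convergence mentioned above, the following sketch (not part of the original article; the sample sizes are arbitrary) rescales a simple ±1 random walk in the manner of Donsker's theorem and checks that its variance at unit time approaches 1, the value for standard Brownian motion.

```python
# Minimal sketch (assumed parameters): a rescaled simple random walk, which by
# Donsker's theorem converges to Brownian motion as the number of steps grows.
import random

def scaled_random_walk(n_steps, t=1.0):
    """Approximate W(t) by summing n_steps*t coin flips and rescaling by sqrt(n_steps)."""
    steps = [random.choice((-1.0, 1.0)) for _ in range(int(n_steps * t))]
    return sum(steps) / n_steps ** 0.5

# Empirical check: the variance of W(1) should be close to 1 (illustrative values).
samples = [scaled_random_walk(2_000) for _ in range(2_000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
print(f"sample mean ~ {mean:.3f} (expect ~0), sample variance ~ {var:.3f} (expect ~1)")
```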
History
The Roman philosopher-poet Lucretius' scientific poem On the Nature of Things has a remarkable description of the motion of dust particles in verses 113–140 from Book II. He uses this as a proof of the existence of atoms.
Although the mingling, tumbling motion of dust particles is caused largely by air currents, the glittering, jiggling motion of small dust particles is caused chiefly by true Brownian dynamics; Lucretius "perfectly describes and explains the Brownian movement by a wrong example".
The discovery of this phenomenon is credited to the botanist Robert Brown in 1827. Brown was studying plant reproduction when he observed pollen grains of the plant Clarkia pulchella in water under a microscope. These grains contain minute particles on the order of 1/4000th of an inch in size. He observed these particles executing a jittery motion. By repeating the experiment with particles of inorganic matter he was able to rule out that the motion was life-related, although its origin was yet to be explained.
The mathematics of much of stochastic analysis including the mathematics of Brownian motion was introduced by Louis Bachelier in 1900 in his PhD thesis "The theory of speculation", in which he presented an analysis of the stock and option markets. However this work was largely unknown until the 1950s.
Albert Einstein (in one of his 1905 papers) provided an explanation of Brownian motion in terms of atoms and molecules at a time when their existence was still debated. Einstein proved the relation between the probability distribution of a Brownian particle and the diffusion equation. These equations describing Brownian motion were subsequently verified by the experimental work of Jean Baptiste Perrin in 1908, leading to his Nobel prize. Norbert Wiener gave the first complete and rigorous mathematical analysis in 1923, leading to the underlying mathematical concept being called a Wiener process.
The instantaneous velocity of the Brownian motion can be defined as $v = \Delta x / \Delta t$, when $\Delta t \ll \tau$, where $\tau$ is the momentum relaxation time.
In 2010, the instantaneous velocity of a Brownian particle (a glass microsphere trapped in air with optical tweezers) was measured successfully. The velocity data verified the Maxwell–Boltzmann velocity distribution, and the equipartition theorem for a Brownian particle.
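For a rough sense of scale, a short back-of-the-envelope calculation follows; all numerical values below are assumptions chosen for illustration, not the parameters or results of the 2010 experiment mentioned above. Equipartition predicts a one-dimensional rms thermal velocity of sqrt(k_B*T/m) for a trapped microsphere.

```python
# Illustrative estimate (assumed values): 1D rms thermal velocity sqrt(kT/m)
# predicted by equipartition for a micron-scale glass sphere.
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # room temperature, K (assumption)
radius = 1.5e-6             # sphere radius, m (assumed ~3 um diameter)
density = 2200.0            # silica density, kg/m^3 (approximate)

mass = density * (4.0 / 3.0) * math.pi * radius ** 3
v_rms_1d = math.sqrt(k_B * T / mass)
print(f"mass ~ {mass:.2e} kg, 1D rms velocity ~ {v_rms_1d * 1e3:.3f} mm/s")
```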
Statistical mechanics theories
Einstein's theory
There are two parts to Einstein's theory: the first part consists in the formulation of a diffusion equation for Brownian particles, in which the diffusion coefficient is related to the mean squared displacement of a Brownian particle, while the second part consists in relating the diffusion coefficient to measurable physical quantities. In this way Einstein was able to determine the size of atoms, and how many atoms there are in a mole, or the molecular weight in grams, of a gas. In accordance with Avogadro's law, the molar volume is the same for all ideal gases: 22.414 liters at standard temperature and pressure. The number of atoms contained in this volume is referred to as the Avogadro number, and the determination of this number is tantamount to the knowledge of the mass of an atom, since the latter is obtained by dividing the molar mass of the gas by the Avogadro constant.
The first part of Einstein's argument was to determine how far a Brownian particle travels in a given time interval. Classical mechanics is unable to determine this distance because of the enormous number of bombardments a Brownian particle will undergo, roughly of the order of $10^{14}$ collisions per second.
He regarded the increment of particle positions in a time interval $\tau$ in a one-dimensional ($x$) space (with the coordinates chosen so that the origin lies at the initial position of the particle) as a random variable $\Delta$ with some probability density function $\varphi(\Delta)$ (i.e., $\varphi(\Delta)$ is the probability density for a jump of magnitude $\Delta$, that is, for the particle to move from $x$ to $x + \Delta$ in the time interval $\tau$). Further, assuming conservation of particle number, he expanded the number density $\rho(x, t+\tau)$ (number of particles per unit volume around $x$) at time $t + \tau$ in a Taylor series,
$$\rho(x, t) + \tau \frac{\partial \rho}{\partial t} + \cdots = \rho(x, t+\tau) = \int_{-\infty}^{+\infty} \rho(x + \Delta, t)\,\varphi(\Delta)\,d\Delta = \rho(x, t)\int_{-\infty}^{+\infty}\varphi(\Delta)\,d\Delta + \frac{\partial \rho}{\partial x}\int_{-\infty}^{+\infty}\Delta\,\varphi(\Delta)\,d\Delta + \frac{\partial^2 \rho}{\partial x^2}\int_{-\infty}^{+\infty}\frac{\Delta^2}{2}\,\varphi(\Delta)\,d\Delta + \cdots$$
where the second equality is by definition of $\varphi$. The integral in the first term is equal to one by the definition of probability, and the second and other even terms (i.e., first and other odd moments) vanish because of space symmetry. What is left gives rise to the following relation:
$$\frac{\partial \rho}{\partial t} = \frac{\partial^2 \rho}{\partial x^2}\int_{-\infty}^{+\infty} \frac{\Delta^2}{2\tau}\,\varphi(\Delta)\,d\Delta$$
where the coefficient after the Laplacian, the second moment of probability of displacement $\Delta$, is interpreted as the mass diffusivity $D$:
$$D = \frac{1}{2\tau}\int_{-\infty}^{+\infty} \Delta^2\,\varphi(\Delta)\,d\Delta$$
Then the density of Brownian particles $\rho$ at point $x$ at time $t$ satisfies the diffusion equation:
$$\frac{\partial \rho}{\partial t} = D\,\frac{\partial^2 \rho}{\partial x^2}$$
Assuming that $N$ particles start from the origin at the initial time $t = 0$, the diffusion equation has the solution:
$$\rho(x, t) = \frac{N}{\sqrt{4\pi D t}}\, e^{-\frac{x^2}{4 D t}}$$
This expression (which is a normal distribution with mean $0$ and variance $2Dt$, usually called Brownian motion $B_t$) allowed Einstein to calculate the moments directly. The first moment is seen to vanish, meaning that the Brownian particle is equally likely to move to the left as it is to move to the right. The second moment is, however, non-vanishing, being given by $\overline{x^2} = 2Dt$.
This equation expresses the mean squared displacement in terms of the time elapsed and the diffusivity. From this expression Einstein argued that the displacement of a Brownian particle is not proportional to the elapsed time, but rather to its square root. His argument is based on a conceptual switch from the "ensemble" of Brownian particles to the "single" Brownian particle: we can speak of the relative number of particles at a single instant just as well as of the time it takes a Brownian particle to reach a given point.
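The square-root growth of the typical displacement is easy to check numerically. The sketch below (illustrative, with an arbitrary diffusivity) simulates many independent one-dimensional Brownian paths with Gaussian increments and compares the sample mean squared displacement with 2Dt.

```python
# Numerical sanity check (a sketch, not Einstein's derivation): simulate many
# independent 1D Brownian paths with diffusivity D and confirm <x^2> ~ 2*D*t.
import random

D = 0.5            # diffusivity (arbitrary units, assumption)
dt = 0.01          # time step
n_steps = 1000
n_particles = 5000

positions = [0.0] * n_particles
sigma = (2.0 * D * dt) ** 0.5              # standard deviation of each Gaussian increment
for _ in range(n_steps):
    positions = [x + random.gauss(0.0, sigma) for x in positions]

t = n_steps * dt
msd = sum(x * x for x in positions) / n_particles
print(f"simulated <x^2> = {msd:.3f}, theory 2*D*t = {2 * D * t:.3f}")
```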
The second part of Einstein's theory relates the diffusion constant to physically measurable quantities, such as the mean squared displacement of a particle in a given time interval. This result enables the experimental determination of the Avogadro number and therefore the size of molecules. Einstein analyzed a dynamic equilibrium being established between opposing forces. The beauty of his argument is that the final result does not depend upon which forces are involved in setting up the dynamic equilibrium.
In his original treatment, Einstein considered an osmotic pressure experiment, but the same conclusion can be reached in other ways.
Consider, for instance, particles suspended in a viscous fluid in a gravitational field. Gravity tends to make the particles settle, whereas diffusion acts to homogenize them, driving them into regions of smaller concentration. Under the action of gravity, a particle acquires a downward speed of $v = \mu m g$, where $m$ is the mass of the particle, $g$ is the acceleration due to gravity, and $\mu$ is the particle's mobility in the fluid. George Stokes had shown that the mobility for a spherical particle with radius $r$ is $\mu = \tfrac{1}{6\pi\eta r}$, where $\eta$ is the dynamic viscosity of the fluid. In a state of dynamic equilibrium, and under the hypothesis of isothermal fluid, the particles are distributed according to the barometric distribution
$$\rho = \rho_0\, e^{-\frac{m g h}{k_{\rm B} T}},$$
where $\rho$ and $\rho_0$ are the number densities of particles at heights separated by $h$, $k_{\rm B}$ is the Boltzmann constant (the ratio of the universal gas constant, $R$, to the Avogadro constant, $N_{\rm A}$), and $T$ is the absolute temperature.
Dynamic equilibrium is established because the more that particles are pulled down by gravity, the greater the tendency for the particles to migrate to regions of lower concentration. The flux is given by Fick's law,
$$J = -D\,\frac{d\rho}{dh},$$
where $J = \rho v$. Introducing the formula for $\rho$, we find that
$$v = \frac{D m g}{k_{\rm B} T}.$$
In a state of dynamical equilibrium, this speed must also be equal to $v = \mu m g$. Both expressions for $v$ are proportional to $mg$, reflecting that the derivation is independent of the type of forces considered. Similarly, one can derive an equivalent formula for identical charged particles of charge $q$ in a uniform electric field of magnitude $E$, where $mg$ is replaced with the electrostatic force $qE$. Equating these two expressions yields the Einstein relation for the diffusivity, independent of $mg$ or $qE$ or other such forces:
$$\frac{\overline{x^2}}{2t} = D = \mu k_{\rm B} T = \frac{\mu R T}{N_{\rm A}} = \frac{R T}{6\pi\eta r N_{\rm A}}$$
Here the first equality follows from the first part of Einstein's theory, the third equality follows from the definition of the Boltzmann constant as $k_{\rm B} = R / N_{\rm A}$, and the fourth equality follows from Stokes's formula for the mobility. By measuring the mean squared displacement over a time interval along with the universal gas constant $R$, the temperature $T$, the viscosity $\eta$, and the particle radius $r$, the Avogadro constant $N_{\rm A}$ can be determined.
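A hedged numerical illustration of this route to the Avogadro constant follows. The "measured" diffusivity and the particle radius below are hypothetical, Perrin-style ballpark values chosen for the sketch, not figures quoted in the article.

```python
# Sketch of the Stokes-Einstein route to Avogadro's number:
# N_A = R*T / (6*pi*eta*r*D). All numerical inputs are assumed values.
import math

R = 8.314            # universal gas constant, J/(mol K)
T = 293.0            # temperature, K (assumption)
eta = 1.0e-3         # viscosity of water, Pa s (approximate)
r = 0.5e-6           # particle radius, m (assumed)
D = 4.3e-13          # "measured" diffusivity, m^2/s (hypothetical value)

N_A = R * T / (6.0 * math.pi * eta * r * D)
print(f"inferred Avogadro constant ~ {N_A:.2e} per mole")
```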
The type of dynamical equilibrium proposed by Einstein was not new. It had been pointed out previously by J. J. Thomson in his series of lectures at Yale University in May 1903 that the dynamic equilibrium between the velocity generated by a concentration gradient given by Fick's law and the velocity due to the variation of the partial pressure caused when ions are set in motion "gives us a method of determining Avogadro's constant which is independent of any hypothesis as to the shape or size of molecules, or of the way in which they act upon each other".
An identical expression to Einstein's formula for the diffusion coefficient was also found by Walther Nernst in 1888 in which he expressed the diffusion coefficient as the ratio of the osmotic pressure to the ratio of the frictional force and the velocity to which it gives rise. The former was equated to the law of van 't Hoff while the latter was given by Stokes's law. He writes for the diffusion coefficient , where is the osmotic pressure and is the ratio of the frictional force to the molecular viscosity which he assumes is given by Stokes's formula for the viscosity. Introducing the ideal gas law per unit volume for the osmotic pressure, the formula becomes identical to that of Einstein's. The use of Stokes's law in Nernst's case, as well as in Einstein and Smoluchowski, is not strictly applicable since it does not apply to the case where the radius of the sphere is small in comparison with the mean free path.
Confirming Einstein's formula experimentally proved difficult.
Initial attempts by Theodor Svedberg in 1906 and 1907 were critiqued by Einstein and by Perrin as not measuring a quantity directly comparable to the formula. Victor Henri in 1908 took cinematographic shots through a microscope and found quantitative disagreement with the formula but again the analysis was uncertain. Einstein's predictions were finally confirmed in a series of experiments carried out by Chaudesaigues in 1908 and Perrin in 1909. The confirmation of Einstein's theory constituted empirical progress for the kinetic theory of heat. In essence, Einstein showed that the motion can be predicted directly from the kinetic model of thermal equilibrium. The importance of the theory lay in the fact that it confirmed the kinetic theory's account of the second law of thermodynamics as being an essentially statistical law.See P. Clark 1976, p. 97
Smoluchowski model
Smoluchowski's theory of Brownian motion starts from the same premise as that of Einstein and derives the same probability distribution $\rho(x, t)$ for the displacement of a Brownian particle along the $x$-axis in time $t$. He therefore gets the same expression for the mean squared displacement, $\overline{x^2} = 2Dt$. However, when he relates it to a particle of mass $m$ moving at a velocity $u$ which is the result of a frictional force governed by Stokes's law, he finds an expression involving the viscosity coefficient and the radius of the particle. Associating the kinetic energy of the particle with its thermal energy, his expression for the mean squared displacement is 64/27 times that found by Einstein. The fraction 27/64 was commented on by Arnold Sommerfeld in his necrology on Smoluchowski: "The numerical coefficient of Einstein, which differs from Smoluchowski by 27/64 can only be put in doubt."See p. 535 in
Smoluchowski attempts to answer the question of why a Brownian particle should be displaced by bombardments of smaller particles when the probabilities for striking it in the forward and rear directions are equal.
If the probability of gains and losses follows a binomial distribution,
with equal probabilities of 1/2, the mean total gain is
If is large enough so that Stirling's approximation can be used in the form
then the expected total gain will be
showing that it increases as the square root of the total population.
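A small Monte Carlo check of this square-root growth is sketched below (the trial counts and number of experiments are arbitrary illustrative choices); the mean absolute excess of gains over losses in n fair trials approaches sqrt(2n/π).

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n_experiments = 100_000

for n in (100, 1_000, 10_000, 100_000):
    wins = rng.binomial(n, 0.5, size=n_experiments)   # number of gains in n fair trials
    mean_abs_gain = np.abs(2 * wins - n).mean()       # |gains - losses|, averaged
    print(f"n={n:>7}  simulated {mean_abs_gain:8.1f}   sqrt(2n/pi) {math.sqrt(2 * n / math.pi):8.1f}")
```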
Suppose that a Brownian particle of mass M is surrounded by lighter particles of mass m which are traveling at a speed u. Then, reasons Smoluchowski, in any collision between a surrounding particle and the Brownian particle, the velocity transmitted to the latter will be of the order of u·m/M. But we also have to take into consideration that in a gas there will be more than 10^16 collisions in a second, and even more in a liquid, where we expect about 10^20 collisions in one second. Some of these collisions will tend to accelerate the Brownian particle; others will tend to decelerate it. If the mean excess of one kind of collision over the other is of the order of 10^8 to 10^10 collisions in one second, then the velocity of the Brownian particle may still be appreciable. Thus, even though there are equal probabilities for forward and backward collisions, there will be a net tendency to keep the Brownian particle in motion, just as the ballot theorem predicts.
These orders of magnitude are not exact because they do not take into consideration the velocity of the Brownian particle, which depends on the collisions that tend to accelerate and decelerate it. The larger that velocity is, the more numerous are the collisions that retard it, so that the velocity of a Brownian particle can never increase without limit. Could such a process occur, it would be tantamount to a perpetual motion of the second type. And since equipartition of energy applies, the kinetic energy of the Brownian particle will be equal, on average, to the kinetic energy of a surrounding fluid particle.
In 1906, Smoluchowski published a one-dimensional model to describe a particle undergoing Brownian motion. The model assumes collisions of a heavy test particle with much lighter fluid particles, where M is the test particle's mass and m the mass of one of the individual particles composing the fluid. It is assumed that the particle collisions are confined to one dimension and that it is equally probable for the test particle to be hit from the left as from the right. It is also assumed that every collision always imparts the same magnitude of velocity change, ΔV. If N_R is the number of collisions from the right and N_L the number of collisions from the left, then after N collisions the particle's velocity will have changed by ΔV(N_R − N_L). The multiplicity is then simply given by the binomial coefficient N!/(N_R! N_L!),
and the total number of possible states is given by 2^N. Therefore, the probability of the particle being hit from the right N_R times is N!/(N_R! N_L! 2^N).
As a result of its simplicity, Smoluchowski's 1D model can only qualitatively describe Brownian motion. For a realistic particle undergoing Brownian motion in a fluid, many of the assumptions do not apply. For example, the assumption that on average there are an equal number of collisions from the right as from the left falls apart once the particle is in motion. Also, in a realistic situation there would be a distribution of different possible values of ΔV instead of always just one.
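The binomial structure of the model is simple enough to enumerate directly. The following sketch (with an arbitrary choice of N and ΔV) tabulates the probability of each possible net velocity change after N collisions; the mean change is zero while the mean squared change grows as N·ΔV², the discrete analogue of the diffusive spreading discussed above.

```python
from math import comb

def velocity_change_distribution(N, dV):
    """Map each possible net velocity change after N collisions to its probability."""
    return {dV * (2 * R - N): comb(N, R) / 2 ** N for R in range(N + 1)}

dist = velocity_change_distribution(N=10, dV=1.0)
mean = sum(v * p for v, p in dist.items())              # zero, by left/right symmetry
mean_square = sum(v * v * p for v, p in dist.items())   # equals N * dV**2
print(f"mean change = {mean:+.3f}, mean squared change = {mean_square:.3f} (N * dV^2 = 10.0)")
```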
Langevin equation
The diffusion equation yields an approximation of the time evolution of the probability density function associated with the position of the particle undergoing Brownian movement under the physical definition. The approximation is valid on timescales much larger than the timescale of individual atomic collisions, since it does not include a term to describe the acceleration of particles during collision. The time evolution of the position of the Brownian particle over all time scales is described by the Langevin equation, an equation that involves a random force field representing the effect of the thermal fluctuations of the solvent on the particle. At longer time scales, where acceleration is negligible, individual particle dynamics can be approximated using Brownian dynamics in place of Langevin dynamics.
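The distinction can be made concrete with a short numerical sketch. The script below (reduced units, arbitrary parameter choices, and an Euler–Maruyama discretization rather than any specific published scheme) integrates an underdamped Langevin equation, m dv = −γ v dt + sqrt(2 γ k_B T) dW, and compares its long-time mean squared displacement with the overdamped Brownian-dynamics prediction 2Dt, where D = k_B T/γ.

```python
import numpy as np

rng = np.random.default_rng(1)
kT, m, gamma = 1.0, 1.0, 10.0        # reduced units; velocity relaxation time m/gamma = 0.1
D = kT / gamma                        # Einstein relation
dt, n_steps, n_particles = 0.01, 10_000, 5_000

# Underdamped Langevin equation, Euler-Maruyama discretization (dt well below m/gamma).
x = np.zeros(n_particles)
v = np.zeros(n_particles)
for _ in range(n_steps):
    noise = rng.standard_normal(n_particles)
    v += -(gamma / m) * v * dt + np.sqrt(2.0 * gamma * kT * dt) / m * noise
    x += v * dt

# Overdamped Brownian dynamics reaches the same statistics in a single Gaussian draw
# once the observation time greatly exceeds the velocity relaxation time.
t_total = n_steps * dt
x_bd = np.sqrt(2.0 * D * t_total) * rng.standard_normal(n_particles)

print(f"<x^2> Langevin : {np.mean(x**2):.2f}")
print(f"<x^2> Brownian : {np.mean(x_bd**2):.2f}")
print(f"2 D t (theory) : {2.0 * D * t_total:.2f}")
```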
Astrophysics: star motion within galaxies
In stellar dynamics, a massive body (star, black hole, etc.) can experience Brownian motion as it responds to gravitational forces from surrounding stars. The rms velocity V of the massive object, of mass M, is related to the rms velocity v of the background stars by an equipartition-type relation, M V^2 ≈ m v^2,
where m is the mass of the background stars. The gravitational force from the massive object causes nearby stars to move faster than they otherwise would, increasing both v and V. The Brownian velocity of Sgr A*, the supermassive black hole at the center of the Milky Way galaxy, is predicted from this formula to be less than 1 km s−1.
Mathematics
In mathematics, Brownian motion is described by the Wiener process, a continuous-time stochastic process named in honor of Norbert Wiener. It is one of the best known Lévy processes (càdlàg stochastic processes with stationary independent increments) and occurs frequently in pure and applied mathematics, economics and physics.
The Wiener process W_t is characterized by four facts:
W_0 = 0
W_t is almost surely continuous
W_t has independent increments
W_t − W_s ~ N(0, t − s) for 0 ≤ s ≤ t
Here N(μ, σ^2) denotes the normal distribution with expected value μ and variance σ^2. The condition that it has independent increments means that if 0 ≤ s_1 < t_1 ≤ s_2 < t_2 then W_{t_1} − W_{s_1} and W_{t_2} − W_{s_2} are independent random variables. In addition, for some filtration F_t, W_t is F_t-measurable for all t ≥ 0.
An alternative characterisation of the Wiener process is the so-called Lévy characterisation that says that the Wiener process is an almost surely continuous martingale with W_0 = 0 and quadratic variation [W_t, W_t] = t.
A third characterisation is that the Wiener process has a spectral representation as a sine series whose coefficients are independent random variables. This representation can be obtained using the Kosambi–Karhunen–Loève theorem.
The Wiener process can be constructed as the scaling limit of a random walk, or other discrete-time stochastic processes with stationary independent increments. This is known as Donsker's theorem. Like the random walk, the Wiener process is recurrent in one or two dimensions (meaning that it returns almost surely to any fixed neighborhood of the origin infinitely often) whereas it is not recurrent in dimensions three and higher. Unlike the random walk, it is scale invariant.
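Donsker's construction is easy to check numerically. The sketch below (with arbitrary choices of n and of the number of sample paths) rescales simple random walks as W_n(t) = S_⌊nt⌋/√n and verifies that the variance at fixed times matches the Wiener-process value Var[W(t)] = t.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_paths = 5_000, 2_000                       # assumed illustrative sizes
steps = rng.choice((-1.0, 1.0), size=(n_paths, n))
S = np.cumsum(steps, axis=1)                    # random-walk partial sums S_1 ... S_n

for t in (0.25, 0.5, 1.0):
    W_t = S[:, int(n * t) - 1] / np.sqrt(n)     # rescaled walk W_n(t) = S_floor(nt) / sqrt(n)
    print(f"t = {t:4.2f}   Var[W_n(t)] = {W_t.var():.3f}   (Wiener process: {t:.2f})")
```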
A d-dimensional Gaussian free field has been described as "a d-dimensional-time analog of Brownian motion."
Statistics
Brownian motion can be modeled by a random walk.
In the general case, Brownian motion is a Markov process and is described by stochastic integral equations.
Lévy characterisation
The French mathematician Paul Lévy proved the following theorem, which gives a necessary and sufficient condition for a continuous R^n-valued stochastic process X to actually be n-dimensional Brownian motion. Hence, Lévy's condition can actually be used as an alternative definition of Brownian motion.
Let X = (X_1, ..., X_n) be a continuous stochastic process on a probability space (Ω, Σ, P) taking values in R^n. Then the following are equivalent:
X is a Brownian motion with respect to P, i.e., the law of X with respect to P is the same as the law of an n-dimensional Brownian motion, i.e., the push-forward measure X∗(P) is classical Wiener measure on C_0([0, ∞); R^n).
both
X is a martingale with respect to P (and its own natural filtration); and
for all 1 ≤ i, j ≤ n, X_i(t)X_j(t) − δ_ij t is a martingale with respect to P (and its own natural filtration), where δ_ij denotes the Kronecker delta.
Spectral content
The spectral content of a stochastic process can be found from the power spectral density, formally defined as
where stands for the expected value. The power spectral density of Brownian motion is found to be
where D is the diffusion coefficient of the process. For naturally occurring signals, the spectral content can be found from the power spectral density of a single realization, with finite available time, i.e.,
which, for an individual realization of a Brownian motion trajectory, is found to have expected value
and variance
For sufficiently long realization times, the expected value of the power spectrum of a single trajectory converges to the formally defined power spectral density, but its coefficient of variation tends to a constant of order unity. This implies that the distribution of the single-trajectory power spectral density remains broad even in the infinite time limit.
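These statements can be illustrated numerically. The sketch below (an assumed discretization, not taken from a specific reference) simulates Brownian trajectories, estimates the windowed spectral density S_T(f) = (1/T)|∫_0^T x(t) e^{−i2πft} dt|², and compares the ensemble average with the 1/f² form 4D/(2πf)² that this definition yields, while reporting the order-one coefficient of variation across single realizations.

```python
import numpy as np

rng = np.random.default_rng(3)
D, dt, n, n_paths = 0.5, 1.0e-3, 2**14, 400     # assumed illustrative parameters
T = n * dt

# Brownian paths: cumulative sums of Gaussian increments with variance 2 D dt.
x = np.cumsum(np.sqrt(2.0 * D * dt) * rng.standard_normal((n_paths, n)), axis=1)

# Discrete estimate of S_T at f_k = k / (n dt):  S_k = (dt / n) |DFT(x)_k|^2.
S = dt / n * np.abs(np.fft.rfft(x, axis=1)) ** 2
freqs = np.fft.rfftfreq(n, d=dt)

k = 10                                          # a frequency well above 1 / T
f, theory = freqs[k], 4.0 * D / (2.0 * np.pi * freqs[k]) ** 2
print(f"f = {f:.3f} Hz: ensemble <S> = {S[:, k].mean():.3e}, 4D/(2 pi f)^2 = {theory:.3e}")
print(f"coefficient of variation across single trajectories: {S[:, k].std() / S[:, k].mean():.2f}")
```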
Riemannian manifolds
Brownian motion is usually considered to take place in Euclidean space. It is natural to consider how such motion generalizes to more complex shapes, such as surfaces or higher dimensional manifolds. The formalization requires the space to possess some form of a derivative, as well as a metric, so that a Laplacian can be defined. Both of these are available on Riemannian manifolds.
Riemannian manifolds have the property that geodesics can be described in polar coordinates; that is, displacements are always in a radial direction, at some given angle. Uniform random motion is then described by Gaussians along the radial direction, independent of the angle, the same as in Euclidean space.
The infinitesimal generator (and hence characteristic operator) of Brownian motion on Euclidean space R^n is (1/2)Δ, where Δ denotes the Laplace operator. Brownian motion on an n-dimensional Riemannian manifold can be defined as the diffusion on the manifold whose characteristic operator is given by (1/2)Δ_LB, half the Laplace–Beltrami operator Δ_LB.
One of the topics of study is a characterization of the Poincaré recurrence time for such systems.
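As a toy example, Brownian motion on the unit 2-sphere can be simulated by taking Gaussian steps in the tangent plane and projecting back onto the sphere; for small time steps this crude tangent-step-and-project scheme (an assumption of this sketch, not a construction given in the text) approximates the diffusion generated by half the Laplace–Beltrami operator.

```python
import numpy as np

rng = np.random.default_rng(4)

def brownian_on_sphere(n_steps, dt, x0=(0.0, 0.0, 1.0)):
    """Tangent-step-and-project approximation of Brownian motion on the unit sphere."""
    x = np.asarray(x0, dtype=float)
    x /= np.linalg.norm(x)
    path = np.empty((n_steps + 1, 3))
    path[0] = x
    for i in range(1, n_steps + 1):
        xi = np.sqrt(dt) * rng.standard_normal(3)
        xi -= np.dot(xi, x) * x       # keep only the component in the tangent plane
        x = x + xi
        x /= np.linalg.norm(x)        # project the displaced point back onto the sphere
        path[i] = x
    return path

path = brownian_on_sphere(n_steps=50_000, dt=1.0e-4)
print("stays on the unit sphere:", bool(np.allclose(np.linalg.norm(path, axis=1), 1.0)))
```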
Narrow escape
The narrow escape problem is a ubiquitous problem in biology, biophysics and cellular biology which has the following formulation: a Brownian particle (ion, molecule, or protein) is confined to a bounded domain (a compartment or a cell) by a reflecting boundary, except for a small window through which it can escape. The narrow escape problem is that of calculating the mean escape time. This time diverges as the window shrinks, thus rendering the calculation a singular perturbation problem.
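A rough Monte Carlo sketch of the problem in a two-dimensional disk is given below (the time step, particle count, and reflection rule are crude choices made here for illustration only). The estimated mean escape time grows as the absorbing window shrinks, consistent with the known logarithmic divergence in two dimensions.

```python
import numpy as np

rng = np.random.default_rng(5)

def mean_escape_time(eps, D=1.0, dt=5.0e-4, n_particles=200, max_steps=400_000):
    """Mean first-passage time out of the unit disk through the arc |theta| < eps."""
    sigma = np.sqrt(2.0 * D * dt)
    times = []
    for _ in range(n_particles):
        p = np.zeros(2)                          # start at the centre of the disk
        for step in range(1, max_steps + 1):
            p = p + sigma * rng.standard_normal(2)
            r = np.hypot(p[0], p[1])
            if r >= 1.0:
                if abs(np.arctan2(p[1], p[0])) < eps:
                    times.append(step * dt)      # escaped through the small window
                    break
                p *= (2.0 - r) / r               # crude radial reflection off the wall
        else:
            times.append(max_steps * dt)         # truncated; did not escape in time
    return float(np.mean(times))

for eps in (0.5, 0.2, 0.1):
    print(f"half-width eps = {eps:4.2f}  mean escape time ~ {mean_escape_time(eps):6.2f}")
```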
See also
References
Further reading
Also includes a subsequent defense by Brown of his original observations, Additional remarks on active molecules.
Lucretius, On The Nature of Things, translated by William Ellery Leonard. (on-line version, from Project Gutenberg. See the heading 'Atomic Motions'; this translation differs slightly from the one quoted).
Nelson, Edward, (1967). Dynamical Theories of Brownian Motion. (PDF version of this out-of-print book, from the author's webpage.) This is primarily a mathematical work, but the first four chapters discuss the history of the topic, in the era from Brown to Einstein.
See also Perrin's book "Les Atomes" (1914).
Thiele, T. N.
Danish version: "Om Anvendelse af mindste Kvadraters Methode i nogle Tilfælde, hvor en Komplikation af visse Slags uensartede tilfældige Fejlkilder giver Fejlene en 'systematisk' Karakter".
French version: "Sur la compensation de quelques erreurs quasi-systématiques par la méthodes de moindre carrés" published simultaneously in Vidensk. Selsk. Skr. 5. Rk., naturvid. og mat. Afd., 12:381–408, 1880.
External links
Einstein on Brownian Motion
Discusses history, botany and physics of Brown's original observations, with videos
"Einstein's prediction finally witnessed one century later" : a test to observe the velocity of Brownian motion
Large-Scale Brownian Motion Demonstration
Category:Statistical mechanics
Category:Wiener process
Category:Fractals
Category:Colloidal chemistry
Category:Robert Brown (botanist, born 1773)
Category:Albert Einstein
Category:Articles containing video clips
Category:Lévy processes
|
physics
| 4,153
|
4925
|
Blue whale
|
https://en.wikipedia.org/wiki/Blue_whale
|
The blue whale (Balaenoptera musculus) is a marine mammal and a baleen whale. Reaching a maximum confirmed length of and weighing up to , it is the largest animal known to have ever existed. The blue whale's long and slender body can be of various shades of greyish-blue on its upper surface and somewhat lighter underneath. Four subspecies are recognized: B. m. musculus in the North Atlantic and North Pacific, B. m. intermedia in the Southern Ocean, B. m. brevicauda (the pygmy blue whale) in the Indian Ocean and South Pacific Ocean, and B. m. indica in the Northern Indian Ocean. There is a population in the waters off Chile that may constitute a fifth subspecies.
In general, blue whale populations migrate between their summer feeding areas near the poles and their winter breeding grounds near the tropics. There is also evidence of year-round residencies, and partial or age- and sex-based migration. Blue whales are filter feeders; their diet consists almost exclusively of krill. They are generally solitary or gather in small groups, and have no well-defined social structure other than mother–calf bonds. Blue whales vocalize, with a fundamental frequency ranging from 8 to 25 Hz; their vocalizations may vary by region, season, behavior, and time of day. Orcas are their only natural predators.
The blue whale was abundant in nearly all the Earth's oceans until the end of the 19th century. It was hunted almost to the point of extinction by whalers until the International Whaling Commission banned all blue whale hunting in 1966. The International Union for Conservation of Nature has listed blue whales as Endangered as of 2018. Blue whales continue to face numerous man-made threats such as ship strikes, pollution, ocean noise, and climate change.
Taxonomy
Nomenclature
The genus name, Balaenoptera, means winged whale, while the species name, musculus, could mean "muscle" or a diminutive form of "mouse", possibly a pun by Carl Linnaeus when he named the species in Systema Naturae. One of the first published descriptions of a blue whale comes from Robert Sibbald's Phalainologia Nova, after Sibbald found a stranded whale in the estuary of the Firth of Forth, Scotland, in 1692. The name "blue whale" was derived from the Norwegian blåhval, coined by Svend Foyn shortly after he had perfected the harpoon gun. The Norwegian scientist G. O. Sars adopted it as the common name in 1874.
Blue whales were referred to as "Sibbald's rorqual", after Robert Sibbald, who first described the species. Whalers sometimes referred to them as "sulphur bottom" whales, as the bellies of some individuals are tinged with yellow. This tinge is due to a coating of huge numbers of diatoms. (Herman Melville briefly refers to "sulphur bottom" whales in his novel Moby-Dick.)
Evolution
Blue whales are rorquals in the family Balaenopteridae. A 2018 analysis estimates that the Balaenopteridae family diverged from other families between 10.48 and 4.98 million years ago during the late Miocene. The earliest discovered anatomically modern blue whale is a partial skull fossil from southern Italy identified as B. cf. musculus, dating to the Early Pleistocene, roughly 1.5–1.25 million years ago. The Australian pygmy blue whale diverged during the Last Glacial Maximum. Their more recent divergence has resulted in the subspecies having a relatively low genetic diversity, and New Zealand blue whales have an even lower genetic diversity.
Whole genome sequencing suggests that blue whales are most closely related to sei whales with gray whales as a sister group. This study also found significant gene flow between minke whales and the ancestors of the blue and sei whale. Blue whales also displayed high genetic diversity.
Hybridization
Blue whales are known to interbreed with fin whales. (This may have already been known to Icelanders in the 17th century.) The earliest description of a possible hybrid between a blue whale and a fin whale was an anomalous female whale with the features of both the blue and the fin whales taken in the North Pacific. A whale captured off northwestern Spain in 1984 was found to have been the product of a blue whale mother and a fin whale father.
Two live blue-fin whale hybrids have since been documented in the Gulf of St. Lawrence (Canada), and in the Azores (Portugal). DNA tests done in Iceland on a blue whale killed in July 2018 by the Icelandic whaling company Hvalur hf., found that the whale was the offspring of a male fin whale and female blue whale; however, the results are pending independent testing and verification of the samples. Because the International Whaling Commission classified blue whales as a "Protection Stock", trading their meat is illegal, and the kill is an infraction that must be reported. Blue-fin hybrids have been detected from genetic analysis of whale meat samples taken from Japanese markets. Blue-fin whale hybrids are capable of being fertile. Molecular tests on a pregnant female whale caught off Iceland in 1986 found that it had a blue whale mother and a fin whale father, while its fetus was sired by a blue whale.
In 2024, a genome analysis of North Atlantic blue whales found evidence that approximately 3.5% of the blue whales' genome was derived from hybridization with fin whales. Gene flow was found to be unidirectional from fin whales to blue whales. Comparison with Antarctic blue whales showed that this hybridization began after the separation of the northern and southern populations. Despite their smaller size, fin whales have similar cruising and sprinting speeds to blue whales, which would allow fin males to complete courtship chases with blue females.
There is a reference to a humpback–blue whale hybrid in the South Pacific, attributed to marine biologist Michael Poole.
Subspecies and stocks
At least four subspecies of blue whale are traditionally recognized, some of which are divided into population stocks or "management units". Like many large rorquals, the blue whale is a cosmopolitan species. They have a worldwide distribution, but are mostly absent from the Arctic Ocean and the Mediterranean, Okhotsk, and Bering Sea.
Northern subspecies (B. m. musculus)
North Atlantic population – This population is mainly documented from New England along eastern Canada to Greenland, particularly in the Gulf of St. Lawrence, during summer, though some individuals may remain there all year. They also aggregate near Iceland and have increased their presence in the Norwegian Sea. They are reported to migrate south to the West Indies, the Azores and northwest Africa.
Eastern North Pacific population – Whales in this region mostly feed off California's coast from summer to fall and then Oregon, Washington State, the Alaska Gyre and Aleutian Islands later in the fall. During winter and spring, blue whales migrate south to the waters of Mexico, mostly the Gulf of California, and the Costa Rica Dome, where they both feed and breed.
Central/Western Pacific population – This stock is documented around the Kamchatka Peninsula during the summer; some individuals may remain there year-round. They have been recorded wintering in Hawaiian waters, though some can be found in the Gulf of Alaska during fall and early winter.
Northern Indian Ocean subspecies (B. m. indica) – This subspecies can be found year-round in the northwestern Indian Ocean, though some individuals have been recorded travelling to the Crozet Islands between summer and fall.
Pygmy blue whale (B. m. brevicauda)
Madagascar population – This population migrates between the Seychelles and Amirante Islands in the north and the Crozet Islands and Prince Edward Islands in the south where they feed, passing through the Mozambique Channel.
Australia/Indonesia population – Whales in this region appear to winter off Indonesia and migrate to their summer feeding grounds off the coast of Western Australia, with major concentrations at Perth Canyon and an area stretching from the Great Australian Bight to Bass Strait.
Eastern Australia/New Zealand population – This stock may reside in the Tasman Sea and the Lau Basin in winter and feed mostly in the South Taranaki Bight and off the coast of eastern North Island. Blue whales have been detected around New Zealand throughout the year.
Antarctic subspecies (B. m. intermedia) – This subspecies includes all populations found around the Antarctic. They have been recorded to travel as far north as the Eastern Tropical Pacific, the central Indian Ocean, and the waters of southwestern Australia and northern New Zealand.
Blue whales off the Chilean coast might be a separate subspecies based on their geographic separation, genetics, and unique song types. Chilean blue whales might overlap in the Eastern Tropical Pacific with Antarctic blue whales and Eastern North Pacific blue whales. Chilean blue whales are genetically differentiated from Antarctic blue whales such that interbreeding is unlikely. However, the genetic distinction is less between them and the Eastern North Pacific blue whale, hence there might be gene flow between the Southern and Northern Hemispheres. A 2019 study by Luis Pastene, Jorge Acevedo and Trevor Branch provided new morphometric data from a survey of 60 Chilean blue whales, hoping to address the debate about the possible distinction of this population from others in the Southern Hemisphere. Data from this study, based on whales collected in the 1965/1966 whaling season, shows that both the maximum and mean body length of Chilean blue whales lie between the corresponding values for pygmy and Antarctic blue whales. Data also indicates a potential difference in snout-eye measurements between the three, and a significant difference in fluke-anus length between the Chilean population and pygmy blue whales. This further confirms Chilean blue whales as a separate population, and implies that they do not fall under the same subspecies as the pygmy blue whale (B. m. brevicauda).
A 2024 genomic study of the global blue whale population found support for the subspecific status of Antarctic and Indo-western Pacific blue whales but not eastern Pacific blue whales. The study found "...divergence between the eastern North and eastern South Pacific, and among the eastern Indian Ocean, the western South Pacific and the northern Indian Ocean." and "no divergence within the Antarctic".
Description
The blue whale is a slender-bodied cetacean with a broad U-shaped head; thin, elongated flippers; a small sickle-shaped dorsal fin located close to the tail, and a large tail stock at the root of the wide and thin flukes. The upper jaw is lined with 70–395 black baleen plates. The throat region has 60–88 grooves which allow the skin to expand during feeding. It has two blowholes that can squirt up in the air. The skin has a mottled grayish-blue coloration, appearing blue underwater. The mottling patterns near the dorsal fin vary between individuals. The underbelly has lighter pigmentation and can appear yellowish due to diatoms in the water, which historically earned them the nickname "sulphur bottom". The male blue whale has the largest penis in the animal kingdom, at around long and wide.
Size
The blue whale is the largest animal known ever to have existed. Some studies have estimated that certain shastasaurid ichthyosaurs and the ancient whale Perucetus could have rivalled the blue whale in size, with Perucetus actually being heavier with a mean weight of . However, these estimates were based on fragmentary remains, and the proposed size for Perucetus was disputed by studies in 2024. Other studies estimate that, on land, large sauropods like Bruhathkayosaurus (mean weight: 110–170 tons) and Maraapunisaurus (mean weight: 80–120 tons) might have rivalled the blue whale, with the former even exceeding the blue whale based on its most liberal estimates (240 tons). However, these estimates were based on even more fragmentary specimens that had disintegrated by the time estimates could be made.
The International Whaling Commission (IWC) whaling database reports 88 individuals longer than , including one of . The Discovery Committee reported lengths up to . The longest scientifically measured individual blue whale was from rostrum tip to tail notch. Female blue whales are larger than males. Hydrodynamic models suggest a blue whale could not exceed because of metabolic and energy constraints. The existence of blue whales exceeding in length has been questioned.
The average length of sexually mature female blue whales is for Eastern North Pacific blue whales, for central and western North Pacific blue whales, for North Atlantic blue whales, for Antarctic blue whales, for Chilean blue whales, and for pygmy blue whales. Length measurements of blue whales in the Gulf of California suggest a mean length of and a maximum length of , which is comparable to Northeast Pacific blue whales.
In the Northern Hemisphere, males weigh an average and females . Eastern North Pacific blue whale males average and females . Antarctic males average and females . Pygmy blue whale males average to . The weight of the heart of a stranded North Atlantic blue whale was , the largest known in any animal. The record-holder blue whale was caught in the Southern Ocean on March 20, 1947, and was recorded as measuring long and weighing , with estimates of up to .
In 2024, Motani and Pyenson calculated the body mass of blue whales at different lengths, compiling records of their sizes from previous academic literatures and using regression analyses and volumetric analyses. A long individual was estimated to weigh approximately , while a long individual was estimated to weigh approximately . Considering that the largest blue whale was indeed long, they estimated that a blue whale of such length would have weighed approximately . In 2025, Paul and Larramendi estimated that blue whales could exceed , but likely not by as much as Motani and Pyenson documented.
During the harvest of a female blue whale, Messrs. Irvin and Johnson collected a fetus that is now 70% preserved and used for educational purposes. The fetus was collected in 1922, so some shrinkage may have occurred, making visualization of some features fairly difficult. However, thanks to this specimen researchers now know that the external length of a blue whale fetus at this stage is approximately 133 mm. In terms of development, the fetus lies at the point where the embryonic and fetal phases converge, and it is the youngest gestational-age specimen of the species on record.
Life span
Blue whales live around 80–90 years or more. Scientists look at a blue whale's earwax or ear plug to estimate its age. Each year, a light and dark layer of wax is laid corresponding with fasting during migration and feeding time. Each set is thus an indicator of age. The oldest blue whale found was determined, using this method, to be 110 years old. The maximum age of a pygmy blue whale determined this way is 73 years. Long-term identification studies in the Northeast Pacific suggest that they live for at least 40–45 years. In addition, female blue whales develop scars or corpora albicantia on their ovaries every time they ovulate. In a female pygmy blue whale, one corpus albicans is formed on average every 2.6 years.
Behavior and ecology
The blue whale is usually solitary, but can be found in pairs. When productivity is high enough, blue whales can be seen in gatherings of more than 50 individuals. Populations may go on long migrations, traveling to their summer feeding grounds towards the poles and then heading to their winter breeding grounds in more equatorial waters. The animals appear to use memory to locate the best feeding areas. There is evidence of alternative strategies, such as year-round residency, and partial (where only some individuals migrate) or age/sex-based migration. Some whales have been recorded feeding in breeding grounds. Blue whales typically swim at a steady cruising speed but may swim faster during encounters with boats, predators or other individuals. Their massive size limits their ability to breach.
The greatest dive depth reported from tagged blue whales was . Their theoretical aerobic dive limit was estimated at 31.2 minutes, however, the longest dive measured was 15.2 minutes. The deepest confirmed dive from a pygmy blue whale was . A blue whale's heart rate can drop to 2 beats per minute (bpm) at deep depths, but upon surfacing, can rise to 37 bpm, which is close to its peak heart rate.
Diet and feeding
The blue whale's diet consists almost exclusively of krill, which they capture through lunge feeding, where they swim towards krill at high speeds with their mouths open up to 80 degrees. They may engulf of water at one time. They squeeze the water out through their baleen plates with pressure from the throat pouch and tongue, and swallow the remaining krill. Blue whales have been recorded making 180° rolls during lunge-feeding, possibly allowing them to search the prey field and find the densest patches.
While pursuing krill patches, blue whales maximize their calorie intake by increasing the number of lunges while selecting the thickest patches. This provides them enough energy for everyday activities while storing additional energy necessary for migration and reproduction. Due to their size, blue whales have larger energetic demands than most animals resulting in their need for this specific feeding habit. Blue whales have to engulf densities greater than 100 krill/m3 to maintain the cost of lunge feeding. They can consume from one mouthful of krill, which can provide up to 240 times more energy than used in a single lunge. It is estimated that an average-sized blue whale must consume of krill a day. On average, a blue whale eats each day.
In the Southern Ocean, blue whales feed on Antarctic krill (Euphausia superba). In South Australia, pygmy blue whales (B. m. brevicauda) feed on Nyctiphanes australis. In California, they feed mostly on Thysanoessa spinifera, but also less commonly on North Pacific krill (Euphausia pacifica). Research on the Eastern North Pacific population shows that when diving to feed on krill, the whales reach an average depth of 201 meters, with dives lasting 9.8 minutes on average.
While most blue whales feed almost exclusively on krill, the Northern Indian Ocean subspecies (B. m. indica) instead feeds predominantly on sergestid shrimp. To do so, they dive deeper and for longer periods of time than blue whales in other regions of the world, with dives of 10.7 minutes on average, and a hypothesized dive depth of about 300 meters. Fecal analysis also found the presence of fish, krill, amphipods, cephalopods, and scyphozoan jellyfish in their diet.
Blue whales appear to avoid directly competing with other baleen whales. Different whale species select different feeding spaces and times as well as different prey species. In the Southern Ocean, baleen whales appear to feed on Antarctic krill of different sizes, which may lessen competition between them.
Blue whale feeding habits may differ due to situational disturbances, such as environmental shifts or human interference, which can cause a change in diet as a stress response. To investigate these changing situations, one study measured cortisol levels in blue whales and compared them with the levels of stressed individuals, giving a closer look at the reasons behind their dietary and behavioral changes.
Reproduction and birth
The age of sexual maturity in blue whales is thought to be between 5 and 15 years, with females reaching an average of 10 years and males reaching an average of 12 years. In the Northern Hemisphere, the length at which they reach maturity is for females and for males. In the Southern Hemisphere, the length of maturity is and for females and males respectively. Male pygmy blue whales average at sexual maturity. Female pygmy blue whales are in length and roughly 10 years old at the age of sexual maturity. Since corpora are added every ~2.5 years after sexual maturity, physical maturity is assumed to occur at 35 years. Little is known about mating behavior, or breeding and birthing areas. Blue whales appear to be polygynous, with males competing for females. A male blue whale typically trails a female and will fight off potential rivals. The species mates from fall to winter.
Pregnant females eat roughly four percent of their body weight daily, amounting to 60% of their overall body weight throughout summer foraging periods. Gestation may last 10–12 months with calves being long and weighing at birth. Estimates suggest that because calves require milk per kg of mass gain, blue whales likely produce of milk per day (ranging from of milk per day). The first video of a calf thought to be nursing was filmed in New Zealand in 2016. Calves may be weaned when they reach 6–8 months old at a length of . A newborn blue whale calf gains approximately per day. They gain roughly during the weaning period. Interbirth periods last two to three years; they average 2.6 years in pygmy blue whales. Mother-calf pairings are infrequently observed, and this may be due to mothers birthing and weaning their young in-between their entry and return to their summer feeding grounds.
Vocalizations
Blue whales produce some of the loudest and lowest frequency vocalizations in the animal kingdom, and their inner ears appear well adapted for detecting low-frequency sounds. The fundamental frequency for blue whale vocalizations ranges from 8 to 25 Hz. The maximum loudness is 188 dB. Blue whale songs vary between populations.
Vocalizations produced by the Eastern North Pacific population have been well studied. This population produces pulsed calls ("A") and tonal calls ("B"), upswept tones that precede type B calls ("C") and separate downswept tones ("D"). A and B calls are often produced in repeated co-occurring sequences and sung only by males, suggesting a reproductive function. D calls may have multiple functions. They are produced by both sexes during social interactions while feeding, and by males when competing for mates.
Blue whale calls recorded off Sri Lanka have a three-unit phrase. The first unit is a 19.8 to 43.5 Hz pulsive call, and is normally 17.9 ± 5.2 seconds long. The second unit is a 55.9 to 72.4 Hz FM upsweep that is 13.8 ± 1.1 seconds long. The final unit is 28.5 ± 1.6 seconds long with a tone of 108 to 104.7 Hz. A blue whale call recorded off Madagascar, a two-unit phrase, consists of 5–7 pulses with a center frequency of 35.1 ± 0.7 Hz lasting 4.4 ± 0.5 seconds proceeding a 35 ± 0 Hz tone that is 10.9 ± 1.1 seconds long. In the Southern Ocean, blue whales produce 18-second vocals which start with a 9-second-long, 27 Hz tone, and then a 1-second downsweep to 19 Hz, followed by a downsweep further to 18 Hz. Other vocalizations include 1–4 second long, frequency-modulated calls with a frequency of 80 and 38 Hz.
There is evidence that some blue whale songs have temporally declined in tonal frequency. The vocalization of blue whales in the Eastern North Pacific decreased in tonal frequency by 31% from the early 1960s to the early 21st century. The frequency of pygmy blue whales in the Antarctic has decreased by a few tenths of a hertz every year starting in 2002. It is possible that as blue whale populations recover from whaling, there is increasing sexual selection pressure (i.e., a lower frequency indicates a larger body size). In February 2025, a study drawing on "more than six years of acoustic monitoring" off California found that during a heatwave the blue whales vocalized less often, potentially because they had to spend their energy finding food made increasingly scarce by the effects of climate change. A June 2022 study suggested that the decline in song frequency in blue whales is simply a cultural phenomenon.
Predators
The only known natural predator to blue whales is the orca, although the rate of fatal attacks by orcas is unknown. Photograph-identification studies of blue whales have estimated that a high proportion of the individuals in the Gulf of California have rake-like scars, indicative of encounters with orcas. Off southeastern Australia, 3.7% of blue whales photographed had rake marks and 42.1% of photographed pygmy blue whales off Western Australia had rake marks. Documented predation by orcas has been rare. A blue whale mother and calf were first observed being chased at high speeds by orcas off southeastern Australia. The first documented attack occurred in 1977 off southwestern Baja California, Mexico, but the injured whale escaped after five hours. Four more blue whales were documented as being chased by a group of orcas between 1982 and 2003. The first documented predation event by orcas occurred in September 2003, when a group of orcas in the Eastern Tropical Pacific was encountered feeding on a recently killed blue whale calf. In March 2014, a commercial whale watch boat operator recorded an incident involving a group of orcas harassing a blue whale in Monterey Bay. The blue whale defended itself by slapping its tail. A similar incident was recorded by a drone in Monterey Bay in May 2017. The first direct observations of orca predation occurred off the south coast of Western Australia, two in 2019 and one more in 2021. The first victim was estimated to be .
Infestations and health threats
In Antarctic waters, blue whales accumulate diatoms of the species Cocconeis ceticola and the genera Navicola, which are normally removed when the whales enter warmer waters. Barnacles such as Coronula diadema, Coronula reginae, and Cryptolepas rhachianecti, latch on to whale skin deep enough to leave behind a pit if removed. Whale lice species make their home in cracks of the skin and are relatively harmless. The copepod species Pennella balaenopterae digs in and attaches itself to the blubber to feed on. Intestinal parasites include the trematode genera Ogmogaster and Lecithodesmus; the tapeworm genera Priapocephalus, Phyllobotrium, Tetrabothrius, Diphyllobotrium, and Diplogonoporus; and the thorny-headed worm genus Bolbosoma. In the North Atlantic, blue whales also contain the protozoans Entamoeba, Giardia and Balantidium.
Conservation
The global blue whale population is estimated to be 5,000–15,000 mature individuals and 10,000–25,000 total as of 2018. By comparison, there were at least 140,000 mature whales in 1926. There are an estimated total of 1,000–3,000 whales in the North Atlantic, 3,000–5,000 in the North Pacific, and 5,000–8,000 in the Antarctic. There are possibly 1,000–3,000 whales in the eastern South Pacific while the pygmy blue whale may number 2,000–5,000 individuals. Blue whales have been protected in areas of the Southern Hemisphere since 1939. In 1955, they were given complete protection in the North Atlantic under the International Convention for the Regulation of Whaling; this protection was extended to the Antarctic in 1965 and the North Pacific in 1966. The protected status of North Atlantic blue whales was not recognized by Iceland until 1960. In the United States, the species is protected under the Endangered Species Act.
Blue whales are formally classified as endangered under both the U.S. Endangered Species Act and the IUCN Red List. They are also listed on Appendix I under the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) and the Convention on the Conservation of Migratory Species of Wild Animals. Although, for some populations, there is not enough information on current abundance trends (e.g., pygmy blue whales), others are critically endangered (e.g., Antarctic blue whales).
Threats
In 2017, DNA evidence was used to identify whale bones at Icelandic archaeological sites. Of the 124 bones analyzed more than 50% were from blue whales and some dated as far back as 900 CE. This, and other evidence, suggests that Icelanders were hunting whales as early as the 9th century, just as the settlement of Iceland began. Thus Icelanders would have been among the earliest known humans to hunt the blue whale.
Blue whales were initially difficult to hunt because of their size and speed. This began to change in the mid-19th century with the development of harpoons that can be shot as projectiles. Blue whale whaling peaked between 1930 and 1931 with 30,000 animals taken. Harvesting of the species was particularly high in the Antarctic, with 350,000–360,000 whales taken in the first half of the 20th century. In addition, 11,000 North Atlantic whales (mostly around Iceland) and 9,500 North Pacific whales were killed during the same period. The International Whaling Commission banned all hunting of blue whales in 1966 and gave them worldwide protection. However, the Soviet Union continued to illegally hunt blue whales and other species up until the 1970s.
Ship strikes are a significant mortality factor for blue whales, especially off the U.S. West Coast. A total of 17 blue whales were killed or suspected to have been killed by ships between 1998 and 2019 off the U.S. West Coast. Five deaths in 2007 off California were considered an unusual mortality event, as defined under the Marine Mammal Protection Act. Lethal ship strikes are also a problem in Sri Lankan waters, where their habitat intersects with one of the world's most active shipping routes. Here, strikes caused the deaths of eleven blue whales in 2010 and 2012, and at least two in 2014. Ship-strike mortality claimed the lives of two blue whales off southern Chile in the 2010s. Possible measures for reducing future ship strikes include better predictive models of whale distribution, changes in shipping lanes, vessel speed reductions, and seasonal and dynamic management of shipping lanes. Few cases of blue whale entanglement in commercial fishing gear have been documented. The first report in the U.S. occurred off California in 2015, reportedly some type of deep-water trap/pot fishery. Three more entanglement cases were reported in 2016. In Sri Lanka, a blue whale was documented with a net wrapped through its mouth, along the sides of its body, and wound around its tail.
Increasing man-made underwater noise impacts blue whales. They may be exposed to noise from commercial shipping and seismic surveys as a part of oil and gas exploration. Blue whales in the Southern California Bight decreased calling in the presence of mid-frequency active (MFA) sonar. Exposure to simulated MFA sonar was found to interrupt blue whale deep-dive feeding, but no changes in behavior were observed in individuals feeding at shallower depths. The responses also depended on the animal's behavioral state, its (horizontal) distance from the sound source and the availability of prey.
The potential impacts of pollutants on blue whales is unknown. However, because blue whales feed low on the food chain, there is a lesser chance for bioaccumulation of organic chemical contaminants. Analysis of the earwax of a male blue whale killed by a collision with a ship off the coast of California showed contaminants like pesticides, flame retardants, and mercury. Reconstructed persistent organic pollutant (POP) profiles suggested that a substantial maternal transfer occurred during gestation and/or lactation. Male blue whales in the Gulf of St. Lawrence, Canada, were found to have higher concentrations of PCBs, dichlorodiphenyltrichloroethane (DDT), metabolites, and several other organochlorine compounds relative to females, reflecting maternal transfer of these persistent contaminants from females into young.
See also
Largest organisms
List of cetaceans
List of largest mammals
List of whale vocalizations
Note
References
Further reading
NOAA Fisheries, Office of Protected Resources Blue whale biology & status
External links
Blue whale vocalizations – Cornell Lab of Ornithology—Bioacoustics Research Program (archived 26 February 2015)
Blue whale video clips and news from the BBC – BBC Wildlife Finder
Voices in the Sea – Sounds of the Blue Whale
NOAA Stock Assessments
Life of a Hunter: Blue Whale – BBC America
Living With Predators – BBC America
Category:Balaenoptera
Category:Mammals described in 1758
Category:Conservation-reliant species
Category:Cosmopolitan mammals
Category:Biological records
Category:ESA endangered species
Category:Animal taxa named by Carl Linnaeus
|
nature_wildlife
| 5,205
|
5468
|
Cayman Islands
|
https://en.wikipedia.org/wiki/Cayman_Islands
|
The Cayman Islands () is a self-governing British Overseas Territory in the western Caribbean. It is the largest by population of all the British Overseas Territories. The territory comprises the three islands of Grand Cayman, Cayman Brac and Little Cayman, which are located south of Cuba and north-east of Honduras, between Jamaica and Mexico's Yucatán Peninsula. The capital city is George Town on Grand Cayman, which is the most populous of the three islands.
The Cayman Islands is considered to be part of the geographic Western Caribbean zone as well as the Greater Antilles. The territory is a major offshore financial centre for international businesses and wealthy individuals mainly due to the state charging no tax on income earned or stored.
With a GDP per capita of $97,750 in 2023, the Cayman Islands has the highest standard of living in the Caribbean, and one of the highest in the world. Immigrants from over 140 countries and territories reside in the Cayman Islands.
History
Origins and colonization
No evidence has been found that the islands had been occupied before their discovery by Europeans. The Cayman Islands got their name from the word for crocodile (caiman) in the language of the Arawak-Taíno people. It is believed that the first European to sight the islands was Christopher Columbus, on 10 May 1503, during his final voyage to the Americas. He named them "Las Tortugas", after the large number of turtles found there (which were soon hunted to near-extinction). However, in succeeding decades, the islands began to be referred to as "Caimanas" or "Caymanes".
No immediate colonisation followed Columbus's sighting, but a variety of settlers from various backgrounds eventually arrived, including pirates, shipwrecked sailors, and deserters from Oliver Cromwell's army in Jamaica. Sir Francis Drake briefly visited the islands in 1586.
The first recorded permanent inhabitant, Isaac Bodden, was born on Grand Cayman around 1661. He was the grandson of an original settler named Bodden, probably one of Oliver Cromwell's soldiers involved in the capture of Jamaica from Spain in 1655.
England took formal control of the Cayman Islands, along with Jamaica, as a result of the Treaty of Madrid of 1670. That same year saw an attack on a turtle fishing settlement on Little Cayman by the Spanish under Portuguese privateer Manuel Ribeiro Pardal. Following several unsuccessful attempts at settlement in what had by then become a haven for pirates, a permanent English-speaking population in the islands dates from the 1730s. With settlement, after the first royal land grant by the governor of Jamaica in 1734, came the introduction of slaves. Many were purchased and brought to the islands from Africa. That has resulted in the majority of native Caymanians being of African or British descent.
On 8 February 1794, the Caymanians rescued the crews of a group of ten merchant ships, including HMS Convert, an incident that has since become known as the Wreck of the Ten Sail. The ships had struck a reef and run aground during rough seas. Legend has it that King George III rewarded the islanders for their generosity with a promise never to introduce taxes, because one of the ships carried a member of the King's family. Despite the legend, the story is not true.
19th century
The first census taken in the islands, in 1802, showed the population on Grand Cayman to be 933, with 545 of those inhabitants being slaves. Slavery was abolished in the Cayman Islands in 1833, following the passing of the Slavery Abolition Act by the British Parliament. At the time of abolition, there were over 950 slaves of African ancestry, owned by 116 families.
On 22 June 1863, the Cayman Islands was officially declared and administered as a dependency of the Crown Colony of Jamaica. The islands continued to be governed as part of the Colony of Jamaica until 1962, when they became a separate Crown colony, after Jamaica became an independent Commonwealth realm.
20th century
In the 1950s, tourism began to flourish, following the opening of Owen Roberts International Airport (ORIA), along with a bank and several hotels, as well as the introduction of a number of scheduled flights and cruise stop-overs. Politically, the Cayman Islands were an internally self-governing territory of Jamaica from 1958 to 1962, but they reverted to direct British rule following the independence of Jamaica in 1962. In 1972, a large degree of internal autonomy was granted by a new constitution, with further revisions being made in 1994. The Cayman Islands government focused on boosting the territory's economy via tourism and the attraction of off-shore finance, both of which mushroomed from the 1970s onwards. Historically, the Cayman Islands has been a tax-exempt destination, and the government has always relied on indirect and not direct taxes. The territory has never levied income tax, capital gains tax, or any wealth tax, making it a popular tax haven.
In April 1986, the first marine protected areas were designated in the Cayman Islands, making them the first islands in the Caribbean to protect their fragile marine life.
21st century
The constitution was further modified in 2001 and 2009, codifying various aspects of human rights legislation.
On 11 September 2004, the island of Grand Cayman, which lies largely unprotected at sea level, was battered by Hurricane Ivan, the worst hurricane to hit the islands in 86 years. It created a storm surge which flooded many areas of Grand Cayman. An estimated 83% of the dwellings on the island were damaged, with 4% requiring complete reconstruction. A reported 70% of all dwellings suffered severe damage from flooding or wind. Another 26% sustained minor damage from partial roof removal, low levels of flooding, or impact with floating or wind-driven hurricane debris. Power, water, and communications were disrupted for months in some areas. Within two years, a major rebuilding program on Grand Cayman meant that its infrastructure was almost back to its pre-hurricane condition. Due to the tropical location of the islands, more hurricanes or tropical systems have affected the Cayman Islands than any other region in the Atlantic basin. On average, it has been brushed, or directly hit, every 2.23 years.
Geography
The islands are in the western Caribbean Sea and are the peaks of an undersea mountain range called the Cayman Ridge (or Cayman Rise). This ridge flanks the Cayman Trough, which lies to the south.Bush, Phillippe G. Grand Cayman, British West Indies. UNESCO Coastal region and small island papers 3. The islands lie in the northwest of the Caribbean Sea, east of Quintana Roo, Mexico and Yucatán State, Mexico, northeast of Costa Rica, north of Panama, south of Cuba and west of Jamaica. They are situated about south of Miami, east of Mexico, south of Cuba, and about northwest of Jamaica. Grand Cayman is by far the largest, with an area of .Bush. Unesco.org. Retrieved on 12 April 2014. Grand Cayman's two "sister islands", Cayman Brac and Little Cayman, are about east north-east of Grand Cayman and have areas of respectively. The nearest land mass from Grand Cayman is the Canarreos Archipelago (about 240 km or 150 miles away), whereas the nearest from the easternmost island Cayman Brac is the Jardines de la Reina archipelago (about 160 km or 100 miles away) – both of which are part of Cuba.
All three islands were formed by large coral heads covering submerged ice-age peaks of western extensions of the Cuban Sierra Maestra range and are mostly flat. One notable exception to this is The Bluff on Cayman Brac's eastern part, which rises to above sea level, the highest point on the islands.
The terrain is mostly a low-lying limestone base surrounded by coral reefs. The portions of prehistoric coral reef that line the coastline and protrude from the water are referred to as ironshore.
Flora
In the Cayman Islands, forest cover is around 53% of the total land area, equivalent to 12,720 hectares of forest in 2020, down from 13,130 hectares in 1990. In 2020, naturally regenerating forest covered 12,720 hectares and planted forest covered 0 hectares. Of the naturally regenerating forest, 0% was reported to be primary forest (consisting of native tree species with no clearly visible indications of human activity). For the year 2015, 0% of the forest area was reported to be under public ownership, 12% under private ownership, and 88% with ownership listed as other or unknown.
Fauna
The mammalian species in the Cayman Islands include the introduced Central American agouti and eight species of bats. At least three now extinct native rodent species were present until the discovery of the islands by Europeans. Marine life around the island of the Grand Cayman includes tarpon, silversides (Atheriniformes), French angelfish (Pomacanthus paru), and giant barrel sponges. A number of cetaceans are found in offshore waters. These species include the goose-beaked whale (Ziphius cavirostris), Blainville's beaked whale (Mesoplodon densirostris) and sperm whale (Physeter macrocephalus).
Cayman avian fauna includes two endemic subspecies of Amazona parrots: Amazona leucocephala hesterna or Cuban amazon, presently restricted to the island of Cayman Brac, but formerly also on Little Cayman, and Amazona leucocephala caymanensis or Grand Cayman parrot, which is native to the Cayman Islands, forested areas of Cuba, and the Isla de la Juventud. Little Cayman and Cayman Brac are also home to red-footed and brown boobies.Red-footed Boobies of Little Cayman – National Trust for the Cayman Islands . Nationaltrust.org.ky. Retrieved on 12 April 2014.Cayman Brac | Caribbean Diving, Cayman Islands Vacation | Cayman Islands . Caymanislands.ky. Retrieved on 12 April 2014. Although the barn owl (Tyto alba) occurs in all three of the islands they are not commonplace. The Cayman Islands also possess five endemic subspecies of butterflies.Askew, R. R. and Stafford, P. A. van B. (2008) Butterflies of the Cayman Islands. Apollo Books, Stenstrup. . These butterfly breeds can be viewed at the Queen Elizabeth II Botanic Park on the Grand Cayman.
Among other notable fauna at the Queen Elizabeth II Botanic Park is the critically threatened blue iguana, which is also known as the Grand Cayman iguana (Cyclura lewisi). The blue iguana is endemic to the Grand CaymanGrand Cayman Blue Iguana takes step back from extinction . IUCN (20 October 2012). Retrieved on 12 April 2014. particularly because of rocky, sunlit, open areas near the island's shores that are advantageous for the laying of eggs. Nevertheless, habitat destruction and invasive mammalian predators remain the primary reasons that blue iguana hatchlings do not survive naturally.
The Cuban crocodile (Crocodylus rhombifer) once inhabited the islands, and the American crocodile (Crocodylus acutus) is also believed to be slowly repopulating the islands from Cuba. The name "Cayman" is derived from a Carib word for the various crocodilians that inhabited the islands.
Climate
The Cayman Islands has a tropical wet and dry climate, with a wet season from May to October, and a dry season that runs from November to April. Seasonally, there is little temperature change.
A major natural hazard is the tropical cyclones that form during the Atlantic hurricane season from June to November.
On 11 and 12 September 2004, Hurricane Ivan struck the Cayman Islands. The storm resulted in two deaths and caused significant damage to the infrastructure on the islands. The total economic impact of the storms was estimated to be $3.4 billion.
Climate data for George Town
Month | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec | Year
Average sea temperature °C (°F) | 26.6 (79.9) | 26.6 (79.9) | 26.8 (80.2) | 27.7 (81.9) | 28.3 (82.9) | 28.7 (83.7) | 29.2 (84.6) | 30.0 (86.0) | 29.9 (85.8) | 29.3 (84.7) | 28.6 (83.5) | 28.0 (82.4) | 27.9 (82.2)
Source #1: seatemperature.org; Source #2: Weather Atlas
Demographics
Demographics and immigration
While there are a large number of generational Caymanians, many Caymanians today have roots in almost every part of the world. Similarly to countries like the United States, the Cayman Islands is a melting pot with citizens of every background. 52.5% of the population is Non-Caymanian, while 47.5% is Caymanian. Jamaicans, who make up 24% of the population, form the largest immigrant community in the country, attributable to not only the close proximity of the Cayman Islands and Jamaica, but also the close cultural, economic and social ties that go back centuries between the two nations. The Cayman Islands was a dependency of Jamaica from 1863 until Jamaica's independence from the United Kingdom in 1962, at which point the Cayman Islands chose to separate from Jamaica and remain under British rule, as it does to this day.
According to the Economics and Statistics Office of the Government of the Cayman Islands, the Cayman Islands had a population of 71,432 at the census of 10 October 2021, which the office estimated to have risen to 81,546 by December 2022, making it the most populous British Overseas Territory. The 2021 census also revealed that 56% of the workforce is non-Caymanian, the first time in the territory's history that working immigrants have outnumbered working Caymanians. Most Caymanians are of mixed African and European ancestry. Slavery occurred but was less common than on other Caribbean islands, and after abolition the black and white communities integrated more readily than in many other Caribbean nations and territories, resulting in a more mixed-race population.
The country's demographics are changing rapidly. Immigration plays a large role, and the age profile recorded in the most recent census has raised concern. Compared with the 2010 census, the 2021 census showed that 36% of Cayman's population growth occurred among persons over age 65, while only 8% occurred among those under age 15. Extremely low birth rates among Caymanians effectively force the government to seek workers from overseas to sustain the economy. This has raised concerns among many young Caymanians, who worry that the workforce is becoming increasingly competitive with the influx of workers and that rents and property prices are rising.
Because the population has grown rapidly over the last decade, former government officials have stressed that the islands need more careful and managed growth, and many worry that the country's infrastructure and services cannot cope with the surge. Given current trends, the population is expected to reach 100,000 before 2030.
District populations
According to the Economics and Statistics Office, the final result of the 10 October 2021 census was 71,432; however, according to a late-2022 population report by the same body, the estimated population at the end of 2022 was 81,546, broken down as follows:
Name of district        Area (km²)   Census 2010   Census 2021   Estimate late 2022
West Bay                17.4         11,222        15,335        16,943
George Town             38.5         28,089        34,921        40,957
Bodden Town             50.5         10,543        14,845        16,957
North Side              39.4         1,479         1,902         2,110
East End                51.1         1,407         1,846         2,274
Total Grand Cayman      197.0        53,160        69,175        79,242
Little Cayman           26.0         197           182           ??
Cayman Brac             36.0         2,099         2,075         2,304
Total Cayman Islands    259.0        55,456        71,432        81,546
Religion
The predominant religion in the Cayman Islands is Christianity (67% in 2021, down from over 80% in 2010). Popular denominations include the United Church of Christ, the Church of God, the Anglican Church, the Baptist Church, the Catholic Church, the Seventh-day Adventist Church, and the Pentecostal Church. The Roman Catholic churches in the islands are St. Ignatius Church in George Town, Christ the Redeemer Church in West Bay, and Stella Maris Church in Cayman Brac. There is also a significant number of Latter-day Saints on Grand Cayman. The majority of citizens are religious; however, the share of the population reporting no religion has been rising since 2000, with 16.7% identifying as non-believers in the 2021 census. Ports are closed on Sundays and Christian holidays. There is also an active synagogue and Jewish community (The Jewish Community of the Cayman Islands, www.jewish.ky) on the island, as well as places of worship in George Town for Jehovah's Witnesses and followers of the Baháʼí Faith.
Religion in the Cayman Islands (2021 census)
Religion                        Population    %
Church of God                   13,424        19.5
None                            11,502        16.7
Roman Catholic                  9,348         13.6
Seventh-day Adventist           5,992         8.7
Nondenominational Christian     5,743         8.3
Baptist                         4,765         6.9
Pentecostal                     4,689         6.8
Presbyterian                    3,914         5.7
Anglican                        1,946         2.8
Hinduism                        1,191         1.7
Wesleyan Church                 1,030         1.5
Jehovah's Witnesses             637           0.9
Methodist                       347           0.5
Islam                           258           0.4
Rastafari                       213           0.3
Judaism                         167           0.2
Other religion                  2,678         3.9
Unknown                         967           1.4
Total                           68,811        100.0
Languages
The official language of the Cayman Islands is English (90%). Islanders' accents retain elements passed down from English, Scottish, and Welsh settlers (among others) in a language variety known as Cayman Creole. Caymanians of Jamaican origin speak their own vernacular (see Jamaican Creole and Jamaican English). It is common to hear residents converse in Spanish, as many citizens have relocated from Latin America to work and live on Grand Cayman; the Latin American nations with the greatest representation are Honduras, Cuba, Colombia, Nicaragua, and the Dominican Republic. Spanish speakers comprise roughly 10 to 12% of the population and predominantly speak Caribbean varieties of Spanish. Tagalog is spoken by about 8% of inhabitants, most of whom are Filipino residents on work permits.
Economy
The economy of the Cayman Islands is dominated by financial services and tourism, which together account for 50–60% of gross domestic product. The territory's zero rate of tax on income and on funds held there has led to its use as a tax haven for corporations; there are 100,000 companies registered in the Cayman Islands, more than the territory has residents. The Cayman Islands have come under criticism over allegations of money laundering and other financial crimes; in 2016, then US president Barack Obama described one building that served as the registered address of over 12,000 corporations as a "tax scam".
The Cayman Islands had a relatively low unemployment rate of about 4.24% as of 2015, down from the 4.7% recorded in 2014.
With an average income of US$109,684, Caymanians have the highest standard of living in the Caribbean. According to the CIA World Factbook, the Cayman Islands' real GDP per capita is the tenth highest in the world, although the CIA's figure for the Cayman Islands dates to 2018 and is likely lower than present-day values. The territory issues its own currency, the Cayman Islands dollar (KYD), which is pegged to the US dollar at US$1.227 to 1 KYD. In many retail stores throughout the islands, however, the KYD is typically exchanged at US$1.25.
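As a rough illustration of the difference between the two rates quoted above, the following sketch converts amounts between KYD and USD using the official peg of US$1.227 per KYD and the US$1.25 rate commonly applied in shops. The function names and rounding are illustrative only and are not drawn from any official source.

```python
# Illustrative conversion between Cayman Islands dollars (KYD) and US dollars (USD),
# using the two rates mentioned in the text above. This is a sketch, not an
# official exchange calculator.

OFFICIAL_PEG_USD_PER_KYD = 1.227  # official peg quoted in the text
RETAIL_USD_PER_KYD = 1.25         # rate typically used in retail stores

def kyd_to_usd(amount_kyd: float, usd_per_kyd: float = OFFICIAL_PEG_USD_PER_KYD) -> float:
    """Convert a KYD amount to USD at the given USD-per-KYD rate."""
    return round(amount_kyd * usd_per_kyd, 2)

def usd_to_kyd(amount_usd: float, usd_per_kyd: float = OFFICIAL_PEG_USD_PER_KYD) -> float:
    """Convert a USD amount to KYD at the given USD-per-KYD rate."""
    return round(amount_usd / usd_per_kyd, 2)

if __name__ == "__main__":
    # A KYD 100 purchase costs US$122.70 at the peg but US$125.00 at the retail rate.
    print(kyd_to_usd(100))                      # 122.7
    print(kyd_to_usd(100, RETAIL_USD_PER_KYD))  # 125.0
    print(usd_to_kyd(50))                       # 40.75
```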
The Cayman Islands have a high cost of living, even compared with the UK and the US. For example, a loaf of multigrain bread costs $5.49 (KYD), while a similar loaf sells for the equivalent of $2.47 (KYD) in the US and $1.36 (KYD) in the UK.
The minimum wage (as of February 2021) is $6 KYD for standard positions, and $4.50 for workers in the service industry, where tips supplement income. This contributes to wealth disparity. A small segment of the population lives in condemned properties lacking power and running water.
The government has established a Needs Assessment Unit to relieve poverty in the islands. Local charities, including Cayman's Acts of Random Kindness (ARK) also provide assistance.
The government's primary source of income is indirect taxation: there is no income tax, capital gains tax, or corporation tax. A tariff of 5% to 22% (automobiles 29.5% to 100%) is levied against goods imported into the islands. Few goods are exempt; notable exemptions include books, cameras, and perfume.
Tourism
One of Grand Cayman's main attractions is Seven Mile Beach, the site of a number of the island's hotels and resorts. Named one of the Ultimate Beaches by Caribbean Travel and Life, Seven Mile Beach (which erosion has reduced to about 5.5 miles) is a public beach on the western shore of Grand Cayman.Seven Mile Beach | Grand Cayman, Caribbean Vacation | Cayman Islands. Caymanislands.ky. Retrieved on 12 April 2014. Historical sites in Grand Cayman, such as Pedro St. James Castle in Savannah, also attract visitors.Pedro St. James | Grand Cayman, Grand Cayman Island | Cayman Islands. Caymanislands.ky. Retrieved on 12 April 2014.
All three islands offer scuba diving, and the Cayman Islands are home to several snorkelling locations where tourists can swim with stingrays. The most popular of these is Stingray City, Grand Cayman. Stingray City is a top attraction that began in the 1980s, when divers started feeding squid to stingrays. The stingrays came to associate the sound of boat motors with food and now visit the area year-round.Stingray City | Grand Cayman, Grand Cayman Vacation | Cayman Islands. Caymanislands.ky. Retrieved on 12 April 2014.
There are two shipwrecks off the shores of Cayman Brac, including the MV Captain Keith Tibbetts.Tim Rock, Lonely Planet Diving & Snorkeling Cayman Islands (2nd edn, 2007), p. 99. Grand Cayman also has several shipwrecks off its shores, including one deliberate one. On 30 September 1994, the USS Kittiwake was decommissioned and struck from the Naval Vessel Register. In November 2008 her ownership was transferred for an undisclosed amount to the government of the Cayman Islands, which had decided to sink the Kittiwake in June 2009 to form a new artificial reef off Seven Mile Beach, Grand Cayman. Following several delays, the ship was finally scuttled according to plan on 5 January 2011. The Kittiwake has become a dynamic environment for marine life; although visitors are not allowed to remove anything, there is much to see. Each of the ship's five decks offers squirrelfish, rare sponges, Goliath groupers, urchins, and more, and both experienced and beginner divers are invited to swim around the wreck.Kittiwake | Cayman Dive, Cayman Islands Vacation | Cayman Islands. Caymanislands.ky (5 January 2011). Retrieved on 12 April 2014. Pirates Week is an annual 11-day November festival started in 1977 by the then Minister of Tourism, Jim Bodden, to boost tourism during the country's slow season.
Other Grand Cayman tourist attractions include the ironshore landscape of Hell; the marine theme park "Cayman Turtle Centre: Island Wildlife Encounter", previously known as "Boatswain's Beach"; the production of gourmet sea salt; and the Mastic Trail, a hiking trail through the forests in the centre of the island. The National Trust for the Cayman Islands provides guided tours weekly on the Mastic Trail and other locations.
Another attraction on Grand Cayman is the Observation Tower, located in Camana Bay. The Observation Tower is 75 feet tall and provides 360-degree views across Seven Mile Beach, George Town, the North Sound, and beyond. It is free to the public, and climbing the tower has become a popular activity in the Cayman Islands.Observation Tower | Camana Bay. CamanaBay.com. Retrieved on 1 August 2014.
Points of interest include the East End Light (sometimes called Gorling Bluff Light), a lighthouse at the east end of Grand Cayman island. The lighthouse is the centrepiece of East End Lighthouse Park, managed by the National Trust for the Cayman Islands; the first navigational aid on the site was the first lighthouse in the Cayman Islands.
Shipping
In all, 360 commercial vessels and 1,674 pleasure craft were registered in the Cayman Islands, totalling 4.3 million GT.
Labour
The Cayman Islands has a population of 69,656 and therefore a limited workforce, so work permits may be granted to foreigners. On average, more than 24,000 foreigners hold valid work permits.
Work permits for non-citizens
To work in the Cayman Islands as a non-citizen, a work permit is required. This involves passing a police background check and a health check. A prospective immigrant worker will not be granted a permit unless certain medical conditions are met, including testing negative for syphilis and HIV. A permit may also be granted to individuals for certain categories of special work.
A foreigner must first have a job to move to the Cayman Islands. The employer applies and pays for the work permit. Work permits are not granted to foreigners who are in the Cayman Islands (unless it is a renewal). The Cayman Islands Immigration Department requires foreigners to remain out of the country until their work permit has been approved.
The Cayman Islands presently imposes a controversial "rollover" policy on expatriate workers who require a work permit. Non-Caymanians are permitted to reside and work within the territory for a maximum of nine years unless they satisfy the criteria for key employees. Non-Caymanians who are "rolled over" may return to work for additional nine-year periods, subject to a one-year gap between their periods of work. The policy has been the subject of some controversy in the press. Law firms have been particularly upset by the recruitment difficulties it has caused.Row brews over rollover, 22 January 2007, Cayman Net News. Other less well-remunerated employment sectors have been affected as well: diving instructors have expressed concerns about safety, and realtors have voiced concerns of their own. Others support the rollover as necessary to protect Caymanian identity in the face of the immigration of large numbers of expatriate workers.Government takes up permit issue, Editorial, 5 March 2006, Caymanian Compass.
Concerns have been expressed that, in the long term, the policy may damage the pre-eminence of the Cayman Islands as an offshore financial centre by making it difficult to recruit and retain experienced staff from onshore financial centres. According to local newspaper reports, government employees are no longer exempt from the "rollover" policy. The governor has used his constitutional powers, which give him absolute control over the disposition of civil service employees, to determine which expatriate civil servants are dismissed after seven years' service and which are not.
This policy is incorporated in the Immigration Law (2003 revision), written by the United Democratic Party government, and subsequently enforced by the People's Progressive Movement Party government. Both governments agree to the term limits on foreign workers, and the majority of Caymanians also agree it is necessary to protect local culture and heritage from being eroded by a large number of foreigners gaining residency and citizenship.
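To make the term-limit arithmetic described above concrete, here is a minimal sketch that, given a work start date, estimates when the nine-year rollover would fall and the earliest return date after the one-year gap. It is purely illustrative: real decisions under the Immigration Law involve key-employee designations, permit renewals, and many other conditions not modelled here, and the function and constant names are invented for this example.

```python
# Minimal sketch of the nine-year "rollover" term limit and one-year gap described
# above. Purely illustrative; actual immigration outcomes depend on many factors
# (key-employee status, permit type, renewals) that this sketch ignores.

from datetime import date

TERM_LIMIT_YEARS = 9  # maximum continuous period of residence and work
GAP_YEARS = 1         # minimum break before another nine-year period may begin

def rollover_dates(start: date, key_employee: bool = False):
    """Return (rollover_date, earliest_return_date), or (None, None) if the
    key-employee exemption applies. Leap-day start dates are not handled."""
    if key_employee:
        return None, None
    rollover = start.replace(year=start.year + TERM_LIMIT_YEARS)
    earliest_return = rollover.replace(year=rollover.year + GAP_YEARS)
    return rollover, earliest_return

if __name__ == "__main__":
    print(rollover_dates(date(2016, 3, 1)))                      # 2025-03-01, 2026-03-01
    print(rollover_dates(date(2016, 3, 1), key_employee=True))   # (None, None)
```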
CARICOM Single Market Economy
The CARICOM (Free Movement) Skilled Persons Act came into effect in July 1997 in some CARICOM countries, such as Jamaica, and has since been adopted by others, such as Trinidad and Tobago. In recognition of the Act, it is possible that CARICOM nationals who hold a Certificate of Recognition of Caribbean Community Skilled Person will be allowed to work in the Cayman Islands under normal working conditions.
Government and politics
The Cayman Islands are a British Overseas Territory, listed by the UN Special Committee of 24 as one of the 17 non-self-governing territories. The current Constitution, incorporating a Bill of Rights, was ordained by a statutory instrument of the United Kingdom in 2009. A Parliament of 19 elected seats (plus two non-voting members appointed by the Governor, bringing the total to 21) is elected by the people every four years to handle domestic affairs. Of the elected Members of Parliament (MPs), seven are chosen to serve as government ministers in a Cabinet headed by the Governor. The Premier is appointed by the Governor.Cayman Islands Constitution, 2009, part III, article 49. Although geographically remote, the islands, like other British Overseas Territories, remain subject to elements of supervisory governance (as the now independent Commonwealth nations once were) exercisable by the UK Government in London.
A Governor is appointed by the King of the United Kingdom on the advice of the British Government to represent the monarch.Cayman Islands Constitution, 2009, part II. Governors can exercise complete legislative and executive authority if they wish through blanket powers reserved to them in the constitution.Constitution, articles 55 and 81. Bills which have passed the Parliament require royal assent before becoming effective. The Constitution empowers the Governor to withhold royal assent in cases where the legislation appears to be repugnant to or inconsistent with the Constitution, or affects the rights and privileges of the Parliament or the Royal Prerogative, or matters reserved to the Governor by article 55.Constitution article 78. The executive authority of the Cayman Islands is vested in the monarch and is exercised by the Government, consisting of the Governor and the Cabinet.Constitution article 43. There is an office of the Deputy Governor, who must be a Caymanian and have served in a senior public office. The Deputy Governor acts as Governor when the office of Governor is vacant, or when the Governor is unable to discharge their duties or is absent from the Cayman Islands.Constitution article 35. The current Governor of the Cayman Islands is Jane Owen.
The Cabinet is composed of two official members and seven elected members, called ministers, one of whom is designated Premier. The Premier can serve for a maximum of two consecutive terms, after which they are barred from holding the office again. Although an MP can serve as Premier only twice, any person who meets the qualifications and requirements for a seat in the Parliament can be elected to the Parliament indefinitely.The Constitution of the Cayman Islands, Part VI, The Legislature.
There are two official members of the Parliament: the Deputy Governor and the Attorney General. They are appointed by the Governor in accordance with His Majesty's instructions and, although they have seats in the Parliament, under the 2009 Constitution they do not vote. They serve in a professional and advisory role to the MPs: the Deputy Governor represents the Governor, who in turn represents the King and the British Government, while the Attorney General advises on legal matters, has special responsibilities in Parliament, and is generally responsible for changes to the Penal Code.
The seven Ministers are voted into office by the 19 elected members of the Parliament of the Cayman Islands. One of the Ministers, the leader of the majority political party, is appointed Premier by the Governor.
After consulting the Premier, the Governor allocates a portfolio of responsibilities to each Cabinet Minister. Under the principle of collective responsibility, all Ministers are obliged to support in the Parliament any measures approved by Cabinet.
Almost 80 departments, sections and units carry out the business of government, joined by a number of statutory boards and authorities set up for specific purposes, such as the Port Authority, the Civil Aviation Authority, the Immigration Board, the Water Authority, the University College Board of Governors, the National Pensions Board and the Health Insurance Commission.
Since 2000, there have been two major official political parties: the Cayman Democratic Party (CDP) and the People's Progressive Movement (PPM). While there has been a shift towards political parties, many candidates still run as independents. The two parties are notably similar; although they consider each other rivals in most cases, their differences are generally ones of personality and implementation rather than of actual policy, and beyond these two parties the Cayman Islands otherwise lacks organised political parties. Following the May 2017 general election, members of the PPM and CDP joined with three independent members to form a coalition government despite many years of enmity.
Before the 2021 Caymanian general election, CDP leader McKeeva Bush received a two-month suspended jail sentence for assaulting a woman in February 2020, leading to a no-confidence motion against him. Premier McLaughlin asked Governor Martyn Roper to dissolve Parliament on 14 February, triggering early elections instead of a vote on the motion. In the lead-up to the election, the Democratic Party was described as "[appearing] to be defunct", as figures previously associated with the party (including Bush) contested as independents instead.
Police
Policing in the country is provided chiefly by the Royal Cayman Islands Police Service (RCIPS) and the Cayman Islands Customs & Border Control (CICBC). The two agencies co-operate in aspects of law enforcement, including their joint marine unit.
Military and defence
The defence of the Cayman Islands is the responsibility of the United Kingdom. The Royal Navy maintains a ship on permanent station in the Caribbean (HMS Medway (P223)) and, from time to time, the Royal Navy or Royal Fleet Auxiliary may deploy another ship as part of Atlantic Patrol (NORTH) tasking. These ships' main mission in the region is to maintain British sovereignty over the overseas territories, to provide humanitarian aid and disaster relief during disasters such as hurricanes, which are common in the area, and to conduct counter-narcotics operations. In July 2024, the patrol vessel HMS Trent (which had temporarily replaced her sister ship HMS Medway on her normal Caribbean tasking) deployed to the islands to provide assistance in the aftermath of Hurricane Beryl.
Cayman Islands Regiment
On 12 October 2019, the government announced the formation of the Cayman Islands Regiment, a new British Armed Forces unit. The Regiment became fully operational in 2020 with an initial strength of 35–50 personnel, mostly reservists. Between 2020 and 2021 it grew to over a hundred personnel, and it is expected to grow to several hundred over the next several years.
In mid-December 2019, recruitment for commanding officers and junior officers began, with the commanding officers expected to begin work in January 2020 and the junior officers expected to begin in February 2020.
In January 2020, the first officers were chosen for the Cayman Islands Regiment.
Since its formation, the Regiment has been deployed on a few operational tours providing humanitarian aid and disaster relief (HADR), as well as assisting with the response to the COVID-19 pandemic.
Cadet Corps
The Cayman Islands Cadet Corps was formed in March 2001 and carries out military-type training with teenage citizens of the country.
Coast Guard
In 2018, the PPM-led coalition government pledged to form a coast guard to protect the interests of the Cayman Islands, particularly against illegal immigration and illegal drug importation, as well as to provide search and rescue. In mid-2018, the commander and second-in-command of the Cayman Islands Coast Guard were appointed: Commander Robert Scotland became the first commanding officer and Lieutenant Commander Leo Anglin second-in-command.
In mid-2019, the commander and second-in-command took part in Operation Riptide, a joint international operation with the United States Coast Guard and the Jamaica Defence Force Coast Guard. This was the first deployment for the Cayman Islands Coast Guard and the first time in ten years that any Cayman representative had been aboard a foreign military vessel for a counter-narcotics operation.
In late November 2019, it was announced that the Cayman Islands Coast Guard would become operational in January 2020 with an initial complement of 21 coastguardsmen, half of whom would come from the joint marine unit, with further recruitment in the new year. Among the Coast Guard's many taskings is the enforcement of all laws that apply to the designated Wildlife Interaction Zone.
On 5 October 2021, the Cayman Islands Parliament passed the Cayman Islands Coast Guard Act thus establishing the Cayman Islands Coast Guard as a uniformed and disciplined department of Government.
Taxation
No direct taxation is imposed on residents or on Cayman Islands companies; the government receives the majority of its income from indirect taxation. Duty, typically in the range of 22% to 25%, is levied on most imported goods. Some items are exempt, such as baby formula, books, cameras, and electric vehicles, while certain items are taxed at 5%. Duty on automobiles depends on their value: it can amount to 29.5% on vehicles up to $20,000.00 KYD CIF (cost, insurance and freight) and up to 42% on expensive models over $30,000.00 KYD CIF. The government charges flat licensing fees on financial institutions that operate in the islands, and there are work-permit fees on foreign labour. A 13% government tax is placed on all tourist accommodation, in addition to a US$37.50 airport departure tax which is built into the cost of an airline ticket. A 7.5% tax on the sale price of real property is payable by the purchaser. There are no taxes on corporate profits, capital gains, or personal income, and no estate or inheritance taxes are payable on Cayman Islands real estate or other assets held in the Cayman Islands.
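As a worked illustration of the tiered vehicle duty just described, the sketch below applies the 29.5% rate to CIF values up to KYD 20,000 and the 42% rate above KYD 30,000. The text does not give the rate for the KYD 20,000–30,000 band, so the mid-band figure below is a placeholder assumption, and the function name is invented for this example; actual assessments involve further rules.

```python
# Hedged sketch of the tiered automobile import duty described above.
# Only the 29.5% (<= KYD 20,000 CIF) and 42% (> KYD 30,000 CIF) rates come from
# the text; MID_BAND_RATE is a placeholder because that bracket is not specified.

LOW_RATE = 0.295       # CIF value up to KYD 20,000 (from the text)
MID_BAND_RATE = 0.35   # ASSUMPTION: rate for KYD 20,000-30,000 is not given in the text
HIGH_RATE = 0.42       # CIF value over KYD 30,000 (from the text)

def estimate_vehicle_duty(cif_value_kyd: float) -> float:
    """Estimate import duty (KYD) for a vehicle with the given CIF value."""
    if cif_value_kyd <= 20_000:
        rate = LOW_RATE
    elif cif_value_kyd <= 30_000:
        rate = MID_BAND_RATE  # placeholder bracket; see note above
    else:
        rate = HIGH_RATE
    return round(cif_value_kyd * rate, 2)

if __name__ == "__main__":
    print(estimate_vehicle_duty(18_000))  # 5310.0  (29.5% band)
    print(estimate_vehicle_duty(45_000))  # 18900.0 (42% band)
```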
The legend behind the lack of taxation concerns the Wreck of the Ten Sail, when multiple ships ran aground on the reef off the north coast of Grand Cayman. Local fishermen are said to have sailed out to rescue the crews and salvage goods from the wrecks. Out of gratitude, and because of the islands' small size, King George III is said to have issued an edict that the citizens of the Cayman Islands would never pay tax. There is, however, no documented evidence for this story beyond oral tradition.
Foreign relations
Foreign policy is controlled by the United Kingdom, as the islands remain a British Overseas Territory. Although the Cayman Islands' most important relationships in its early days were with Britain and Jamaica, in recent years a relationship with the United States has developed as a result of economic dependence.
Though the Cayman Islands are involved in no major international disputes, they have come under some criticism over the use of their territory for narcotics trafficking and money laundering. In an attempt to address this, the government entered into the Narcotics Agreement of 1984 and the Mutual Legal Assistance Treaty of 1986 with the United States to reduce the use of their facilities for these activities. In more recent years, they have stepped up the fight against money laundering by limiting banking secrecy, introducing requirements for customer identification and record keeping, and requiring banks to co-operate with foreign investigators.
Due to their status as an overseas territory of the UK, the Cayman Islands has no separate representation either in the United Nations or in most other international organisations. However, the Cayman Islands still participates in some international organisations, being an associate member of CARICOM and UNESCO, and a member of a sub-bureau of Interpol.
Emergency services
Access to emergency services is available using 9-1-1, the emergency telephone number, the same number as is used in Canada and the United States. The Cayman Islands Department of Public Safety's Communications Centre processes 9-1-1 and non-emergency police assistance, ambulance service, fire service and search and rescue calls for all three islands. The Communications Centre dispatches RCIP and EMS units directly; the Cayman Islands Fire Service maintains their own dispatch room at the airport fire station.
Police services are provided by the Royal Cayman Islands Police Service and fire services by the Cayman Islands Fire Service. There are four main hospitals in the Cayman Islands, spanning both private and public health care, along with various localised health clinics around the islands.
Infrastructure
Ports
George Town is the port capital of Grand Cayman. There are no berthing facilities for cruise ships, but up to four cruise ships can anchor in designated anchorages. There are three cruise terminals in George Town: the North, South, and Royal Watler Terminals. The tender ride from ship to terminal takes about five minutes.
Airports and airlines
There are three airports which serve the Cayman Islands. The islands' national flag carrier is Cayman Airways, with Owen Roberts International Airport hosting the airline as its hub.
• Owen Roberts International Airport
• Charles Kirkconnell International Airport
• Edward Bodden Airfield
Main highways
There are three highways, as well as crucial feeder roads, serving the Cayman Islands' capital, George Town. Residents in the east of the city rely on the East-West Arterial Bypass to reach George Town, as well as Shamrock Road coming in from Bodden Town and the eastern districts.
Other main highways and carriageways include:
• Linford Pierson Highway (most popular roadway into George Town from the east)
• Esterly Tibbetts Highway (serves commuters to the north of the city and West Bay)
• North Sound Road (main road for Central George Town)
• South Sound Road (used by commuters to the south of the city)
• Crewe Road (alternative to taking Linford Pierson Highway)
Education
Primary and secondary schools
The Cayman Islands Education Department operates state schools. Caymanian children are entitled to free primary and secondary education. There are two public high schools on Grand Cayman, John Gray High School and Clifton Hunter High School, and one on Cayman Brac, Layman E. Scott High School. Various churches and private foundations operate several private schools.
Colleges and universities
The University College of the Cayman Islands has campuses on Grand Cayman and Cayman Brac and is the only government-run university on the Cayman Islands.
The International College of the Cayman Islands is a private college in Grand Cayman. The college was established in 1970 and offers associate's, bachelor's and master's degree programmes. Grand Cayman is also home to St. Matthew's University, which includes a medical school and a school of veterinary medicine. Truman Bodden Law School, a branch of the University of Liverpool, is based on Grand Cayman.
The Cayman Islands Civil Service College, a unit of the Cayman Islands government organised under the Portfolio of the Civil Service, is in Grand Cayman. Co-situated with University College of the Cayman Islands, it offers both degree programs and continuing education units of various sorts. The college opened in 2007 and is also used as a government research centre.
There is a University of the West Indies Open campus in the territory.
Sports
Truman Bodden Sports Complex is a multi-use complex in George Town. It comprises an outdoor six-lane swimming pool, a full-purpose track-and-field facility, and basketball/netball courts. The field enclosed by the track is used for association football matches as well as other field sports.
Association football is the national and most popular sport, with the Cayman Islands national football team representing the Cayman Islands in FIFA.
The Cayman Islands Basketball Federation joined the international basketball governing body FIBA in 1976.FIBA National Federations – Cayman Islands, fiba.com, accessed 28 October 2015. The country's national team attended the Caribbean Basketball Championship for the first time in 2011, and the men's national team won back-to-back gold medals at the 2017 and 2019 NatWest Island Games.
Rugby union is a developing sport, and has its own national men's team, women's team, and Sevens team.
The Cayman Islands are a member of FIFA, the International Olympic Committee, and the Pan American Sports Organisation, and also compete in the biennial Island Games.
The Cayman Islands are a member of the International Cricket Council which they joined in 1997 as an Affiliate, before becoming an Associate member in 2002. The Cayman Islands national cricket team represents the islands in international cricket. The team has previously played the sport at first-class, List A and Twenty20 level. It competes in Division Five of the World Cricket League.
Squash is popular in the Cayman Islands, with a vibrant community of mostly expatriates playing out of the seven-court South Sound Squash Club. In addition, the women's professional squash association hosts one of its major events each year on an all-glass court set up in Camana Bay. In December 2012, the former Cayman Open was replaced by the Women's World Championship, the largest tournament in the world.
Flag football (CIFFA) has men's, women's, and mixed-gender leagues.
Other organised sports leagues include softball, beach volleyball, Gaelic football and ultimate frisbee.
The Cayman Islands Olympic Committee was founded in 1973 and was recognised by the IOC (International Olympic Committee) in 1976.
In April 2005 Black Pearl Skate Park was opened in Grand Cayman by Tony Hawk. At the time the park was the largest in the Western Hemisphere.
In February 2010, the first purpose-built track for kart racing in the Cayman Islands was opened."Go-karting track up to speed" , Caymanian Compass, 23 February 2010 Corporate karting leagues at the track have involved widespread participation with 20 local companies and 227 drivers taking part in the 2010 Summer Corporate Karting League."Parker's eased into top gear" , Caymanian Compass, 24 September 2010.
In December 2022, swimmer Jordan Crooks became the first Caymanian athlete to become world champion in any sport, after winning the gold medal in the 50 m freestyle event at the 2022 FINA World Swimming Championships (25 m). In addition, during the 2024 World Aquatics Swimming Championships (25 m), he established a new world record in the 50 m freestyle event with a time of 19.90, becoming the first swimmer in history to break the 20-second barrier.
Arts and culture
Music
The Cayman National Cultural Foundation manages the F.J. Harquail Cultural Centre and the US$4 million Harquail Theatre. The Cayman National Cultural Foundation, established in 1984, helps to preserve and promote Cayman folk music, including the organisation of festivals such as the Cayman Islands International Storytelling Festival, the Cayman JazzFest, Seafarers Festival and Cayfest. The jazz, calypso and reggae genres of music styles feature prominently in Cayman music as celebrated cultural influences.
Art
The National Gallery of the Cayman Islands is an art museum in George Town. Founded in 1996, NGCI is an arts organisation that seeks to fulfil its mission through exhibitions, artist residencies, education/outreach programmes and research projects in the Cayman Islands. The NGCI is a non-profit institution, part of the Ministry of Health and Culture.
Media
There are two print newspapers currently in circulation throughout the islands: the Cayman Compass and The Caymanian Times. Online news services include Cayman Compass, Cayman News Service, Cayman Marl Road, The Caymanian Times and Real Cayman News. Olive Hilda Miller was the first paid reporter to work for a Cayman Islands newspaper, beginning her career on the Tradewinds newspaper, which her work helped to establish.
Local radio stations are broadcast throughout the islands.
Feature films that have been filmed in the Cayman Islands include: The Firm, Haven, Cayman Went and Zombie Driftwood.
Television in the Cayman Islands consists of four over-the-air broadcast stations: CompassTV (a subsidiary of Compass Media, which also runs the Cayman Compass), the Trinity Broadcasting Network, CIGTV (the government-owned channel), and the Seventh-day Adventist Network. Cable television is available through three providers: C3 Pure Fibre, FLOW TV, and Logic TV. Satellite television is provided by Dish Direct TV. Between 1992 and 2019 there was also Cayman 27.
Broadband is widely available on the Cayman Islands, with Digicel, C3 Pure Fibre, FLOW and Logic all providing super fast fibre broadband to the islands.
Notable Caymanians
See also
Outline of the Cayman Islands
Index of Cayman Islands–related articles
List of isolated islands and archipelagos
Notes
References
Further reading
Originally from the CIA World Factbook 2000.
External links
Cayman Islands Government
Cayman Islands Department of Tourism
Cayman National Cultural Foundation
Cayman Islands Film Commission (archived 22 July 2011)
Cayman Islands. The World Factbook. Central Intelligence Agency.
Cayman Islands from UCB Libraries GovPubs (archived 7 April 2008)
Christmas Island
Christmas Island, officially the Territory of Christmas Island, is an Australian external territory in the Indian Ocean comprising the island of the same name. It is about south of Java and Sumatra and about north-west of the closest point on the Australian mainland. It has an area of .Shire of Christmas Island Christmas Island's geographic isolation and history of minimal human disturbance has led to a high level of endemism among its flora and fauna, which is of interest to scientists and naturalists. The territory derives its name from its discovery on Christmas Day 1643 by Captain William Mynors.
The first European to sight Christmas Island was Richard Rowe of the Thomas in 1615. Mynors gave it its name. It was first settled in the late 19th century, after abundant phosphate deposits were found (originally deposited as guano) which led Britain to annex the island in 1888 and begin commercial mining in 1899. The Japanese invaded the island in 1942 to secure its phosphate deposits. After the end of Japanese occupation, the island's administration was restored to Singapore, but it was transferred to Australia in 1958, where it has remained since.
Christmas Island had a population of 1,692 , with most living in settlements on its northern edge. The main settlement is Flying Fish Cove, known simply as The Settlement. Other settlements include Poon Saan, Drumsite, and Silver City. Historically, Asian Australians of Chinese, Malay, and Indian descent were the majority of the population. Today, around two-thirds of the island's population is estimated to have Straits Chinese origin (though just 22.2% of the population declared Chinese ancestry in 2021), with significant numbers of Malays and European Australians and smaller numbers of Straits Indians and Eurasians. Several languages are in use, including English, Malay, and various Chinese dialects.
Religious beliefs vary geographically. The Anglo-Celtic influence in the capital is closely tied to Catholicism, whereas Buddhism is common in Poon Saan, and Sunni Islam is generally observed in the shoreline water village where the Malays live.
The majority (63%) of the island is made up of Christmas Island National Park, which features several areas of primary monsoonal forest.
History
First visits by Europeans, 1643
The first European to sight the island was Richard Rowe of the Thomas in 1615. Captain William Mynors of the East India Company vessel Royal Mary named the island when he sailed past it on Christmas Day in 1643. The island was included on English and Dutch navigation charts early in the 17th century, but it was not until 1666 that a map published by Dutch cartographer Pieter Goos included the island. Goos labelled the island "Mony" or "Moni", the meaning of which is unclear.
English navigator William Dampier, aboard the privateer Charles Swan's ship Cygnet, made the earliest recorded visit to the sea around the island in March 1688. In writing his account, he found the island uninhabited. Dampier was trying to reach the Cocos Islands from New Holland. His ship was blown off course in an easterly direction, arriving at Christmas Island 28 days later. Dampier landed on the west coast, at "the Dales". Two of his crewmen became the first Europeans to set foot on Christmas Island.
Captain Daniel Beeckman of the Eagle passed the island on 5 April 1714, chronicled in his 1718 book, A Voyage to and from the Island of Borneo, in the East-Indies.
Exploration and annexation
The first attempt at exploring the island was made in 1857 by Captain Sidney Grenfell of the frigate . An expedition crew were sent ashore with instructions to reach the summit of the plateau, but they failed to find a route up the inland cliff and were forced to turn back. During the 1872–1876 Challenger expedition to Indonesia, naturalist John Murray carried out extensive surveys.
In 1886, Captain John Maclear of HMS Flying Fish, having discovered an anchorage in a bay that he named "Flying Fish Cove", landed a party and made a small collection of the flora and fauna. In the next year, Pelham Aldrich visited the island for 10 days, accompanied by J. J. Lister, who gathered a larger biological and mineralogical collection. Among the rocks then obtained and submitted to Murray for examination were many of nearly pure phosphate of lime, a discovery that led to annexation of the island by the British Crown on 6 June 1888.
Settlement and exploitation
Soon afterwards, a small settlement was established in Flying Fish Cove by G. Clunies-Ross, the owner of the Cocos (Keeling) Islands some to the southwest, to collect timber and supplies for the growing industry on Cocos. In 1897 the island was visited by Charles W. Andrews, who did extensive research on the natural history of the island, on behalf of the British Museum.
Phosphate mining began in 1899 using indentured workers from Singapore, British Malaya, and China. John Davis Murray, a mechanical engineer and recent graduate of Purdue University, was sent to supervise the operation on behalf of the Phosphate Mining and Shipping Company. Murray was known as the "King of Christmas Island" until 1910, when he married and settled in London.
The island was administered jointly by the British Phosphate commissioners and district officers from the United Kingdom Colonial Office through the Straits Settlements, and later the Crown Colony of Singapore. Hunt (2011) provides a detailed history of Chinese indentured labour on the island during those years. In 1922, scientists unsuccessfully attempted to view a solar eclipse in late September from the island to test Albert Einstein's theory of relativity.
Japanese invasion
From the outbreak of the South-East Asian theatre of World War II in December 1941, Christmas Island was a target for Japanese occupation because of its rich phosphate deposits. A naval gun was installed, manned by a British officer, four non-commissioned officers (NCOs) and 27 Indian soldiers. The first attack was carried out on 20 January 1942 by a Japanese submarine, which torpedoed the Norwegian freighter Eidsvold. The vessel drifted and eventually sank off West White Beach. Most of the European and Asian staff and their families were evacuated to Perth.
In late February and early March 1942, there were two aerial bombing raids. Shelling from a Japanese naval group on 7 March led the district officer to hoist the white flag. But after the Japanese naval group sailed away, the British officer raised the Union Flag once more. During the night of 10–11 March, mutinous Indian troops, abetted by Sikh policemen, killed an officer and the four British NCOs in their quarters as they were sleeping. "Afterwards all Europeans on the island, including the district officer, who governed it, were lined up by the Indians and told they were going to be shot. But after a long discussion between the district officer and the leaders of the mutineers the executions were postponed and the Europeans were confined under armed guard in the district officer's house".
At dawn on 31 March 1942, a dozen Japanese bomber aircraft launched an attack, destroying the radio station. The same day, a Japanese fleet of nine vessels arrived, and the island was surrounded. About 850 men of the Japanese 21st and 24th Special Base Forces and 102nd Construction Unit came ashore at Flying Fish Cove and occupied the island. They rounded up the workforce, most of whom had fled to the jungle. Sabotaged equipment was repaired, and preparations were made to resume the mining and export of phosphate. Only 20 men from the 21st Special Base Force were left as a garrison.
Isolated acts of sabotage and the torpedoing of a cargo ship at the wharf on 17 November 1942 meant that only small amounts of phosphate were exported to Japan during the occupation. In November 1943, over 60% of the island's population were evacuated to Surabaya prison camps, leaving just under 500 Chinese and Malays and 15 Japanese to survive as best they could. In October 1945, British forces re-occupied Christmas Island.Public Record Office, England, War Office and Colonial Office Correspondence/Straits Settlements. Interviews conducted by J. G. Hunt with Island residents, 1973–1977. Correspondence of J. G. Hunt with former Island residents, 1973–1979.
After the war, seven mutineers were traced and prosecuted by the Military Court in Singapore. In 1947, five of them were sentenced to death. However, following representations made by the newly independent government of India, their sentences were reduced to penal servitude for life.
Transfer to Australia
The United Kingdom transferred sovereignty of Christmas Island to Australia at the latter's request, with a $20 million payment from the Australian government to Singapore as compensation for the loss of earnings from the phosphate revenue. The United Kingdom's Christmas Island Act was given royal assent on 14 May 1958 by Queen Elizabeth II, enabling Britain to transfer authority over Christmas Island from Singapore to Australia by an order-in-council. Australia's Christmas Island Act was passed in September 1958, and the island was officially placed under the authority of the Commonwealth of Australia on 1 October 1958. This transfer did not see any process involving the local population, who could remain Singaporean citizens or obtain Australian citizenship. Links between Singapore and Christmas Island have occasionally reemerged in Singaporean politics and in Australia–Singapore relations.
Under Commonwealth Cabinet Decision 1573 of 9 September 1958, D.E. Nickels was appointed the first official representative of the new territory. In a media statement on 5 August 1960, the minister for territories, Paul Hasluck, said, among other things, that, "His extensive knowledge of the Malay language and the customs of the Asian people ... has proved invaluable in the inauguration of Australian administration ... During his two years on the island he had faced unavoidable difficulties ... and constantly sought to advance the island's interests."
John William Stokes succeeded Nickels and served from 1 October 1960 to 12 June 1966. On his departure, he was lauded by all sectors of the island community. In 1968, the official secretary was retitled an administrator and, since 1997, Christmas Island and the Cocos (Keeling) Islands together are called the Australian Indian Ocean Territories and share a single administrator resident on Christmas Island.
The village of Silver City was built in the 1970s, with aluminium-clad houses that were supposed to be cyclone-proof. The 2004 Indian Ocean earthquake and tsunami, centred off the western shore of Sumatra in Indonesia, resulted in no reported casualties, but some swimmers were swept some distance out to sea for a time before being swept back in.
Refugee and immigration detention
The Howard government operated the "Pacific Solution" from 2001 to 2007, excising Christmas Island from Australia's migration zone so that asylum seekers on the island could not apply for refugee status. Asylum seekers were relocated from Christmas Island to Manus Island and Nauru. In 2006, an immigration detention centre containing approximately 800 beds was constructed on the island for the Department of Immigration and Multicultural Affairs; its final cost, at over $400 million, far exceeded the original estimate. In 2007, the Rudd government decommissioned the Manus Regional Processing Centre and the Nauru detention centre; processing would then occur on Christmas Island itself.
In December 2010, 48 asylum seekers died just off the coast of the island in what became known as the Christmas Island boat disaster, when their boat hit rocks near Flying Fish Cove and then smashed against nearby cliffs. In the case Plaintiff M61/2010E v Commonwealth of Australia, the High Court of Australia ruled in a 7–0 joint judgment that asylum seekers detained on Christmas Island were entitled to the protections of the Migration Act; accordingly, the Commonwealth was obliged to afford asylum seekers a minimum of procedural fairness when assessing their claims. After the interception of four boats in six days, carrying 350 people, the Immigration Department stated that there were 2,960 "irregular maritime arrivals" being held in the island's five detention facilities, exceeding not only the "regular operating capacity" of 1,094 people but also the "contingency capacity" of 2,724.
The Christmas Island Immigration Reception and Processing Centre closed in September 2018. The Morrison government announced it would re-open the centre in February the following year, after Australia's parliament passed legislation giving sick asylum seekers easier access to mainland hospitals. In the early days of the COVID-19 pandemic, the government opened parts of the Immigration Reception and Processing Centre to be used as a quarantine facility to accommodate Australian citizens who had been in Wuhan, the point of origin of the pandemic. The evacuees arrived on 3 February. They left 14 days later to their homes on the mainland.
Geography
The island is about in greatest length and in breadth. The total land area is , with of coastline. Steep cliffs along much of the coast rise abruptly to a central plateau. Elevation ranges from sea level to at Murray Hill. The island is mainly tropical rainforest, 63% of which is national parkland. The narrow fringing reef surrounding the island poses a maritime hazard.
Christmas Island lies northwest of Perth, Western Australia, south of Indonesia, east-northeast of the Cocos (Keeling) Islands, and west of Darwin, Northern Territory. Its closest point to the Australian mainland is from the town of Exmouth, Western Australia.
Only small parts of the shoreline are easily accessible. The island's perimeter is dominated by sharp cliff faces, making many of the island's beaches difficult to get to. Some of the easily accessible beaches include Flying Fish Cove (main beach), Lily Beach, Ethel Beach, and Isabel Beach, while the more difficult beaches to access include Greta Beach, Dolly Beach, Winifred Beach, Merrial Beach, and West White Beach, which all require a vehicle with four wheel drive and a difficult walk through dense rainforest.
Geology
The volcanic island is the flat summit of an underwater mountain more than high, which rises from about below the sea and only about above it. The mountain was originally a volcano, and some basalt is exposed in places such as The Dales and Dolly Beach, but most of the surface rock is limestone accumulated from coral growth. The karst terrain supports numerous anchialine caves. The summit of this mountain peak is formed by a succession of Tertiary limestones ranging in age from the Eocene or Oligocene up to recent reef deposits, with intercalations of volcanic rock in the older beds.
Marine Park
Reefs near the islands have healthy coral and are home to several rare species of marine life. The region, along with the Cocos (Keeling) Islands reefs, have been described as "Australia's Galapagos Islands".
In the 2021 budget the Australian Government committed A$39.1 million to create two new marine parks off Christmas Island and the Cocos (Keeling) Islands. The parks will cover up to of Australian waters. After months of consultation with local people, both parks were approved in March 2022, with a total coverage of . The park will help to protect spawning of bluefin tuna from illegal international fishers, but local people will be allowed to practise fishing sustainably inshore in order to source food.
Climate
Christmas Island lies near the southern edge of the equatorial region. It has a tropical monsoon climate (Köppen Am) and temperatures vary little throughout the year. The highest temperature is usually around in March and April, while the lowest temperature is and occurs in August. There is a dry season from July to October with only occasional showers. The wet season is between November and June and includes monsoons, with downpours of rain at random times of the day. Tropical cyclones also occur in the wet season, bringing very strong winds, heavy rain, wave action, and storm surge.
Demographics
As of the 2021 Australian census, the population of Christmas Island was 1,692. 22.2% of the population had Chinese ancestry (up from 18.3% in 2001), 17.0% had generic Australian ancestry (11.7% in 2001), 16.1% had Malay ancestry (9.3% in 2001), 12.5% had English ancestry (8.9% in 2001), and 3.8% of the population was of Indonesian origin. Most were born on Christmas Island. 40.8% of people were born in Australia. The next most common country of birth was Malaysia at 18.6%. 29.3% of the population spoke English as their family language, while 18.4% spoke Malay, 13.9% spoke Mandarin Chinese, 3.7% Cantonese, and 2.1% Southern Min (Minnan). Additionally, there are small local populations of Malaysian Indians and Indonesians.
The 2016 Australian census recorded that the population of Christmas Island was 40.5% female and 59.5% male, while in 2011 the figures had been 29.3% female and 70.7% male. In contrast, the 2021 figures for the whole of Australia were 50.7% female, 49.3% male. Since 1998 there has been no provision for childbirth on the island; expectant mothers travel to mainland Australia approximately one month before their expected due date to give birth.
Ethnicity
Historically, the majority of Christmas Islanders were those of Chinese, Malay, and Indian origins, the initial permanent settlers. Today, the plurality of residents are Chinese, with significant numbers of European Australians and Malays as well as a smaller Indian community, alongside more recent Filipino arrivals. Since the turn of the 21st century and right up to the present, Europeans have mainly confined themselves to The Settlement, where there is a small supermarket and several restaurants; the Malays live in their coastal kampong; and the Chinese reside in Poon Saan (Cantonese for "in the middle of the hill").
Language
The main languages spoken at home on Christmas Island, according to respondents, are English (28%), Mandarin (17%), and Malay (17%), with smaller numbers of speakers of Cantonese (4%) and Hokkien (2%). 27% did not specify a language.
While English is the lingua franca on the island, a substantial share of residents have limited English. In the 2016 census, 14% of residents reported speaking English "not well or at all" and only 59% reported either speaking only English or speaking English "well or very well", with 27% not answering the question.
Religion
Religious practices differ by geography across the island and effectively correspond to the island's three primary settlements: the capital (known simply as The Settlement), the Cantonese village Poon Saan, and the Malay water village (which is often referred to as the Kampong).
Major religious affiliation in Christmas Island (2021):
374 people or 22.1% are Muslim, up from 19.3% in 2016
333 people or 19.7% have no religion, up from 15.3% in 2016
258 people or 15.2% are Buddhists, down from 18.2% in 2016
123 people or 7.3% are Catholic, down from 8.8% in 2016
35 people or 2.1% are Anglican, down from 3.6% in 2016
The Settlement
Due to the large numbers of English and Australians who make up the bulk of the island's capital, there is a strong Anglo-Celtic influence in The Settlement which has contributed to the strong presence of Catholicism. This has been further reinforced by recent Filipino arrivals.
Poon Saan
In the village of Poon Saan, which functions like the island's Chinatown, Buddhism is commonplace. Traditional Cantonese folk practices also are represented in this area. Chinese temples and shrines include seven Buddhist temples (like Guan Yin Monastery (观音寺) at Gaze Road), ten Taoist temples (like Soon Tian Kong (顺天宫) in South Point and Grants Well Guan Di Temple) and shrines dedicated to Na Tuk Kong or Datuk Keramat on the island.
Kampong
Malays who have settled on the island's edge in their shoreline kampong tend to follow Sunni Islam. The kampong has a mosque but it is in a state of decay and disrepair with rotting timbers and cracks.
Other groups
Other smaller and less geographically concentrated groups include Anglicans who make up 3.6%, Uniting Church adherents who make up 1.2%, other Protestants who make up 1.7%, and other Christian groups with 3.3%. Other religious communities collectively constitute 0.6% of the island's population.
Holidays and festivals
As an external territory of Australia, the two religious festivals which are official holidays are Christmas and Easter. Other non-official festivals include Spring Festival, Lantern Festival, Qingming Festival, Zhong Yuan Festival, Hari Raya Puasa, and Hari Raya Haji.
Government
Christmas Island is a non-self-governing external territory of Australia, part of the Australian Indian Ocean Territories administered by the Department of Infrastructure, Transport, Regional Development and Communications (from 29 November 2007 until 14 September 2010, administration was carried out by the Attorney-General's Department, and before that by the Department of Transport and Regional Services).
The legal system is under the authority of the Governor-General of Australia and Australian law. An administrator appointed by the governor-general represents the monarch and Australia and lives on the island. The territory falls under no formal state jurisdiction, but the Western Australian government provides many services as established by the Christmas Island Act.
The Australian government provides services through the Christmas Island Administration and the Department of Infrastructure and Regional Development. Under the federal government's Christmas Island Act 1958, Western Australian laws are applied to Christmas Island; non-application or partial application of such laws is at the discretion of the federal government. The act also gives Western Australian courts judicial power over Christmas Island. Christmas Island remains constitutionally distinct from Western Australia, however; the power of the state to legislate for the territory is delegated by the federal government. The kind of services typically provided by a state government elsewhere in Australia are provided by departments of the Western Australian government, and by contractors, with the costs met by the federal government. A unicameral Shire of Christmas Island with nine seats provides local government services and is elected by popular vote to serve four-year terms. Elections are held every two years, with four or five of the members standing for election. At the most recent count, women held one of the nine seats on the Christmas Island Shire Council. Its second president was Lillian Oh, who served from 1993 to 1995.
The most recent local election took place on 18 October 2025 alongside elections in the Cocos (Keeling) Islands.
In the 2025 shire elections, sitting councillors lost ground: Tracey Krepp, Stephanie Lai, and Gordon Thomson were elected in first, second, and third place respectively, while Hafiz Masli and Kee Heng Foo lost their positions as councillors.
The Unity Party lost its overall majority for the first time in many years, and at the special meeting on 20 October Steven Pereira was elected president, with Swee (Mel) Tung as deputy president.
Christmas Island residents who are Australian citizens vote in Australian federal elections. Christmas Island residents are represented in the House of Representatives by the Division of Lingiari in the Northern Territory and in the Senate by Northern Territory senators. At the 2019 federal election, the Labor Party received majorities from Christmas Island electors in both the House of Representatives and the Senate.
Defence and police
While there is no permanent Australian military presence on Christmas Island, the Royal Australian Navy and Australian Border Force deploy patrol boats to conduct surveillance and counter-migrant-smuggling patrols in adjacent waters. The Navy's Armidale-class patrol boats are in the process of being replaced by larger vessels. Christmas Island is increasingly perceived as a strategic location for monitoring Chinese submarine activity in the Indian Ocean.
The airfield on Christmas Island has a 2,100 m runway. Both it and the airfield on Cocos (West Island, to the west) have scheduled jet services; however, the Cocos airfield is being upgraded by the Australian Defence Force to act as a forward operating base for Australian surveillance and electronic warfare aircraft in the region.
The Australian Federal Police provides community policing services to Christmas Island and also carries out duties related to immigration enforcement, the processing of visiting aircraft and ships, and in coordinating emergency operations.
Residents' views
Residents find the system of administration frustrating, with the island run by bureaucrats in the federal government but subject to the laws of Western Australia, enforced by federal police. There is a feeling of resignation that progress on local issues is hampered by the confusing governance arrangements. A number of islanders support self-governance, including former shire president Gordon Thomson, who also believes that a lack of news media covering local affairs has contributed to political apathy among residents.
Flag
In early 1986, the Christmas Island Assembly held a design competition for an island flag; the winning design was adopted as the informal flag of the territory for over a decade, and in 2002 it was made the official flag of Christmas Island. At the centre of the flag is a yellow roundel showing an image of the island in green.
Economy
Phosphate mining had been the only significant economic activity, but in December 1987 the Australian government closed the mine. In 1991, the mine was reopened by Phosphate Resources Limited, a consortium that included many former mine workers as shareholders; it remains the largest contributor to the Christmas Island economy.
With the support of the government, the $34 million Christmas Island Casino and Resort opened in 1993 but was closed in 1998.
In 2001, the Australian government agreed to support the creation of a commercial spaceport on the island; however, it has not been constructed and the project appears unlikely to proceed. The Howard government built a temporary immigration detention centre on the island in 2001 and planned to replace it with a larger, modern facility at North West Point; the plan was still in progress at the time of Howard's defeat in the 2007 elections.
Tourism is a growing industry on the island. The peak periods for tourism are the red crab migration in October–December and bird/nature week in August–September. The University of Queensland study "Strengthening sustainability of the Indian Ocean Territories (IOT) Marine Parks & local economy through collaborative world-class ecotourism", whose results were released in May 2025, found significant community support for tourism on Christmas Island, with 77% of the population wanting to see more tourists and 84% of residents believing that the development of tourism is crucial for the island's economic future.
Culture
Christmas Island cuisine can best be described as an eclectic combination of traditional Australian cuisine and Asian cuisine.
The main local organisation that promotes and supports the status and interests of female Christmas Islanders is the Christmas Island Women's Association, which was established in 1989 and is a member organisation of the Associated Country Women of the World.
Christmas Island is well known for its biological diversity. There are many rare species of animals and plants on the island, making nature walking a popular activity. Along with the diversity of species, many different types of caves exist, such as plateau caves, coastal caves, raised coastal caves and alcoves, sea caves, fissure caves, collapse caves, and basalt caves; most of these are near the sea and have been formed by the action of water. Altogether, there are approximately 30 caves on the island, with Lost Lake Cave, Daniel Roux Cave, and Full Frontal Cave being the best known. The many freshwater springs include Hosnies Spring, a Ramsar-listed wetland that also has a mangrove stand. The Dales is a rainforest area in the western part of the island consisting of seven deep valleys, all formed by spring streams. Hugh's Dale waterfall is part of this area and is a popular attraction. The annual breeding migration of the Christmas Island red crabs is a popular event.
Fishing is another common activity, with many distinct species of fish in the waters surrounding Christmas Island. Snorkelling and swimming in the ocean are also extremely popular, as is walking the island's many trails through lush flora and fauna. 63% of the island is covered by the Christmas Island National Park.
Sport
Cricket and rugby league are the two main organised sports on the island.
The Christmas Island Cricket Club was founded in 1959 and is now known as the Christmas Island Cricket and Sporting Club. Australian rules football was popular from 1995 to 2014, with games played between locals and visiting Royal Australian Navy crews. There was also one international game, representing Australia, played in Jakarta, Indonesia, in 2006 against the Jakarta Bintangs. Auskick was also offered to children, who appeared in half-time entertainment at AFL games in two years between 2006 and 2010. In 2019 the club celebrated its 60-year anniversary. The club entered its first representative team into the WACA Country Week in 2020, where it was runner-up in the F-division.
Rugby league is growing on the island: the first game was played in 2016, and a local committee, with the support of NRL Western Australia, hopes to organise matches with the nearby Cocos Islands and to create a rugby league competition in the Indian Ocean region.
Unlike Norfolk Island, another external territory of Australia, Christmas Island does not participate in the Commonwealth Games or the Pacific Games, though Pacific Games participation has been discussed.
Scuba diving and snorkelling are popular on the island, with many locals and visitors taking part. Historically, there was a Christmas Island Divers Association (now defunct) with a shed near the Old European Cemetery. There are now two diving and snorkelling operators, offering SSI and PADI courses, try dives, and boat trips. Most of the dive sites are on the northern coastline; during the swell season, the east coast is accessible via the Ethel Beach boat ramp.
Some notable dive sites include: Flying Fish Cove, the Eidsvold wreck (a phosphate ship torpedoed by the Japanese in 1942), Thundercliff Cave, West White Beach, Perpendicular Wall, Million Dollar Bommie, Chicken Farm, and Coconut Point.
Flora and fauna
Christmas Island was uninhabited until the late 19th century, allowing many species to evolve without human interference. Two-thirds of the island has been declared a National Park, which is managed by the Australian Department of Environment and Heritage through Parks Australia. Christmas Island contains unique species, both of flora and fauna, some of which are threatened or have become extinct.
Flora
The dense rainforest has grown in the deep soils of the plateau and on the terraces. The forests are dominated by 25 tree species. Ferns, orchids, and vines grow on the branches in the humid atmosphere beneath the canopy. The 135 plant species include at least 18 endemic species. The rainforest is in great condition despite the mining activities over the last 100 years. Areas that have been damaged by mining are now a part of an ongoing rehabilitation project.
Christmas Island's endemic plants include the trees Arenga listeri, Pandanus elatus, and Dendrocnide peltata var. murrayana; the shrubs Abutilon listeri, Colubrina pedunculata, Grewia insularis, and Pandanus christmatensis; the vines Hoya aldrichii and Zehneria alba; the herbs Asystasia alba, Dicliptera maclearii, and Peperomia rossii; the grass Ischaemum nativitatis; the fern Asplenium listeri; and the orchids Brachypeza archytas, Dendrobium nativitatis, Phreatia listeri, and Zeuxine exilis.Christmas Island National Park: Flora.
Fauna
Two species of native rats, the Maclear's and bulldog rats, have become extinct since the island was settled, while the Javan rusa deer has been introduced. The endemic Christmas Island shrew has not been seen since the mid-1980s and was recently declared extinct, while the Christmas Island pipistrelle (a small bat) is presumed to be extinct.
The fruit bat (flying fox) species Pteropus natalis is only found on Christmas Island; its epithet natalis is a reference to that name. The species is probably the last native mammal, and is an important pollinator and rainforest seed-disperser; the population is also in decline and under increasing pressure from land clearing and introduced pest species. The flying fox's low rate of reproduction (one pup each year) and high infant mortality rate makes it especially vulnerable, and its conservation status is critically endangered. Flying foxes are an 'umbrella' species helping forests regenerate and other species survive in stressed environments.
The land crabs and seabirds are the most noticeable fauna on the island. Christmas Island has been identified by BirdLife International as both an Endemic Bird Area and an Important Bird Area because it supports five endemic species and five subspecies as well as over 1% of the world populations of five other seabirds.
Twenty terrestrial and intertidal species of crab have been described here, of which thirteen are regarded as true land crabs, dependent on the ocean only for larval development. Robber crabs, known elsewhere as coconut crabs, also exist in large numbers on the island. The annual mass migration of red crabs to the sea to spawn has been called one of the wonders of the natural world. It takes place each year around November, after the start of the wet season and in synchronisation with the cycle of the moon. Once at the ocean, the females release their eggs into the sea, where the larvae develop until they are able to return to land.
The island is a focal point for seabirds of various species. Eight species or subspecies of seabirds nest on it. The most numerous is the red-footed booby, which nests in colonies, using trees on many parts of the shore terrace. The widespread brown booby nests on the ground near the edge of the seacliff and inland cliffs. Abbott's booby (listed as endangered) nests on tall emergent trees of the western, northern and southern plateau rainforest, the only remaining nesting habitat for this bird in the world.
Of the ten native land birds and shorebirds, seven are endemic species or subspecies, including the Christmas Island thrush and the Christmas imperial pigeon. Some 86 migrant bird species have been recorded as visitors to the island. The Christmas frigatebird has nesting areas on the northeastern shore terraces. The more widespread great frigatebirds nest in semi-deciduous trees on the shore terrace, with the greatest concentrations in the North West and South Point areas. The common noddy and two species of bosun, or tropicbird, also nest on the island, including the golden bosun (P. l. fulvus), a subspecies of the white-tailed tropicbird that is endemic to the island.
Six species of butterfly are known to occur on Christmas Island. These are the Christmas swallowtail (Papilio memnon), striped albatross (Appias olferna), Christmas emperor (Polyura andrewsi), king cerulean (Jamides bochus), lesser grass-blue (Zizina otis), and Papuan grass-yellow (Eurema blanda).
Insect species include the yellow crazy ant (Anoplolepis gracilipes), which was introduced to the island and has since been the target of attempts to destroy its supercolonies by aerial spraying of the insecticide Fipronil.
Media
Radio broadcasts to Christmas Island from Australia include ABC Radio National, ABC Kimberley, Triple J, and Hit WA (formerly Red FM). All services are provided by satellite links from the mainland. Broadband internet became available to subscribers in urban areas in mid-2005 through the local internet service provider, CIIA (formerly dotCX). Because of its proximity to South East Asia, Christmas Island falls within many of the satellite footprints of the region, creating ideal conditions for receiving various Asian broadcasts, which locals sometimes prefer to those from Western Australia. Ionospheric conditions are also conducive to terrestrial radio transmissions, from HF through VHF and sometimes into UHF. The island hosts a small array of radio equipment spanning much of the usable spectrum, including a variety of government-owned and operated antenna systems.
Local radio
There is a local community radio station run by volunteers. Christmas Island Radio, known as 6RCI, broadcasts to the island's community on 102.1 FM and 105.3 FM, and began streaming online in 2025. The station building houses a large collection of old Chinese and Malay records, and the station also broadcasts emergency messages.
Television
Free-to-air digital television stations from Australia are broadcast in the same time zone as Perth and are broadcast from three separate locations:
Broadcaster   Drumsite   Phosphate Hill   Rocky Point
ABC           ABC 6      ABC 34           ABC 40
SBS           SBS 7      SBS 35           SBS 41
WAW           WAW 8      WAW 36           WAW 42
WOW           WOW 10     WOW 36           WOW 43
WDW           WDW 11     WDW 38           WDW 44
Cable television from Australia, Malaysia, Singapore, and the United States commenced in January 2013.
Telecommunications
Telephone and internet services on Christmas Island are provided by multiple operators. Telstra remains a major provider and integrates the island into the Australian telecommunications network, using the same area code (08) as Western Australia, South Australia, and the Northern Territory. In February 2005, a 900 MHz GSM 2G mobile telephone system replaced the old analogue network. In 2022, a 4,600-kilometre, 60-terabit-per-second submarine cable between Australia and Christmas Island entered service, providing high-capacity backhaul that allowed the satellite-based 2G mobile network to be replaced with 4GX technology and enhanced mobile and data services on the island.Vocus and Telstra team up on Christmas Island mobile project, ARN News, 11 May 2022.
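For a rough sense of what the new backhaul link implies, a back-of-envelope estimate of propagation delay can be made from the stated 4,600 km cable length. The sketch below is illustrative only: the cable length comes from the paragraph above, while the assumed propagation speed in optical fibre (roughly two-thirds of the speed of light) is a typical figure, not a published specification for this cable.

```python
# Illustrative estimate only: the cable length is taken from the text above,
# while the fibre propagation speed is a typical assumed value (~2/3 of c),
# not a specification published for this particular cable.

CABLE_LENGTH_KM = 4600          # stated length of the Australia-Christmas Island cable
FIBRE_SPEED_KM_PER_S = 200_000  # assumed speed of light in optical fibre (~0.67 c)

one_way_delay_ms = CABLE_LENGTH_KM / FIBRE_SPEED_KM_PER_S * 1000
round_trip_ms = 2 * one_way_delay_ms

print(f"One-way propagation delay: {one_way_delay_ms:.0f} ms")   # ~23 ms
print(f"Round-trip propagation delay: {round_trip_ms:.0f} ms")   # ~46 ms
```

Even as a rough figure, a round trip of a few tens of milliseconds is far lower than the latency of a geostationary satellite hop, which helps explain the shift from satellite backhaul to the cable.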
CiFi, a local mobile phone and internet services provider, launched operations in 2020. It has established a carrier-grade 4G LTE mobile network and a fixed wireless broadband service, offering high-speed internet connectivity to both residents and visitors.
As of 2025, Google is planning subsea cables to the island, along with an AI data centre, potentially for military use.
Newspapers
The Shire of Christmas Island publishes a fortnightly newsletter, The Islander. There are no independent newspapers.
Postage stamps
A postal agency was opened on the island in 1901 and sold stamps of the Straits Settlements. After the Japanese occupation (1942–1945), postage stamps of the British Military Administration in Malaya were used, followed by stamps of Singapore. In 1958, after being placed under Australian administration, the island received its own postage stamps. It enjoyed considerable philatelic and postal independence, managed first by the Phosphate Commission (1958–1969) and then by the island's administration (1969–1993). This ended on 2 March 1993, when Australia Post became the island's postal operator; Christmas Island stamps may be used in Australia and Australian stamps may be used on the island.
Transport
A container port exists at Flying Fish Cove with an alternative container-unloading point to the east of the island at Norris Point, intended for use during the December-to-March "swell season" of rough seas. The now-defunct standard gauge Christmas Island Phosphate Co.'s Railway from Flying Fish Cove to the phosphate mine was constructed in 1914. It was closed in December 1987, when the Australian government closed the mine, and since has been recovered as scrap, leaving only earthworks in places.
Virgin Australia provided two weekly flights to Christmas Island from Perth, Western Australia, until 31 October 2025, when QantasLink took over the route; the service connects to the Cocos (Keeling) Islands in both directions. On the announcement of the change of contract, Virgin cancelled all forward bookings on Thursday 21 August 2025, with passengers refunded and told to rebook.
A fortnightly freight flight provides fresh supplies to the island. Rental cars are available from Christmas Island Airport, although no franchised companies are represented; visitors are advised to prebook as availability is limited. Road conditions across the island vary, and inclement weather can leave roads slippery or damaged. Many of the island's tracks are restricted to four-wheel-drive vehicles.
QantasLink has operated the service to Christmas Island since 3 November 2025 under a government contract, with flights scheduled on Mondays, Fridays, and Saturdays. Flights can be booked online and include a 32 kg luggage allowance.
Education
The island-operated crèche is in the Recreation Centre. Christmas Island District High School, catering to students in grades P-12, is run by the Western Australian Education Department. There are no universities on Christmas Island. The island has one public library.
See also
.cx, top-level domain country code for Christmas Island
Index of Christmas Island–related articles
Outline of Christmas Island
Notes
References
External links
Christmas Island Archives – Featuring historical stories, articles and more.
Christmas Island Stories – Conserving and sharing the authentic stories of the people of Christmas Island.
Christmas Island Virtual Museum – Presenting historical artifacts, coins and ephemera.
Christmas Island Tourism Association – Tourism information about the island.
Category:Island countries of the Indian Ocean
Category:Islands of Australia
Category:Islands of Southeast Asia
Category:Important Bird Areas of Australian External Territories
Category:British rule in Singapore
Category:States and territories of Australia
Category:States and territories established in 1957
Category:1957 establishments in Australia
Category:Important Bird Areas of Indian Ocean islands
Category:Endemic Bird Areas
Category:Prison islands
Clipperton Island
https://en.wikipedia.org/wiki/Clipperton_Island
Clipperton Island, also known as Clipperton Atoll and previously as Clipperton's Rock, is an uninhabited French coral atoll in the eastern Pacific Ocean. It is the only French territory in the North Pacific, lying far from metropolitan France and distant from both Papeete in French Polynesia and Acapulco in Mexico.
Clipperton was documented by French merchant-explorers in 1711 and formally claimed as part of the French protectorate of Tahiti in 1858. Despite this, American guano miners began working the island in the early 1890s. As interest in the island grew, Mexico asserted a claim to the island based upon Spanish records from the 1520s that may have identified the island. Mexico established a small military colony on the island in 1905, but during the Mexican Revolution contact with the mainland became infrequent, most of the colonists died, and lighthouse keeper Victoriano Álvarez instituted a short, brutal reign as "king" of the island. Eleven survivors were rescued in 1917 and Clipperton was abandoned.
The dispute between Mexico and France over Clipperton was taken to binding international arbitration in 1909. Victor Emmanuel III, King of Italy, was chosen as arbitrator and decided in 1931 that the island was French territory. Despite the ruling, Clipperton remained largely uninhabited until 1944 when the U.S. Navy established a weather station on the island to support its war efforts in the Pacific. France protested and, as concerns about Japanese activity in the eastern Pacific waned, the U.S. abandoned the site in late 1945.
Since the end of World War II, Clipperton has primarily been the site for scientific expeditions to study the island's wildlife and marine life, including its significant masked and brown booby colonies. It has also hosted climate scientists and amateur radio DX-peditions. Plans to develop the island for trade and tourism have been considered, but none have been enacted and the island remains mostly uninhabited with periodic visits from the French Navy.
Geography
The coral island is located in the East Pacific, to the southwest of Mexico, west of Nicaragua and Costa Rica, and northwest of the Galápagos Islands of Ecuador. The nearest land is Socorro Island, to the northwest in the Revillagigedo Archipelago. The nearest French-owned island is Hiva Oa in the Marquesas Islands of French Polynesia, about 3,300 km (1,781 nmi) to the southwest of Clipperton.
Despite its proximity to North America, Clipperton is often considered one of the easternmost points of Oceania, owing to its place in the French Indo-Pacific and to commonalities between its marine fauna and that of Hawaii and Kiribati's Line Islands; the island sits along the migration path of animals in the Eastern Tropical Pacific region. It is the only emerged part of the East Pacific Rise, the only feature in the Clipperton Fracture Zone that breaks the ocean's surface, and one of the few islands in the Pacific that lacks an underwater archipelagic apron.
The atoll is low-lying and largely barren, with some scattered grasses and a few clumps of coconut palms (Cocos nucifera). The land ring surrounding the lagoon is low, although a small volcanic outcropping, referred to as Clipperton Rock, rises above it on the southeast side. The surrounding reef hosts an abundance of corals and is partly exposed at low tide. In 2001 a geodetic marker was placed to evaluate whether the land is rising or sinking.
Clipperton Rock is a remnant of the rim of the island's now-extinct volcano; because it includes this rocky outcropping, Clipperton is not a true atoll and is sometimes referred to as a 'near-atoll'. The surrounding reef, in combination with the weather, makes landing on the island difficult and anchoring offshore hazardous for larger ships; American ships reported such difficulties in the 1940s.
Environment
The environment of Clipperton Island has been studied extensively with the first recordings and sample collection being done in the 1800s. Modern research on Clipperton is focused primarily on climate science and migratory wildlife.
The SURPACLIP oceanographic expedition, a joint undertaking by the National Autonomous University of Mexico and the University of New Caledonia Nouméa, made extensive studies of the island in 1997. In 2001, French National Centre for Scientific Research geographer Christian Jost extended the 1997 studies through the French Passion 2001 expedition, which focused on the evolution of Clipperton's ecosystem. In 2003, cinematographer Lance Milbrand stayed on the island for 41 days, recording the adventure for the National Geographic Explorer and plotting a GPS map of Clipperton for the National Geographic Society.
In 2005, a four-month scientific mission organised by Jean-Louis Étienne made a complete inventory of Clipperton's mineral, plant, and animal species; studied algae as deep as below sea level; and examined the effects of pollution. A 2008 expedition from the University of Washington's School of Oceanography collected sediment cores from the lagoon to study climate change over the past millennium.
Lagoon
The closure of the lagoon approximately 170 years ago, which cut it off from seawater, has turned it into a meromictic lake. The bottom of the shallow parts of the lake contains eroded coral heads from the period when the lagoon was last connected with the ocean. During visits in 1897 and 1898, the depth at the middle of the lagoon was recorded as being between two inches and two feet because of the dead coral. The surface of the lagoon has a high concentration of phytoplankton that varies slightly with the seasons. As a result, the water columns are stratified and do not mix, leaving the lagoon with an oxic, brackish upper layer and a deep, sulfuric, anoxic saline layer. Below a certain depth the character of the water shifts, with salinity rising and both pH and oxygen quickly decreasing. The deepest levels of the lagoon contain water enriched with hydrogen sulfide, which prevents the growth of coral. Before the lagoon was closed off from seawater, coral and clams were able to survive in the area, as evidenced by fossilized specimens.
Studies of the water have found that microbial communities on the water's surface are similar to other water samples from around the world, while deeper samples show a great diversity of both bacteria and archaea. In 2005, a group of French scientists discovered three dinoflagellate microalgae species in the lagoon: Peridiniopsis cristata, which was abundant; Durinskia baltica, which was previously known from other locations but was new to Clipperton; and Peridiniopsis cristata var. tubulifera, which is unique to the island. The lagoon also harbours millions of isopods, which are reported to deliver a painful bite.
While some sources have rated the lagoon water as non-potable, testimony from the crew of the tuna clipper M/V Monarch, stranded for 23 days in 1962 after their boat sank, indicates otherwise. Their report reveals that the lagoon water, while "muddy and dirty", was drinkable, despite not tasting very good. Several of the castaways drank it, with no apparent ill effects. Survivors of a Mexican military colony in 1917 (see below) indicated that they were dependent upon rain for their water supply, catching it in old boats. American servicemen on the island during World War II had to use evaporators to desalinate the lagoon's water. Aside from the lagoon and water caught from rain, no freshwater sources are known to exist.
Climate
The island has a tropical oceanic climate. Humidity is generally between 85 and 95 per cent, with December to March being the drier months. The prevailing winds are the southeast trade winds. The rainy season runs from May to October, and the region is subject to tropical cyclones from April to September, though such storms often pass to the northeast of Clipperton. In 1997 Clipperton lay in the path of Hurricane Felicia as it formed, and of Hurricane Sandra in 2015; it has also been affected by multiple tropical storms and depressions, including Tropical Storm Andres in 2003. The surrounding ocean waters are warm, pushed by equatorial and counter-equatorial currents, and have seen temperature increases due to global warming.
Flora and fauna
When Snodgrass and Heller visited in 1898, they reported that "no land plant is native to the island". Historical accounts from 1711, 1825, and 1839 describe a low grassy or suffrutescent (partially woody) flora. During Marie-Hélène Sachet's visit in 1958, the vegetation was found to consist of a sparse cover of spiny grass and low thickets, a creeping plant (Ipomoea spp.), and stands of coconut palm. This low-lying herbaceous flora seems to be pioneer in nature, and most of it is believed to be composed of recently introduced species. Sachet suspected that Heliotropium curassavicum, and possibly Portulaca oleracea, were native. Coconut palms and pigs introduced in the 1890s by guano miners were still present in the 1940s. The largest coconut grove is Bougainville Wood on the southwestern end of the island. On the northwest side of the atoll, the most abundant plant species are Cenchrus echinatus, Sida rhombifolia, and Corchorus aestuans. These plants form a shrub cover intermixed with Eclipta, Phyllanthus, and Solanum, as well as the taller Brassica juncea. The islets in the lagoon are primarily vegetated with Cyperaceae, Scrophulariaceae, and Ipomoea pes-caprae. A unique feature of Clipperton is that the vegetation is arranged in parallel rows of species, with dense rows of taller species alternating with lower, more open vegetation; this is assumed to be a result of the trench-digging method of phosphate mining used by guano hunters.
The only land animals known to exist are two species of reptiles (the Pacific stump-toed gecko and the copper-tailed skink), bright-orange land crabs known as Clipperton crabs (Johngarthia oceanica, prior to 2019 classified as Johngartia planata), birds, and ship rats. The rats probably arrived when large fishing boats wrecked on the island in 1999 and 2000.
The pigs introduced in the 1890s reduced the crab population, which in turn allowed grassland to gradually cover about 80 per cent of the land surface. The elimination of these pigs in 1958, the result of a personal project by Kenneth E. Stager, caused most of this vegetation to disappear as the population of land crabs recovered. As a result, Clipperton is mostly a sandy desert, with only 674 palms counted by Christian Jost during the Passion 2001 French mission and five islets in the lagoon with grass that the terrestrial crabs cannot reach. A 2005 report by the National Oceanic and Atmospheric Administration Southwest Fisheries Science Center indicated that the introduction of rats and their growing numbers have led to a decline in both crab and bird populations, causing a corresponding increase in vegetation and coconut palms. The report urgently recommended eradication of the rats, which have been destroying bird nesting sites and the crab population, so that vegetation might be reduced and the island might return to its 'pre-human' state.
In 1825, Benjamin Morrell reported finding green sea turtles nesting on Clipperton, but later expeditions have not found nesting turtles there, possibly due to disruption from guano extraction, as well as the introduction of pigs and rats. Sea turtles found on the island appear to have been injured due to fishing practices. Morrell also reported fur and elephant seals on the island in 1825, but they too have not been recorded by later expeditions.
Birds are common on the island; Morrell noted in 1825: "The whole island is literally covered with sea-birds, such as gulls, whale-birds, gannets, and the booby". Thirteen species of birds are known to breed on the island and 26 others have been observed as visitors. The island has been identified as an Important Bird Area by BirdLife International because of the large breeding colony of masked boobies, with 110,000 individual birds recorded. Observed bird species include white terns, masked boobies, sooty terns, brown boobies, brown noddies, black noddies, great frigatebirds, coots, martins (swallows), cuckoos, and yellow warblers. Ducks and moorhens have been reported in the lagoon.
The coral reef on the north side of the island includes notably tall colonies. The 2018 Tara Pacific expedition located five colonies of Millepora platyphylla at depth, the first of this fire coral species known in the region. Among the Porites spp. stony corals, some bleaching was observed, along with other indications of disease or stress, including parasitic worms and microalgae.
The reefs that surround Clipperton have some of the highest concentrations of endemic species found anywhere, with more than 115 species identified. Many species are recorded in the area, including five or six endemics, such as the Clipperton angelfish (Holacanthus limbaughi), Clipperton grouper (Epinephelus clippertonensis), Clipperton damselfish (Stegastes baldwini) and Robertson's wrasse (Thalassoma robertsoni). Widespread species around the reefs include Pacific creolefish, blue-and-gold snapper, and various species of goatfish. In the water column, trevallies are predominant, including black jacks, bigeye trevally, and bluefin trevally. Also common around Clipperton are black triggerfish; several species of groupers, including leather bass and starry groupers; Mexican hogfish; whitecheek, convict, and striped-fin surgeonfish; yellow longnose and blacknosed butterflyfish; coral hawkfish; golden pufferfish; Moorish idols; parrotfish; and moray eels, especially speckled moray eels. The waters around the island are an important nursery for sharks, particularly the whitetip shark. Galapagos sharks, reef sharks, whale sharks, and hammerhead sharks are also present around Clipperton.
Three expeditions to Clipperton have collected sponge specimens, including U.S. President Franklin Roosevelt's visit in 1938. Of the 190 specimens collected, 20 species were noted, including nine found only at Clipperton. One of the endemic sponges, collected during the 1938 visit, was named Callyspongia roosevelti in honor of Roosevelt.
In April 2009, Steven Robinson, a tropical fish dealer from Hayward, California, traveled to Clipperton to collect Clipperton angelfish. Upon his return to the United States, he described the 52 illegally collected fish to federal wildlife authorities as king angelfish, not the rarer Clipperton angelfish, which he intended to sell for $10,000. On 15 December 2011, Robinson was sentenced to 45 days of incarceration, one year of probation, and a $2,000 fine.
Environmental threats
During the night of 10 February 2010, the Sichem Osprey, a Maltese chemical tanker, ran aground en route from the Panama Canal to South Korea. The ship was carrying cargoes of xylene, soybean oil, and tallow. All 19 crew members were reported safe, and the vessel reported no leaks. The vessel was refloated on 6 March and returned to service.
In mid-March 2012, the crew from the Clipperton Project noted the widespread presence of refuse, particularly on the northeast shore and around Clipperton Rock. Debris, including plastic bottles and containers, creates a potentially harmful environment for the island's flora and fauna. This trash is largely confined to two beaches (northeast and southwest), and the rest of the island is fairly clean. Other refuse was left behind by the American occupation of 1944–1945, the French missions of 1966–1969, and the 2008 scientific expedition. During a 2015 scientific and amateur radio expedition to Clipperton, the operating team discovered a washed-up package of cocaine, suspected to have been discarded at sea. In April 2023, the Passion 23 mission, carried out by French Navy vessels including the surveillance frigate Germinal, collected a large amount of plastic waste from the island's beaches, along with a bale of cocaine.
The Sea Around Us Project estimates that a substantial harvest of fish is taken from the Clipperton EEZ each year; because French naval patrols in the area are infrequent, this includes a significant amount of illegal fishing, along with lobster harvesting and shark finning, resulting in estimated losses for France of €0.42 per kilogram of fish caught.
As deep-sea mining of polymetallic nodules increases in the adjacent Clarion–Clipperton zone, similar mining activity within France's exclusive economic zone surrounding the atoll may have an impact on marine life around Clipperton. Polymetallic nodules were discovered in the Clipperton EEZ during the Passion 2015 expedition.
Politics and government
The island is an overseas state private property of France under the direct authority of the Minister of Overseas France. Although the island is French territory, it has no status within the European Union. Ownership of Clipperton Island was disputed in the 19th and early 20th centuries between France and Mexico, but was finally settled through arbitration in 1931; the Clipperton Island Case remains widely studied in international law textbooks.
In the late 1930s, as flying boats opened the Pacific to air travel, Clipperton Island was noted as a possible waypoint for a trans-Pacific route from the Americas to Asia via the Marquesas Islands in French Polynesia, bypassing Hawaii. However, France indicated no interest in developing commercial air traffic in the corridor.
After France ratified the United Nations Convention on the Law of the Sea (UNCLOS) in 1996, it reaffirmed the exclusive economic zone off Clipperton Island that had been established in 1976. Following changes to the areas nations were allowed to claim under the third UNCLOS convention, France in 2018 expanded the outer limits of both the territorial sea and the exclusive economic zone off Clipperton Island.
On 21 February 2007, administration of Clipperton was transferred from the High Commissioner of the Republic in French Polynesia to the Minister of Overseas France.
In 2015, French MP Philippe Folliot set foot on Clipperton becoming the first elected official from France to do so. Folliot noted that visiting Clipperton was something he had wanted to do since he was nine years old. Following the visit, Folliot reported to the National Assembly on the pressing need to reaffirm French sovereignty over the atoll and its surrounding maritime claims. He also proposed establishing an international scientific research station on Clipperton and administrative reforms surrounding the oversight of the atoll.
In 2022, France passed legislation officially referring to the island as "La Passion–Clipperton".
History
Discovery and early claims
There are several claims to the first discovery of the island. The earliest recorded possible sighting is 24 January 1521 when Portuguese-born Spanish explorer Ferdinand Magellan discovered an island he named San Pablo after turning westward away from the American mainland during his circumnavigation of the globe. On 15 November 1528, Spaniard Álvaro de Saavedra Cerón discovered an island he called Isla Médanos in the region while on an expedition commissioned by his cousin, the Spanish conquistador Hernán Cortés, to find a route to the Philippines.
Although both San Pablo and Isla Médanos are considered possible sightings of Clipperton, the island was first charted by French merchant Michel Dubocage, commanding La Découverte, who arrived at the island on Good Friday, 3 April 1711; he was joined the following day by a fellow ship captain aboard La Princesse. The island was given the name Île de la Passion ('Passion Island') as the date of rediscovery fell within Passiontide. The two drew up the first map of the island and claimed it for France.
In August 1825, American sea captain Benjamin Morrell made the first recorded landing on Clipperton, exploring the island and making a detailed report of its vegetation.
The common name for the island comes from John Clipperton, an English pirate and privateer who fought the Spanish during the early 18th century, and who is said to have passed by the island. Some sources claim that he used it as a base for his raids on shipping.
19th century
Mexican claim 1821–1858
After its declaration of independence in 1821, Mexico took possession of the lands that had once belonged to Spain. As Spanish records noted the existence of the island as early as 1528, the territory was incorporated into Mexico. The Mexican constitution of 1917 explicitly listed the island, under its Spanish name, as Mexican territory. This was amended on 18 January 1934, after the sovereignty dispute over the island was settled in favor of France.
French claim (1858)
In April 1858, French minister Eugène Rouher reached an agreement with a Mr. Lockhart of Le Havre to claim oceanic islands in the Pacific for the exploitation of guano deposits. On 17 November 1858, Emperor Napoleon III formally annexed Clipperton as part of the French protectorate of Tahiti. Sailing aboard Lockhart's ship Amiral, Ship-of-the-line Lieutenant Victor Le Coat de Kervéguen published a notice of this annexation in Hawaiian newspapers to further cement France's claim to the island.
Guano mining claims (1892–1905)
In 1892, a claim on the island was filed with the U.S. State Department under the U.S. Guano Islands Act by Frederick W. Permien of San Francisco on behalf of the Stonington Phosphate Company. In 1893, Permien transferred those rights to a new company, the Oceanic Phosphate Company. In response to the application, the State Department rejected the claim, noting France's prior claim on the island and that the claim was not bonded as was required by law. Additionally during this time there were concerns in Mexico that the British or Americans would lay claim to the island.
Despite the lack of U.S. approval of its claim, the Oceanic Phosphate Company began mining guano on the island in 1895. Although the company had plans for as many as 200 workers on the island, at its peak only 25 men were stationed there. The company shipped its guano to Honolulu and San Francisco where it sold for between US$10 and US$20 per ton. In 1897, the Oceanic Phosphate Company began negotiations with the British Pacific Islands Company to transfer its interest in Clipperton; this drew the attention of both French and Mexican officials.
On 24 November 1897, French naval authorities arrived on the Duguay Trouin and found three Americans working on the island. The French ordered the American flag to be lowered. At that time, U.S. authorities assured the French that they did not intend to assert American sovereignty over the island. A few weeks later, on 13 December 1897, Mexico sent the gunboat La Demócrata and a group of marines to assert its claim on the island, evicting the Americans, raising the Mexican flag, and drawing a protest from France. From 1898 to 1905, the Pacific Islands Company worked the Clipperton guano deposits under a concession agreement with Mexico. In 1898, Mexico made a US$1.5 million claim against the Oceanic Phosphate Company for the guano shipped from the island from 1895 to 1897.
20th century
Mexican colonization (1905–1917)
In 1905, the Mexican government renegotiated its agreement with the British Pacific Islands Company, establishing a military garrison on the island a year later and erecting a lighthouse under the orders of Mexican President Porfirio Díaz. Captain Ramón Arnaud was appointed governor of Clipperton. At first he was reluctant to accept the post, believing it amounted to exile from Mexico, but he relented after being told that Díaz had personally chosen him to protect Mexico's interests in the international conflict with France. It was also noted that because Arnaud spoke English, French, and Spanish, he would be well equipped to help protect Mexico's sovereignty over the territory. He arrived on Clipperton as governor later that year.
By 1914 around 100 men, women, and children lived on the island, resupplied every two months by a ship from Acapulco. With the escalation of fighting in the Mexican Revolution, regular resupply visits ceased and the inhabitants were left to their own devices. On 28 February 1914, the schooner Nokomis wrecked on Clipperton; with a still-seaworthy lifeboat, four members of the crew volunteered to row to Acapulco for help. A rescue ship arrived months later to retrieve the crew. While it was there, its captain offered to transport the survivors of the colony back to Acapulco; Arnaud refused, believing a supply ship would soon arrive.
By 1917, all but one of the male inhabitants had died. Many had perished from scurvy, while others, including Arnaud, died during an attempt to sail after a passing ship to fetch help. Lighthouse keeper Victoriano Álvarez was the last man on the island, together with 15 women and children. Álvarez proclaimed himself 'king', and began a campaign of rape and murder, before being killed by Tirza Rendón, who was his favourite victim. Almost immediately after Álvarez's death, four women and seven children, the last survivors, were picked up by the U.S. Navy gunship on 18 July 1917.
Final arbitration of ownership (1931)
Throughout Mexico's occupation of Clipperton, France insisted on its ownership of the island, and lengthy diplomatic correspondence between the two countries led to a treaty on 2 March 1909, agreeing to seek binding international arbitration by Victor Emmanuel III of Italy, with each nation promising to abide by his determination. In 1931, Victor Emmanuel III issued his arbitral decision in the Clipperton Island Case, declaring Clipperton a French possession. Mexican President Pascual Ortiz Rubio, in response to public opinion that considered the Italian king biased towards France, consulted international experts on the validity of the decision, but ultimately Mexico accepted Victor Emmanuel's findings. The Mexican press at the time raised the issue of the Monroe Doctrine with the United States, stating that the French claim had preceded its issuance. France formally took possession of Clipperton on January 26, 1935.
U.S. presidential visit
President Franklin D. Roosevelt made a stopover at Clipperton in July 1938 as part of a fishing expedition to the Galápagos Islands and other points along the Central and South American coasts. At the island, Roosevelt and his party spent time fishing for sharks, after which Dr. Waldo L. Schmitt of the Smithsonian Institution went ashore with some of the crew to gather scientific samples and make observations of the island.
Roosevelt had previously tried to visit Clipperton in July 1934 after transiting the Panama Canal en route to Hawaii on the Houston; he had heard the area was good for fishing, but heavy seas prevented a boat from being lowered when they reached the island. On 19 July 1934, soon after the stop at Clipperton, the rigid airship USS Macon rendezvoused with the Houston, and one of the Macon's Curtiss F9C biplanes delivered mail to the president.
American occupation (1944–1945)
Once the weather station was completed and sailors were garrisoned on the island, the U.S. government informed the British, French, and Mexican governments of the station and its purpose. Every day at 9 a.m., the 24 sailors stationed at the Clipperton weather station sent up weather balloons to gather information. Later, Clipperton was considered for an airfield that would shift air traffic between North America and Australia away from the front lines of the Pacific Theater.
In April 1943, during a meeting between presidents Roosevelt of the U.S. and Avila Camacho of Mexico, the topic of Mexican ownership of Clipperton was raised. The American government seemed interested in Clipperton being handed over to Mexico due to the importance the island might play in both commercial and military air travel, as well as its proximity to the Panama Canal.
Although these talks were informal, the U.S. backed away from any Mexican claim on Clipperton as Mexico had previously accepted the 1931 arbitration decision. The U.S. government also felt it would be easier to obtain a military base on the island from France. However, after the French government was notified about the weather station, relations on this matter deteriorated rapidly with the French government sending a formal note of protest in defense of French sovereignty. In response, the U.S. extended an offer for the French military to operate the station or to have the Americans agree to leave the weather station under the same framework previously agreed to with other weather stations in France and North Africa. There were additional concerns within the newly formed Provisional Government of the French Republic that notification of the installation was made to military and not civilian leadership.
French Foreign Minister Georges Bidault said of the incident: "This is very humiliating to us we are anxious to cooperate with you, but sometimes you do not make it easy". French Vice Admiral Raymond Fenard requested during a meeting with U.S. Admiral Lyal A. Davidson that civilians be given access to Clipperton and the surrounding waters, but the U.S. Navy denied the request because there was an active military installation on the island. Instead Davidson offered to transport a French officer to the installation and reassured the French government that the United States did not wish to claim sovereignty over the island. During these discussions between the admirals, French diplomats in Mexico attempted to hire the Mexican vessel Pez de Plata out of Acapulco to bring a military attaché to Clipperton under a cover story that they were going on a shark fishing trip. At the request of the Americans, the Mexican government refused to allow the Pez De Plata to leave port. French officials then attempted to leave in another smaller vessel and filed a false destination with the local port authorities but were also stopped by Mexican officials.
During this period, French officials in Mexico leaked information about their concerns, as well as about the arrival of seaplanes at Clipperton, to The New York Times and Newsweek; both stories were refused publishing clearance on national security grounds. In February 1945, the U.S. Navy transported French officer Lieutenant Louis Jampierre from San Diego on a four-day trip to Clipperton, where he visited the installation before returning to the United States that afternoon. As the war in the Pacific progressed, concerns about Japanese incursions into the Eastern Pacific diminished, and in September 1945 the U.S. Navy began withdrawing from Clipperton. During the evacuation, munitions were destroyed, but significant matériel was left on the island. By 21 October 1945, the last U.S. Navy staff at the weather station had left Clipperton.
Post-World War II developments
Since it was abandoned by American forces at the end of World War II, the island has been visited by sport fishermen, French naval patrols, and Mexican tuna and shark fishermen. There have been infrequent scientific and amateur radio expeditions, and in 1978 Jacques-Yves Cousteau visited with a team of divers and a survivor of the 1917 evacuation to film a television special called Clipperton: The Island that Time Forgot.
The island was visited by ornithologist Ken Stager of the Los Angeles County Museum in 1958. Appalled at the depredations visited by feral pigs upon the island's brown booby and masked booby colonies (reduced to 500 and 150 birds, respectively), Stager procured a shotgun and killed all 58 pigs. By 2003, the booby colonies had grown to 25,000 brown boobies and 112,000 masked boobies, making Clipperton home to the world's second-largest brown booby colony, and its largest masked booby colony. In 1994, Stager's story inspired Bernie Tershy and Don Croll, both professors at the University of California, Santa Cruz Long Marine Lab, to found the non-profit Island Conservation, which works to prevent extinctions through the removal of invasive species from islands.
When the independence of Algeria in 1962 threatened French nuclear testing sites in North Africa, the French Ministry of Defence considered Clipperton as a possible replacement site. This was eventually ruled out due to the island's hostile climate and remote location, but the island was used to house a small scientific mission to collect data on nuclear fallout from other nuclear tests. From 1966 to 1969, the French military sent a series of missions, called "Bougainville", to the island. The Bougainville missions unloaded some 25 tons of equipment, including sanitary facilities, traditional Polynesian dwellings, drinking water treatment tanks, and generators. The missions sought to surveil the island and its surrounding waters, observe weather conditions, and evaluate potential rehabilitation of the World War II-era airstrip. By 1978, the structures built during the Bougainville missions had become quite derelict. The French explored reopening the lagoon and developing a harbour for trade and tourism during the 1970s, but this too was abandoned. An automatic weather installation was completed on 7 April 1980, with data collected by the station transmitted via the Argos satellite system to the Lannion Space Meteorology Center in Brittany, France.
In 1981, the Académie des sciences d'outre-mer recommended the island have its own economic infrastructure, with an airstrip and a fishing port in the lagoon. This would mean opening the lagoon to the ocean by creating a passage in the atoll rim. To oversee this, the French government reassigned Clipperton from the High Commissioner for French Polynesia to the direct authority of the French government, classifying the island as an overseas state private property administered by France's Overseas Minister. In 1986, the Company for the Study, Development and Exploitation of Clipperton Island (French acronym, SEDEIC) and French officials began outlining a plan for the development of Clipperton as a fishing port, but due to economic constraints, the distance from markets, and the small size of the atoll, nothing beyond preliminary studies was undertaken and plans for the development were abandoned. In the mid-1980s, the French government began efforts to enlist citizens of French Polynesia to settle on Clipperton; these plans were ultimately abandoned as well.
In November 1994, the French Space Agency requested the help of NASA to track the first-stage breakup of the newly designed Ariane 5 rocket. After spending a month on Clipperton setting up and calibrating radar equipment to monitor Ariane flight V88, the mission ended in disappointment when the rocket disintegrated about 37 seconds after launch due to a software error, an unhandled arithmetic overflow in the inertial guidance software.
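The loss of flight V88 (Ariane 501) is generally attributed to a 64-bit floating-point value in the inertial reference software being converted to a 16-bit signed integer that could not hold it. The flight code was written in Ada; the Python sketch below only illustrates that class of error with invented numbers, and is not a reconstruction of the actual software.

```python
INT16_MIN, INT16_MAX = -32768, 32767

def to_int16(value: float) -> int:
    """Convert a float to a signed 16-bit integer, raising if it does not fit."""
    if not INT16_MIN <= value <= INT16_MAX:
        # On flight V88 the analogous unhandled exception shut down the
        # inertial reference system; these values are purely illustrative.
        raise OverflowError(f"{value} does not fit in a signed 16-bit integer")
    return int(value)

# Ariane 5's faster trajectory produced a much larger horizontal-bias value
# than Ariane 4 ever had, so a conversion that had always succeeded overflowed.
for reading in (1200.0, 64000.0):
    try:
        print(reading, "->", to_int16(reading))
    except OverflowError as err:
        print(reading, "->", err)
```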
Although Mexico accepted the 1931 arbitration decision that Clipperton was French territory, the right of Mexican fishing vessels to work Clipperton's territorial waters has remained a point of contention. A 2007 treaty, reaffirmed in 2017, grants Mexican vessels access to Clipperton's fisheries so long as authorization is sought from the French government, conservation measures are followed, and catches are reported; however, the lack of regular monitoring of the fisheries by France makes verifying compliance difficult.
Castaways
In May 1893, Charles Jensen and "Brick" Thurman of the Oceanic Phosphate Company were left on the island by the company's ship Compeer with 90 days' worth of supplies in order to prevent other attempts to claim the island and its guano. Before sailing for Clipperton, Jensen wrote to the secretary of the Coast Seamen's Union, Andrew Furuseth, asking him to make it known that they had been stranded there if the Oceanic Phosphate Company had not sent a vessel to Clipperton within six weeks of the Compeer's return. The Oceanic Phosphate Company denied it had left the men without adequate supplies and contracted the schooner Viking to retrieve them in late August. The Viking rescued the men, who had used seabirds' eggs to supplement their supplies, and returned them to San Francisco on 31 October.
In May 1897, the British cargo vessel Kinkora wrecked on Clipperton; the crew was able to salvage food and water from the ship, allowing them to survive on the island in relative comfort. During the crew's time on the island, a passing vessel offered to take the men to the mainland for $1,500, which they refused. Instead, eight of the men loaded a lifeboat and rowed to Acapulco for help. After the first mate of the Kinkora, Mr. McMarty, arrived in Acapulco, HMS Comus set sail from British Columbia to rescue the remaining sailors.
In 1947, five American fishermen from San Pedro, California, were rescued from Clipperton after surviving on the island for six weeks.
In early 1962, the island provided a home to nine crewmen of the sunken tuna clipper MV Monarch, stranded for 23 days from 6 February to 1 March. They reported that the lagoon water was drinkable, although they preferred to drink water from the coconuts they found. Unable to use any of the dilapidated buildings, they constructed a crude shelter from cement bags and tin salvaged from Quonset huts built by the American military 20 years earlier. Wood from the huts was used for firewood, and fish caught off the fringing reef, combined with potatoes and onions they had saved from their sinking vessel, augmented the island's meager supply of coconuts. The crewmen reported that they tried eating birds' eggs but found them rancid, and decided after trying to cook a 'little black bird' that it did not have enough meat to make the effort worthwhile. Pigs had been eradicated, but the crewmen reported seeing their skeletons around the atoll. The crewmen were eventually discovered by another fishing boat and rescued by a U.S. Navy destroyer.
Amateur radio DX-peditions
Clipperton has long been an attractive destination for amateur radio groups due to its remoteness, permit requirements, history, and interesting environment. While some radio operation has been part of other visits to the island, major DX-peditions have included FO0XB (1978), FO0XX (1985), FO0CI (1992), FO0AAA (2000), TX5C (2008), and TX5S (2024).
In March 2014, the Cordell Expedition, organised and led by Robert Schmieder, combined a radio DX-pedition using callsign TX5K with environmental and scientific investigations. The team of 24 radio operators made more than 114,000 contacts, breaking the previous record of 75,000. The activity included extensive operation in the 6-meter band, including Earth–Moon–Earth communication (EME) or 'moonbounce' contacts. A notable accomplishment was the use of DXA, a real-time satellite-based online graphic radio log web page, allowing anyone with a browser to see the radio activity. Scientific work conducted during the expedition included the first collection and identification of foraminifera and extensive aerial imaging of the island using kite-borne cameras. The team included two scientists from the University of Tahiti and a French TV documentary crew from Thalassa.
In April 2015, Alain Duchauchoy, F6BFH, operated from Clipperton using callsign TX5P as part of the Passion 2015 scientific expedition to Clipperton Island. Duchauchoy also researched Mexican use of the island during the early 1900s as part of the expedition.
See also
Uninhabited island
Lists of islands
Notes
References
External links
Isla Clipperton o 'Los náufragos mexicanos − 1914/1917' [Clipperton or 'The Mexican Castaways – 1914/1917']
Photo galleries
The first dive trip to Clipperton Island aboard the Nautilus Explorer – pictures taken during a 2007 visit
Clipperton Island 2008 – Flickr gallery containing 94 large photos from a 2008 visit
3D photos of Clipperton Island 2010 – 3D anaglyphs
Visits and expeditions
2000 DXpedition to Clipperton Island – website of a visit by amateur radio enthusiasts in 2000
Diving trips to Clipperton atoll – from NautilusExplorer.com
Category:States and territories established in 1931
Category:1931 establishments in the French colonial empire
Category:1931 establishments in North America
Category:1931 in Mexico
Category:Islands of Overseas France
Category:Pacific Ocean atolls of France
Category:Uninhabited islands of France
Category:Islands of Central America
Category:Dependent territories in North America
Category:Dependent territories in Oceania
Category:French colonization of the Americas
Category:Former populated places in North America
Category:Former populated places in Oceania
Category:Former disputed islands
Category:Arbitration cases
Category:Territorial disputes of France
Category:Territorial disputes of Mexico
Category:Tropical Eastern Pacific
Category:Uninhabited islands of the Pacific Ocean
Category:Pacific islands claimed under the Guano Islands Act
Category:Coral reefs
Category:Reefs of the Pacific Ocean
Category:Neotropical ecoregions
Category:Ecoregions of Central America
Category:Important Bird Areas of Overseas France
Category:Important Bird Areas of Oceania
Category:Seabird colonies
Category:Island restoration
Category:Victor Emmanuel III
Creutzfeldt–Jakob disease
https://en.wikipedia.org/wiki/Creutzfeldt–Jakob_disease
Creutzfeldt–Jakob disease (CJD) is an incurable, invariably fatal neurodegenerative disease belonging to the transmissible spongiform encephalopathy (TSE) group. Early symptoms include memory problems, behavioral changes, poor coordination, and visual and auditory disturbances. Later symptoms include dementia, involuntary movements, blindness, deafness, weakness, and coma. About 70% of sufferers die within a year of diagnosis. The name "Creutzfeldt–Jakob disease" was introduced by Walther Spielmeyer in 1922, after the German neurologists Hans Gerhard Creutzfeldt and Alfons Maria Jakob. (Creutzfeldt–Jakob disease @ Who Named It)
CJD is caused by a prion, an infectious abnormal folding of a protein. Infectious prions are misfolded proteins that can cause normally folded proteins to also become misfolded. About 85% of cases of CJD occur for unknown reasons, while about 7.5% of cases are inherited in an autosomal dominant manner. Exposure to brain or spinal tissue from an infected person may also result in spread. There is no evidence that sporadic CJD can spread among people via normal contact or blood transfusions, although this is possible in variant Creutzfeldt–Jakob disease. Diagnosis involves ruling out other potential causes. An electroencephalogram, spinal tap, or magnetic resonance imaging may support the diagnosis. Another diagnosis technique is the real-time quaking-induced conversion assay, which can detect the disease in early stages.
There is no specific treatment for CJD. Opioids may be used to help with pain, while clonazepam or sodium valproate may help with involuntary movements. CJD affects about one person per million people per year. Onset is typically around 60 years of age. The condition was first described in 1920. It is classified as a type of transmissible spongiform encephalopathy. Inherited CJD accounts for about 10% of prion disease cases. Sporadic CJD is different from bovine spongiform encephalopathy (mad cow disease) and variant Creutzfeldt–Jakob disease (vCJD).
Signs and symptoms
The first symptom of CJD is usually rapidly progressive dementia, leading to memory loss, personality changes, and hallucinations. Myoclonus (jerky movements) typically occurs in 90% of cases, but may be absent at initial onset. Other frequently occurring features include anxiety, depression, paranoia, obsessive-compulsive symptoms, and psychosis.Murray ED, Buttner N, Price BH. (2012) Depression and Psychosis in Neurological Practice. In: Neurology in Clinical Practice, 6th Edition. Bradley WG, Daroff RB, Fenichel GM, Jankovic J (eds.) Butterworth Heinemann. April 12, 2012. This is accompanied by physical problems such as speech impairment, balance and coordination dysfunction (ataxia), changes in gait, and rigid posture. In most people with CJD, these symptoms are accompanied by involuntary movements. Rarely, unusual symptoms like the alien limb phenomenon have been observed. The duration of the disease varies greatly, but sporadic (non-inherited) CJD can be fatal within months or even weeks. Most affected people die six months after initial symptoms appear, often of pneumonia due to impaired coughing reflexes. About 15% of people with CJD survive for two or more years.
The symptoms of CJD are caused by the progressive death of the brain's nerve cells, which is associated with the build-up of abnormal prion proteins in the brain. When brain tissue from a person with CJD is examined under a microscope, many tiny holes can be seen where the nerve cells have died, giving affected areas of the brain a sponge-like appearance.
Cause
CJD is a type of transmissible spongiform encephalopathy (TSE), which is caused by prions. Prions are misfolded proteins that occur in the neurons of the central nervous system (CNS). The CJD prion is dangerous because it promotes refolding of cellular prion proteins into the diseased state. The number of misfolded protein molecules increases exponentially, and the process leads to a large quantity of insoluble protein in affected cells. This mass of misfolded proteins disrupts neuronal cell function and causes cell death. Mutations in the gene for the prion protein can cause a misfolding of the predominantly alpha-helical regions into beta-pleated sheets, a change in conformation that renders the protein resistant to enzymatic digestion. Once the prion is transmitted, the defective proteins invade the brain and induce other prion protein molecules to misfold in a self-sustaining feedback loop. These neurodegenerative diseases are commonly called prion diseases.
PrPC, the normal cellular prion protein, which supports a wide range of CNS functions, is misfolded by what current research suggests are small, highly neurotoxic oligomeric aggregates, known as PrPSc, which interact with cell surfaces to disrupt neuronal function. The binding of prion oligomers to normal prion protein on neurons may trigger toxic signals, similar to the way oligomeric β-amyloid causes synaptic damage in Alzheimer's disease. Different conformations of PrPSc (often termed prion "strains") are thought to cause the distinct subtypes of prion disease, explaining variations in clinical features and progression. These strains are thought to affect signaling processes, damaging neurons and resulting in the degeneration that gives the affected brain its spongiform appearance. Other forms of TSE found in humans are Gerstmann–Sträussler–Scheinker syndrome, fatal familial insomnia, kuru, and variably protease-sensitive prionopathy. Susceptibility and disease phenotype are influenced by a common polymorphism at codon 129 of the PRNP gene (methionine/valine); notably, individuals homozygous at codon 129 are over-represented in sporadic CJD cases and tend to have shorter incubation periods.
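The self-sustaining conversion loop described above behaves, in its simplest caricature, like an autocatalytic reaction: misfolded protein recruits normally folded protein at a rate proportional to the amounts of both. The sketch below illustrates only that qualitative idea; the rate constant, time units, and initial pools are arbitrary assumptions, and real prion kinetics (nucleation, fragmentation, clearance) are considerably more complex.

```python
# Minimal autocatalytic-conversion sketch: the normal protein pool (PrPC)
# is converted into the misfolded pool (PrPSc) at a rate proportional to
# the product of the two pools. All numbers are arbitrary illustrations.
def simulate(prp_c=1.0, prp_sc=1e-6, k=1.0, dt=0.01, steps=2500):
    samples = []
    for step in range(steps):
        converted = k * prp_c * prp_sc * dt   # amount converted this time step
        prp_c -= converted
        prp_sc += converted
        if step % 250 == 0:
            samples.append((round(step * dt, 2), prp_sc))
    return samples

# Early growth is roughly exponential; it saturates (a logistic-shaped curve)
# once the normally folded pool is exhausted.
for t, misfolded in simulate():
    print(f"t={t:>5}  misfolded fraction={misfolded:.6f}")
```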
Transmission
The defective protein can be transmitted by contaminated harvested human brain products, corneal grafts, dural grafts, electrode implants, and cadaver-derived pituitary human growth hormone (the latter has since been replaced by recombinant human growth hormone, which poses no such risk).
It can be familial (fCJD) or it may appear without clear risk factors (sporadic form: sCJD). In the familial form, a mutation has occurred in the gene for PrP, PRNP, in that family. All types of CJD are transmissible irrespective of how they occur in the person.
It is thought that humans can contract the variant form of the disease by eating food from animals infected with bovine spongiform encephalopathy (BSE), the bovine form of TSE, also known as mad cow disease. However, it can also cause sCJD in some cases.
Cannibalism has also been implicated as a transmission mechanism for abnormal prions, causing the disease known as kuru, once found primarily among women and children of the Fore people in Papua New Guinea, who previously engaged in funerary cannibalism. While the men of the tribe ate the muscle tissue of the deceased, women and children consumed other parts, such as the brain, and were more likely than men to contract kuru from infected tissue.
Prions, the infectious agent of CJD, may not be inactivated using routine surgical instrument sterilization procedures. The World Health Organization and the US Centers for Disease Control and Prevention recommend that instrumentation used in such cases be immediately destroyed after use; short of destruction, it is recommended that heat and chemical decontamination be used in combination to process instruments that come in contact with high-infectivity tissues. Thermal depolymerization also destroys prions in infected organic and inorganic matter, since the process chemically attacks protein at the molecular level, although more effective and practical methods involve destruction by combinations of detergents and enzymes similar to biological washing powders.
Genetics
People can also develop CJD because they carry a mutation of the gene that codes for the prion protein (PRNP), located on chromosome 20 (region 20pter-p12). This occurs in only 10–15% of all CJD cases. In sporadic cases, the misfolding of the prion protein is a process that is hypothesized to occur as a result of the effects of aging on cellular machinery, explaining why the disease often appears later in life. An EU study determined that "87% of cases were sporadic, 8% genetic, 5% iatrogenic and less than 1% variant."
Diagnosis
Testing for CJD has historically been problematic, due to the nonspecific nature of early symptoms and difficulty in safely obtaining brain tissue for confirmation. The diagnosis may initially be suspected in a person with rapidly progressing dementia, particularly when it is also found with the characteristic medical signs and symptoms such as involuntary muscle jerking, difficulty with coordination/balance and walking, and visual disturbances. Further testing can support the diagnosis and may include:
Electroencephalography – may have a characteristic generalized periodic sharp wave pattern. Periodic sharp wave complexes develop in half of the people with sporadic CJD, particularly in the later stages.
Cerebrospinal fluid (CSF) analysis for elevated levels of 14-3-3 protein and tau protein could be supportive in the diagnosis of sCJD. The two proteins are released into the CSF by damaged nerve cells. Increased levels of tau or 14-3-3 proteins are seen in 90% of prion diseases. The markers have a specificity of 95% in clinical symptoms suggestive of CJD, but specificity is 70% in other, less characteristic cases. 14-3-3 and tau proteins may also be elevated in the CSF after ischemic strokes, inflammatory brain diseases, or seizures. The protein markers are also less specific in early CJD, genetic CJD or the bovine variant. However, a positive result should not be regarded as sufficient for the diagnosis. The real-time quaking-induced conversion (RT-QuIC) assay, which amplifies misfolded PrPSc, now plays a central role in CJD diagnosis. Second-generation RT-QuIC on cerebrospinal fluid has sensitivity in the 90–97% range and ~100% specificity in sporadic CJD, far superior to earlier CSF tests. A positive RT-QuIC (on CSF or other tissues) is now included as a criterion for probable CJD in many national surveillance centers. Studies have shown RT-QuIC can also be done on olfactory mucosa swabs obtained via nasal brushing and on skin biopsies, with high diagnostic accuracy (reported sensitivities ~90–100%).
MRI with diffusion-weighted imaging (DWI) and fluid-attenuated inversion recovery (FLAIR) shows a high signal intensity in certain parts of the cortex (a cortical ribboning appearance), the basal ganglia, and the thalami. The most common presenting patterns are simultaneous involvement of the cortex and striatum (60% of cases), cortical involvement without the striatum (30%), thalamus (21%), cerebellum (8%), and striatum without cortical involvement (7%). In populations with a rapidly progressive dementia (early in the disease process), MRI has a sensitivity of 91% and specificity of 97% for diagnosing CJD. The MRI changes characteristic of CJD may also be seen in the immediate aftermath (hours after the event) of autoimmune encephalitis or focal seizures.
In recent years, studies have shown that the tumour marker neuron-specific enolase (NSE) is often elevated in CJD cases; however, its diagnostic utility is seen primarily when combined with a test for the 14-3-3 protein. Screening tests to identify infected asymptomatic individuals, such as blood donors, are not yet available, though methods have been proposed and evaluated; the worked example below illustrates why the disease's rarity makes population-level screening difficult.
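As a worked illustration of that point, Bayes' theorem can be applied to a hypothetical test with sensitivity and specificity in the range quoted above; the figures used here (sensitivity 0.92, specificity 0.99, and the two prevalence values) are illustrative assumptions, not results from any particular study.

```latex
% Positive predictive value (PPV) under two illustrative prevalence settings,
% assuming sensitivity Se = 0.92 and specificity Sp = 0.99.
\[
\mathrm{PPV} \;=\; \frac{Se \cdot p}{Se \cdot p + (1 - Sp)(1 - p)}
\]
% Among patients with rapidly progressive dementia, assume p = 0.10:
\[
\mathrm{PPV} \approx \frac{0.92 \times 0.10}{0.92 \times 0.10 + 0.01 \times 0.90}
            = \frac{0.092}{0.101} \approx 0.91
\]
% Screening an unselected population with p \approx 10^{-6}:
\[
\mathrm{PPV} \approx \frac{0.92 \times 10^{-6}}{0.92 \times 10^{-6} + 0.01 \times (1 - 10^{-6})}
            \approx 9 \times 10^{-5}
\]
```

Under these assumptions, almost every positive result in an unselected population would be a false positive, which is one reason diagnostic testing is confined to clinically suspected cases.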
Imaging
Imaging of the brain may be performed during medical evaluation, both to rule out other causes and to obtain supportive evidence for diagnosis. Imaging findings are variable in their appearance and also variable in sensitivity and specificity. While imaging plays a lesser role in diagnosis of CJD, characteristic findings on brain MRI in some cases may precede onset of clinical manifestations.
Brain MRI is the most useful imaging modality for changes related to CJD. Of the MRI sequences, diffusion-weighted imaging sequences are most sensitive. Characteristic findings are as follows:
Focal or diffuse diffusion restriction involving the cerebral cortex or basal ganglia. The most characteristic and striking cortical abnormality has been called "cortical ribboning" or the "cortical ribbon sign" because the hyperintensities resemble ribbons appearing in the cortex on MRI. Involvement of the thalamus can be found in sCJD and is even more pronounced and constant in vCJD.
Varying degree of symmetric T2 hyperintense signal changes in the basal ganglia (i.e., caudate and putamen), and to a lesser extent globus pallidus and occipital cortex.
Brain FDG PET-CT tends to be markedly abnormal, and is increasingly used in the investigation of dementias.
Patients with CJD will normally have hypometabolism on FDG PET.
Histopathology
Testing of tissue remains the most definitive way of confirming the diagnosis of CJD, although even a biopsy is not always conclusive.
In one-third of people with sporadic CJD, deposits of "prion protein (scrapie)", PrPSc, can be found in the skeletal muscle or the spleen. Diagnosis of vCJD can be supported by biopsy of the tonsils, which harbor significant amounts of PrPSc; however, biopsy of brain tissue is the definitive diagnostic test for all other forms of prion disease. Due to its invasiveness, a biopsy will not be done if clinical suspicion is sufficiently high or low. A negative biopsy does not rule out CJD, since the disease may predominate in a specific part of the brain. (Sternberg's Diagnostic Surgical Pathology, 5th edition.)
The classic histologic appearance is spongiform change in the gray matter: the presence of many round vacuoles from one to 50 micrometers in the neuropil, in all six cortical layers in the cerebral cortex, or with diffuse involvement of the cerebellar molecular layer. These vacuoles appear glassy or eosinophilic and may coalesce. Neuronal loss and gliosis are also seen. Plaques of amyloid-like material can be seen in the neocortex in some cases of CJD.
However, extra-neuronal vacuolization can also be seen in other disease states. Diffuse cortical vacuolization occurs in Alzheimer's disease, and superficial cortical vacuolization occurs in ischemia and frontotemporal dementia. These vacuoles appear clear and punched out. Larger vacuoles encircling neurons, vessels, and glia are a possible processing artifact.
Classification
Types of CJD include:
Sporadic (sCJD), caused by the spontaneous misfolding of the prion protein in an individual. This accounts for 85% of cases of CJD. Sporadic CJD can be further sub-classified by molecular profile into subtypes (MM1, MV2, etc.), which correlate with certain clinical-pathologic features.
MM1 / MV1 Subtype:
Clinical Features: Accounts for approximately 75% of sCJD cases. Characterized by rapidly progressive dementia, myoclonus, and typical EEG findings.
Neuropathology: Synaptic-type PrPSc deposition predominantly in the cerebral cortex. Spongiform changes are widespread, with significant neuronal loss and gliosis.
MM2 Subtype:
MM2C (Cortical): Presents with a more prolonged disease course and prominent cortical involvement. Neuropathology reveals PrPSc deposits in the cortex with less spongiform change compared to MM1.
MM2T (Thalamic): Rare; characterized by predominant thalamic involvement, leading to sleep disturbances and autonomic dysfunction. Neuropathology shows significant PrPSc deposition and neuronal loss in the thalamus.
VV1 Subtype:
Clinical Features: Rare; presents at a younger age with a slower disease progression.
Neuropathology: Predominant cortical involvement with synaptic-type PrPSc deposition.
VV2 Subtype:
Clinical Features: Second most common subtype. Patients often present with ataxia and other cerebellar signs.
Neuropathology: Significant PrPSc deposition in the cerebellum and basal ganglia, with prominent spongiform changes and neuronal loss.
Familial (fCJD), caused by an inherited mutation in the prion-protein gene. This accounts for the majority of the other 15% of cases of CJD.
Acquired CJD, caused by contamination with tissue from an infected person, usually as the result of a medical procedure (iatrogenic CJD). Medical procedures that are associated with the spread of this form of CJD include blood transfusion from an infected person, use of human-derived pituitary growth hormones, gonadotropin hormone therapy, and corneal and meningeal transplants. Variant Creutzfeldt–Jakob disease (vCJD) is a type of acquired CJD associated with bovine spongiform encephalopathy, typically acquired by consuming food contaminated with prions. Sporadic CJD, while transmissible through tissue transplants, may not be transmitted through blood transfusion.
Clinical and pathologic characteristics (Classic CJD vs. Variant CJD):
Median age at death: 68 years vs. 28 years
Median duration of illness: 4–5 months vs. 13–14 months
Clinical signs and symptoms: dementia with early neurologic signs vs. prominent psychiatric/behavioral symptoms, painful dysesthesias, and delayed neurologic signs
Periodic sharp waves on electroencephalogram: often present vs. often absent
Signal hyperintensity in the caudate nucleus and putamen on diffusion-weighted and FLAIR MRI: often present vs. often absent
Pulvinar sign (bilateral high signal intensities on axial FLAIR MRI; also posterior thalamic involvement on sagittal T2 sequences): not reported vs. present in >75% of cases
Immunohistochemical analysis of brain tissue: variable accumulation vs. marked accumulation of protease-resistant prion protein
Presence of agent in lymphoid tissue: not readily detected vs. readily detected
Increased glycoform ratio on immunoblot analysis of protease-resistant prion protein: not reported vs. marked accumulation of protease-resistant prion protein
Presence of amyloid plaques in brain tissue: may be present vs. may be present
Treatment
As of 2025, there is no cure or effective treatment for CJD. Some of the symptoms, like twitching, can be managed, but otherwise treatment is palliative care. Psychiatric symptoms like anxiety and depression can be treated with sedatives and antidepressants. Myoclonic jerks can be handled with clonazepam or sodium valproate. Opiates can help with pain. Seizures are very uncommon but can nevertheless be treated with antiepileptic drugs.
In 2022, results of an early-stage trial of PRN100, a monoclonal antibody against PrP, were reported: the drug appeared safe and reached the brain, but treated patients did not show clearly improved survival compared to historical controls. While not curative, this trial demonstrated the feasibility of immunotherapy for prion disease.
Prognosis
Life expectancy is greatly reduced for people with Creutzfeldt–Jakob disease, and the average is less than six months. As of 1981, no one was known to have lived longer than 2.5 years after the onset of CJD symptoms. One of the world's longest survivors of vCJD was Jonathan Simms, a Northern Irish man who lived for 10 years after his diagnosis and received experimental treatment with pentosan polysulfate. Simms died in 2011.
Epidemiology
The CDC monitors the occurrence of CJD in the United States through periodic reviews of national mortality data. According to the CDC:
CJD occurs worldwide at roughly 1–1.5 cases per million people per year (a rough calculation below translates this rate into expected annual case counts). Recent surveillance reports indicate a slight increase in recorded incidence in many countries over time. For example, a 2020 study noted that sporadic CJD incidence in the U.K. rose from 1990 to 2018, and several other countries also reported increases in CJD cases in the 2000s.
On the basis of mortality surveillance from 1979 to 1994, the annual incidence of CJD remained stable at approximately 1 case per million people in the United States.
In the United States, CJD deaths among people younger than 30 years of age are extremely rare (fewer than five deaths per billion per year).
The disease is found most frequently in people 55–65 years of age, but cases can occur in people older than 90 years and younger than 55 years of age.
In more than 85% of cases, the duration of CJD is less than one year (median: four months) after the onset of symptoms.
Further information from the CDC: "Occurrence and Transmission | Creutzfeldt–Jakob Disease, Classic (CJD) | Prion Disease | CDC". www.cdc.gov. 2019-05-08. Retrieved 2020-11-04.
Risk of developing CJD increases with age.
CJD incidence was 3.5 cases per million among those over 50 years of age between 1979 and 2017.
Approximately 85% of CJD cases are sporadic, and 10–15% of CJD cases are due to inherited mutations of the prion protein gene.
CJD deaths and age-adjusted death rate in the United States indicate an increasing trend in the number of deaths between 1979 and 2017.
Although the reasons are not fully understood, additional information suggests that CJD rates in nonwhite groups are lower than in whites. While the mean age at onset is approximately 67 years, cases of sCJD have been reported in people as young as 17 and older than 80. Mental capabilities rapidly deteriorate, and the average time from onset of symptoms to death is 7 to 9 months.
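To put the incidence figures quoted above in absolute terms, a back-of-the-envelope calculation using an assumed US population of roughly 330 million (an approximation, not a surveillance figure) gives the expected number of cases per year:

```latex
% Expected annual cases = incidence rate x population (illustrative figures).
\[
(1.0\text{--}1.5) \times 10^{-6}\ \tfrac{\text{cases}}{\text{person}\cdot\text{yr}}
\;\times\; 3.3 \times 10^{8}\ \text{persons}
\;\approx\; 330\text{--}500\ \text{cases per year}
\]
```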
According to a 2020 systematic review on the international epidemiology of CJD:
Surveillance studies from 2005 and later show the estimated global incidence is 1–2 cases per million population per year.
Sporadic CJD (sCJD) incidence increased from the years 1990–2018 in the UK.
Probable or definite sCJD deaths also increased from the years 1996–2018 in twelve additional countries.
CJD incidence is greatest in those over the age of 55 years old, with an average age of 67 years old.
The intensity of CJD surveillance increases the number of reported cases, often in countries where CJD epidemics have occurred in the past and where surveillance resources are greatest. An increase in surveillance and reporting of CJD is most likely in response to BSE and vCJD. Possible factors contributing to an increase in CJD incidence are an aging population, population increase, clinician awareness, and more accurate diagnostic methods. Since CJD symptoms are similar to other neurological conditions, it is also possible that CJD is mistaken for stroke, acute nephropathy, general dementia, and hyperparathyroidism.
History
The disease was first described by German neurologist Hans Gerhard Creutzfeldt in 1920 and shortly afterward by Alfons Maria Jakob, giving it the name Creutzfeldt–Jakob disease. Some of the clinical findings described in their first papers do not match current criteria for Creutzfeldt–Jakob disease, and it has been speculated that at least two of the people in the initial studies had a different illness. An early description of familial CJD stems from the German psychiatrist and neurologist Friedrich Meggendorfer (1880–1953). A study published in 1997 counted more than 100 cases worldwide of transmissible CJD and new cases continued to appear at the time.
The first report of suspected iatrogenic CJD was published in 1974. Animal experiments showed that corneas of infected animals could transmit CJD, and the causative agent spreads along visual pathways. A second case of CJD associated with a corneal transplant was reported without details. In 1977, CJD transmission caused by silver electrodes previously used in the brain of a person with CJD was first reported. Transmission occurred despite the decontamination of the electrodes with ethanol and formaldehyde. Retrospective studies identified four other cases likely due to a similar cause. The rate of transmission from a single contaminated instrument is unknown, although it is not 100%. In some cases, the exposure occurred weeks after the instruments were used on a person with CJD. In the 1980s, Lyodura, a dura mater transplant product, was shown to transmit CJD from donor to recipient. This led to the product being banned in Canada, but it was used in other countries such as Japan until 1993. A review article published in 1979 indicated that 25 dura mater cases had occurred by that date in Australia, Canada, Germany, Italy, Japan, New Zealand, Spain, the United Kingdom, and the United States.
By 1985, a series of case reports in the United States showed that when injected, cadaver-extracted pituitary human growth hormone could transmit CJD to humans.
In 1992, it was recognized that human gonadotropin administered by injection could also transmit CJD from person to person.
Stanley B. Prusiner of the University of California, San Francisco (UCSF) was awarded the Nobel Prize in Physiology or Medicine in 1997 "for his discovery of Prions—a new biological principle of infection".
Yale University neuropathologist Laura Manuelidis has challenged the prion protein (PrP) explanation for the disease. In January 2007, she and her colleagues reported that they had found a virus-like particle in naturally and experimentally infected animals. "The high infectivity of comparable, isolated virus-like particles that show no intrinsic PrP by antibody labeling, combined with their loss of infectivity when nucleic acid–protein complexes are disrupted, make it likely that these 25-nm particles are the causal TSE virions".
Australia
Australia has documented 10 cases of healthcare-acquired CJD (iatrogenic CJD, or ICJD). Five of the deaths occurred after patients being treated for either infertility or short stature received contaminated pituitary extract hormone; no new cases have been noted since 1991. The other five deaths occurred following dura mater grafting procedures performed during brain surgery, in which the covering of the brain is repaired. There have been no other ICJD deaths documented in Australia due to transmission during healthcare procedures. (Creutzfeldt–Jakob Disease (CJD) – the facts. Infectious Diseases Epidemiology & Surveillance – Department of Health, Victoria, Australia)
New Zealand
A case was reported in 1989 in a 25-year-old man from New Zealand who had also received a dura mater transplant. Five New Zealanders were confirmed to have died of the sporadic form of Creutzfeldt–Jakob disease (CJD) in 2012.
United States
In 1988, there was a confirmed death from CJD of a person from Manchester, New Hampshire. Massachusetts General Hospital believed the person acquired the disease from a surgical instrument at a podiatrist's office. In 2007, Michael Homer, a former vice president of Netscape, began experiencing persistent memory problems, which led to his diagnosis. (via Bloomberg News, "Mike Homer dies at 50; a former vice president of Netscape", Los Angeles Times, February 5, 2009. Accessed February 6, 2009.) In August 2013, the British journalist Graham Usher died in New York of CJD.
In September 2013, another person in Manchester was posthumously determined to have died of the disease. The person had undergone brain surgery at Catholic Medical Center three months before his death, and a surgical probe used in the procedure was subsequently reused in other operations. Public health officials identified thirteen people at three hospitals who may have been exposed to the disease through the contaminated probe but said the risk of anyone contracting CJD is "extremely low".
In January 2015, former speaker of the Utah House of Representatives Rebecca D. Lockhart died of the disease within a few weeks of diagnosis. John Carroll, former editor of The Baltimore Sun and Los Angeles Times, died of CJD in Kentucky in June 2015, after having been diagnosed in January. American actress Barbara Tarbuck (General Hospital, American Horror Story) died of the disease on December 26, 2016. José Baselga, a clinical oncologist who headed the AstraZeneca oncology division, died of CJD in Cerdanya on March 21, 2021. In April 2024, a report was published regarding two hunters from the same lodge who, in 2022, were found to be afflicted with sporadic CJD after eating deer meat infected with chronic wasting disease (CWD), suggesting a potential link between CWD and CJD.
Research
Diagnosis
In 2010, a team from New York described detection of PrPSc in sheep's blood, even when initially present at only one part in one hundred billion (10⁻¹¹) in sheep's brain tissue. The method combines amplification with a novel technology called surround optical fiber immunoassay (SOFIA) and specific antibodies against PrPSc. The technique allowed improved detection and a shorter testing time for PrPSc.
In 2014, a human study showed a nasal brushing method that can accurately detect PrP in the olfactory epithelial cells of people with CJD.
Treatment
Pentosan polysulfate (PPS) may slow the progression of the disease, and may have contributed to the longer-than-expected survival of the seven people studied. The CJD Therapy Advisory Group to the UK Health Departments advises that data are not sufficient to support claims that pentosan polysulfate is an effective treatment and suggests that further research in animal models is appropriate. A 2007 review of the treatment of 26 people with PPS found no proof of efficacy because of the lack of accepted objective criteria, and it was unclear to the authors whether any observed effect was caused by PPS itself. In 2012 it was suggested that the lack of significant benefit was likely because the drug had been administered very late in the disease course in many patients.
Use of RNA interference to slow the progression of scrapie has been studied in mice. The RNA blocks the production of the protein that the CJD process transforms into prions.
Both amphotericin B and doxorubicin have been investigated as treatments for CJD, but as yet there is no strong evidence that either drug is effective in stopping the disease. Further studies have been conducted with other drugs, but none have proven effective. However, anticonvulsants and anxiolytic agents, such as valproate or a benzodiazepine, may be administered to relieve associated symptoms.
Quinacrine, a medicine originally created for malaria, has been evaluated as a treatment for CJD. Its efficacy was assessed in a rigorous clinical trial in the UK, whose results, published in Lancet Neurology, concluded that quinacrine had no measurable effect on the clinical course of CJD.
Astemizole, a medication approved for human use, has been found to have anti-prion activity and may lead to a treatment for Creutzfeldt–Jakob disease.
A monoclonal antibody (code name PRN100) targeting the prion protein (PrP) was given to six people with Creutzfeldt–Jakob disease in an early-stage clinical trial conducted from 2018 to 2022. The treatment appeared to be well-tolerated and was able to access the brain, where it might have helped to clear PrPC. While the treated patients still showed progressive neurological decline, and while none of them survived longer than expected from the normal course of the disease, the scientists at University College London who conducted the study see these early-stage results as encouraging and suggest conducting a larger study, ideally at the earliest possible intervention.
See also
Transmissible spongiform encephalopathy
Chronic wasting disease
Kuru
References
External links
Category:Transmissible spongiform encephalopathies
Category:Neurodegenerative disorders
Category:Dementia
Category:Rare infectious diseases
Category:Wikipedia medicine articles ready to translate
Category:Wikipedia neurology articles ready to translate
Category:Rare diseases
Category:1920 in biology
Category:Diseases named after discoverers
Cognitive behavioral therapy
https://en.wikipedia.org/wiki/Cognitive_behavioral_therapy
Cognitive behavioral therapy (CBT) is a form of psychotherapy that aims to reduce symptoms of various mental health conditions, primarily depression, as well as disorders such as PTSD and anxiety disorders. This therapy focuses on challenging unhelpful and irrational negative thoughts and beliefs, referred to as 'self-talk', and replacing them with more rational, positive self-talk. This alteration in a person's thinking produces less anxiety and depression. It was developed by psychoanalyst Aaron Beck in the 1950s.
Cognitive behavioral therapy focuses on challenging and changing cognitive distortions (thoughts, beliefs, and attitudes) and their associated behaviors in order to improve emotional regulation and help the individual develop coping strategies to address problems. Though originally designed as an approach to treat depression, CBT is often prescribed for the evidence-informed treatment of many mental health and other conditions, including OCD, generalized anxiety disorder, substance use disorders, marital problems, ADHD, and eating disorders. CBT includes a number of cognitive or behavioral psychotherapies that treat defined psychopathologies using evidence-based techniques and strategies.
CBT is a common form of talk therapy based on the combination of the basic principles from behavioral and cognitive psychology. It is different from other approaches to psychotherapy, such as the psychoanalytic approach, where the therapist looks for the unconscious meaning behind the behaviors and then formulates a diagnosis. Instead, CBT is a "problem-focused" and "action-oriented" form of therapy, meaning it is used to treat specific problems related to a diagnosed mental disorder. The therapist's role is to assist the client in finding and practicing effective strategies to address the identified goals and to alleviate symptoms of the disorder. CBT is based on the belief that thought distortions and maladaptive behaviors play a role in the development and maintenance of many psychological disorders and that symptoms and associated distress can be reduced by teaching new information-processing skills and coping mechanisms.
Review studies have found CBT alone to be as effective as psychoactive medications for treating less severe forms of depression and borderline personality disorder. Some research suggests that CBT is most effective when combined with medication for treating mental disorders such as major depressive disorder. CBT is recommended as the first line of treatment for the majority of psychological disorders in children and adolescents, including aggression and conduct disorder. Researchers have found that other bona fide therapeutic interventions were equally effective for treating certain conditions in adults. Along with interpersonal psychotherapy (IPT), CBT is recommended in treatment guidelines as a psychosocial treatment of choice. It is recommended by the American Psychiatric Association, the American Psychological Association, and the British National Health Service. Critics argue CBT's benefits are overstated, citing small or declining effects, high dropouts, neoliberal influences, and methodological flaws, though it remains widely used and considered safe.
History
Philosophy
Precursors of certain fundamental aspects of CBT have been identified in various ancient philosophical traditions, particularly Stoicism. Stoic philosophers, particularly Epictetus, believed logic could be used to identify and discard false beliefs that lead to destructive emotions, which has influenced the way modern cognitive-behavioral therapists identify cognitive distortions that contribute to depression and anxiety. Aaron T. Beck's original treatment manual for depression states, "The philosophical origins of cognitive therapy can be traced back to the Stoic philosophers". Another example of Stoic influence on cognitive theorists is the influence of Epictetus on Albert Ellis. A key philosophical figure who influenced the development of CBT was John Stuart Mill through his creation of Associationism, a predecessor of classical conditioning and behavioral theory.
Principles originating from Buddhism have significantly impacted the evolution of various new forms of CBT, including dialectical behavior therapy, mindfulness-based cognitive therapy, spirituality-based CBT, and compassion-focused therapy.
The modern roots of CBT can be traced to the development of behavior therapy in the early 20th century, the development of cognitive therapy in the 1960s, and the subsequent merging of the two.
Behavioral therapy
Groundbreaking work in behaviorism began with John B. Watson and Rosalie Rayner's studies of conditioning in 1920. Behaviorally-centered therapeutic approaches appeared as early as 1924 with Mary Cover Jones' work dedicated to the unlearning of fears in children. These were the antecedents of the development of Joseph Wolpe's behavioral therapy in the 1950s. It was the work of Wolpe and Watson, which was based on Ivan Pavlov's work on learning and conditioning, that influenced Hans Eysenck and Arnold Lazarus to develop new behavioral therapy techniques based on classical conditioning.
During the 1950s and 1960s, behavioral therapy became widely used by researchers in the United States, the United Kingdom, and South Africa, who drew inspiration from the behaviorist learning theory of Ivan Pavlov, John B. Watson, and Clark L. Hull.
In Britain, Joseph Wolpe applied the findings of animal experiments to his method of systematic desensitization, bringing behavioral research to bear on the treatment of neurotic disorders. Wolpe's therapeutic efforts were precursors to today's fear-reduction techniques. British psychologist Hans Eysenck presented behavior therapy as a constructive alternative.
At the same time as Eysenck's work, B. F. Skinner and his associates were beginning to have an impact with their work on operant conditioning. Skinner's work was referred to as radical behaviorism and avoided anything related to cognition. However, Julian Rotter in 1954 and Albert Bandura in 1969 contributed to behavior therapy with their works on social learning theory by demonstrating the effects of cognition on learning and behavior modification. The work of Claire Weekes in dealing with anxiety disorders in the 1960s is also seen as a prototype of behavior therapy.
The emphasis on behavioral factors has been described as the "first wave" of CBT.
Cognitive therapy
One of the first therapists to address cognition in psychotherapy was Alfred Adler, notably with his idea of basic mistakes and how they contributed to the creation of unhealthy behavioral and life goals. Abraham Low believed that someone's thoughts were best changed by changing their actions. Adler and Low influenced the work of Albert Ellis, who developed the earliest cognitive-based psychotherapy, called rational emotive behavioral therapy, or REBT. The first version of REBT was announced to the public in 1956.
In the late 1950s, Aaron Beck was conducting free association sessions in his psychoanalytic practice. During these sessions, Beck noticed that thoughts were not as unconscious as Freud had previously theorized, and that certain types of thinking may be the culprits of emotional distress. It was from this hypothesis that Beck developed cognitive therapy, and called these thoughts "automatic thoughts". He first published his new methodology in 1967, and his first treatment manual in 1979. Beck has been referred to as "the father of cognitive behavioral therapy".
It was these two therapies, rational emotive therapy and cognitive therapy, that started the "second wave" of CBT, which emphasized cognitive factors.
Merger of behavioral and cognitive therapies
Although the early behavioral approaches were successful in many so-called neurotic disorders, they had little success in treating depression. Behaviorism was also losing popularity due to the cognitive revolution. The therapeutic approaches of Albert Ellis and Aaron T. Beck gained popularity among behavior therapists, despite the earlier behaviorist rejection of mentalistic concepts like thoughts and cognitions. Both of these systems included behavioral elements and interventions, with the primary focus being on problems in the present.
In initial studies, cognitive therapy was often contrasted with behavioral treatments to see which was most effective. During the 1980s and 1990s, cognitive and behavioral techniques were merged into cognitive behavioral therapy. Pivotal to this merging was the successful development of treatments for panic disorder by David M. Clark in the UK and David H. Barlow in the US.
Over time, cognitive behavior therapy came to be known not only as a therapy, but as an umbrella term for all cognitive-based psychotherapies. These therapies include, but are not limited to, REBT, cognitive therapy, acceptance and commitment therapy, dialectical behavior therapy, metacognitive therapy, metacognitive training, reality therapy/choice theory, cognitive processing therapy, EMDR, and multimodal therapy.
This blending of theoretical and technical foundations from both behavior and cognitive therapies constituted the "third wave" of CBT. The most prominent therapies of this third wave are dialectical behavior therapy and acceptance and commitment therapy. Despite the increasing popularity of third-wave treatment approaches, reviews of studies reveal there may be no difference in the effectiveness compared with non-third wave CBT for the treatment of depression.
In the late 1990s, Melanie Fennell published a refined model in Behavioural and Cognitive Psychotherapy on a cognitive approach to low self-esteem. In line with Beck's 1976 general cognitive approach, it proposed that life experiences interact with temperament in the development of beliefs about the self.
Medical uses
In adults, CBT has been shown to be an effective part of treatment plans for anxiety disorders, body dysmorphic disorder, depression, eating disorders, chronic low back pain, personality disorders, psychosis, schizophrenia, substance use disorders, and bipolar disorder. It is also effective as part of treatment plans in the adjustment, depression, and anxiety associated with fibromyalgia, and as part of the treatment after spinal cord injuries.
In children or adolescents, CBT is an effective part of treatment plans for anxiety disorders, body dysmorphic disorder, depression and suicidality, eating disorders and obesity, obsessive–compulsive disorder (OCD), post-traumatic stress disorder (PTSD), tic disorders, trichotillomania, and other repetitive behavior disorders. CBT has also been used to help improve a variety of childhood disorders, including depressive disorders and various anxiety disorders. CBT has been shown to be the most effective intervention for people exposed to adverse childhood experiences in the form of abuse or neglect.
Criticism of CBT sometimes focuses on implementations (such as the UK IAPT) which may result initially in low quality therapy being offered by poorly trained practitioners. However, evidence supports the effectiveness of CBT for anxiety and depression.
Evidence suggests that the addition of hypnotherapy as an adjunct to CBT improves treatment efficacy for a variety of clinical issues.
The United Kingdom's National Institute for Health and Care Excellence (NICE) recommends CBT in the treatment plans for a number of mental health difficulties, including PTSD, OCD, bulimia nervosa, and clinical depression.
Depression and anxiety disorders
Cognitive behavioral therapy has been shown as an effective treatment for clinical depression. Among psychotherapeutic approaches for major depressive disorder, cognitive behavioral therapy and interpersonal psychotherapy are recommended by clinical practice guidelines including The American Psychiatric Association Practice (APA) Guidelines (April 2000), and the APA endorsed Veteran Affairs clinical practice guideline.
CBT has been shown to be effective in the treatment of adults with anxiety disorders. There is also evidence that CBT for children and adolescents with anxiety disorders is probably more effective (in the short term) than a wait list or no treatment, and more effective than attention-control treatment approaches. Some meta-analyses find CBT more effective than psychodynamic therapy and equal to other therapies in treating anxiety and depression. A 2013 meta-analysis suggested that CBT, interpersonal therapy, and problem-solving therapy outperformed psychodynamic psychotherapy and behavioral activation in the treatment of depression. According to a 2004 review by INSERM of three methods, cognitive behavioral therapy was either proven or presumed to be an effective therapy for several mental disorders, including depression, panic disorder, post-traumatic stress, and other anxiety disorders.
A systematic review of CBT in depression and anxiety disorders concluded that "CBT delivered in primary care, especially including computer- or Internet-based self-help programs, is potentially more effective than usual care and could be delivered effectively by primary care therapists."
A 2024 systematic review found that exposure and response prevention (ERP), a specific form of cognitive behavioral therapy, is considered a first-line treatment for pediatric obsessive–compulsive disorder (OCD). Research indicates that ERP is effective in both in-person and remote settings, providing flexibility in treatment delivery without compromising efficacy.
According to The Anxiety and Worry Workbook: The Cognitive Behavioral Solution by Clark and Beck:
Theoretical approaches
One etiological theory of depression is Aaron T. Beck's cognitive theory of depression. His theory states that depressed people think the way they do because their thinking is biased towards negative interpretations. Beck's theory rests on the aspect of cognitive behavioral therapy known as schemata. Schemata are the mental maps used to integrate new information into memories and to organize existing information in the mind. An example of a schema would be a person hearing the word "dog" and picturing different versions of the animal that they have grouped together in their mind. According to this theory, depressed people acquire a negative schema of the world in childhood and adolescence as an effect of stressful life events, and the negative schema is activated later in life when the person encounters similar situations.
Beck also described a negative cognitive triad. The cognitive triad is made up of the depressed individual's negative evaluations of themselves, the world, and the future. Beck suggested that these negative evaluations derive from the negative schemata and cognitive biases of the person. According to this theory, depressed people have views such as "I never do a good job", "It is impossible to have a good day", and "things will never get better". A negative schema helps give rise to the cognitive bias, and the cognitive bias helps fuel the negative schema. Beck further proposed that depressed people often have the following cognitive biases: arbitrary inference, selective abstraction, overgeneralization, magnification, and minimization. These cognitive biases are quick to make negative, generalized, and personal inferences of the self, thus fueling the negative schema.
On the other hand, a positive cognitive triad relates to a person's positive evaluations of themself, the world, and the future. More specifically, a positive cognitive triad requires self-esteem when viewing oneself and hope for the future. A person with a positive cognitive triad has a positive schema used for viewing themself in addition to a positive schema for the world and for the future. Cognitive behavioral research suggests a positive cognitive triad bolsters resilience, or the ability to cope with stressful events. Increased levels of resilience are associated with greater resistance to depression.
Another major theoretical approach to cognitive behavioral therapy treatment is the concept of locus of control outlined in Julian Rotter's social learning theory. Locus of control refers to the degree to which an individual's sense of control is either internal or external. An internal locus of control exists when an individual views the outcome of a particular action as being reliant on themselves and their personal attributes, whereas an external locus of control exists when an individual views others or some outside, intangible force such as luck or fate as being responsible for the outcome of a particular action.
A basic concept in some CBT treatments used in anxiety disorders is in vivo exposure. CBT-exposure therapy refers to the direct confrontation of feared objects, activities, or situations by a patient. For example, a woman with PTSD who fears the location where she was assaulted may be assisted by her therapist in going to that location and directly confronting those fears. Likewise, a person with a social anxiety disorder who fears public speaking may be instructed to directly confront those fears by giving a speech. In the "two-factor" model often credited to O. Hobart Mowrer, fear is first acquired through classical conditioning and then maintained through avoidance (operant conditioning). Through exposure to the stimulus, this harmful conditioning can be "unlearned" (referred to as extinction and habituation).
CBT for children with phobias is normally delivered over multiple sessions, but one-session treatment has been shown to be equally effective and is cheaper.
Specialized forms of CBT
CBT-SP, an adaptation of CBT for suicide prevention (SP), was specifically designed for treating youths who are severely depressed and who have attempted suicide within the past 90 days, and was found to be effective, feasible, and acceptable.
Acceptance and commitment therapy (ACT) is a specialist branch of CBT (sometimes referred to as contextual CBT). ACT uses mindfulness and acceptance interventions and has been found to produce more durable therapeutic outcomes. In a study of anxiety, CBT and ACT improved similarly across all outcomes from pre- to post-treatment. However, during a 12-month follow-up, ACT proved to be more effective, suggesting that it is a highly viable lasting treatment model for anxiety disorders.
Computerized CBT (CCBT) has been proven to be effective by randomized controlled and other trials in treating depression and anxiety disorders, including children. Some research has found similar effectiveness to an intervention of informational websites and weekly telephone calls. CCBT was found to be equally effective as face-to-face CBT in adolescent anxiety.
Combined with other treatments
Studies in both animals and humans have provided evidence that glucocorticoids may lead to more successful extinction learning during exposure therapy for anxiety disorders. For instance, glucocorticoids can prevent aversive learning episodes from being retrieved and heighten reinforcement of memory traces, creating a non-fearful reaction in feared situations. A combination of glucocorticoids and exposure therapy may therefore be an improved treatment for people with anxiety disorders.
Prevention
For anxiety disorders, use of CBT with people at risk has significantly reduced the number of episodes of generalized anxiety disorder and other anxiety symptoms, and also given significant improvements in explanatory style, hopelessness, and dysfunctional attitudes. In another study, 3% of the group receiving the CBT intervention developed generalized anxiety disorder by 12 months postintervention compared with 14% in the control group. Individuals with subthreshold levels of panic disorder significantly benefitted from use of CBT. Use of CBT was found to significantly reduce social anxiety prevalence.
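To put the generalized anxiety figures above in absolute terms, the difference between a 14% incidence in the control group and a 3% incidence in the intervention group can be converted into an absolute risk reduction and a number needed to treat. The short sketch below performs that arithmetic for illustration; the number needed to treat is derived here, not a figure reported by the study.

```python
# Illustrative arithmetic only: converts the reported 12-month incidences of
# generalized anxiety disorder into an absolute risk reduction (ARR) and a
# number needed to treat (NNT). The NNT is derived here, not study-reported.
control_incidence = 0.14       # 14% developed GAD in the control group
intervention_incidence = 0.03  # 3% developed GAD after the CBT intervention

arr = control_incidence - intervention_incidence  # absolute risk reduction
nnt = 1 / arr                                     # number needed to treat

print(f"Absolute risk reduction: {arr:.0%}")      # about 11%
print(f"Number needed to treat: {nnt:.1f}")       # about 9.1
```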
For depressive disorders, a stepped-care intervention (watchful waiting, CBT and medication if appropriate) achieved a 50% lower incidence rate in a patient group aged 75 or older. Another depression study found a neutral effect compared to personal, social, and health education and usual school provision, and noted that CBT recipients may show increased depression scores because of greater self-recognition and acknowledgement of existing symptoms of depression and negative thinking styles. A further study also saw a neutral result. A meta-study of the Coping with Depression course, a cognitive behavioral intervention delivered by a psychoeducational method, saw a 38% reduction in risk of major depression.
Bipolar disorder
Many studies show that CBT, combined with pharmacotherapy, is effective in improving depressive symptoms, mania severity, and psychosocial functioning, with mild to moderate effects, and that it is better than medication alone.
INSERM's 2004 review found that CBT is an effective therapy for several mental disorders, including bipolar disorder, as well as schizophrenia, depression, panic disorder, post-traumatic stress, anxiety disorders, bulimia, anorexia, personality disorders and alcohol dependency.
Psychosis
In long-term psychoses, CBT is used to complement medication and is adapted to meet individual needs. Interventions particularly related to these conditions include exploring reality testing, changing delusions and hallucinations, examining factors which precipitate relapse, and managing relapses. Meta-analyses confirm the effectiveness of metacognitive training (MCT) for the improvement of positive symptoms (e.g., delusions).
For people at risk of psychosis, in 2014 the UK National Institute for Health and Care Excellence (NICE) recommended preventive CBT.
Schizophrenia
INSERM's 2004 review found that CBT is an effective therapy for several mental disorders, including schizophrenia.
A Cochrane review reported CBT had "no effect on long‐term risk of relapse" and no additional effect above standard care. A 2015 systematic review investigated the effects of CBT compared with other psychosocial therapies for people with schizophrenia and determined that there is no clear advantage over other, often less expensive, interventions but acknowledged that better quality evidence is needed before firm conclusions can be drawn.
Addiction and substance use disorders
Pathological and problem gambling
CBT is also used for pathological and problem gambling. Worldwide, an estimated 1–3% of people experience problem gambling. Cognitive behavioral therapy develops skills for relapse prevention, and a person can learn to control their urges and manage high-risk situations. There is evidence that CBT is effective for pathological and problem gambling at immediate follow-up; however, its longer-term efficacy is currently unknown.
Smoking cessation
CBT looks at the habit of smoking cigarettes as a learned behavior, which later evolves into a coping strategy to handle daily stressors. Since smoking is often easily accessible and quickly allows the user to feel good, it can take precedence over other coping strategies, and eventually work its way into everyday life during non-stressful events as well. CBT aims to target the function of the behavior, as it can vary between individuals, and works to inject other coping mechanisms in place of smoking. CBT also aims to support individuals with strong cravings, which are a major reported reason for relapse during treatment.
A 2008 controlled study out of Stanford University School of Medicine suggested CBT may be an effective tool to help maintain abstinence. The outcomes of 304 randomly assigned adult participants were tracked over the course of one year. During this program, some participants were provided medication, CBT, 24-hour phone support, or some combination of the three methods. At 20 weeks, the participants who received CBT had a 45% abstinence rate, versus non-CBT participants, who had a 29% abstinence rate. Overall, the study concluded that emphasizing cognitive and behavioral strategies to support smoking cessation can help individuals build tools for long-term smoking abstinence.
Mental health history can affect the outcomes of treatment. Individuals with a history of depressive disorders had a lower rate of success when using CBT alone to combat smoking addiction.
A 2019 Cochrane review was unable to find sufficient evidence to differentiate effects between CBT and hypnosis for smoking cessation and highlighted that a review of the current research showed variable results for both modalities.
Substance use disorders
Studies have shown CBT to be an effective treatment for substance use disorders. For individuals with substance use disorders, CBT aims to reframe maladaptive thoughts, such as denial, minimizing and catastrophizing thought patterns, with healthier narratives. Specific techniques include identifying potential triggers and developing coping mechanisms to manage high-risk situations. Research has shown CBT to be particularly effective when combined with other therapy-based treatments or medication.
INSERM's 2004 review found that CBT is an effective therapy for several mental disorders, including alcohol dependency.
Internet addiction
Research has identified Internet addiction as a new clinical disorder that causes relational, occupational, and social problems. CBT has been suggested as the treatment of choice for Internet addiction, and addiction recovery in general has used CBT as part of treatment planning.
Eating disorders
Though many forms of treatment can support individuals with eating disorders, CBT has been shown to be a more effective treatment than medication or interpersonal psychotherapy alone. CBT aims to combat major causes of distress such as negative cognitions surrounding body weight, shape and size. CBT therapists also work with individuals to regulate strong emotions and thoughts that lead to dangerous compensatory behaviors. CBT is the first line of treatment for bulimia nervosa and non-specific eating disorders. While there is evidence to support the efficacy of CBT for bulimia nervosa and binge eating, the evidence is somewhat variable and limited by small study sizes. INSERM's 2004 review found that CBT is an effective therapy for several mental disorders, including bulimia and anorexia nervosa.
With autistic adults
Emerging evidence for cognitive behavioral interventions aimed at reducing symptoms of depression, anxiety, and obsessive-compulsive disorder in autistic adults without intellectual disability has been identified through a systematic review. While the research was focused on adults, cognitive behavioral interventions have also been beneficial to autistic children. A 2021 Cochrane review found limited evidence regarding the efficacy of CBT for obsessive-compulsive disorder in adults with Autism Spectrum Disorder stating a need for further study.
Dementia and mild cognitive impairment
A Cochrane review in 2022 found that adults with dementia and mild cognitive impairment (MCI) who experience symptoms of depression may benefit from CBT, whereas other counselling or supportive interventions might not improve symptoms significantly. Across five different psychometric scales, on which higher scores indicate more severe depression, adults receiving CBT reported somewhat lower depression scores than those receiving usual care for dementia and MCI overall. In this review, a sub-group analysis found clinically significant benefits only among those diagnosed with dementia, rather than MCI.
The likelihood of remission from depression also appeared to be 84% higher following CBT, though the evidence for this was less certain. Anxiety, cognition and other neuropsychiatric symptoms were not significantly improved following CBT, however this review did find moderate evidence of improved quality of life and daily living activity scores in those with dementia and MCI.
Post-traumatic stress
Cognitive behavioral therapy interventions may have some benefits for people who have post-traumatic stress related to surviving rape, sexual abuse, or sexual assault. There is strong evidence that CBT-exposure therapy can reduce PTSD symptoms and lead to the loss of a PTSD diagnosis. In addition, CBT has also been shown to be effective for post-traumatic stress disorder in very young children (3 to 6 years of age). There is lower quality evidence that CBT may be more effective than other psychotherapies in reducing symptoms of posttraumatic stress disorder in children and adolescents.
Other uses
Evidence suggests a possible role for CBT in the treatment of attention deficit hyperactivity disorder (ADHD), hypochondriasis, and bipolar disorder, but more study is needed and results should be interpreted with caution. Moderate evidence from a 2024 systematic review supports the effectiveness of CBT and neurofeedback as part of psychosocial interventions for improving ADHD symptoms in children and adolescents.
CBT has been studied as an aid in the treatment of anxiety associated with stuttering. Initial studies have shown CBT to be effective in reducing social anxiety in adults who stutter, but not in reducing stuttering frequency.
There is some evidence that CBT is superior in the long-term to benzodiazepines and the nonbenzodiazepines in the treatment and management of insomnia. Computerized CBT (CCBT) has been proven to be effective by randomized controlled and other trials in treating insomnia. Some research has found similar effectiveness to an intervention of informational websites and weekly telephone calls. CCBT was found to be equally effective as face-to-face CBT in insomnia.
A Cochrane review of interventions aimed at preventing psychological stress in healthcare workers found that CBT was more effective than no intervention but no more effective than alternative stress-reduction interventions.
Cochrane Reviews have found no convincing evidence that CBT training helps foster care providers manage difficult behaviors in the youths under their care, nor was it helpful in treating people who abuse their intimate partners.
CBT has been applied in both clinical and non-clinical environments to treat disorders such as personality disorders and behavioral problems. INSERM's 2004 review found that CBT is an effective therapy for personality disorders.
CBT has also been studied as a way to reduce chronic pain and to help relieve symptoms in people with irritable bowel syndrome (IBS).
Individuals with medical conditions
In the case of people with metastatic breast cancer, data is limited but CBT and other psychosocial interventions might help with psychological outcomes and pain management. There is also some evidence that CBT may help reduce insomnia in cancer patients.
There is some evidence that using CBT for symptomatic management of non-specific chest pain is probably effective in the short term. However, the findings were limited by small trials and the evidence was considered of questionable quality. Cochrane reviews have found no evidence that CBT is effective for tinnitus, although there appears to be an effect on management of associated depression and quality of life in this condition. CBT combined with hypnosis and distraction reduces self-reported pain in children.
There is limited evidence to support CBT's use in managing the impact of multiple sclerosis, sleep disturbances related to aging, and dysmenorrhea, but more study is needed and results should be interpreted with caution.
CBT was previously considered moderately effective for treating myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS); however, a National Institutes of Health Pathways to Prevention Workshop stated that, with respect to improving treatment options for ME/CFS, the modest benefit from cognitive behavioral therapy should be studied as an adjunct to other methods. The Centers for Disease Control and Prevention's advice on the treatment of ME/CFS makes no reference to CBT, while the National Institute for Health and Care Excellence states that cognitive behavioral therapy (CBT) has sometimes been assumed to be a cure for ME/CFS, but that it should only be offered to support people who live with ME/CFS to manage their symptoms, improve their functioning and reduce the distress associated with having a chronic illness.
Age
CBT is used to help people of all ages, but the therapy should be adjusted to the age of the patient the therapist is working with. Older individuals in particular have certain characteristics that need to be acknowledged, and the therapy altered to account for these age-related differences. Of the small number of studies examining CBT for the management of depression in older people, there is currently no strong support.
Description
Mainstream cognitive behavioral therapy assumes that changing maladaptive thinking leads to change in behavior and affect, but recent variants emphasize changes in one's relationship to maladaptive thinking rather than changes in thinking itself.
Cognitive distortions
Therapists use CBT techniques to help people challenge their patterns and beliefs and replace errors in thinking, known as cognitive distortions, with "more realistic and effective thoughts, thus decreasing emotional distress and self-defeating behavior". Cognitive distortions can be either a pseudo-discrimination belief or an overgeneralization of something. CBT techniques may also be used to help individuals take a more open, mindful, and aware posture toward cognitive distortions so as to diminish their impact.
Mainstream CBT helps individuals replace "maladaptive... coping skills, cognitions, emotions and behaviors with more adaptive ones" by challenging an individual's way of thinking and the way that they react to certain habits or behaviors. However, there is still controversy about the degree to which these traditional cognitive elements account for the effects seen with CBT over and above the earlier behavioral elements such as exposure and skills training.
Assumptions
Chaloult, Ngo, Cousineau and Goulet have attempted to identify the main assumptions of cognitive therapy used in CBT based on the research literature (Beck; Walen and Wessler; Beck, Emery and Greenberg, and Auger). They describe fourteen assumptions:
Human emotions are primarily caused by people's thoughts and perceptions rather than events.
Events, thoughts, emotions, behaviors, and physiological reactions influence each other.
Dysfunctional emotions are typically caused by unrealistic thoughts. Reducing dysfunctional emotions requires becoming aware of irrational thoughts and changing them.
Human beings have an innate tendency to develop irrational thoughts. This tendency is reinforced by their environment.
People are largely responsible for their own dysfunctional emotions, as they maintain and reinforce their own beliefs.
Sustained effort is necessary to modify dysfunctional thoughts, emotions, and behaviors.
Rational thinking usually causes a decrease in the frequency, intensity, and duration of dysfunctional emotions, rather than an absence of affect or feelings.
A positive therapeutic relationship is essential to successful cognitive therapy.
Cognitive therapy is based on a teacher-student relationship, where the therapist educates the client.
Cognitive therapy uses Socratic questioning to challenge cognitive distortions.
Homework is an essential aspect of cognitive therapy. It consolidates the skills learned in therapy.
The cognitive approach is active, directed, and structured.
Cognitive therapy is generally short.
Cognitive therapy is based on predictable steps.
These steps largely involve learning about the CBT model; making links between thoughts, emotions, behaviors, and physiological reactions; noticing when dysfunctional emotions occur; learning to question the thoughts associated with these emotions; replacing irrational thoughts with others more grounded in reality; modifying behaviors based on new interpretations of events; and, in some cases, learning to recognize and change the major beliefs and attitudes underlying cognitive distortions.
Chaloult, Ngo, Cousineau and Goulet have also described the assumptions of behavioral therapy as used in CBT. They refer to the work of Agras, Prochaska and Norcross, and Kirk. The assumptions are:
Behaviors play an essential role in the onset, perpetuation and exacerbation of psychopathology.
Learning theory is key in understanding the treatment of mental illness, as behaviors can be learned and unlearned.
A rigorous evaluation (applied behavior analysis) is essential at the start of treatment. It includes identifying behaviors; precipitating, moderating, and perpetuating factors; the consequences of the behaviors; avoidance, and personal resources.
The effectiveness of the treatment is monitored throughout its duration.
Behavior therapy is scientific and the different forms of treatment are evaluated with rigorous evidence.
Behavior therapy is active, directed, and structured.
Together, these sets of assumptions cover the cognitive and behavioral aspects of CBT.
Phases in therapy
CBT can be seen as having six phases:
Assessment or psychological assessment;
Reconceptualization;
Skills acquisition;
Skills consolidation and application training;
Generalization and maintenance;
Post-treatment assessment follow-up.
These steps are based on a system created by Kanfer and Saslow. After identifying the behaviors that need changing, whether they be in excess or deficit, and treatment has occurred, the psychologist must identify whether or not the intervention succeeded. For example, "If the goal was to decrease the behavior, then there should be a decrease relative to the baseline. If the critical behavior remains at or above the baseline, then the intervention has failed."
The steps in the assessment phase include:
Identify critical behaviors;
Determine whether critical behaviors are excesses or deficits;
Evaluate critical behaviors for frequency, duration, or intensity (obtain a baseline);
If excess, attempt to decrease frequency, duration, or intensity of behaviors; if deficits, attempt to increase behaviors.
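The quoted success criterion, a change in the critical behavior relative to its baseline, can be expressed as a simple comparison. The sketch below is a minimal illustration of that logic with hypothetical names and values; it is not a clinical instrument.

```python
def intervention_succeeded(baseline: float, post_treatment: float, goal: str) -> bool:
    """Apply the baseline criterion described above (illustrative only).

    baseline       -- pre-treatment frequency, duration, or intensity
    post_treatment -- the same measure after the intervention
    goal           -- "decrease" for behavioral excesses,
                      "increase" for behavioral deficits
    """
    if goal == "decrease":
        return post_treatment < baseline
    if goal == "increase":
        return post_treatment > baseline
    raise ValueError("goal must be 'decrease' or 'increase'")

# Hypothetical example: a behavioral excess measured in episodes per week.
print(intervention_succeeded(baseline=12, post_treatment=5, goal="decrease"))  # True
```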
The re-conceptualization phase makes up much of the "cognitive" portion of CBT.
Delivery protocols
There are different protocols for delivering cognitive behavioral therapy, with important similarities among them. Use of the term CBT may refer to different interventions, including "self-instructions (e.g. distraction, imagery, motivational self-talk), relaxation and/or biofeedback, development of adaptive coping strategies (e.g. minimizing negative or self-defeating thoughts), changing maladaptive beliefs about pain, and goal setting". Treatment is sometimes manualized, with brief, direct, and time-limited treatments for individual psychological disorders that are specific technique-driven. CBT is used in both individual and group settings, and the techniques are often adapted for self-help applications. Some clinicians and researchers are cognitively oriented (e.g. cognitive restructuring), while others are more behaviorally oriented (e.g. in vivo exposure therapy). Interventions such as imaginal exposure therapy combine both approaches.
Related techniques
CBT may be delivered in conjunction with a variety of diverse but related techniques such as exposure therapy, stress inoculation, cognitive processing therapy, cognitive therapy, metacognitive therapy, metacognitive training, relaxation training, dialectical behavior therapy, and acceptance and commitment therapy. Some practitioners promote a form of mindful cognitive therapy which includes a greater emphasis on self-awareness as part of the therapeutic process.
Methods of access
Therapist
A typical CBT program would consist of face-to-face sessions between patient and therapist, made up of 6–18 sessions of around an hour each with a gap of 1–3 weeks between sessions. This initial program might be followed by some booster sessions, for instance after one month and three months. CBT has also been found to be effective if patient and therapist type in real time to each other over computer links.
Cognitive-behavioral therapy is most closely allied with the scientist–practitioner model in which clinical practice and research are informed by a scientific perspective, clear operationalization of the problem, and an emphasis on measurement, including measuring changes in cognition and behavior and the attainment of goals. These are often met through "homework" assignments in which the patient and the therapist work together to craft an assignment to complete before the next session. The completion of these assignments – which can be as simple as a person with depression attending some kind of social event – indicates a dedication to treatment compliance and a desire to change. The therapists can then logically gauge the next step of treatment based on how thoroughly the patient completes the assignment. Effective cognitive behavioral therapy is dependent on a therapeutic alliance between the healthcare practitioner and the person seeking assistance. Unlike many other forms of psychotherapy, the patient is very involved in CBT. For example, an anxious patient may be asked to talk to a stranger as a homework assignment, but if that is too difficult, he or she can work out an easier assignment first. The therapist needs to be flexible and willing to listen to the patient rather than acting as an authority figure.
Computerized or Internet-delivered (CCBT)
Computerized cognitive behavioral therapy (CCBT) has been described by NICE as a "generic term for delivering CBT via an interactive computer interface delivered by a personal computer, internet, or interactive voice response system", instead of face-to-face with a human therapist. It is also known as internet-delivered cognitive behavioral therapy or ICBT. CCBT has potential to improve access to evidence-based therapies, and to overcome the prohibitive costs and lack of availability sometimes associated with retaining a human therapist. In this context, it is important not to confuse CBT with 'computer-based training', which nowadays is more commonly referred to as e-Learning.
Although improvements in both research quality and treatment adherence are required before advocating for the global dissemination of CCBT, it has been found in meta-studies to be cost-effective and often cheaper than usual care, including for anxiety and PTSD; in one trial, the CCBT program MoodGYM was superior to informational websites in terms of psychological outcomes or service use. Studies have shown that individuals with social anxiety and depression experienced improvement with online CBT-based methods. A study assessing an online version of CBT for people with mild-to-moderate PTSD found that the online approach was as effective as, and cheaper than, the same therapy given face-to-face. A review of current CCBT research in the treatment of OCD in children found this interface to hold great potential for future treatment of OCD in youths and adolescent populations. Additionally, most internet interventions for post-traumatic stress disorder use CCBT. CCBT is also well suited to treating mood disorders among non-heterosexual populations, who may avoid face-to-face therapy for fear of stigma. However, at present, CCBT programs seldom cater to these populations.
In February 2006 NICE recommended that CCBT be made available for use within the NHS across England and Wales for patients presenting with mild-to-moderate depression, rather than immediately opting for antidepressant medication, and CCBT is made available by some health systems. The 2009 NICE guideline recognized that there are likely to be a number of computerized CBT products that are useful to patients, but removed endorsement of any specific product.
Smartphone app-delivered
Another new method of access is the use of mobile app or smartphone applications to deliver self-help or guided CBT. Technology companies are developing mobile-based artificial intelligence chatbot applications to deliver CBT as an early intervention to support mental health, build psychological resilience, and promote emotional well-being. Artificial intelligence (AI) text-based conversational applications delivered securely and privately over smartphone devices have the ability to scale globally and offer contextual, always-available support. Active research is underway, including real-world data studies that measure the effectiveness and engagement of text-based smartphone chatbot apps for delivery of CBT using a text-based conversational interface. Recent market research and analysis of over 500 online mental healthcare solutions identified three key challenges in this market: quality of the content, guidance of the user, and personalisation.
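As a rough illustration of how a text-based conversational interface can walk a user through a basic CBT exercise, the sketch below implements a minimal scripted thought-record prompt loop. It is a hypothetical toy, not the design of any commercial chatbot product mentioned above; a real application would need much richer conversation handling, personalization, clinical oversight, and safety features such as crisis escalation.

```python
# A toy, scripted thought-record dialogue loop. Purely illustrative:
# real CBT chatbot products use far more sophisticated conversation
# management, personalization, and safety (e.g., crisis) handling.
PROMPTS = [
    ("situation", "What situation triggered the difficult feeling?"),
    ("emotion", "What emotion did you feel, and how intense was it (0-100)?"),
    ("automatic_thought", "What thought went through your mind?"),
    ("evidence_for", "What evidence supports that thought?"),
    ("evidence_against", "What evidence does not support it?"),
    ("balanced_thought", "What is a more balanced way to see the situation?"),
]

def run_thought_record() -> dict:
    """Collect one structured thought record from the user."""
    record = {}
    for field, question in PROMPTS:
        record[field] = input(question + " ")
    return record

if __name__ == "__main__":
    completed = run_thought_record()
    print("Thought record saved:", completed)
```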
A study compared CBT alone with a mindfulness-based therapy combined with CBT, both delivered via an app. It found that mindfulness-based self-help reduced the severity of depression more than CBT self-help in the short-term. Overall, NHS costs for the mindfulness approach were £500 less per person than for CBT.
Reading self-help materials
Enabling patients to read self-help CBT guides has been shown to be effective by some studies. However, one study found a negative effect in patients who tended to ruminate, and another meta-analysis found that the benefit was only significant when the self-help was guided (e.g. by a medical professional).
Group educational course
Patient participation in group courses has been shown to be effective. In a meta-analysis reviewing evidence-based treatment of OCD in children, individual CBT was found to be more efficacious than group CBT.
Types
Brief cognitive behavioral therapy
Brief cognitive behavioral therapy (BCBT) is a form of CBT developed for situations in which there are time constraints on the therapy sessions, and specifically for those struggling with suicidal ideation or making suicide attempts. BCBT was based on Rudd's proposed "suicidal mode", an elaboration of Beck's modal theory. BCBT is designed to take place over a small number of sessions totaling up to 12 hours. The technique was first implemented and developed with soldiers on active duty by Dr. M. David Rudd to prevent suicide.
Breakdown of treatment
Orientation
  Commitment to treatment
  Crisis response and safety planning
  Means restriction
  Survival kit
  Reasons for living card
  Model of suicidality
  Treatment journal
  Lessons learned
Skill focus
  Skill development worksheets
  Coping cards
  Demonstration
  Practice
  Skill refinement
Relapse prevention
  Skill generalization
  Skill refinement
Cognitive emotional behavioral therapy
Cognitive emotional behavioral therapy (CEBT) is a form of CBT developed initially for individuals with eating disorders but now used with a range of problems including anxiety, depression, obsessive compulsive disorder (OCD), post-traumatic stress disorder (PTSD) and anger problems. It combines aspects of CBT and dialectical behavioral therapy and aims to improve understanding and tolerance of emotions in order to facilitate the therapeutic process. It is frequently used as a "pretreatment" to prepare and better equip individuals for longer-term therapy.
Structured cognitive behavioral training
Structured cognitive-behavioral training (SCBT) is a cognitive-based process with core philosophies that draw heavily from CBT. Like CBT, SCBT asserts that behavior is inextricably related to beliefs, thoughts, and emotions. SCBT also builds on core CBT philosophy by incorporating other well-known modalities in the fields of behavioral health and psychology: most notably, Albert Ellis's rational emotive behavior therapy. SCBT differs from CBT in two distinct ways. First, SCBT is delivered in a highly regimented format. Second, SCBT is a predetermined and finite training process that becomes personalized by the input of the participant. SCBT is designed to bring a participant to a specific result in a specific period of time. SCBT has been used to challenge addictive behavior, particularly with substances such as tobacco, alcohol and food, and to manage diabetes and subdue stress and anxiety. SCBT has also been used in the field of criminal psychology in the effort to reduce recidivism.
Moral reconation therapy
Moral reconation therapy, a type of CBT used to help felons overcome antisocial personality disorder (ASPD), slightly decreases the risk of further offending. It is generally implemented in a group format, because of the risk that one-on-one therapy may reinforce narcissistic behavioral characteristics in offenders with ASPD, and it can be used in correctional or outpatient settings. Groups usually meet weekly for two to six months.
Stress inoculation training
This type of therapy uses a blend of cognitive, behavioral, and certain humanistic training techniques to target the stressors of the client. This is usually used to help clients better cope with their stress or anxiety after stressful events. This is a three-phase process that trains the client to use skills that they already have to better adapt to their current stressors. The first phase is an interview phase that includes psychological testing, client self-monitoring, and a variety of reading materials. This allows the therapist to individually tailor the training process to the client. Clients learn how to categorize problems into emotion-focused or problem-focused so that they can better treat their negative situations. This phase ultimately prepares the client to eventually confront and reflect upon their current reactions to stressors, before looking at ways to change their reactions and emotions to their stressors. The focus is conceptualization.
The second phase emphasizes the aspect of skills acquisition and rehearsal that continues from the earlier phase of conceptualization. The client is taught skills that help them cope with their stressors. These skills are then practiced in the space of therapy. These skills involve self-regulation, problem-solving, interpersonal communication skills, etc.
The third and final phase is the application and following through of the skills learned in the training process. This gives the client opportunities to apply their learned skills to a wide range of stressors. Activities include role-playing, imagery, modeling, etc. In the end, the client will have been trained on a preventive basis to inoculate personal, chronic, and future stressors by breaking down their stressors into problems they will address in long-term, short-term, and intermediate coping goals.
Activity-guided CBT: Group-knitting
A recently developed group therapy model based on CBT integrates knitting into the therapeutic process and has been reported to yield reliable and promising results. The foundation for this novel approach to CBT is the frequently emphasized notion that therapy success depends on how embedded the therapy method is in the patients' natural routine. Similar to standard group-based CBT, patients meet once a week in a group of 10 to 15 patients and knit together under the instruction of a trained psychologist or mental health professional. Central to the therapy is the patient's imaginative ability to assign each part of the wool to a certain thought. During the therapy, the wool is carefully knitted, creating a knitted piece of any form. This therapeutic process teaches the patient to meaningfully align thoughts by (physically) creating a coherent knitted piece. Moreover, since CBT emphasizes behavior as a result of cognition, the knitting illustrates how thoughts (imaginatively tied to the wool) materialize into the reality surrounding us (Corkhill, B., Hemmings, J., Maddock, A., & Riley, J. (2014). "Knitting and Well-being". Textile, 12(1), 34–57; Dugas, M. J., Ladouceur, R., Léger, E., Freeston, M. H., Langolis, F., Provencher, M. D., & Boisvert, J. M. (2003). "Group cognitive-behavioral therapy for generalized anxiety disorder: treatment outcome and long-term follow-up". Journal of Consulting and Clinical Psychology, 71(4), 821).
Mindfulness-based cognitive behavioral hypnotherapy
Mindfulness-based cognitive behavioral hypnotherapy (MCBH) is a form of CBT that focuses on awareness in a reflective approach, addressing subconscious tendencies. It is a process comprising three phases for achieving desired goals, and it integrates the principles of mindfulness and cognitive-behavioral techniques with the transformative potential of hypnotherapy.
Unified Protocol
The Unified Protocol for Transdiagnostic Treatment of Emotional Disorders (UP) is a form of CBT, developed by David H. Barlow and researchers at Boston University, that can be applied to a range of anxiety and depressive disorders. The rationale is that anxiety and depressive disorders often occur together due to common underlying causes and can efficiently be treated together.
The UP includes a common set of components:
Psycho-education
Cognitive reappraisal
Emotion regulation
Changing behaviour
The UP has been shown to produce equivalent results to single-diagnosis protocols for specific disorders, such as OCD and social anxiety disorder.
Several studies have shown that the UP is easier to disseminate as compared to single-diagnosis protocols.
Culturally adapted CBT
The study of psychotherapy across races, religions, and cultures, or "ethno-psycho-therapy", is a relatively new discipline.
Criticisms
Relative effectiveness
The research conducted for CBT has been a topic of sustained controversy. While some researchers write that CBT is more effective than other treatments, many other researchers and practitioners have questioned the validity of such claims. For example, one study determined CBT to be superior to other treatments in treating anxiety and depression. However, researchers responding directly to that study conducted a re-analysis and found no evidence of CBT being superior to other, non-CBT treatments, and analyzed thirteen other CBT clinical trials, determining that they failed to provide evidence of CBT superiority. Another example is the £5 million PACE trial, whose results were claimed to show that CBT and graded exercise therapy (GET) cured myalgic encephalomyelitis/chronic fatigue syndrome. But when the PACE results were later comprehensively reanalyzed and subjects followed up, it was found that the CBT and GET treatments were not effective and possibly not safe. The PACE study's design was riddled with flaws such as the lack of both single and double blinding, the retrospective manipulation of primary outcome measures, and the systematic disregard of ME/CFS sufferers' negative health outcomes as a consequence of engaging in CBT and GET. In cases where CBT has been reported to be statistically better than other psychological interventions in terms of primary outcome measures, effect sizes were small and suggested that those differences were clinically insignificant. Nonetheless, CBT remains widely recognized for its structured approach to identifying and modifying maladaptive cognitive appraisals, which has been associated with improved emotional regulation in individuals with mood and anxiety disorders (Hofmann, S.G., Asnaani, A., Vonk, I.J., Sawyer, A.T., & Fang, A. (2012). "The Efficacy of Cognitive Behavioral Therapy: A Review of Meta-analyses". Cognitive Therapy and Research, 36(5), 427–440. doi:10.1007/s10608-012-9476-1). Moreover, on secondary outcomes (i.e., measures of general functioning), no significant differences have typically been found between CBT and other treatments.
A major criticism has been that clinical studies of CBT efficacy (or of any psychotherapy) are not double-blind (i.e., either the subjects or the therapists in psychotherapy studies are not blind to the type of treatment). They may be single-blinded: the rater may not know the treatment the patient received, but neither the patients nor the therapists are blinded to the type of therapy given, so two of the three parties involved in the trial, namely everyone involved in delivering or receiving the treatment, are unblinded. The patient is an active participant in correcting negative distorted thoughts, and is thus quite aware of the treatment group they are in.
The importance of double-blinding was shown in a meta-analysis that examined the effectiveness of CBT when placebo control and blindness were factored in. Pooled data from published trials of CBT in schizophrenia, major depressive disorder (MDD), and bipolar disorder that used controls for non-specific effects of intervention were analyzed. This study concluded that CBT is no better than non-specific control interventions in the treatment of schizophrenia and does not reduce relapse rates; treatment effects are small in treatment studies of MDD, and it is not an effective treatment strategy for prevention of relapse in bipolar disorder. For MDD, the authors note that the pooled effect size was very low.
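For readers unfamiliar with how pooled effect sizes of this kind are produced, the sketch below shows generic inverse-variance (fixed-effect) pooling of standardized mean differences. It illustrates the general technique only, with made-up inputs; the cited meta-analysis may have used different models, weights, and data.

```python
# Generic fixed-effect (inverse-variance) pooling of effect sizes.
# Hypothetical inputs: standardized mean differences (d) and their
# sampling variances from three imaginary trials, not data from the
# study discussed above.
effects = [0.25, 0.10, 0.40]      # per-trial standardized mean differences
variances = [0.04, 0.02, 0.09]    # per-trial sampling variances

weights = [1.0 / v for v in variances]
pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5

print(f"Pooled effect size: {pooled:.2f} (SE {pooled_se:.2f})")
```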
Declining effectiveness
Additionally, a 2015 meta-analysis revealed that the positive effects of CBT on depression have been declining since 1977. The overall results showed two different declines in effect sizes: 1) an overall decline between 1977 and 2014, and 2) a steeper decline between 1995 and 2014. Additional sub-analysis revealed that CBT studies where therapists in the test group were instructed to adhere to the Beck CBT manual had a steeper decline in effect sizes since 1977 than studies where therapists in the test group were instructed to use CBT without a manual. The authors reported that they were unsure why the effects were declining but did list inadequate therapist training, failure to adhere to a manual, lack of therapist experience, and patients' hope and faith in its efficacy waning as potential reasons. The authors did mention that the current study was limited to depressive disorders only.
High drop-out rates
Furthermore, other researchers write that CBT studies have high drop-out rates compared to other treatments. One meta-analysis found that CBT drop-out rates were 17% higher than those of other therapies. This high drop-out rate is also evident in the treatment of several disorders, particularly the eating disorder anorexia nervosa, which is commonly treated with CBT. Those treated with CBT have a high chance of dropping out of therapy before completion and reverting to their anorexia behaviors.
Other researchers analyzing treatments for youths who self-injure found similar drop-out rates in CBT and DBT groups. In this study, the researchers analyzed several clinical trials that measured the efficacy of CBT administered to youths who self-injure. The researchers concluded that none of them were found to be efficacious.
Philosophical concerns with CBT methods
The methods employed in CBT research have not been the only criticisms; some individuals have called its theory and therapy into question.
Slife and Williams write that one of the hidden assumptions in CBT is that of determinism, or the absence of free will. They argue that CBT holds that external stimuli from the environment enter the mind, causing different thoughts that cause emotional states: nowhere in CBT theory is agency, or free will, accounted for.
Another criticism of CBT theory, especially as applied to major depressive disorder (MDD), is that it confounds the symptoms of the disorder with its causes. Other criticisms include that CBT may view people as stupid or can address only simple or superficial problems.
Side effects
CBT is generally regarded as having very few if any side effects. Calls have been made by some for more appraisal of possible side effects of CBT. Many randomized trials of psychological interventions like CBT do not monitor potential harms to the patient. In contrast, randomized trials of pharmacological interventions are much more likely to take adverse effects into consideration.
A 2017 meta-analysis revealed that adverse events are not common in children receiving CBT and, furthermore, that CBT is associated with fewer dropouts than either placebo or medications. Nevertheless, CBT therapists do sometimes report 'unwanted events' and side effects in their outpatients with "negative wellbeing/distress" being the most frequent.
Society and culture
The UK's National Health Service announced in 2008 that more therapists would be trained to provide CBT at government expense as part of an initiative called Improving Access to Psychological Therapies (IAPT). The NICE said that CBT would become the mainstay of treatment for non-severe depression, with medication used only in cases where CBT had failed. Therapists complained that the data does not fully support the attention and funding CBT receives. Psychotherapist and professor Andrew Samuels stated that this constitutes "a coup, a power play by a community that has suddenly found itself on the brink of corralling an enormous amount of money ... Everyone has been seduced by CBT's apparent cheapness."
The UK Council for Psychotherapy issued a press release in 2012 saying that the IAPT's policies were undermining traditional psychotherapy and criticized proposals that would limit some approved therapies to CBT, claiming that they restricted patients to "a watered-down version of cognitive behavioural therapy (CBT), often delivered by very lightly trained staff".
External links
Association for Behavioral and Cognitive Therapies (ABCT)
British Association for Behavioural and Cognitive Psychotherapies
National Association of Cognitive-Behavioral Therapists
International Association of Cognitive Psychotherapy
Information on Research-based CBT Treatments
Coronary artery disease
https://en.wikipedia.org/wiki/Coronary_artery_disease
Coronary artery disease (CAD), also called coronary heart disease (CHD), or ischemic heart disease (IHD), is a type of heart disease involving the reduction of blood flow to the cardiac muscle due to a build-up of atheromatous plaque in the arteries of the heart. It is the most common of the cardiovascular diseases. CAD can cause stable angina, unstable angina, myocardial ischemia, and myocardial infarction.
A common symptom is angina, which is chest pain or discomfort that may travel into the shoulder, arm, back, neck, or jaw. Occasionally it may feel like heartburn. In stable angina, symptoms occur with exercise or emotional stress, last less than a few minutes, and improve with rest. Shortness of breath may also occur and sometimes no symptoms are present. In many cases, the first sign is a heart attack. Other complications include heart failure or an abnormal heartbeat.
Risk factors include high blood pressure, smoking, diabetes mellitus, lack of exercise, obesity, high blood cholesterol, poor diet, depression, and excessive alcohol consumption. A number of tests may help with diagnosis including electrocardiogram, cardiac stress testing, coronary computed tomographic angiography, biomarkers (high-sensitivity cardiac troponins) and coronary angiogram, among others.
Ways to reduce CAD risk include eating a healthy diet, regularly exercising, maintaining a healthy weight, and not smoking. Medications for diabetes, high cholesterol, or high blood pressure are sometimes used. There is limited evidence for screening people who are at low risk and do not have symptoms. Treatment involves the same measures as prevention. Additional medications such as antiplatelets (including aspirin), beta blockers, or nitroglycerin may be recommended. Procedures such as percutaneous coronary intervention (PCI) or coronary artery bypass surgery (CABG) may be used in severe disease. In those with stable CAD it is unclear if PCI or CABG in addition to the other treatments improves life expectancy or decreases heart attack risk.
In 2015, CAD affected 110 million people and resulted in 8.9 million deaths. It accounts for 15.6% of all deaths, making it the most common cause of death globally. The risk of death from CAD for a given age decreased between 1980 and 2010, especially in developed countries. The number of cases of CAD for a given age also decreased between 1990 and 2010. In the United States in 2010, about 20% of those over 65 had CAD, while it was present in 7% of those 45 to 64, and 1.3% of those 18 to 45; rates were higher among males than females of a given age.
Signs and symptoms
The most common symptom is chest pain or discomfort that occurs regularly with activity, after eating, or at other predictable times; this phenomenon is termed stable angina and is associated with narrowing of the arteries of the heart. Angina also includes chest tightness, heaviness, pressure, numbness, fullness, or squeezing. Angina that changes in intensity, character, or frequency is termed unstable. Unstable angina may precede myocardial infarction. In adults who go to the emergency department with an unclear cause of pain, about 30% have pain due to coronary artery disease. Angina, shortness of breath, sweating, nausea or vomiting, and lightheadedness are signs of a heart attack or myocardial infarction, and immediate emergency medical services are crucial.
With advanced disease, the narrowing of coronary arteries reduces the supply of oxygen-rich blood flowing to the heart, which becomes more pronounced during strenuous activities, during which the heart beats faster and has an increased oxygen demand. For some, this causes severe symptoms, while others experience no symptoms at all.
Symptoms in females
Symptoms in females can differ from those in males, and the most common symptom reported by females of all races is shortness of breath. Other symptoms more commonly reported by females than males are extreme fatigue, sleep disturbances, indigestion, and anxiety. However, some females experience irregular heartbeat, dizziness, sweating, and nausea. Burning, pain, or pressure in the chest or upper abdomen that can travel to the arm or jaw can also be experienced in females, but females less commonly report it than males. Generally, females experience symptoms 10 years later than males. Females are less likely to recognize symptoms and seek treatment.
Risk factors
Coronary artery disease is characterized by heart problems that result from atherosclerosis (Institute of Medicine (US) Committee on Social Security Cardiovascular Disability Criteria (2010). Cardiovascular Disability: Updating the Social Security Listings, "Ischemic Heart Disease". National Academies Press). Atherosclerosis is a type of arteriosclerosis, the "chronic inflammation of the arteries which causes them to harden and accumulate cholesterol plaques (atheromatous plaques) on the artery walls" (Tenas, M. S. & Torres, M. F. (2018). "What is Ischaemic Heart Disease?" Clinic Barcelona). CAD has several well-determined risk factors contributing to atherosclerosis. These include "smoking, diabetes, high blood pressure (hypertension), abnormal (high) amounts of cholesterol and other fat in the blood (dyslipidemia), type 2 diabetes and being overweight or obese (having excess body fat)", often linked to lack of exercise and a poor diet (Nordestgaard, B. G., et al. (2012). "The Effect of Elevated Body Mass Index on Ischemic Heart Disease Risk: Causal Estimates from a Mendelian Randomisation Approach". PLoS Medicine, 9(5), e1001212). Other risk factors include high blood pressure, smoking, diabetes, lack of exercise, obesity, high blood cholesterol, poor diet, depression, family history, psychological stress and excessive alcohol consumption. About half of cases are linked to genetics. Apart from these classical risk factors, several unconventional risk factors have also been studied, including high serum fibrinogen, high C-reactive protein (CRP), chronic inflammatory conditions, hypovitaminosis D, high lipoprotein(a) levels, and elevated serum homocysteine. Smoking and obesity are associated with about 36% and 20% of cases, respectively. Smoking just one cigarette per day about doubles the risk of CAD. Lack of exercise has been linked to 7–12% of cases. Exposure to the herbicide Agent Orange may increase risk. Rheumatologic diseases such as rheumatoid arthritis, systemic lupus erythematosus, psoriasis, and psoriatic arthritis are independent risk factors as well.
Job stress appears to play a minor role, accounting for about 3% of cases. In one study, females who were free of stress from work life saw an increase in the diameter of their blood vessels, leading to decreased progression of atherosclerosis. In contrast, females who had high levels of work-related stress experienced a decrease in the diameter of their blood vessels and significantly increased disease progression.
Air pollution
Air pollution, both indoor and outdoor, is responsible for roughly 28% of deaths from CAD. This varies by region: In highly developed areas, this is approximately 10%, whereas in Southern, East and West Africa, and South Asia, approximately 40% of deaths from CAD can be attributed to unhealthy air. In particular, fine particle pollution (PM2.5), which comes mostly from the burning of fossil fuels, is a key risk factor for CAD.
Blood fats
The consumption of different types of fats, including trans (trans unsaturated) and saturated fats, in a diet "influences the level of cholesterol that is present in the bloodstream". Unsaturated fats originate from plant sources (such as oils). There are two types of unsaturated fats, cis and trans isomers. Cis unsaturated fats are bent in molecular structure and trans are linear. Saturated fats originate from animal sources (such as animal fats) and are also molecularly linear in structure. The linear configurations of trans unsaturated and saturated fats allow them to easily accumulate and stack at the arterial walls when consumed in high amounts (and when other positive measures towards physical health are not taken).
Fats and cholesterol are insoluble in blood and thus are amalgamated with proteins to form lipoproteins for transport. Low-density lipoproteins (LDL) transport cholesterol from the liver to the rest of the body and raise blood cholesterol levels. The consumption of "saturated fats increases LDL levels within the body, thus raising blood cholesterol levels".
High-density lipoproteins (HDL) are considered 'good' lipoproteins as they search for excess cholesterol in the body and transport it back to the liver for disposal. Trans fats also "increase LDL levels whilst decreasing HDL levels within the body, significantly raising blood cholesterol levels".
High levels of cholesterol in the bloodstream lead to atherosclerosis. With increased levels of LDL in the bloodstream, "LDL particles will form deposits and accumulate within the arterial walls, which will lead to the development of plaques, restricting blood flow". The resultant reduction in the heart's blood supply due to atherosclerosis in coronary arteries "causes shortness of breath, angina pectoris (chest pains that are usually relieved by rest), and potentially fatal heart attacks (myocardial infarctions)".
Genetics
The heritability of coronary artery disease has been estimated between 40% and 60%. Genome-wide association studies have identified over 160 genetic susceptibility loci for coronary artery disease.
Transcriptome
Several RNA transcripts associated with CAD (FoxP1, ICOSLG, IKZF4/Eos, SMYD3, TRIM28, and TCF3/E2A) are likely markers of regulatory T cells (Tregs), consistent with known reductions in Tregs in CAD.
The RNA changes are mostly related to ciliary and endocytic transcripts, which in the circulating immune system would be related to the immune synapse. One of the most differentially expressed genes, fibromodulin (FMOD), which is increased 2.8-fold in CAD, is found mainly in connective tissue and is a modulator of the TGF-beta signaling pathway. However, not all RNA changes may be related to the immune synapse. For example, Nebulette, the most down-regulated transcript (2.4-fold), is found in cardiac muscle; it is a 'cytolinker' that connects actin and desmin to facilitate cytoskeletal function and vesicular movement. The endocytic pathway is further modulated by changes in tubulin, a key microtubule protein, and fidgetin, a tubulin-severing enzyme that is a marker for cardiovascular risk identified by genome-wide association study. Protein recycling would be modulated by changes in the proteasomal regulator SIAH3 and the ubiquitin ligase MARCHF10. On the ciliary aspect of the immune synapse, several of the modulated transcripts are related to ciliary length and function. Stereocilin is a partner to mesothelin, a related super-helical protein, whose transcript is also modulated in CAD. DCDC2, a double-cortin protein, modulates ciliary length. In the signaling pathways of the immune synapse, numerous transcripts are directly related to T-cell function and the control of differentiation. Butyrophilin is a co-regulator for T cell activation. Fibromodulin modulates the TGF-beta signaling pathway, a primary determinant of Treg differentiation. Further impact on the TGF-beta pathway is reflected in concurrent changes in the BMP receptor 1B RNA (BMPR1B), because the bone morphogenetic proteins are members of the TGF-beta superfamily and likewise impact Treg differentiation. Several of the transcripts (TMEM98, NRCAM, SFRP5, SHISA2) are elements of the Wnt signaling pathway, which is a major determinant of Treg differentiation.
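Fold changes such as the 2.8-fold increase in FMOD and the 2.4-fold decrease in Nebulette are often reported on a log2 scale in transcriptomic work. The conversion is simple arithmetic, sketched below for illustration; the fold-change values come from the text, but the log2 figures are derived here, not reported.

```python
import math

# Fold changes quoted in the text; log2 values are computed here purely
# for illustration of the usual transcriptomics convention.
fold_changes = {
    "FMOD": 2.8,           # fibromodulin, up-regulated 2.8-fold in CAD
    "Nebulette": 1 / 2.4,  # down-regulated 2.4-fold, expressed as a ratio < 1
}

for gene, fc in fold_changes.items():
    print(f"{gene}: fold change {fc:.2f}, log2 fold change {math.log2(fc):+.2f}")
```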
Other
Endometriosis in females under the age of 40.
Depression and hostility appear to be risks.
The number of categories of adverse childhood experiences (psychological, physical, or sexual abuse; violence against the mother; or living with household members who were substance abusers, mentally ill, suicidal, or incarcerated) showed a graded correlation with the presence of adult diseases, including coronary artery (ischemic heart) disease.
Hemostatic factors: High levels of fibrinogen and coagulation factor VII are associated with an increased risk of CAD.
Low hemoglobin.
In the Asian population, the β-fibrinogen gene −455G/A polymorphism was associated with the risk of CAD.
Patient-specific vessel ageing or remodelling determines endothelial cell behaviour and thus disease growth and progression. Such 'hemodynamic markers' are patient-specific risk surrogates.
HIV is a known risk factor for developing atherosclerosis and coronary artery disease.
Pathophysiology
Limitation of blood flow to the heart causes ischemia (cell starvation secondary to a lack of oxygen) of the heart's muscle cells. The heart's muscle cells may die from lack of oxygen; this is called a myocardial infarction (commonly referred to as a heart attack). It leads to damage, death, and eventual scarring of the heart muscle without regrowth of heart muscle cells. Chronic high-grade narrowing of the coronary arteries can induce transient ischemia, which may trigger a ventricular arrhythmia that can degenerate into a dangerous heart rhythm known as ventricular fibrillation, which often leads to death.
Typically, coronary artery disease occurs when part of the smooth, elastic lining inside a coronary artery (the arteries that supply blood to the heart muscle) develops atherosclerosis. With atherosclerosis, the artery's lining becomes hardened, stiffened, and accumulates deposits of calcium, fatty lipids, and abnormal inflammatory cells – to form a plaque. Calcium phosphate (hydroxyapatite) deposits in the muscular layer of the blood vessels appear to play a significant role in stiffening the arteries and inducing the early phase of coronary arteriosclerosis. This can be seen in a so-called metastatic mechanism of calciphylaxis as it occurs in chronic kidney disease and hemodialysis. Although these people have kidney dysfunction, almost fifty percent of them die due to coronary artery disease. Plaques can be thought of as large "pimples" that protrude into the channel of an artery, causing partial obstruction to blood flow. People with coronary artery disease might have just one or two plaques or might have dozens distributed throughout their coronary arteries. A more severe form is chronic total occlusion (CTO) when a coronary artery is completely obstructed for more than 3 months.
Microvascular angina is a type of angina pectoris in which chest pain and discomfort occur without evidence of blockages in the larger (epicardial) coronary arteries on coronary angiography.
The exact cause of microvascular angina is unknown. Explanations include microvascular dysfunction or epicardial atherosclerosis. For reasons that are not well understood, females are more likely than males to have it; however, hormones and other risk factors unique to females may play a role.
Diagnosis
The diagnosis of CAD depends largely on the nature of the symptoms and imaging. The first investigation when CAD is suspected is an electrocardiogram (ECG/EKG), both for stable angina and acute coronary syndrome. An X-ray of the chest, blood tests, and resting echocardiography may be performed.
For stable symptomatic patients, several non-invasive tests can diagnose CAD depending on a pre-test assessment of the risk profile. Noninvasive imaging options include: computed tomography angiography (CTA) (anatomical imaging, the best test in patients with a low-risk profile to "rule out" the disease), positron emission tomography (PET), single-photon emission computed tomography (SPECT)/nuclear stress test/myocardial scintigraphy, and stress echocardiography (the latter three can be summarized as functional noninvasive methods and are typically better suited to "rule in" disease). Exercise ECG or stress testing is inferior to non-invasive imaging methods because of the risk of false-negative and false-positive results. The use of non-invasive imaging is not recommended in individuals who are exhibiting no symptoms and are otherwise at low risk for developing coronary disease.
Invasive testing with coronary angiography (ICA) can be used when non-invasive testing is inconclusive or shows a high event risk.
The diagnosis of microvascular angina (previously known as cardiac syndrome X) – the rare form of coronary artery disease that is more common in females, as mentioned – is a diagnosis of exclusion. Therefore, usually, the same tests are used as in any person suspected of having coronary artery disease:
Intravascular ultrasound
Magnetic resonance imaging (MRI)
Stable angina
Stable angina is the most common manifestation of ischemic heart disease, and is associated with reduced quality of life and increased mortality. It is caused by epicardial coronary stenosis, which results in reduced blood flow and oxygen supply to the myocardium.
Stable angina is short-term chest pain during physical exertion caused by an imbalance between myocardial oxygen supply and metabolic oxygen demand. Various forms of cardiac stress tests may be used to induce both symptoms and detect changes by way of electrocardiography (using an ECG), echocardiography (using ultrasound of the heart) or scintigraphy (using uptake of radionuclide by the heart muscle). If part of the heart seems to receive an insufficient blood supply, coronary angiography may be used to identify stenosis of the coronary arteries and suitability for angioplasty or bypass surgery.
In minor to moderate cases, nitroglycerine may be used to alleviate acute symptoms of stable angina or may be used immediately before exertion to prevent the onset of angina. Sublingual nitroglycerine is most commonly used to provide rapid relief for acute angina attacks and as a complement to anti-anginal treatments in patients with refractory and recurrent angina. When nitroglycerine enters the bloodstream, it forms free radical nitric oxide, or NO, which activates guanylate cyclase and in turn stimulates the release of cyclic GMP. This molecular signaling stimulates smooth muscle relaxation, resulting in vasodilation and consequently improved blood flow to heart regions affected by atherosclerotic plaque.
Stable coronary artery disease (SCAD) is also often called stable ischemic heart disease (SIHD). A 2015 monograph explains that "Regardless of the nomenclature, stable angina is the chief manifestation of SIHD or SCAD." There are U.S. and European clinical practice guidelines for SIHD/SCAD. In patients with non-severe asymptomatic aortic valve stenosis and no overt coronary artery disease, increased troponin T (above 14 pg/mL) was found to be associated with an increased 5-year rate of ischemic cardiac events (myocardial infarction, percutaneous coronary intervention, or coronary artery bypass surgery).
Acute coronary syndrome
Diagnosis of acute coronary syndrome generally takes place in the emergency department, where ECGs may be performed sequentially to identify "evolving changes" (indicating ongoing damage to the heart muscle). Diagnosis is clear-cut if ECGs show elevation of the "ST segment", which in the context of severe typical chest pain is strongly indicative of an acute myocardial infarction (MI); this is termed a STEMI (ST-elevation MI) and is treated as an emergency with either urgent coronary angiography and percutaneous coronary intervention (angioplasty with or without stent insertion) or with thrombolysis ("clot buster" medication), whichever is available. In the absence of ST-segment elevation, heart damage is detected by cardiac markers (blood tests that identify heart muscle damage). If there is evidence of damage (infarction), the chest pain is attributed to a "non-ST elevation MI" (NSTEMI). If there is no evidence of damage, the term "unstable angina" is used. This process usually necessitates hospital admission and close observation on a coronary care unit for possible complications (such as cardiac arrhythmias – irregularities in the heart rate). Depending on the risk assessment, stress testing or angiography may be used to identify and treat coronary artery disease in patients who have had an NSTEMI or unstable angina.
Risk assessment
There are various risk assessment systems for determining the risk of coronary artery disease, with different emphases on the variables above. A notable example is the Framingham Risk Score, used in the Framingham Heart Study. It is mainly based on age, gender, diabetes, total cholesterol, HDL cholesterol, tobacco smoking, and systolic blood pressure. When predicting risk in younger adults (18–39 years old), the Framingham Risk Score remains below 10–12% for all deciles of baseline-predicted risk.
Polygenic score is another way of risk assessment. In one study, the relative risk of incident coronary events was 91% higher among participants at high genetic risk than among those at low genetic risk.
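As a rough illustration of how such a score is computed (a minimal sketch with hypothetical variants and effect sizes; the function name and numbers are illustrative assumptions, not the scoring model of the cited study), a polygenic score is typically a weighted sum over genetic variants of the number of risk alleles a person carries, with per-allele weights taken from genome-wide association study effect sizes. Cohorts are then stratified by score, and event rates in the top and bottom strata are compared, as in the result quoted above.

# Minimal polygenic risk score sketch (hypothetical inputs; illustration only).
def polygenic_risk_score(dosages, weights):
    # dosages: dict of variant id -> risk-allele count for one person (0, 1, or 2)
    # weights: dict of variant id -> per-allele effect size (e.g. log odds ratio from a GWAS)
    return sum(effect * dosages.get(variant, 0) for variant, effect in weights.items())

# Example with made-up variants and effect sizes:
weights = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}
dosages = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
score = polygenic_risk_score(dosages, weights)   # 2*0.12 + 1*(-0.05) + 0*0.30 ≈ 0.19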
Prevention
Up to 90% of cardiovascular disease may be preventable if established risk factors are avoided. Prevention involves adequate physical exercise, decreasing obesity, treating high blood pressure, eating a healthy diet, decreasing cholesterol levels, and stopping smoking. Medications and exercise are roughly equally effective. High levels of physical activity reduce the risk of coronary artery disease by about 25%. Life's Essential 8 are the key measures for improving and maintaining cardiovascular health, as defined by the American Heart Association. AHA added sleep as a factor influencing heart health in 2022.
Most guidelines recommend combining these preventive strategies. A 2015 Cochrane Review found some evidence that counseling and education to bring about behavioral change might help in high-risk groups. However, there was insufficient evidence to show an effect on mortality or actual cardiovascular events.
In diabetes mellitus, there is little evidence that very tight blood sugar control improves cardiac risk, although improved sugar control appears to decrease other problems such as kidney failure and blindness.
A 2024 study published in The Lancet Diabetes & Endocrinology found that the oral glucose tolerance test (OGTT) is more effective than hemoglobin A1c (HbA1c) for detecting dysglycemia in patients with coronary artery disease. The study highlighted that 2-hour post-load glucose levels of at least 9 mmol/L were strong predictors of cardiovascular outcomes, while HbA1c levels of at least 5.9% were also significant but not independently associated when combined with OGTT results.
Diet
A diet high in fruits and vegetables decreases the risk of cardiovascular disease and death. Vegetarians have a lower risk of heart disease, possibly due to their greater consumption of fruits and vegetables. Evidence also suggests that the Mediterranean diet and a high fiber diet lower the risk.
The consumption of trans fat (commonly found in hydrogenated products such as margarine) has been shown to cause a precursor to atherosclerosis and increase the risk of coronary artery disease.
Evidence does not support a beneficial role for omega-3 fatty acid supplementation in preventing cardiovascular disease (including myocardial infarction and sudden cardiac death).
Secondary prevention
Secondary prevention is preventing further sequelae of already established disease. Effective lifestyle changes include:
Weight control
Smoking cessation
Avoiding the consumption of trans fats (in partially hydrogenated oils)
Decreasing psychosocial stress
Exercise
Aerobic exercise, like walking, jogging, or swimming, can reduce the risk of mortality from coronary artery disease. Aerobic exercise can help decrease blood pressure and the amount of blood cholesterol (LDL) over time. It also increases HDL cholesterol.
Although exercise is beneficial, it is unclear whether doctors should spend time counseling patients to exercise. The U.S. Preventive Services Task Force found "insufficient evidence" to recommend that doctors counsel patients on exercise, but "it did not review the evidence for the effectiveness of physical activity to reduce chronic disease, morbidity, and mortality", only the effectiveness of counseling itself. The American Heart Association, based on a non-systematic review, recommends that doctors counsel patients on exercise.
Psychological symptoms are common in people with CHD. Many psychological treatments may be offered following cardiac events. There is no evidence that they change mortality, the risk of revascularization procedures, or the rate of non-fatal myocardial infarction.
Antibiotics for secondary prevention of coronary heart disease
Early studies suggested that antibiotics might help patients with coronary disease reduce the risk of heart attacks and strokes. However, a 2021 Cochrane meta-analysis found that antibiotics given for secondary prevention of coronary heart disease are harmful, with increased mortality and occurrence of stroke in those treated. Antibiotic use is therefore not currently supported for the secondary prevention of coronary heart disease.
Neuropsychological assessment
A systematic review found a link between CHD and brain dysfunction in females. Since research suggests that cardiovascular diseases such as CHD can act as precursors for dementias such as Alzheimer's disease, individuals with CHD should have a neuropsychological assessment.
Treatment
There are a number of treatment options for coronary artery disease:
Lifestyle changes
Medical treatment – commonly prescribed drugs (e.g., cholesterol lowering medications, beta-blockers, nitroglycerin, calcium channel blockers, etc.);
Coronary interventions as angioplasty and coronary stent;
Coronary artery bypass grafting (CABG)
Medications
Statins, which reduce cholesterol, reduce the risk of coronary artery disease
Nitroglycerin
Calcium channel blockers and/or beta-blockers
Antiplatelet drugs such as aspirin
It is recommended that blood pressure typically be reduced to less than 140/90 mmHg. The diastolic blood pressure should not be below 60 mmHg. Beta-blockers are recommended first line for this use.
Aspirin
In those with no previous history of heart disease, aspirin decreases the risk of a myocardial infarction but does not change the overall risk of death. Aspirin therapy to prevent heart disease is thus recommended only in adults who are at increased risk for cardiovascular events, which may include postmenopausal females, males above 40, and younger people with risk factors for coronary heart disease, including high blood pressure, a family history of heart disease, or diabetes. The benefits outweigh the harms most favorably in people at high risk for a cardiovascular event, where high risk is defined as at least a 3% chance over five years, but others with lower risk may still find the potential benefits worth the associated risks.
Anti-platelet therapy
Clopidogrel plus aspirin (dual anti-platelet therapy) reduces cardiovascular events more than aspirin alone in those with a STEMI. In others at high risk but not having an acute event, the evidence is weak. Specifically, its use does not change the risk of death in this group. In those who have had a stent, more than 12 months of clopidogrel plus aspirin does not affect the risk of death.
Surgery
Revascularization for acute coronary syndrome has a mortality benefit. Percutaneous revascularization for stable ischaemic heart disease does not appear to have benefits over medical therapy alone. In those with disease in more than one artery, coronary artery bypass grafts appear better than percutaneous coronary interventions. Newer "anaortic" or no-touch off-pump coronary artery revascularization techniques have shown reduced postoperative stroke rates comparable to percutaneous coronary intervention. Hybrid coronary revascularization has also been shown to be a safe and feasible procedure that may offer some advantages over conventional CABG though it is more expensive.
Epidemiology
As of 2010, CAD was the leading cause of death globally resulting in over 7 million deaths. This increased from 5.2 million deaths from CAD worldwide in 1990. It may affect individuals at any age but becomes dramatically more common at progressively older ages, with approximately a tripling with each decade of life. Males are affected more often than females.
The World Health Organization reported that: "The world's biggest killer is ischemic heart disease, responsible for 13% of the world's total deaths. Since 2000, the largest increase in deaths has been for this disease, rising by 2.7 million to 9.1 million deaths in 2021."
It is estimated that 60% of the world's cardiovascular disease burden will occur in the South Asian subcontinent despite only accounting for 20% of the world's population. This may be secondary to a combination of genetic predisposition and environmental factors. Organizations such as the Indian Heart Association are working with the World Heart Federation to raise awareness about this issue.Indian Heart Association Why South Asians Facts , 29 April 2015; accessed 26 October 2015.
Coronary artery disease is the leading cause of death for both males and females and accounts for approximately 600,000 deaths in the United States every year. According to present trends in the United States, half of healthy 40-year-old males will develop CAD in the future, and one in three healthy 40-year-old females. It is the most common reason for death of males and females over 20 years of age in the United States.
A recent meta-analysis of data from 2,111,882 patients found that the incidence of coronary artery disease in breast cancer survivors was 4.29 (95% CI 3.09–5.94) per 1000 person-years.
Society and culture
Names
Other terms sometimes used for this condition are "hardening of the arteries" and "narrowing of the arteries". In Latin it is known as morbus ischaemicus cordis (MIC).
Support groups
The Infarct Combat Project (ICP) is an international nonprofit organization founded in 1998 which tries to decrease ischemic heart diseases through education and research.
Industry influence on research
In 2016, research into the internal documents of the Sugar Research Foundation revealed that the trade association for the sugar industry in the US had sponsored an influential literature review, published in 1965 in the New England Journal of Medicine, that downplayed early findings about the role of a sugar-heavy diet in the development of CAD and emphasized the role of fat; that review influenced decades of research funding and guidance on healthy eating.O'Connor, Anahad, "How the Sugar Industry Shifted Blame to Fat" , The New York Times, 12 September 2016. Retrieved 12 September 2016.
Research
Research efforts are focused on new angiogenic treatment modalities and various (adult) stem-cell therapies. A region on chromosome 17 has been linked to families with multiple cases of myocardial infarction. Other genome-wide studies have identified a firm risk variant on chromosome 9 (9p21.3). However, these and other loci are found in intergenic segments and need further research to understand how they affect the phenotype.
A more controversial link is that between Chlamydophila pneumoniae infection and atherosclerosis. While this intracellular organism has been demonstrated in atherosclerotic plaques, evidence is inconclusive regarding whether it can be considered a causative factor. Treatment with antibiotics in patients with proven atherosclerosis has not demonstrated a decreased risk of heart attacks or other coronary vascular diseases.
Myeloperoxidase (MPO) has been proposed to be involved in the development of CAD. In a 2024 study, MPO serum levels in individuals with CAD did not differ significantly from those in controls, though they were slightly lower on average. In addition, serum concentration showed no significant association with the disease extent.
Plant-based nutrition has been suggested as a way to reverse coronary artery disease, but strong evidence is still lacking for claims of potential benefits.
Several immunosuppressive drugs targeting the chronic inflammation in coronary artery disease have been tested.
See also
Mental stress-induced myocardial ischemia
References
External links
Risk Assessment of having a heart attack or dying of coronary artery disease, from the American Heart Association.
Category:Aging-associated diseases
Category:Heart diseases
Category:Ischemic heart diseases
Category:Wikipedia medicine articles ready to translate
Category:Wikipedia emergency medicine articles ready to translate
medicine_health | 4,869

6115 | P versus NP problem | https://en.wikipedia.org/wiki/P_versus_NP_problem
The P versus NP problem is a major unsolved problem in theoretical computer science. Informally, it asks whether every problem whose solution can be quickly verified can also be quickly solved.
Here, "quickly" means an algorithm exists that solves the task and runs in polynomial time (as opposed to, say, exponential time), meaning the task completion time is bounded above by a polynomial function on the size of the input to the algorithm. The general class of questions that some algorithm can answer in polynomial time is "P" or "class P". For some questions, there is no known way to find an answer quickly, but if provided with an answer, it can be verified quickly. The class of questions where an answer can be verified in polynomial time is "NP", standing for "nondeterministic polynomial time".A nondeterministic Turing machine can move to a state that is not determined by the previous state. Such a machine could solve an NP problem in polynomial time by falling into the correct answer state (by luck), then conventionally verifying it. Such machines are not practical for solving realistic problems but can be used as theoretical models.
An answer to the P versus NP question would determine whether problems that can be verified in polynomial time can also be solved in polynomial time. If P ≠ NP, which is widely believed, it would mean that there are problems in NP that are harder to compute than to verify: they could not be solved in polynomial time, but the answer could be verified in polynomial time.
The problem has been called the most important open problem in computer science. Aside from being an important problem in computational theory, a proof either way would have profound implications for mathematics, cryptography, algorithm research, artificial intelligence, game theory, multimedia processing, philosophy, economics and many other fields.
It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute, each of which carries a US$1,000,000 prize for the first correct solution.
Example
Consider the following yes/no problem: given an incomplete Sudoku grid of size n² × n², is there at least one legal solution where every row, column, and n × n square contains the integers 1 through n²? It is straightforward to verify "yes" instances of this generalized Sudoku problem given a candidate solution. However, it is not known whether there is a polynomial-time algorithm that can correctly answer "yes" or "no" to all instances of this problem. Therefore, generalized Sudoku is in NP (quickly verifiable), but may or may not be in P (quickly solvable). (It is necessary to consider a generalized version of Sudoku, as any fixed-size Sudoku has only a finite number of possible grids. In this case the problem is in P, as the answer can be found by table lookup.)
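To make "quickly verifiable" concrete, the following sketch (an illustration added here, not part of the original text; the function name and the convention that empty puzzle cells are 0 are assumptions) checks a candidate solution for an n² × n² grid in time polynomial in the grid size. No comparably fast procedure is known for finding such a candidate in the first place, which is exactly the asymmetry the P versus NP question formalizes.

# Polynomial-time verifier for generalized Sudoku (illustrative sketch).
def verify_sudoku(puzzle, candidate, n):
    # puzzle, candidate: (n*n) x (n*n) lists of ints; 0 marks an empty puzzle cell
    size = n * n
    required = set(range(1, size + 1))
    # the candidate must agree with every filled cell of the puzzle
    if any(puzzle[r][c] not in (0, candidate[r][c]) for r in range(size) for c in range(size)):
        return False
    rows_ok = all(set(candidate[r]) == required for r in range(size))
    cols_ok = all({candidate[r][c] for r in range(size)} == required for c in range(size))
    boxes_ok = all(
        {candidate[br * n + i][bc * n + j] for i in range(n) for j in range(n)} == required
        for br in range(n) for bc in range(n)
    )
    return rows_ok and cols_ok and boxes_ok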
History
The precise statement of the P versus NP problem was introduced in 1971 by Stephen Cook in his seminal paper "The complexity of theorem proving procedures" (and independently by Leonid Levin in 1973).
Although the P versus NP problem was formally defined in 1971, there were previous inklings of the problems involved, the difficulty of proof, and the potential consequences. In 1955, mathematician John Nash wrote a letter to the National Security Agency, speculating that the time required to crack a sufficiently complex code would increase exponentially with the length of the key. If proved (and Nash was suitably skeptical), this would imply what is now called P ≠ NP, since a proposed key can be verified in polynomial time. Another mention of the underlying problem occurred in a 1956 letter written by Kurt Gödel to John von Neumann. Gödel asked whether theorem-proving (now known to be co-NP-complete) could be solved in quadratic or linear time, and pointed out one of the most important consequences—that if so, then the discovery of mathematical proofs could be automated.
Context
The relation between the complexity classes P and NP is studied in computational complexity theory, the part of the theory of computation dealing with the resources required during computation to solve a given problem. The most common resources are time (how many steps it takes to solve a problem) and space (how much memory it takes to solve a problem).
In such analysis, a model of the computer for which time must be analyzed is required. Typically such models assume that the computer is deterministic (given the computer's present state and any inputs, there is only one possible action that the computer might take) and sequential (it performs actions one after the other).
In this theory, the class P consists of all decision problems (defined below) solvable on a deterministic sequential machine in a duration polynomial in the size of the input; the class NP consists of all decision problems whose positive solutions are verifiable in polynomial time given the right information, or equivalently, whose solution can be found in polynomial time on a non-deterministic machine.Sipser, Michael: Introduction to the Theory of Computation, Second Edition, International Edition, page 270. Thomson Course Technology, 2006. Definition 7.19 and Theorem 7.20. Clearly, P ⊆ NP. Arguably, the biggest open question in theoretical computer science concerns the relationship between those two classes:
Is P equal to NP?
Since 2002, William Gasarch has conducted three polls of researchers concerning this and related questions. Confidence that P ≠ NP has been increasing – in 2019, 88% believed P ≠ NP, as opposed to 83% in 2012 and 61% in 2002. When restricted to experts, the 2019 figure rose to 99%. These polls say nothing about whether P = NP is true; Gasarch himself stated: "This does not bring us any closer to solving P=?NP or to knowing when it will be solved, but it attempts to be an objective report on the subjective opinion of this era."
NP-completeness
To attack the P = NP question, the concept of NP-completeness is very useful. NP-complete problems are problems that any other NP problem is reducible to in polynomial time and whose solution is still verifiable in polynomial time. That is, any NP problem can be transformed into any NP-complete problem. Informally, an NP-complete problem is an NP problem that is at least as "tough" as any other problem in NP.
NP-hard problems are those at least as hard as NP problems; i.e., all NP problems can be reduced (in polynomial time) to them. NP-hard problems need not be in NP; i.e., they need not have solutions verifiable in polynomial time.
For instance, the Boolean satisfiability problem is NP-complete by the Cook–Levin theorem, so any instance of any problem in NP can be transformed mechanically into a Boolean satisfiability problem in polynomial time. The Boolean satisfiability problem is one of many NP-complete problems. If any NP-complete problem is in P, then it would follow that P = NP. However, many important problems are NP-complete, and no fast algorithm for any of them is known.
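The contrast between verifying and solving can be sketched as follows (a toy illustration added here, not from the source; the encoding of clauses as lists of nonzero integers, where literal k means variable k is true and −k means it is false, is an assumption):

from itertools import product

# Checking a proposed assignment against a CNF formula takes time linear in the formula size.
def satisfies(clauses, assignment):
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause) for clause in clauses)

# The only general method known for finding an assignment is, in the worst case,
# not much better than trying all 2^n of them.
def brute_force_sat(clauses, num_vars):
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        if satisfies(clauses, assignment):
            return assignment
    return None

# (x1 OR NOT x2) AND (x2 OR x3) -- prints {1: False, 2: False, 3: True}
print(brute_force_sat([[1, -2], [2, 3]], 3))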
From the definition alone it is unintuitive that NP-complete problems exist; however, a trivial NP-complete problem can be formulated as follows: given a Turing machine M guaranteed to halt in polynomial time, does a polynomial-size input that M will accept exist? It is in NP because (given an input) it is simple to check whether M accepts the input by simulating M; it is NP-complete because the verifier for any particular instance of a problem in NP can be encoded as a polynomial-time machine M that takes the solution to be verified as input. Then the question of whether the instance is a yes or no instance is determined by whether a valid input exists.
The first natural problem proven to be NP-complete was the Boolean satisfiability problem, also known as SAT. As noted above, this is the Cook–Levin theorem; its proof that satisfiability is NP-complete contains technical details about Turing machines as they relate to the definition of NP. However, after this problem was proved to be NP-complete, proof by reduction provided a simpler way to show that many other problems are also NP-complete, including the game Sudoku discussed earlier. In this case, the proof shows that a solution of Sudoku in polynomial time could also be used to complete Latin squares in polynomial time. This in turn gives a solution to the problem of partitioning tri-partite graphs into triangles, which could then be used to find solutions for the special case of SAT known as 3-SAT, which then provides a solution for general Boolean satisfiability. So a polynomial-time solution to Sudoku leads, by a series of mechanical transformations, to a polynomial time solution of satisfiability, which in turn can be used to solve any other NP-problem in polynomial time. Using transformations like this, a vast class of seemingly unrelated problems are all reducible to one another, and are in a sense "the same problem".
Harder problems
Although it is unknown whether P = NP, problems outside of P are known. Just as the class P is defined in terms of polynomial running time, the class EXPTIME is the set of all decision problems that have exponential running time. In other words, any problem in EXPTIME is solvable by a deterministic Turing machine in O(2^p(n)) time, where p(n) is a polynomial function of n. A decision problem is EXPTIME-complete if it is in EXPTIME, and every problem in EXPTIME has a polynomial-time many-one reduction to it. A number of problems are known to be EXPTIME-complete. Because it can be shown that P ≠ EXPTIME, these problems are outside P, and so require more than polynomial time. In fact, by the time hierarchy theorem, they cannot be solved in significantly less than exponential time. Examples include finding a perfect strategy for chess positions on an N × N board and similar problems for other board games.
The problem of deciding the truth of a statement in Presburger arithmetic requires even more time. Fischer and Rabin proved in 1974 that every algorithm that decides the truth of Presburger statements of length n has a runtime of at least 2^(2^(cn)) for some constant c. Hence, the problem is known to need more than exponential run time. Even more difficult are the undecidable problems, such as the halting problem. They cannot be completely solved by any algorithm, in the sense that for any particular algorithm there is at least one input for which that algorithm will not produce the right answer; it will either produce the wrong answer, finish without giving a conclusive answer, or otherwise run forever without producing any answer at all.
It is also possible to consider questions other than decision problems. One such class, consisting of counting problems, is called #P: whereas an NP problem asks "Are there any solutions?", the corresponding #P problem asks "How many solutions are there?". Clearly, a #P problem must be at least as hard as the corresponding NP problem, since a count of solutions immediately tells if at least one solution exists, if the count is greater than zero. Surprisingly, some #P problems that are believed to be difficult correspond to easy (for example linear-time) P problems. For these problems, it is very easy to tell whether solutions exist, but thought to be very hard to tell how many. Many of these problems are #P-complete, and hence among the hardest problems in #P, since a polynomial time solution to any of them would allow a polynomial time solution to all other #P problems.
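A standard example of this gap (added here as an illustration; it is not drawn from the text above) is a formula in disjunctive normal form: deciding whether it has any satisfying assignment is easy, since a single non-contradictory term suffices, yet counting its satisfying assignments is #P-complete. The brute-force counter below is exponential in the number of variables, while the decision check is linear in the formula size.

from itertools import product

# terms: a DNF formula as a list of terms, each term a list of nonzero integer literals.
def dnf_satisfiable(terms):
    # satisfiable iff some term contains no complementary pair of literals -- linear time
    return any(all(-lit not in term for lit in term) for term in terms)

def dnf_count(terms, num_vars):
    # counts satisfying assignments by exhaustive enumeration -- exponential time
    count = 0
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        if any(all((lit > 0) == assignment[abs(lit)] for lit in term) for term in terms):
            count += 1
    return count

# (x1 AND NOT x2) OR (x2 AND x3)
print(dnf_satisfiable([[1, -2], [2, 3]]))   # True
print(dnf_count([[1, -2], [2, 3]], 3))      # 4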
Problems in NP not known to be in P or NP-complete
In 1975, Richard E. Ladner showed that if P ≠ NP, then there exist problems in NP that are neither in P nor NP-complete. Such problems are called NP-intermediate problems. The graph isomorphism problem, the discrete logarithm problem, and the integer factorization problem are examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be in P or to be NP-complete.
The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is in P, NP-complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete. If graph isomorphism is NP-complete, the polynomial time hierarchy collapses to its second level. Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due to László Babai, runs in quasi-polynomial time.
The integer factorization problem is the computational problem of determining the prime factorization of a given integer. Phrased as a decision problem, it is the problem of deciding whether the input has a factor less than k. No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the RSA algorithm. The integer factorization problem is in NP and in co-NP (and even in UP and co-UPLance Fortnow. Computational Complexity Blog: Complexity Class of the Week: Factoring. 13 September 2002.). If the problem is NP-complete, the polynomial time hierarchy will collapse to its first level (i.e., NP = co-NP). The most efficient known algorithm for integer factorization is the general number field sieve, which takes expected time of roughly exp(((64/9)^(1/3) + o(1)) (ln N)^(1/3) (ln ln N)^(2/3)) to factor an n-bit integer N. The best known quantum algorithm for this problem, Shor's algorithm, runs in polynomial time, although this does not indicate where the problem lies with respect to non-quantum complexity classes.
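For concreteness, the decision formulation and its certificate check can be sketched as follows (an illustration added here, not from the source). Trial division answers the question in time roughly proportional to k, which is exponential in the number of bits of the input, whereas verifying a claimed factor requires only one division.

# Decision version of factoring: does N have a divisor d with 1 < d < k?
def has_factor_less_than(N, k):
    # about k steps -- exponential in the bit-length of N when k is near sqrt(N)
    return any(N % d == 0 for d in range(2, min(k, N)))

# Verifying a certificate (a claimed divisor) is fast: one comparison and one division.
def verify_factor(N, d, k):
    return 1 < d < k and N % d == 0

print(has_factor_less_than(91, 10))   # True, since 7 divides 91
print(verify_factor(91, 7, 10))       # True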
Does P mean "easy"?
All of the above discussion has assumed that P means "easy" and "not in P" means "difficult", an assumption known as Cobham's thesis. It is a common assumption in complexity theory, but there are caveats.
First, it can be false in practice. A theoretical polynomial algorithm may have extremely large constant factors or exponents, rendering it impractical. For example, the problem of deciding whether a graph G contains H as a minor, where H is fixed, can be solved in a running time of O(n^2), where n is the number of vertices in G. However, the big O notation hides a constant that depends superexponentially on the number of vertices h of H; the constant is so large that it can only be reasonably expressed using Knuth's up-arrow notation.
On the other hand, even if a problem is shown to be NP-complete, and even if P ≠ NP, there may still be effective approaches to the problem in practice. There are algorithms for many NP-complete problems, such as the knapsack problem, the traveling salesman problem, and the Boolean satisfiability problem, that can solve to optimality many real-world instances in reasonable time. The empirical average-case complexity (time vs. problem size) of such algorithms can be surprisingly low. An example is the simplex algorithm in linear programming, which works surprisingly well in practice; despite having exponential worst-case time complexity, it runs on par with the best known polynomial-time algorithms.
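The knapsack problem mentioned above illustrates this concretely. Under the assumption of integer weights (a sketch added here, not taken from the source), a standard dynamic program runs in O(n·W) time for n items and capacity W. This is "pseudo-polynomial" (exponential in the number of bits of W), so it does not contradict NP-completeness, yet it handles many realistic instances quickly.

# Pseudo-polynomial dynamic program for 0/1 knapsack (illustrative sketch).
def knapsack(values, weights, capacity):
    best = [0] * (capacity + 1)                      # best[c] = max value achievable with capacity c
    for value, weight in zip(values, weights):
        for c in range(capacity, weight - 1, -1):    # iterate downward so each item is used at most once
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

print(knapsack([60, 100, 120], [10, 20, 30], 50))    # 220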
Finally, there are types of computations which do not conform to the Turing machine model on which P and NP are defined, such as quantum computation and randomized algorithms.
Reasons to believe P ≠ NP or P = NP
Cook provides a restatement of the problem in The P Versus NP Problem as "Does P = NP?" According to polls, most computer scientists believe that P ≠ NP. A key reason for this belief is that after decades of studying these problems no one has been able to find a polynomial-time algorithm for any of more than 3,000 important known NP-complete problems (see List of NP-complete problems). These algorithms were sought long before the concept of NP-completeness was even defined (Karp's 21 NP-complete problems, among the first found, were all well-known existing problems at the time they were shown to be NP-complete). Furthermore, the result P = NP would imply many other startling results that are currently believed to be false, such as NP = co-NP and P = PH.
It is also intuitively argued that the existence of problems that are hard to solve but whose solutions are easy to verify matches real-world experience., point 9.
On the other hand, some researchers believe that it is overconfident to believe P ≠ NP and that researchers should also explore proofs of P = NP. For example, in 2002 these statements were made:
DLIN vs NLIN
When one substitutes "linear time on a multitape Turing machine" for "polynomial time" in the definitions of P and NP, one obtains the classes DLIN and NLIN.
It is known (Theorem 3.9 in the cited reference) that DLIN ≠ NLIN.
Consequences of solution
One of the reasons the problem attracts so much attention is the consequences of the possible answers. Either direction of resolution would advance theory enormously, and perhaps have huge practical consequences as well.
P = NP
A proof that P = NP could have stunning practical consequences if the proof leads to efficient methods for solving some of the important problems in NP. The potential consequences, both positive and negative, arise since various NP-complete problems are fundamental in many fields.
It is also very possible that a proof would not lead to practical algorithms for NP-complete problems. The formulation of the problem does not require that the bounding polynomial be small or even specifically known. A non-constructive proof might show a solution exists without specifying either an algorithm to obtain it or a specific bound. Even if the proof is constructive, showing an explicit bounding polynomial and algorithmic details, if the polynomial is not very low-order the algorithm might not be sufficiently efficient in practice. In this case the initial proof would be mainly of interest to theoreticians, but the knowledge that polynomial time solutions are possible would surely spur research into better (and possibly practical) methods to achieve them.
A solution showing P = NP could upend the field of cryptography, which relies on certain problems being difficult. A constructive and efficient solution (exactly how efficient a solution must be to pose a threat to cryptography depends on the details: a low-degree polynomial algorithm with a reasonable constant term would be disastrous, while one whose running time is enormous in almost all cases would not pose an immediate practical danger) to an NP-complete problem such as 3-SAT would break most existing cryptosystems including:
Existing implementations of public-key cryptography, a foundation for many modern security applications such as secure financial transactions over the Internet. (See for a reduction of factoring to SAT; a 512-bit factoring problem, 8400 MIPS-years when factored, translates to a SAT problem of 63,652 variables and 406,860 clauses.)
Symmetric ciphers such as AES or 3DES, used for the encryption of communications data. (See, for example, an encoding of an instance of DES as a SAT problem with 10,336 variables and 61,935 clauses; a 3DES problem instance would be about 3 times this size.)
Cryptographic hashing, which underlies blockchain cryptocurrencies such as Bitcoin, and is used to authenticate software updates. For these applications, finding a pre-image that hashes to a given value must be difficult, ideally taking exponential time. If P = NP, then this can take polynomial time, through reduction to SAT.
These would need modification or replacement with information-theoretically secure solutions that do not assume P ≠ NP.
There are also enormous benefits that would follow from rendering tractable many currently mathematically intractable problems. For instance, many problems in operations research are NP-complete, such as types of integer programming and the travelling salesman problem. Efficient solutions to these problems would have enormous implications for logistics. Many other important problems, such as some problems in protein structure prediction, are also NP-complete; making these problems efficiently solvable could considerably advance life sciences and biotechnology.
These changes could be insignificant compared to the revolution that efficiently solving NP-complete problems would cause in mathematics itself. Gödel, in his early thoughts on computational complexity, noted that a mechanical method that could solve any problem would revolutionize mathematics.
Similarly, Stephen Cook (assuming not only a proof, but a practically efficient algorithm) says:
Research mathematicians spend their careers trying to prove theorems, and some proofs have taken decades or even centuries to find after problems have been stated—for instance, Fermat's Last Theorem took over three centuries to prove. A method guaranteed to find a proof if a "reasonable" size proof exists, would essentially end this struggle.
Donald Knuth has stated that he has come to believe that P = NP, but is reserved about the impact of a possible proof:
P ≠ NP
A proof of P ≠ NP would lack the practical computational benefits of a proof that P = NP, but would represent a great advance in computational complexity theory and guide future research. It would demonstrate that many common problems cannot be solved efficiently, so that the attention of researchers can be focused on partial solutions or solutions to other problems. Due to widespread belief in P ≠ NP, much of this focusing of research has already taken place.
P ≠ NP still leaves open the average-case complexity of hard problems in NP. For example, it is possible that SAT requires exponential time in the worst case, but that almost all randomly selected instances of it are efficiently solvable. Russell Impagliazzo has described five hypothetical "worlds" that could result from different possible resolutions to the average-case complexity question.R. Impagliazzo, "A personal view of average-case complexity", p. 134, 10th Annual Structure in Complexity Theory Conference (SCT'95), 1995. These range from "Algorithmica", where P = NP and problems like SAT can be solved efficiently in all instances, to "Cryptomania", where P ≠ NP and generating hard instances of problems outside P is easy, with three intermediate possibilities reflecting different possible distributions of difficulty over instances of NP-hard problems. The "world" where P ≠ NP but all problems in NP are tractable in the average case is called "Heuristica" in the paper. A Princeton University workshop in 2009 studied the status of the five worlds.
Results about difficulty of proof
Although the P = NP problem itself remains open despite a million-dollar prize and a huge amount of dedicated research, efforts to solve the problem have led to several new techniques. In particular, some of the most fruitful research related to the P = NP problem has been in showing that existing proof techniques are insufficient for answering the question, suggesting novel technical approaches are required.
As additional evidence for the difficulty of the problem, essentially all known proof techniques in computational complexity theory fall into one of the following classifications, all insufficient to prove P ≠ NP:
Relativizing proofs: Imagine a world where every algorithm is allowed to make queries to some fixed subroutine called an oracle (which can answer a fixed set of questions in constant time, such as an oracle that solves any traveling salesman problem in 1 step), and the running time of the oracle is not counted against the running time of the algorithm. Most proofs (especially classical ones) apply uniformly in a world with oracles regardless of what the oracle does. These proofs are called relativizing. In 1975, Baker, Gill, and Solovay showed that P = NP with respect to some oracles, while P ≠ NP for other oracles. As relativizing proofs can only prove statements that are true for all possible oracles, these techniques cannot resolve P = NP.

Natural proofs: In 1993, Alexander Razborov and Steven Rudich defined a general class of proof techniques for circuit complexity lower bounds, called natural proofs. At the time, all previously known circuit lower bounds were natural, and circuit complexity was considered a very promising approach for resolving P = NP. However, Razborov and Rudich showed that if one-way functions exist, P and NP are indistinguishable to natural proof methods. Although the existence of one-way functions is unproven, most mathematicians believe that they exist, and a proof of their existence would be a much stronger statement than P ≠ NP. Thus, it is unlikely that natural proofs alone can resolve P = NP.

Algebrizing proofs: After the Baker–Gill–Solovay result, new non-relativizing proof techniques were successfully used to prove that IP = PSPACE. However, in 2008, Scott Aaronson and Avi Wigderson showed that the main technical tool used in the IP = PSPACE proof, known as arithmetization, was also insufficient to resolve P = NP. Arithmetization converts the operations of an algorithm to algebraic and basic arithmetic symbols and then uses those to analyze the workings. In the IP = PSPACE proof, they convert the black box and the Boolean circuits to an algebraic problem. As mentioned previously, it has been proven that this method is not viable to solve P = NP and other time complexity problems.
These barriers are another reason why NP-complete problems are useful: if a polynomial-time algorithm can be demonstrated for an NP-complete problem, this would solve the P = NP problem in a way not excluded by the above results.
These barriers lead some computer scientists to suggest the P versus NP problem may be independent of standard axiom systems like ZFC (cannot be proved or disproved within them). An independence result could imply that either P ≠ NP and this is unprovable in (e.g.) ZFC, or that P = NP but it is unprovable in ZFC that any polynomial-time algorithms are correct. However, if the problem is undecidable even with much weaker assumptions extending the Peano axioms for integer arithmetic, then nearly polynomial-time algorithms exist for all NP problems. Therefore, assuming (as most complexity theorists do) that some NP problems don't have efficient algorithms, proofs of independence with those techniques are impossible. This also implies that proving independence from PA or ZFC with current techniques is no easier than proving all NP problems have efficient algorithms.
Logical characterizations
The P = NP problem can be restated as certain classes of logical statements, as a result of work in descriptive complexity.
Consider all languages of finite structures with a fixed signature including a linear order relation. Then, all such languages in P are expressible in first-order logic with the addition of a suitable least fixed-point combinator. Recursive functions can be defined with this and the order relation. As long as the signature contains at least one predicate or function in addition to the distinguished order relation, so that the amount of space taken to store such finite structures is actually polynomial in the number of elements in the structure, this precisely characterizes P.
Similarly, NP is the set of languages expressible in existential second-order logic—that is, second-order logic restricted to exclude universal quantification over relations, functions, and subsets. The languages in the polynomial hierarchy, PH, correspond to all of second-order logic. Thus, the question "is P a proper subset of NP" can be reformulated as "is existential second-order logic able to describe languages (of finite linearly ordered structures with nontrivial signature) that first-order logic with least fixed point cannot?".Elvira Mayordomo. "P versus NP" Monografías de la Real Academia de Ciencias de Zaragoza 26: 57–68 (2004). The word "existential" can even be dropped from the previous characterization, since P = NP if and only if P = PH (as the former would establish that NP = co-NP, which in turn implies that NP = PH).
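As a standard illustration of the existential second-order characterization (an example added here, not taken from the text above), graph 3-colorability, an NP-complete property of finite graphs with edge relation E, is expressed by the existential second-order sentence

∃R ∃G ∃B [ ∀x (R(x) ∨ G(x) ∨ B(x)) ∧ ∀x ∀y (E(x,y) → ¬(R(x) ∧ R(y)) ∧ ¬(G(x) ∧ G(y)) ∧ ¬(B(x) ∧ B(y))) ]

which existentially quantifies over three sets of vertices (the color classes) and then states, in first-order terms, that every vertex receives a color and no edge joins two vertices of the same color.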
Polynomial-time algorithms
No known algorithm for an NP-complete problem runs in polynomial time. However, algorithms are known for NP-complete problems with the property that if P = NP, then the algorithm runs in polynomial time on accepting instances (although with enormous constants, making the algorithm impractical). These algorithms do not qualify as polynomial time because their running time on rejecting instances is not polynomial. The following algorithm, due to Levin (without any citation), is such an example. It correctly accepts the NP-complete language SUBSET-SUM. It runs in polynomial time on inputs that are in SUBSET-SUM if and only if P = NP:
// Algorithm that accepts the NP-complete language SUBSET-SUM.
//
// this is a polynomial-time algorithm if and only if P = NP.
//
// "Polynomial-time" means it returns "yes" in polynomial time when
// the answer should be "yes", and runs forever when it is "no".
//
// Input: S = a finite set of integers
// Output: "yes" if any subset of S adds up to 0.
// Runs forever with no output otherwise.
// Note: "Program number M" is the program obtained by
// writing the integer M in binary, then
// considering that string of bits to be a
// program. Every possible program can be
// generated this way, though most do nothing
// because of syntax errors.
FOR K = 1...∞
FOR M = 1...K
Run program number M for K steps with input S
IF the program outputs a list of distinct integers
AND the integers are all in S
AND the integers sum to 0
THEN
OUTPUT "yes" and HALT
This is a polynomial-time algorithm accepting an NP-complete language only if P = NP. "Accepting" means it gives "yes" answers in polynomial time, but is allowed to run forever when the answer is "no" (also known as a semi-algorithm).
This algorithm is enormously impractical, even if P = NP. If the shortest program that can solve SUBSET-SUM in polynomial time is b bits long, the above algorithm will try roughly 2^b other programs first.
Formal definitions
P and NP
A decision problem is a problem that takes as input some string w over an alphabet Σ, and outputs "yes" or "no". If there is an algorithm (say a Turing machine, or a computer program with unbounded memory) that produces the correct answer for any input string of length n in at most c·n^k steps, where k and c are constants independent of the input string, then we say that the problem can be solved in polynomial time and we place it in the class P. Formally, P is the set of languages that can be decided by a deterministic polynomial-time Turing machine. That is,

P = { L : L = L(M) for some deterministic polynomial-time Turing machine M }

where

L(M) = { w ∈ Σ* : M accepts w }

and a deterministic polynomial-time Turing machine is a deterministic Turing machine M that satisfies two conditions:

M halts on all inputs w; and
there exists k such that T_M(n) ∈ O(n^k), where O refers to the big O notation, T_M(n) = max{ t_M(w) : w ∈ Σ*, |w| = n }, and t_M(w) is the number of steps M takes to halt on input w.
NP can be defined similarly using nondeterministic Turing machines (the traditional way). However, a modern approach uses the concept of certificate and verifier. Formally, NP is the set of languages over a finite alphabet that have a verifier running in polynomial time. The following defines a "verifier":
Let L be a language over a finite alphabet, Σ.
L ∈ NP if, and only if, there exist a binary relation R ⊆ Σ* × Σ* and a positive integer k such that the following two conditions are satisfied:
for all x ∈ Σ*, x ∈ L if and only if there exists y ∈ Σ* such that (x, y) ∈ R and |y| ∈ O(|x|^k); and
the language L_R = { x#y : (x, y) ∈ R } over Σ ∪ {#} is decidable by a deterministic Turing machine in polynomial time.
A Turing machine that decides L_R is called a verifier for L, and a y such that (x, y) ∈ R is called a certificate of membership of x in L.
Not all verifiers must be polynomial-time. However, for L to be in NP, there must be a verifier that runs in polynomial time.
Example
Let COMPOSITE = { x ∈ N : x = pq for integers p, q > 1 }.
Whether a value of x is composite is equivalent to whether x is a member of COMPOSITE. It can be shown that COMPOSITE ∈ NP by verifying that it satisfies the above definition (if we identify natural numbers with their binary representations).
COMPOSITE also happens to be in P, a fact demonstrated by the invention of the AKS primality test.
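A minimal illustration of the certificate/verifier formulation (added here; the function name is an assumption): a certificate that x belongs to COMPOSITE is a nontrivial divisor, and checking it needs only one comparison and one division, i.e. time polynomial in the number of bits of x.

# Verifier for COMPOSITE: the certificate is a claimed nontrivial divisor p of x.
def verify_composite(x, p):
    return 1 < p < x and x % p == 0

print(verify_composite(91, 7))    # True: 91 = 7 * 13
print(verify_composite(97, 5))    # False: 97 is prime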
NP-completeness
There are many equivalent ways of describing NP-completeness.
Let L be a language over a finite alphabet Σ.
L is NP-complete if, and only if, the following two conditions are satisfied:
L ∈ NP; and
any L′ in NP is polynomial-time-reducible to L (written as L′ ≤_p L), where L′ ≤_p L if, and only if, the following two conditions are satisfied:
There exists f : Σ* → Σ* such that for all w in Σ* we have: w ∈ L′ if, and only if, f(w) ∈ L; and
there exists a polynomial-time Turing machine that halts with f(w) on its tape on any input w.
Alternatively, if L ∈ NP, and there is another NP-complete problem that can be polynomial-time reduced to L, then L is NP-complete. This is a common way of proving some new problem is NP-complete.
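A textbook instance of this strategy (sketched here as an added illustration, not taken from the text) is showing INDEPENDENT-SET NP-complete by reducing the known NP-complete CLIQUE problem to it: a graph G has a clique of size k exactly when the complement of G has an independent set of size k, and the complement can be computed in polynomial time. Because the mapping preserves yes/no answers, any polynomial-time algorithm for INDEPENDENT-SET would yield one for CLIQUE.

# Polynomial-time reduction from CLIQUE to INDEPENDENT-SET (illustrative sketch).
def clique_to_independent_set(num_vertices, edges, k):
    # edges: set of frozensets {u, v}; returns the complement graph's edges and the same k
    all_pairs = {frozenset((u, v)) for u in range(num_vertices) for v in range(u + 1, num_vertices)}
    complement_edges = all_pairs - set(edges)
    return num_vertices, complement_edges, k

# A triangle on vertices 0, 1, 2 has a clique of size 3; its complement (no edges)
# therefore has an independent set of size 3.
print(clique_to_independent_set(3, {frozenset((0, 1)), frozenset((1, 2)), frozenset((0, 2))}, 3))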
Claimed solutions
While the P versus NP problem is generally considered unsolved, many amateur and some professional researchers have claimed solutions. Gerhard J. Woeginger compiled a list of 116 purported proofs from 1986 to 2016, of which 61 were proofs of P = NP, 49 were proofs of P ≠ NP, and 6 proved other results, e.g. that the problem is undecidable. Some attempts at resolving P versus NP have received brief media attention, though these attempts have been refuted.
Popular culture
The film Travelling Salesman, by director Timothy Lanzone, is the story of four mathematicians hired by the US government to solve the P versus NP problem.
In "Treehouse of Horror VI", the sixth episode of the seventh season of The Simpsons, the equation P = NP is seen shortly after Homer accidentally stumbles into the "third dimension".
In the second episode of season 2 of Elementary, "Solve for X", Holmes and Watson investigate the murders of mathematicians who were attempting to solve P versus NP.
Similar problems
R vs. RE problem, where R is the analog of class P, and RE is the analog of class NP. These classes are not equal, because undecidable but verifiable problems do exist; for example, Hilbert's tenth problem, which is RE-complete.
A similar problem exists in the theory of algebraic complexity: VP vs. VNP problem. Like P vs. NP, the answer is currently unknown.L. G. Valiant. Completeness classes in algebra. In Proceedings of 11th ACM STOC, pp. 249–261, 1979.
See also
Game complexity
List of unsolved problems in mathematics
Unique games conjecture
Unsolved problems in computer science
Notes
References
Sources
Further reading
Online drafts
External links
Aviad Rubinstein's Hardness of Approximation Between P and NP, winner of the ACM's 2017 Doctoral Dissertation Award.
Category:1956 in computing
Category:Computer-related introductions in 1956
Category:Conjectures
Category:Mathematical optimization
Category:Millennium Prize Problems
Category:Structural complexity theory
Category:Unsolved problems in computer science
Category:Unsolved problems in mathematics
computer_science | 5,662

6355 | Chloroplast | https://en.wikipedia.org/wiki/Chloroplast
A chloroplast () is a type of organelle known as a plastid that conducts photosynthesis mostly in plant and algal cells. Chloroplasts have a high concentration of chlorophyll pigments, which capture energy from sunlight, convert it to chemical energy, and release oxygen. The chemical energy created is then used to make sugar and other organic molecules from carbon dioxide in a process called the Calvin cycle. Chloroplasts carry out a number of other functions, including fatty acid synthesis, amino acid synthesis, and the immune response in plants. The number of chloroplasts per cell varies from one, in some unicellular algae, up to 100 in plants like Arabidopsis and wheat.
Chloroplasts are highly dynamic—they circulate and are moved around within cells. Their behavior is strongly influenced by environmental factors like light color and intensity. Chloroplasts cannot be made anew by the plant cell and must be inherited by each daughter cell during cell division, a trait thought to derive from their ancestor—a photosynthetic cyanobacterium that was engulfed by an early eukaryotic cell.
Chloroplasts evolved from an ancient cyanobacterium that was engulfed by an early eukaryotic cell. Because of their endosymbiotic origins, chloroplasts, like mitochondria, contain their own DNA separate from the cell nucleus. With one exception (the amoeboid Paulinella chromatophora), all chloroplasts can be traced back to a single endosymbiotic event. Despite this, chloroplasts can be found in extremely diverse organisms that are not directly related to each other—a consequence of many secondary and even tertiary endosymbiotic events.
Discovery and etymology
The first definitive description of a chloroplast (Chlorophyllkörnen, "grain of chlorophyll") was given by Hugo von Mohl in 1837 as discrete bodies within the green plant cell.von Mohl, H. (1835/1837). Ueber die Vermehrung der Pflanzen-Zellen durch Teilung. Dissert. Tubingen 1835. Flora 1837, . In 1883, Andreas Franz Wilhelm Schimper named these bodies as "chloroplastids" (Chloroplastida). In 1884, Eduard Strasburger adopted the term "chloroplasts" (Chloroplasten).
The word chloroplast is derived from the Greek words chloros (χλωρός), which means green, and plastes (πλάστης), which means "the one who forms".
Endosymbiotic origin of chloroplasts
Chloroplasts are one of many types of organelles in photosynthetic eukaryotic cells. They evolved from cyanobacteria through a process called organellogenesis. Cyanobacteria are a diverse phylum of gram-negative bacteria capable of carrying out oxygenic photosynthesis. Like chloroplasts, they have thylakoids. The thylakoid membranes contain photosynthetic pigments, including chlorophyll a. This origin of chloroplasts was first suggested by the Russian biologist Konstantin Mereschkowski in 1905 after Andreas Franz Wilhelm Schimper observed in 1883 that chloroplasts closely resemble cyanobacteria. Chloroplasts are only found in plants, algae, and some species of the amoeboid Paulinella.
Mitochondria are thought to have come from a similar endosymbiosis event, where an aerobic prokaryote was engulfed.
Primary endosymbiosis
Approximately two billion years ago, a free-living cyanobacterium entered an early eukaryotic cell, either as food or as an internal parasite, but managed to escape the phagocytic vacuole it was contained in and persist inside the cell. This event is called endosymbiosis, or "cell living inside another cell with a mutual benefit for both". The external cell is commonly referred to as the host while the internal cell is called the endosymbiont. The engulfed cyanobacterium provided an advantage to the host by providing sugar from photosynthesis. Over time, the cyanobacterium was assimilated, and many of its genes were lost or transferred to the nucleus of the host. Some of the cyanobacterial proteins were then synthesized by the host cell and imported back into the chloroplast (formerly the cyanobacterium), allowing the host to control the chloroplast.
Chloroplasts which can be traced back directly to a cyanobacterial ancestor (i.e. without a subsequent endosymbiotic event) are known as primary plastids ("plastid" in this context means almost the same thing as chloroplast). Chloroplasts that can be traced back to another photosynthetic eukaryotic endosymbiont are called secondary plastids or tertiary plastids (discussed below).
Whether primary chloroplasts came from a single endosymbiotic event or multiple independent engulfments across various eukaryotic lineages was long debated. It is now generally held that, with one exception (the amoeboid Paulinella chromatophora), chloroplasts arose from a single endosymbiotic event around two billion years ago and that these chloroplasts all share a single ancestor. It has been proposed that the closest living relative of the ancestral engulfed cyanobacterium is Gloeomargarita lithophora. Separately, about 90–140 million years ago, this process happened again in the amoeboid Paulinella with a cyanobacterium of the Synechococcus/Prochlorococcus clade. This independently evolved chloroplast is often called a chromatophore instead of a chloroplast (not to be confused with the pigmented cells of some animals or the membrane-associated vesicles of some bacteria, both also called chromatophores).
Chloroplasts are believed to have arisen after mitochondria, since all eukaryotes contain mitochondria, but not all have chloroplasts. This is called serial endosymbiosis—an early eukaryote engulfed the mitochondrion ancestor, and descendants of it later engulfed the chloroplast ancestor, creating a cell with both chloroplasts and mitochondria.
Secondary and tertiary endosymbiosis
Many other organisms obtained chloroplasts from the primary chloroplast lineages through secondary endosymbiosis—engulfing a red or green alga with a primary chloroplast. These chloroplasts are known as secondary plastids.
As a result of the secondary endosymbiotic event, secondary chloroplasts have additional membranes outside of the original two found in primary chloroplasts. In secondary plastids, typically only the chloroplast, and sometimes its cell membrane and nucleus, remain, forming a chloroplast with three or four membranes—the two cyanobacterial membranes, sometimes the engulfed alga's cell membrane, and the phagosomal vacuole from the host's cell membrane.
The genes in the phagocytosed eukaryote's nucleus are often transferred to the secondary host's nucleus. Cryptomonads and chlorarachniophytes retain the phagocytosed eukaryote's nucleus as an object called a nucleomorph, located between the second and third membranes of the chloroplast.
All secondary chloroplasts come from green and red algae. No secondary chloroplasts from glaucophytes have been observed, probably because glaucophytes are relatively rare in nature, making them less likely to have been taken up by another eukaryote.
Still other organisms, including the dinoflagellates Karlodinium and Karenia, obtained chloroplasts by engulfing an organism with a secondary plastid. These are called tertiary plastids.
Possible cladogram of chloroplast evolution. Circles represent endosymbiotic events. For clarity, dinophyte tertiary endosymbioses and many nonphotosynthetic lineages have been omitted.
(a) It is now established that Chromalveolata is paraphyletic to Rhizaria.
Primary chloroplast lineages
All primary chloroplasts belong to one of four chloroplast lineages—the glaucophyte chloroplast lineage, the rhodophyte ("red") chloroplast lineage, the chloroplastida ("green") chloroplast lineage, and the amoeboid Paulinella chromatophora lineage. The glaucophyte, rhodophyte, and chloroplastidian lineages are all descended from the same ancestral endosymbiotic event and are all within the group Archaeplastida.
Glaucophyte chloroplasts
The glaucophyte chloroplast group is the smallest of the three Archaeplastida chloroplast lineages, as there are only 25 described glaucophyte species. Glaucophytes diverged first, before the red and green chloroplast lineages split. Because of this, they are sometimes considered intermediates between cyanobacteria and the red and green chloroplasts. This early divergence is supported by both phylogenetic studies and physical features present in glaucophyte chloroplasts and cyanobacteria, but not in the red and green chloroplasts. First, glaucophyte chloroplasts have a peptidoglycan wall, a type of cell wall otherwise found only in bacteria (including cyanobacteria); for this reason, glaucophyte chloroplasts are also known as "muroplasts", from the Latin murus, meaning wall. Second, glaucophyte chloroplasts contain concentric unstacked thylakoids which surround a carboxysome – an icosahedral structure that contains the enzyme RuBisCO, responsible for carbon fixation. Third, starch created by the chloroplast is collected outside the chloroplast. Additionally, like cyanobacteria, both glaucophyte and rhodophyte thylakoids are studded with light-collecting structures called phycobilisomes.
Rhodophyta (red chloroplasts)
The rhodophyte, or red algae, group is a large and diverse lineage. Rhodophyte chloroplasts are also called rhodoplasts, literally "red chloroplasts". Rhodoplasts have a double membrane with an intermembrane space and phycobilin pigments organized into phycobilisomes on the thylakoid membranes, preventing their thylakoids from stacking. Some contain pyrenoids. Rhodoplasts have chlorophyll a and phycobilins for photosynthetic pigments; the phycobilin phycoerythrin is responsible for giving many red algae their distinctive red color. However, since they also contain the blue-green chlorophyll a and other pigments, many are reddish to purple from the combination. The red phycoerythrin pigment is an adaptation to help red algae catch more sunlight in deep water—as such, some red algae that live in shallow water have less phycoerythrin in their rhodoplasts, and can appear more greenish. Rhodoplasts synthesize a form of starch called floridean starch, which collects into granules outside the rhodoplast, in the cytoplasm of the red alga.
Chloroplastida (green chloroplasts)
The chloroplastida group is another large, highly diverse lineage that includes both green algae and land plants. This group is also called Viridiplantae, which includes two core clades—Chlorophyta and Streptophyta.
Most green chloroplasts are green in color, though some are not, due to accessory pigments that override the green from chlorophylls, such as in the resting cells of Haematococcus pluvialis. Green chloroplasts differ from glaucophyte and red algal chloroplasts in that they have lost their phycobilisomes and contain chlorophyll b. They have also lost the peptidoglycan wall between their double membrane, leaving an intermembrane space. Some plants have kept some genes required for the synthesis of peptidoglycan, but have repurposed them for use in chloroplast division instead. Chloroplastida lineages also keep their starch inside their chloroplasts. In plants and some algae, the chloroplast thylakoids are arranged in grana stacks. Some green algal chloroplasts, as well as those of hornworts, contain a structure called a pyrenoid, which concentrates RuBisCO and CO2 in the chloroplast, functionally similar to the glaucophyte carboxysome.
There are some lineages of non-photosynthetic parasitic green algae that have lost their chloroplasts entirely, such as Prototheca, or have no chloroplast while retaining the separate chloroplast genome, as in Helicosporidium. Morphological and physiological similarities, as well as phylogenetics, confirm that these are lineages that ancestrally had chloroplasts but have since lost them.
Paulinella chromatophora
The photosynthetic amoeboids in the genus Paulinella—P. chromatophora, P. micropora, and the marine P. longichromatophora—have the only known independently evolved chloroplast, often called a chromatophore. While all other chloroplasts originate from a single ancient endosymbiotic event, Paulinella independently acquired an endosymbiotic cyanobacterium from the genus Synechococcus around 90–140 million years ago. Each Paulinella cell contains one or two sausage-shaped chloroplasts; they were first described in 1894 by the German biologist Robert Lauterborn.
The chromatophore is highly reduced compared to its free-living cyanobacterial relatives and has limited functions. For example, it has a genome of about 1 million base pairs, one third the size of Synechococcus genomes, and only encodes around 850 proteins. However, this is still much larger than other chloroplast genomes, which are typically around 150,000 base pairs. Chromatophores have also transferred much less of their DNA to the nucleus of their hosts. About 0.3–0.8% of the nuclear DNA in Paulinella is from the chromatophore, compared with 11–14% from the chloroplast in plants. Like other chloroplasts, the chromatophore is supplied with proteins by Paulinella using a dedicated targeting sequence. Because chromatophores are much younger than canonical chloroplasts, Paulinella chromatophora is studied to understand how early chloroplasts evolved.
Secondary and tertiary chloroplast lineages
Green algal derived chloroplasts
Green algae have been taken up as endosymbionts by other groups in three or four separate events. Secondary chloroplasts derived from green algae are found primarily in the euglenoids and chlorarachniophytes. They are also found in one lineage of dinoflagellates and possibly in the ancestor of the CASH lineage (cryptomonads, alveolates, stramenopiles, and haptophytes). Many green algal derived chloroplasts contain pyrenoids, but unlike the chloroplasts of their green algal ancestors, storage product collects in granules outside the chloroplast.
Euglenophytes
The euglenophytes are a group of common flagellated protists that contain chloroplasts derived from a green alga. Euglenophytes are the only group outside Diaphoretickes that have chloroplasts without performing kleptoplasty. Euglenophyte chloroplasts have three membranes; it is thought that the membrane of the primary endosymbiont host (the green alga's cell membrane) was lost, leaving the two cyanobacterial membranes and the secondary host's phagosomal membrane. Euglenophyte chloroplasts have a pyrenoid and thylakoids stacked in groups of three. The carbon fixed through photosynthesis is stored in the form of paramylon, which is contained in membrane-bound granules in the cytoplasm of the euglenophyte.
Chlorarachniophytes
Chlorarachniophytes are a rare group of organisms that also contain chloroplasts derived from green algae, though their story is more complicated than that of the euglenophytes. The ancestor of chlorarachniophytes is thought to have been a eukaryote with a red algal derived chloroplast. It is then thought to have lost its first red algal chloroplast, and later engulfed a green alga, giving it its second, green algal derived chloroplast.
Chlorarachniophyte chloroplasts are bounded by four membranes, except near the cell membrane, where the chloroplast membranes fuse into a double membrane. Their thylakoids are arranged in loose stacks of three. Chlorarachniophytes have a form of polysaccharide called chrysolaminarin, which they store in the cytoplasm, often collected around the chloroplast pyrenoid, which bulges into the cytoplasm.
Chlorarachniophyte chloroplasts are notable because the green alga they are derived from has not been completely broken down—its nucleus still persists as a nucleomorph found between the second and third chloroplast membranes—the periplasmic space, which corresponds to the green alga's cytoplasm.
Prasinophyte-derived chloroplast
Dinoflagellates in the genus Lepidodinium have lost their original peridinin chloroplast and replaced it with a green algal derived chloroplast (more specifically, one from a prasinophyte). Lepidodinium is the only dinoflagellate that has a chloroplast that is not from the rhodoplast lineage. The chloroplast is surrounded by two membranes and has no nucleomorph—all the nucleomorph genes have been transferred to the dinophyte nucleus. The endosymbiotic event that led to this chloroplast was serial secondary endosymbiosis rather than tertiary endosymbiosis—the endosymbiont was a green alga containing a primary chloroplast (making a secondary chloroplast).
Tripartite symbiosis
The ciliate Pseudoblepharisma tenue has two bacterial symbionts, one pink, one green. In 2021, both symbionts were confirmed to be photosynthetic: Ca. Thiodictyon intracellulare (Chromatiaceae), a purple sulfur bacterium with a genome just half the size of those of its closest known relatives, and Chlorella sp. K10, a green alga. There is also a variant of Pseudoblepharisma tenue that only contains chloroplasts from green algae and no endosymbiotic purple bacteria.
Red algal derived chloroplasts
Secondary chloroplasts derived from red algae appear to have been taken up only once, in a lineage that then diversified into a large group called chromists or chromalveolates. Today they are found in the haptophytes, cryptomonads, heterokonts, dinoflagellates, and apicomplexans (the CASH lineage). Red algal secondary chloroplasts usually contain chlorophyll c and are surrounded by four membranes.
However, chromist monophyly has been rejected, and it is considered more likely that some chromists acquired their plastids by incorporating another chromist instead of inheriting them from a common ancestor. Cryptophytes seem to have acquired plastids from red algae, which were then transmitted from them to both the Heterokontophytes and the Haptophytes, and then from these last to the Myzozoa.
Cryptophytes
Cryptophytes, or cryptomonads, are a group of algae that contain a red-algal derived chloroplast. Cryptophyte chloroplasts contain a nucleomorph that superficially resembles that of the chlorarachniophytes. Cryptophyte chloroplasts have four membranes. The outermost membrane is continuous with the rough endoplasmic reticulum. They synthesize ordinary starch, which is stored in granules found in the periplastid space—outside the original double membrane, in the place that corresponds to the ancestral red alga's cytoplasm. Inside cryptophyte chloroplasts is a pyrenoid and thylakoids in stacks of two. Cryptophyte chloroplasts do not have phycobilisomes, but they do have phycobilin pigments which they keep in the thylakoid space, rather than anchored on the outside of their thylakoid membranes.
Cryptophytes may have played a key role in the spreading of red algal based chloroplasts.
Haptophytes
Haptophytes are similar and closely related to cryptophytes or heterokontophytes. Their chloroplasts lack a nucleomorph, their thylakoids are in stacks of three, and they synthesize chrysolaminarin, which is stored in granules completely outside of the chloroplast, in the cytoplasm of the haptophyte.
Stramenopiles (heterokontophytes)
The stramenopiles, also known as heterokontophytes, are a very large and diverse group of eukaryotes. They include the Ochrophyta—which includes diatoms, brown algae (seaweeds), and golden algae (chrysophytes)—and the Xanthophyceae (also called yellow-green algae).
Heterokont chloroplasts are very similar to haptophyte chloroplasts. They have a pyrenoid, triplet thylakoids, and, with some exceptions, a four-layered plastid envelope with the outermost membrane connected to the endoplasmic reticulum. Like haptophytes, stramenopiles store sugar in chrysolaminarin granules in the cytoplasm. Stramenopile chloroplasts contain chlorophyll a and, with a few exceptions, chlorophyll c. They also have carotenoids, which give them their many colors.
Apicomplexans, chromerids, and dinophytes
The alveolates are a major clade of unicellular eukaryotes with both autotrophic and heterotrophic members. Many members contain a red-algal derived plastid. One notable characteristic of this diverse group is the frequent loss of photosynthesis. However, a majority of these heterotrophs continue to possess a non-photosynthetic plastid.
Apicomplexans
Apicomplexans are a group of alveolates. Like the helicosporidia, they are parasitic and have a nonphotosynthetic chloroplast. They were once thought to be related to the helicosporidia, but it is now known that the helicosporidia are green algae rather than part of the CASH lineage. The apicomplexans include Plasmodium, the malaria parasite. Many apicomplexans keep a vestigial red algal derived chloroplast called an apicoplast, which they inherited from their ancestors. Apicoplasts have lost all photosynthetic function and contain no photosynthetic pigments or true thylakoids. They are bounded by four membranes, but the membranes are not connected to the endoplasmic reticulum. Other apicomplexans, such as Cryptosporidium, have lost the chloroplast completely. Apicomplexans store their energy in amylopectin granules located in their cytoplasm, even though they are nonphotosynthetic.
The fact that apicomplexans still keep their nonphotosynthetic chloroplast demonstrates that the chloroplast carries out important functions other than photosynthesis. Plant chloroplasts provide plant cells with many important products besides sugar, and apicoplasts are no different—they synthesize fatty acids, isopentenyl pyrophosphate, and iron-sulfur clusters, and carry out part of the heme pathway. The most important apicoplast function is isopentenyl pyrophosphate synthesis—in fact, apicomplexans die when something interferes with this apicoplast function, and when apicomplexans are grown in an isopentenyl pyrophosphate-rich medium, they dispense with the organelle.
Chromerids
The chromerids are a group of algae known from Australian corals which comprise some close photosynthetic relatives of the apicomplexans. The first member, Chromera velia, was discovered and first isolated in 2001. The discovery of Chromera velia, with a structure similar to that of the apicomplexans, provides an important link in the evolutionary history of the apicomplexans and dinophytes. Their plastids have four membranes, lack chlorophyll c, and use the type II form of RuBisCO obtained from a horizontal transfer event.
Dinophytes
The dinoflagellates are yet another very large and diverse group, around half of which are at least partially photosynthetic (i.e. mixotrophic). Dinoflagellate chloroplasts have a relatively complex history. Most dinoflagellate chloroplasts are secondary red algal derived chloroplasts. Many dinoflagellates have lost the chloroplast (becoming nonphotosynthetic), and some of these have replaced it through tertiary endosymbiosis. Others replaced their original chloroplast with a green algal derived chloroplast. The peridinin chloroplast is thought to be the dinophytes' "original" chloroplast, which has been lost, reduced, replaced, or has company in several other dinophyte lineages.
The most common dinophyte chloroplast is the peridinin-type chloroplast, characterized by the carotenoid pigment peridinin in their chloroplasts, along with chlorophyll a and chlorophyll c2. Peridinin is not found in any other group of chloroplasts. The peridinin chloroplast is bounded by three membranes (occasionally two), having lost the red algal endosymbiont's original cell membrane. The outermost membrane is not connected to the endoplasmic reticulum. They contain a pyrenoid, and have triplet-stacked thylakoids. Starch is found outside the chloroplast. Peridinin chloroplasts also have DNA that is highly reduced and fragmented into many small circles. Most of the genome has migrated to the nucleus, and only critical photosynthesis-related genes remain in the chloroplast.
Most dinophyte chloroplasts contain form II RuBisCO, at least the photosynthetic pigments chlorophyll a, chlorophyll c2, beta-carotene, and at least one dinophyte-unique xanthophyll (peridinin, dinoxanthin, or diadinoxanthin), giving many a golden-brown color. All dinophytes store starch in their cytoplasm, and most have chloroplasts with thylakoids arranged in stacks of three.
Haptophyte-derived chloroplasts
The fucoxanthin dinophyte lineages (including Karlodinium and Karenia) lost their original red algal derived chloroplast, and replaced it with a new chloroplast derived from a haptophyte endosymbiont, making these tertiary plastids. Karlodinium and Karenia probably took up different endosymbionts. Because the haptophyte chloroplast has four membranes, tertiary endosymbiosis would be expected to create a six membraned chloroplast, adding the haptophyte's cell membrane and the dinophyte's phagosomal vacuole. However, the haptophyte was heavily reduced, stripped of a few membranes and its nucleus, leaving only its chloroplast (with its original double membrane), and possibly one or two additional membranes around it.
Fucoxanthin-containing chloroplasts are characterized by having the pigment fucoxanthin (actually 19′-hexanoyloxy-fucoxanthin and/or 19′-butanoyloxy-fucoxanthin) and no peridinin. Fucoxanthin is also found in haptophyte chloroplasts, providing evidence of ancestry.
Diatom-derived chloroplasts
Some dinophytes, like Kryptoperidinium and Durinskia, have a diatom (heterokontophyte)-derived chloroplast. These chloroplasts are bounded by up to five membranes (depending on whether the entire diatom endosymbiont is counted as the chloroplast, or just the red algal derived chloroplast inside it). The diatom endosymbiont has been reduced relatively little—it still retains its original mitochondria, and has endoplasmic reticulum, ribosomes, a nucleus, and, of course, red algal derived chloroplasts—practically a complete cell, all inside the host's endoplasmic reticulum lumen. However, the diatom endosymbiont cannot store its own food—its storage polysaccharide is found in granules in the dinophyte host's cytoplasm instead. The diatom endosymbiont's nucleus is present, but it probably cannot be called a nucleomorph because it shows no sign of genome reduction, and might even have been expanded. Diatoms have been engulfed by dinoflagellates at least three times.
The diatom endosymbiont is bounded by a single membrane, inside it are chloroplasts with four membranes. Like the diatom endosymbiont's diatom ancestor, the chloroplasts have triplet thylakoids and pyrenoids.
In some of these genera, the diatom endosymbiont's chloroplasts aren't the only chloroplasts in the dinophyte. The original three-membraned peridinin chloroplast is still around, converted to an eyespot.
Kleptoplasty
In some groups of mixotrophic protists, like some dinoflagellates (e.g. Dinophysis), chloroplasts are separated from a captured alga and used temporarily. These klepto chloroplasts may only have a lifetime of a few days and are then replaced.
Cryptophyte-derived dinophyte chloroplast
Members of the genus Dinophysis have a phycobilin-containing chloroplast taken from a cryptophyte. However, the cryptophyte is not an endosymbiont—only the chloroplast seems to have been taken, and the chloroplast has been stripped of its nucleomorph and outermost two membranes, leaving just a two-membraned chloroplast. Cryptophyte chloroplasts require their nucleomorph to maintain themselves, and Dinophysis species grown in cell culture alone cannot survive, so it is possible (but not confirmed) that the Dinophysis chloroplast is a kleptoplast—if so, Dinophysis chloroplasts wear out and Dinophysis species must continually engulf cryptophytes to obtain new chloroplasts to replace the old ones.
Chloroplast DNA
Chloroplasts, like other endosymbiotic organelles, contain a genome separate from that in the cell nucleus. The existence of chloroplast DNA (cpDNA) was identified biochemically in 1959, and confirmed by electron microscopy in 1962. The discoveries that the chloroplast contains ribosomes and performs protein synthesis revealed that the chloroplast is genetically semi-autonomous. Chloroplast DNA was first sequenced in 1986. Since then, hundreds of chloroplast genomes from various species have been sequenced, but they are mostly those of land plants and green algae—glaucophytes, red algae, and other algal groups are extremely underrepresented, potentially introducing some bias in views of "typical" chloroplast DNA structure and content.
Molecular structure
With few exceptions, chloroplasts have their entire chloroplast genome combined into a single large circular DNA molecule, typically 120,000–170,000 base pairs long, with a mass of about 80–130 million daltons. While chloroplast genomes can almost always be assembled into a circular map, the physical DNA molecules inside cells take on a variety of linear and branching forms. New chloroplasts may contain up to 100 copies of their genome, though the number of copies decreases to about 15–20 as the chloroplasts age.
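As a rough consistency check on these figures, genome length and genome mass can be related through the average mass of a DNA base pair. The short Python sketch below is purely illustrative; the value of roughly 650 daltons per base pair is an assumed textbook average, not a number taken from this article.

# Illustrative arithmetic: relate chloroplast genome length to genome mass,
# assuming an average of ~650 daltons per DNA base pair (assumed textbook value).
AVG_BP_MASS_DA = 650  # assumed average mass of one base pair, in daltons

for genome_bp in (120_000, 170_000):  # typical cpDNA length range quoted above
    mass_mda = genome_bp * AVG_BP_MASS_DA / 1e6  # convert daltons to megadaltons
    print(f"{genome_bp:,} bp -> about {mass_mda:.0f} million daltons")

# Prints roughly 78 and 110 million daltons, consistent with the quoted
# range of about 80-130 million daltons.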
Chloroplast DNA is usually condensed into nucleoids, which can contain multiple copies of the chloroplast genome. Many nucleoids can be found in each chloroplast. In primitive red algae, the chloroplast DNA nucleoids are clustered in the center of the chloroplast, while in green plants and green algae, the nucleoids are dispersed throughout the stroma. Chloroplast DNA is not associated with true histones, proteins that are used to pack DNA molecules tightly in eukaryote nuclei. However, in red algae, similar proteins tightly pack each chloroplast DNA ring into a nucleoid.
Many chloroplast genomes contain two inverted repeats, which separate a long single copy section (LSC) from a short single copy section (SSC). A given pair of inverted repeats is rarely identical, but the two copies are always very similar to each other, apparently as a result of concerted evolution. The inverted repeats vary widely in length, ranging from 4,000 to 25,000 base pairs each and containing as few as four or as many as over 150 genes. The inverted repeat regions are highly conserved in land plants, and accumulate few mutations.
Similar inverted repeats exist in the genomes of cyanobacteria and the other two chloroplast lineages (glaucophyta and rhodophyceae), suggesting that they predate the chloroplast. Some chloroplast genomes have since lost or flipped the inverted repeats (making them direct repeats). It is possible that the inverted repeats help stabilize the rest of the chloroplast genome, as chloroplast genomes which have lost some of the inverted repeat segments tend to get rearranged more.
DNA repair and replication
In chloroplasts of the moss Physcomitrella patens, the DNA mismatch repair protein Msh1 interacts with the recombinational repair proteins RecA and RecG to maintain chloroplast genome stability. In chloroplasts of the plant Arabidopsis thaliana the RecA protein maintains the integrity of the chloroplast's DNA by a process that likely involves the recombinational repair of DNA damage.
The mechanism for chloroplast DNA (cpDNA) replication has not been conclusively determined, but two main models have been proposed. Scientists have attempted to observe chloroplast replication via electron microscopy since the 1970s. The results of the microscopy experiments led to the idea that chloroplast DNA replicates using a double displacement loop (D-loop). As the D-loop moves through the circular DNA, it adopts a theta intermediary form, also known as a Cairns replication intermediate, and completes replication with a rolling circle mechanism. Replication starts at specific points of origin. Multiple replication forks open up, allowing replication machinery to copy the DNA. As replication continues, the forks grow and eventually converge. The new cpDNA structures separate, creating daughter cpDNA chromosomes.
In addition to the early microscopy experiments, this model is also supported by the amounts of deamination seen in cpDNA. Deamination occurs when an amino group is lost and is a mutation that often results in base changes. When adenine is deaminated, it becomes hypoxanthine. Hypoxanthine can bind to cytosine, and when this hypoxanthine–cytosine base pair is replicated, it becomes a GC pair (thus, an A → G base change).
In cpDNA, there are several A → G deamination gradients. DNA becomes susceptible to deamination events when it is single stranded. When replication forks form, the strand not being copied is single stranded, and thus at risk for A → G deamination. Therefore, gradients in deamination indicate that replication forks were most likely present and the direction that they initially opened (the highest gradient is most likely nearest the start site because it was single stranded for the longest amount of time). This mechanism is still the leading theory today; however, a second theory suggests that most cpDNA is actually linear and replicates through homologous recombination. It further contends that only a minority of the genetic material is kept in circular chromosomes while the rest is in branched, linear, or other complex structures.
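The logic of this argument can be illustrated with a toy calculation: if a site stays single-stranded from the moment the replication fork passes it until replication finishes, then sites near the origin are exposed longest and accumulate the most A → G changes. The Python sketch below is a simplified illustration under that assumption; the genome length, fork speed, and deamination rate are arbitrary placeholder values, not measurements.

# Toy model of the deamination gradient: sites near the replication origin are
# single-stranded longest, so they accumulate the most expected A -> G changes.
# All numbers are arbitrary placeholders for illustration, not measured values.
GENOME_BP = 150_000                 # illustrative plastome length
FORK_SPEED_BP_PER_S = 500           # assumed fork progression rate
DEAMINATION_PER_BASE_PER_S = 1e-9   # assumed rate while single-stranded

total_time = GENOME_BP / FORK_SPEED_BP_PER_S  # time until replication completes

for distance in (0, 50_000, 100_000, 150_000):
    exposure = total_time - distance / FORK_SPEED_BP_PER_S  # seconds single-stranded
    expected = exposure * DEAMINATION_PER_BASE_PER_S        # expected A -> G per base
    print(f"{distance:>7,} bp from origin: exposed {exposure:5.0f} s, "
          f"expected A->G per base {expected:.1e}")

# The printout decreases monotonically with distance from the origin,
# mirroring the gradients used to argue for D-loop style replication.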
One competing model for cpDNA replication asserts that most cpDNA is linear and participates in homologous recombination and replication structures similar to the linear and circular DNA structures of bacteriophage T4. It has been established that some plants, such as maize, have linear cpDNA, and that more species still contain complex structures that scientists do not yet understand. When the original experiments on cpDNA were performed, scientists did notice linear structures; however, they attributed these linear forms to broken circles. If the branched and complex structures seen in cpDNA experiments are real and not artifacts of concatenated circular DNA or broken circles, then a D-loop mechanism of replication is insufficient to explain how those structures would replicate. At the same time, homologous recombination does not explain the multiple A → G gradients seen in plastomes. Because of the failure to explain the deamination gradient, as well as the numerous plant species that have been shown to have circular cpDNA, the predominant theory continues to hold that most cpDNA is circular and most likely replicates via a D-loop mechanism.
Gene content and protein synthesis
The ancestral cyanobacteria that led to chloroplasts probably had a genome that contained over 3000 genes, but only approximately 100 genes remain in contemporary chloroplast genomes. These genes code for a variety of things, mostly to do with the protein pipeline and photosynthesis. As in prokaryotes, genes in chloroplast DNA are organized into operons. Unlike prokaryotic DNA molecules, chloroplast DNA molecules contain introns (plant mitochondrial DNAs do too, but not human mtDNAs).
Among land plants, the contents of the chloroplast genome are fairly similar.
Chloroplast genome reduction and gene transfer
Over time, many parts of the chloroplast genome were transferred to the nuclear genome of the host, a process called endosymbiotic gene transfer. As a result, the chloroplast genome is heavily reduced compared to that of free-living cyanobacteria. Chloroplasts may contain 60–100 genes, whereas cyanobacteria often have more than 1500 genes in their genome. Recently, a plastid without a genome was found, demonstrating that chloroplasts can lose their genome during the endosymbiotic gene transfer process.
Endosymbiotic gene transfer is how we know about the lost chloroplasts in many CASH lineages. Even if a chloroplast is eventually lost, the genes it donated to the former host's nucleus persist, providing evidence for the lost chloroplast's existence. For example, while diatoms (a heterokontophyte) now have a red algal derived chloroplast, the presence of many green algal genes in the diatom nucleus provide evidence that the diatom ancestor had a green algal derived chloroplast at some point, which was subsequently replaced by the red chloroplast.
In land plants, some 11–14% of the DNA in their nuclei can be traced back to the chloroplast, up to 18% in Arabidopsis, corresponding to about 4,500 protein-coding genes. There have been a few recent transfers of genes from the chloroplast DNA to the nuclear genome in land plants.
Of the approximately 3000 proteins found in chloroplasts, some 95% of them are encoded by nuclear genes. Many of the chloroplast's protein complexes consist of subunits from both the chloroplast genome and the host's nuclear genome. As a result, protein synthesis must be coordinated between the chloroplast and the nucleus. The chloroplast is mostly under nuclear control, though chloroplasts can also give out signals regulating gene expression in the nucleus, called retrograde signaling. Recent research indicates that parts of the retrograde signaling network once considered characteristic for land plants emerged already in an algal progenitor, integrating into co-expressed cohorts of genes in the closest algal relatives of land plants.
Protein synthesis
Protein synthesis within chloroplasts relies on two RNA polymerases. One is coded by the chloroplast DNA, the other is of nuclear origin. The two RNA polymerases may recognize and bind to different kinds of promoters within the chloroplast genome. The ribosomes in chloroplasts are similar to bacterial ribosomes.
Protein targeting and import
Because so many chloroplast genes have been moved to the nucleus, many proteins that would originally have been translated in the chloroplast are now synthesized in the cytoplasm of the plant cell. These proteins must be directed back to the chloroplast, and imported through at least two chloroplast membranes.
Curiously, around half of the protein products of transferred genes are not even targeted back to the chloroplast. Many became exaptations, taking on new functions like participating in cell division, protein routing, and even disease resistance. A few chloroplast genes found new homes in the mitochondrial genome—most became nonfunctional pseudogenes, though a few tRNA genes still work in the mitochondrion. Some transferred chloroplast DNA protein products are directed to the secretory pathway; many secondary plastids are bounded by an outermost membrane derived from the host's cell membrane and are therefore topologically outside of the cell, because to reach the chloroplast from the cytosol, the cell membrane must be crossed, which signifies entrance into the extracellular space. In those cases, chloroplast-targeted proteins do initially travel along the secretory pathway.
Because the cell acquiring a chloroplast already had mitochondria (and peroxisomes, and a cell membrane for secretion), the new chloroplast host had to develop a unique protein targeting system to avoid having chloroplast proteins being sent to the wrong organelle.
In most, but not all cases, nuclear-encoded chloroplast proteins are translated with a cleavable transit peptide that is added to the N-terminus of the protein precursor. Sometimes the transit sequence is found on the C-terminus of the protein, or within the functional part of the protein.
Transport proteins and membrane translocons
After a chloroplast polypeptide is synthesized on a ribosome in the cytosol, an enzyme specific to chloroplast proteins phosphorylates (adds a phosphate group to) the transit sequences of many, but not all, of them.
Phosphorylation helps many proteins bind the polypeptide, keeping it from folding prematurely. This is important because it prevents chloroplast proteins from assuming their active form and carrying out their chloroplast functions in the wrong place—the cytosol. At the same time, they have to keep just enough shape so that they can be recognized by the chloroplast. These proteins also help the polypeptide get imported into the chloroplast.
From here, chloroplast proteins bound for the stroma must pass through two protein complexes—the TOC complex, or translocon on the outer chloroplast membrane, and the TIC complex, or translocon on the inner chloroplast membrane. Chloroplast polypeptide chains probably often travel through the two complexes at the same time, but the TIC complex can also retrieve preproteins lost in the intermembrane space.
Structure
In land plants, chloroplasts are generally lens-shaped, 3–10 μm in diameter and 1–3 μm thick. Corn seedling chloroplasts are about 20 μm³ in volume. Greater diversity in chloroplast shapes exists among the algae, which often contain a single chloroplast that can be shaped like a net (e.g., Oedogonium), a cup (e.g., Chlamydomonas), a ribbon-like spiral around the edges of the cell (e.g., Spirogyra), or slightly twisted bands at the cell edges (e.g., Sirogonium). Some algae have two chloroplasts in each cell; they are star-shaped in Zygnema, or may follow the shape of half the cell in the order Desmidiales. In some algae, the chloroplast takes up most of the cell, with pockets for the nucleus and other organelles; for example, some species of Chlorella have a cup-shaped chloroplast that occupies much of the cell.
All chloroplasts have at least three membrane systems—the outer chloroplast membrane, the inner chloroplast membrane, and the thylakoid system. The two innermost lipid-bilayer membranes that surround all chloroplasts correspond to the outer and inner membranes of the ancestral cyanobacterium's gram negative cell wall, and not the phagosomal membrane from the host, which was probably lost. Chloroplasts that are the product of secondary endosymbiosis may have additional membranes surrounding these three. Inside the outer and inner chloroplast membranes is the chloroplast stroma, a semi-gel-like fluid that makes up much of a chloroplast's volume, and in which the thylakoid system floats.
There are some common misconceptions about the outer and inner chloroplast membranes. The fact that chloroplasts are surrounded by a double membrane is often cited as evidence that they are the descendants of endosymbiotic cyanobacteria. This is often interpreted as meaning the outer chloroplast membrane is the product of the host's cell membrane infolding to form a vesicle to surround the ancestral cyanobacterium, which is not the case: both chloroplast membranes are homologous to the cyanobacterium's original double membrane.
The chloroplast double membrane is also often compared to the mitochondrial double membrane. This is not a valid comparison—the inner mitochondrial membrane is used to run proton pumps and carry out oxidative phosphorylation across it, generating ATP. The only chloroplast structure that can be considered analogous to it is the internal thylakoid system. Even so, in terms of "in-out", the direction of chloroplast H+ ion flow is in the opposite direction compared to oxidative phosphorylation in mitochondria. In addition, in terms of function, the inner chloroplast membrane, which regulates metabolite passage and synthesizes some materials, has no counterpart in the mitochondrion.
Outer chloroplast membrane
The outer chloroplast membrane is a semi-porous membrane that small molecules and ions can easily diffuse across. However, it is not permeable to larger proteins, so chloroplast polypeptides being synthesized in the cell cytoplasm must be transported across the outer chloroplast membrane by the TOC complex, or translocon on the outer chloroplast membrane.
The chloroplast membranes sometimes protrude out into the cytoplasm, forming a stromule, or stroma-containing tubule. Stromules are very rare in chloroplasts, and are much more common in other plastids like chromoplasts and amyloplasts in petals and roots, respectively. They may exist to increase the chloroplast's surface area for cross-membrane transport, because they are often branched and tangled with the endoplasmic reticulum. When they were first observed in 1962, some plant biologists dismissed the structures as artifactual, claiming that stromules were just oddly shaped chloroplasts with constricted regions or dividing chloroplasts. However, there is a growing body of evidence that stromules are functional, integral features of plant cell plastids, not merely artifacts.
Intermembrane space and peptidoglycan wall
Usually, a thin intermembrane space about 10–20 nanometers thick exists between the outer and inner chloroplast membranes.
Glaucophyte algal chloroplasts have a peptidoglycan layer between the chloroplast membranes. It corresponds to the peptidoglycan cell wall of their cyanobacterial ancestors, which is located between their two cell membranes. These chloroplasts are called muroplasts (from the Latin murus, meaning "wall"). Other chloroplasts were long assumed to have lost the cyanobacterial wall, leaving only an intermembrane space between the two chloroplast envelope membranes, but a peptidoglycan layer has since been found also in mosses, lycophytes, and ferns.Plant peptidoglycan precursor biosynthesis: Conservation between moss chloroplasts and Gram-negative bacteria
Inner chloroplast membrane
The inner chloroplast membrane borders the stroma and regulates passage of materials in and out of the chloroplast. After passing through the TOC complex in the outer chloroplast membrane, polypeptides must pass through the TIC complex (translocon on the inner chloroplast membrane) which is located in the inner chloroplast membrane.
In addition to regulating the passage of materials, the inner chloroplast membrane is where fatty acids, lipids, and carotenoids are synthesized.
Peripheral reticulum
Some chloroplasts contain a structure called the chloroplast peripheral reticulum. It is often found in the chloroplasts of C4 plants, though it has also been found in some C3 angiosperms, and even some gymnosperms. The chloroplast peripheral reticulum consists of a maze of membranous tubes and vesicles continuous with the inner chloroplast membrane that extends into the internal stromal fluid of the chloroplast. Its purpose is thought to be to increase the chloroplast's surface area for cross-membrane transport between its stroma and the cell cytoplasm. The small vesicles sometimes observed may serve as transport vesicles to shuttle material between the thylakoids and the intermembrane space.
Stroma
The protein-rich, alkaline, aqueous fluid within the inner chloroplast membrane and outside of the thylakoid space is called the stroma, which corresponds to the cytosol of the original cyanobacterium. Nucleoids of chloroplast DNA, chloroplast ribosomes, the thylakoid system with plastoglobuli, starch granules, and many proteins can be found floating around in it. The Calvin cycle, which fixes CO2 into G3P, takes place in the stroma.
Chloroplast ribosomes
Chloroplasts have their own ribosomes, which they use to synthesize a small fraction of their proteins. Chloroplast ribosomes are about two-thirds the size of cytoplasmic ribosomes (around 17 nm vs 25 nm). They take mRNAs transcribed from the chloroplast DNA and translate them into protein. While similar to bacterial ribosomes, chloroplast translation is more complex than in bacteria, so chloroplast ribosomes include some chloroplast-unique features.
Small subunit ribosomal RNAs in several Chlorophyta and euglenid chloroplasts lack motifs for Shine-Dalgarno sequence recognition, which is considered essential for translation initiation in most chloroplasts and prokaryotes. Such loss is also rarely observed in other plastids and prokaryotes. An additional 4.5S rRNA with homology to the 3' tail of 23S is found in "higher" plants.
Plastoglobuli
Plastoglobuli (singular plastoglobulus, sometimes spelled plastoglobule(s)), are spherical bubbles of lipids and proteins about 45–60 nanometers across. They are surrounded by a lipid monolayer. Plastoglobuli are found in all chloroplasts, but become more common when the chloroplast is under oxidative stress, or when it ages and transitions into a gerontoplast. Plastoglobuli also exhibit a greater size variation under these conditions. They are also common in etioplasts, but decrease in number as the etioplasts mature into chloroplasts.
Plastoglobuli contain both structural proteins and enzymes involved in lipid synthesis and metabolism. They contain many types of lipids including plastoquinone, vitamin E, carotenoids and chlorophylls.
Plastoglobuli were once thought to be free-floating in the stroma, but it is now thought that they are permanently attached either to a thylakoid or to another plastoglobulus attached to a thylakoid, a configuration that allows a plastoglobulus to exchange its contents with the thylakoid network. In normal green chloroplasts, the vast majority of plastoglobuli occur singly, attached directly to their parent thylakoid. In old or stressed chloroplasts, plastoglobuli tend to occur in linked groups or chains, still always anchored to a thylakoid.
Plastoglobuli form when a bubble appears between the layers of the lipid bilayer of the thylakoid membrane, or bud from existing plastoglobuli—though they never detach and float off into the stroma. Practically all plastoglobuli form on or near the highly curved edges of the thylakoid disks or sheets. They are also more common on stromal thylakoids than on granal ones.
Starch granules
Starch granules are very common in chloroplasts, typically taking up 15% of the organelle's volume, though in some other plastids like amyloplasts, they can be big enough to distort the shape of the organelle. Starch granules are simply accumulations of starch in the stroma, and are not bounded by a membrane.
Starch granules appear and grow throughout the day, as the chloroplast synthesizes sugars, and are consumed at night to fuel respiration and continue sugar export into the phloem, though in mature chloroplasts, it is rare for a starch granule to be completely consumed or for a new granule to accumulate.
Starch granules vary in composition and location across different chloroplast lineages. In red algae, starch granules are found in the cytoplasm rather than in the chloroplast. In C4 plants, mesophyll chloroplasts, which do not synthesize sugars, lack starch granules.
RuBisCO
The chloroplast stroma contains many proteins, though the most common and important is RuBisCO, which is probably also the most abundant protein on the planet. RuBisCO is the enzyme that fixes CO2 into sugar molecules. In C3 plants, RuBisCO is abundant in all chloroplasts, though in C4 plants, it is confined to the bundle sheath chloroplasts, where the Calvin cycle is carried out.
Pyrenoids
The chloroplasts of some hornworts and algae contain structures called pyrenoids. They are not found in higher plants. Pyrenoids are roughly spherical and highly refractive bodies which are a site of starch accumulation in plants that contain them. They consist of a matrix opaque to electrons, surrounded by two hemispherical starch plates. The starch is accumulated as the pyrenoids mature. In algae with carbon concentrating mechanisms, the enzyme RuBisCO is found in the pyrenoids. Starch can also accumulate around the pyrenoids when CO2 is scarce. Pyrenoids can divide to form new pyrenoids, or be produced "de novo".
Thylakoid system
Large-scale 3D model of the thylakoid system, generated from segmentation of tomographic reconstructions by STEM (grana = yellow; stroma lamellae = green; plastoglobules = purple; chloroplast envelope = blue).
Suspended within the chloroplast stroma is the thylakoid system, a highly dynamic collection of small interconnected membranous sacks called thylakoids, on whose membranes chlorophyll is found and the light reactions of photosynthesis take place. The word thylakoid comes from the Greek word thylakos, which means "sack".
In most vascular plant chloroplasts, the thylakoids are arranged in stacks called grana, though in certain C4 plant chloroplasts and some algal chloroplasts, the thylakoids are free floating.
Thylakoid structure
Using a light microscope, it is just barely possible to see tiny green granules—which were named grana. With electron microscopy, it became possible to see the thylakoid system in more detail, revealing it to consist of stacks of flat thylakoids which made up the grana, and long interconnecting stromal thylakoids which linked different grana.
In the transmission electron microscope, thylakoid membranes appear as alternating light-and-dark bands, 8.5 nanometers thick.
The three-dimensional structure of the thylakoid membrane system has been disputed. Many models have been proposed, the most prevalent being the helical model, in which granum stacks of thylakoids are wrapped by helical stromal thylakoids. Another model, known as the 'bifurcation model', which was based on the first electron tomography study of plant thylakoid membranes, depicts the stromal membranes as wide lamellar sheets perpendicular to the grana columns which bifurcate into multiple parallel discs forming the granum-stroma assembly. The helical model was supported by several additional works, but ultimately it was determined in 2019 that features from both the helical and bifurcation models are consolidated by newly discovered left-handed helical membrane junctions. For simplicity, the thylakoid system is still commonly depicted by older "hub and spoke" models in which the grana are connected to each other by tubes of stromal thylakoids.
Grana consist of stacks of flattened circular granal thylakoids that resemble pancakes. Each granum can contain anywhere from two to a hundred thylakoids, though grana with 10–20 thylakoids are most common. Wrapped around the grana are multiple parallel right-handed helical stromal thylakoids, also known as frets or lamellar thylakoids. The helices ascend at an angle of ~20°, connecting to each granal thylakoid at a bridge-like slit junction.
The stroma lamellae extend as large sheets perpendicular to the grana columns. These sheets are connected to the right-handed helices either directly or through bifurcations that form left-handed helical membrane surfaces. The left-handed helical surfaces have a similar tilt angle to the right-handed helices (~20°), but ¼ the pitch. Approximately 4 left-handed helical junctions are present per granum, resulting in a pitch-balanced array of right- and left-handed helical membrane surfaces of different radii and pitch that consolidate the network with minimal surface and bending energies. While different parts of the thylakoid system contain different membrane proteins, the thylakoid membranes are continuous and the thylakoid space they enclose forms a single continuous labyrinth.
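As an aside, the quoted combination of equal tilt angle and one-quarter pitch can be related through standard helix geometry (this relation is a general geometric fact, not a result reported in the cited studies). For a helical surface of radius $r$ and tilt angle $\theta$, the pitch is

\[ p = 2\pi r \tan\theta \]

so at the same tilt angle of roughly 20°, a left-handed junction with one quarter of the pitch must also have roughly one quarter of the radius, consistent with the description of right- and left-handed helical surfaces of different radii and pitch.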
Thylakoid composition
Embedded in the thylakoid membranes are important protein complexes which carry out the light reactions of photosynthesis. Photosystem II and photosystem I contain light-harvesting complexes with chlorophyll and carotenoids that absorb light energy and use it to energize electrons. Molecules in the thylakoid membrane use the energized electrons to pump hydrogen ions into the thylakoid space, decreasing the pH and turning it acidic. ATP synthase is a large protein complex that harnesses the concentration gradient of the hydrogen ions in the thylakoid space to generate ATP energy as the hydrogen ions flow back out into the stroma—much like a dam turbine.
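The "dam turbine" picture can be stated quantitatively with the standard chemiosmotic relation (a general textbook formula, not specific to this article), in which the proton motive force across the thylakoid membrane combines the electrical potential difference and the pH difference:

\[ \Delta p = \Delta\psi - \frac{2.303\,RT}{F}\,\Delta\mathrm{pH} \]

where $\Delta\psi$ is the membrane potential and $\Delta\mathrm{pH}$ the pH difference across the membrane (sign conventions differ between sources). In illuminated chloroplasts the gradient is carried largely by the pH term, since the thylakoid lumen becomes acidic, and ATP synthase converts the energy released as protons flow back into the stroma into ATP.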
There are two types of thylakoids—granal thylakoids, which are arranged in grana, and stromal thylakoids, which are in contact with the stroma. Granal thylakoids are pancake-shaped circular disks about 300–600 nanometers in diameter. Stromal thylakoids are helicoid sheets that spiral around grana. The flat tops and bottoms of granal thylakoids contain only the relatively flat photosystem II protein complex. This allows them to stack tightly, forming grana with many layers of tightly appressed membrane, called granal membrane, increasing stability and surface area for light capture.
In contrast, photosystem I and ATP synthase are large protein complexes which jut out into the stroma. They cannot fit in the appressed granal membranes, and so are found in the stromal thylakoid membrane—the edges of the granal thylakoid disks and the stromal thylakoids. These large protein complexes may act as spacers between the sheets of stromal thylakoids.
The number of thylakoids and the total thylakoid area of a chloroplast is influenced by light exposure. Shaded chloroplasts contain larger and more grana with more thylakoid membrane area than chloroplasts exposed to bright light, which have smaller and fewer grana and less thylakoid area. Thylakoid extent can change within minutes of light exposure or removal.
Pigments and chloroplast colors
Inside the photosystems embedded in chloroplast thylakoid membranes are various photosynthetic pigments, which absorb and transfer light energy. The types of pigments found are different in various groups of chloroplasts, and are responsible for a wide variety of chloroplast colorations. Other plastid types, such as the leucoplast and the chromoplast, contain little chlorophyll and do not carry out photosynthesis.
Paper chromatography of a spinach leaf extract shows the various pigments present in chloroplasts: xanthophylls, chlorophyll a, and chlorophyll b.
Chlorophylls
Chlorophyll a is found in all chloroplasts, as well as their cyanobacterial ancestors. Chlorophyll a is a blue-green pigment partially responsible for giving most cyanobacteria and chloroplasts their color. Other forms of chlorophyll exist, such as the accessory pigments chlorophyll b, chlorophyll c, chlorophyll d, and chlorophyll f.
Chlorophyll b is an olive green pigment found only in the chloroplasts of plants, green algae, any secondary chloroplasts obtained through the secondary endosymbiosis of a green alga, and a few cyanobacteria. It is the chlorophylls a and b together that make most plant and green algal chloroplasts green.
Chlorophyll c is mainly found in secondary endosymbiotic chloroplasts that originated from a red alga, although it is not found in chloroplasts of red algae themselves. Chlorophyll c is also found in some green algae and cyanobacteria.
Chlorophylls d and f are pigments found only in some cyanobacteria.
Carotenoids
In addition to chlorophylls, another group of yellow–orange pigments called carotenoids are also found in the photosystems. There are about thirty photosynthetic carotenoids. They help transfer and dissipate excess energy, and their bright colors sometimes override the chlorophyll green, like during the fall, when the leaves of some land plants change color. β-carotene is a bright red-orange carotenoid found in nearly all chloroplasts, like chlorophyll a. Xanthophylls, especially the orange-red zeaxanthin, are also common. Many other forms of carotenoids exist that are only found in certain groups of chloroplasts.
Phycobilins
Phycobilins are a third group of pigments found in cyanobacteria and in glaucophyte, red algal, and cryptophyte chloroplasts. Phycobilins come in all colors, though phycoerythrin is one of the pigments that makes many red algae red. Phycobilins often organize into relatively large protein complexes about 40 nanometers across called phycobilisomes. Like photosystem I and ATP synthase, phycobilisomes jut into the stroma, preventing thylakoid stacking in red algal chloroplasts. Cryptophyte chloroplasts and some cyanobacteria do not have their phycobilin pigments organized into phycobilisomes, and keep them in their thylakoid space instead.
Photosynthetic pigments: the presence of chlorophyll a, chlorophyll b, chlorophyll c, chlorophylls d and f, xanthophylls, α-carotene, β-carotene, and phycobilins varies among land plants, green algae, euglenophytes and chlorarachniophytes, multicellular red algae, unicellular red algae, haptophytes and dinophytes, cryptophytes, glaucophytes, and cyanobacteria, as described above.
Specialized chloroplasts in C4 plants
To fix carbon dioxide into sugar molecules in the process of photosynthesis, chloroplasts use an enzyme called RuBisCO. RuBisCO has trouble distinguishing between carbon dioxide and oxygen, so at high oxygen concentrations, RuBisCO starts accidentally adding oxygen to sugar precursors. This has the result of ATP energy being wasted and CO2 being released, all with no sugar being produced. This is a significant problem, since O2 is produced by the initial light reactions of photosynthesis, causing issues down the line in the Calvin cycle, which uses RuBisCO.
C4 plants evolved a way to solve this by spatially separating the light reactions and the Calvin cycle. The light reactions, which store light energy in ATP and NADPH, are done in the mesophyll cells of a leaf. The Calvin cycle, which uses the stored energy to make sugar using RuBisCO, is done in the bundle sheath cells, a layer of cells surrounding a vein in a leaf.
As a result, chloroplasts in mesophyll cells and bundle sheath cells are specialized for each stage of photosynthesis. In mesophyll cells, chloroplasts are specialized for the light reactions, so they lack RuBisCO and have normal grana and thylakoids, which they use to make ATP and NADPH, as well as oxygen. They store CO2 in a four-carbon compound, which is why the process is called C4 photosynthesis. The four-carbon compound is then transported to the bundle sheath chloroplasts, where it drops off CO2 and returns to the mesophyll. Bundle sheath chloroplasts do not carry out the light reactions, preventing oxygen from building up in them and disrupting RuBisCO activity. Because of this, they lack thylakoids organized into grana stacks—though bundle sheath chloroplasts still have free-floating thylakoids in the stroma where they carry out cyclic electron flow, a light-driven method of synthesizing ATP to power the Calvin cycle without generating oxygen. They lack photosystem II and have only photosystem I, the only protein complex needed for cyclic electron flow. Because the job of bundle sheath chloroplasts is to carry out the Calvin cycle and make sugar, they often contain large starch grains.
Both types of chloroplast contain large amounts of chloroplast peripheral reticulum, which gives them more surface area for transporting material in and out. Mesophyll chloroplasts have somewhat more peripheral reticulum than bundle sheath chloroplasts.
Function and chemistry
Guard cell chloroplasts
Unlike most epidermal cells, the guard cells of plant stomata contain relatively well-developed chloroplasts.Lawson T. and J. I. L. Morison. Essay 10.1 Guard Cell Photosynthesis. Plant Physiology and Development, Sixth Edition However, exactly what they do is controversial.
Plant innate immunity
Plants lack specialized immune cells—all plant cells participate in the plant immune response. Chloroplasts, along with the nucleus, cell membrane, and endoplasmic reticulum, are key players in pathogen defense. Because of the chloroplast's role in a plant cell's immune response, pathogens frequently target it.
Plants have two main immune responses—the hypersensitive response, in which infected cells seal themselves off and undergo programmed cell death, and systemic acquired resistance, where infected cells release signals warning the rest of the plant of a pathogen's presence.
Chloroplasts stimulate both responses by purposely damaging their photosynthetic system, producing reactive oxygen species. High levels of reactive oxygen species will cause the hypersensitive response. The reactive oxygen species also directly kill any pathogens within the cell. Lower levels of reactive oxygen species initiate systemic acquired resistance, triggering defense-molecule production in the rest of the plant.
In some plants, chloroplasts are known to move closer to the infection site and the nucleus during an infection.
Chloroplasts can serve as cellular sensors. After detecting stress in a cell, which might be due to a pathogen, chloroplasts begin producing molecules like salicylic acid, jasmonic acid, nitric oxide and reactive oxygen species which can serve as defense-signals. As cellular signals, reactive oxygen species are unstable molecules, so they probably don't leave the chloroplast, but instead pass on their signal to an unknown second messenger molecule. All these molecules initiate retrograde signaling—signals from the chloroplast that regulate gene expression in the nucleus.
In addition to defense signaling, chloroplasts, with the help of the peroxisomes, help synthesize an important defense molecule, jasmonate. Chloroplasts synthesize all the fatty acids in a plant cell, and one of them, linolenic acid, is the precursor to jasmonate.
Photosynthesis
One of the main functions of the chloroplast is its role in photosynthesis, the process by which light is transformed into chemical energy, to subsequently produce food in the form of sugars. Water (H2O) and carbon dioxide (CO2) are used in photosynthesis, and sugar and oxygen (O2) are made, using light energy. Photosynthesis is divided into two stages—the light reactions, where water is split to produce oxygen, and the dark reactions, or Calvin cycle, which builds sugar molecules from carbon dioxide. The two phases are linked by the energy carriers adenosine triphosphate (ATP) and nicotinamide adenine dinucleotide phosphate (NADP+).
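For reference, the familiar net summary of the two stages described above, in its standard textbook form (added here for orientation, not quoted from this article), is:

6 CO2 + 6 H2O + light energy → C6H12O6 + 6 O2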
Light reactions
The light reactions take place on the thylakoid membranes. They take light energy and store it in NADPH, a form of NADP+, and ATP to fuel the dark reactions.
Energy carriers
ATP is the phosphorylated version of adenosine diphosphate (ADP), which stores energy in a cell and powers most cellular activities. ATP is the energized form, while ADP is the (partially) depleted form. NADP+ is an electron carrier which ferries high energy electrons. In the light reactions, it gets reduced, meaning it picks up electrons, becoming NADPH.
Photophosphorylation
Like mitochondria, chloroplasts use the potential energy stored in an H+, or hydrogen ion, gradient to generate ATP energy. The two photosystems capture light energy to energize electrons taken from water, and release them down an electron transport chain. The molecules between the photosystems harness the electrons' energy to pump hydrogen ions into the thylakoid space, creating a concentration gradient, with more hydrogen ions (up to a thousand times as many) inside the thylakoid system than in the stroma. The hydrogen ions in the thylakoid space then diffuse back down their concentration gradient, flowing back out into the stroma through ATP synthase. ATP synthase uses the energy from the flowing hydrogen ions to phosphorylate adenosine diphosphate into adenosine triphosphate, or ATP. Because chloroplast ATP synthase projects out into the stroma, the ATP is synthesized there, in position to be used in the dark reactions.
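As a rough, back-of-the-envelope reading of the figure above (my arithmetic, not from the article), the energy released per mole of protons moving down a purely chemical gradient is RT ln(ratio), so a thousand-fold difference corresponds to roughly 17 kJ per mole of H+ at 25 °C. The temperature and the comparison value for ATP are assumptions:

```python
import math

R = 8.314      # gas constant, J/(mol*K)
T = 298.15     # assumed temperature: 25 degrees C

def energy_per_mol_h(fold_difference):
    """Free energy (J/mol) released by H+ moving down a concentration
    gradient of the given fold difference; ignores the electrical
    component of the proton-motive force."""
    return R * T * math.log(fold_difference)

print(energy_per_mol_h(1000) / 1000)  # ~17.1 kJ/mol for the 1000-fold gradient quoted above

# ATP synthesis costs roughly 30-50 kJ/mol under cellular conditions
# (assumed textbook range), so ATP synthase must pass several protons
# per ATP it makes.
```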
NADP+ reduction
Electrons are often removed from the electron transport chains to charge NADP+ with electrons, reducing it to NADPH. Like ATP synthase, ferredoxin-NADP+ reductase, the enzyme that reduces NADP+, releases the NADPH it makes into the stroma, right where it is needed for the dark reactions.
Because NADP+ reduction removes electrons from the electron transport chains, they must be replaced—the job of photosystem II, which splits water molecules (H2O) to obtain the electrons from its hydrogen atoms.
Cyclic photophosphorylation
While photosystem II photolyzes water to obtain and energize new electrons, photosystem I simply reenergizes depleted electrons at the end of an electron transport chain. Normally, the reenergized electrons are taken by NADP+, though sometimes they can flow back down more H+-pumping electron transport chains to transport more hydrogen ions into the thylakoid space to generate more ATP. This is termed cyclic photophosphorylation because the electrons are recycled. Cyclic photophosphorylation is common in C4 plants, which need more ATP than NADPH.
Dark reactions
The Calvin cycle, also known as the dark reactions, is a series of biochemical reactions that fixes CO2 into G3P sugar molecules and uses the energy and electrons from the ATP and NADPH made in the light reactions. The Calvin cycle takes place in the stroma of the chloroplast.
Though they are called "the dark reactions", in most plants they take place in the light, since the dark reactions depend on the products of the light reactions.
Carbon fixation and G3P synthesis
The Calvin cycle starts by using the enzyme RuBisCO to fix CO2 into five-carbon Ribulose bisphosphate (RuBP) molecules. The result is unstable six-carbon molecules that immediately break down into three-carbon molecules called 3-phosphoglyceric acid, or 3-PGA.
The ATP and NADPH made in the light reactions is used to convert the 3-PGA into glyceraldehyde-3-phosphate, or G3P sugar molecules. Most of the G3P molecules are recycled back into RuBP using energy from more ATP, but one out of every six produced leaves the cycle—the end product of the dark reactions.
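The paragraph above implies a fixed bookkeeping between CO2 fixed, energy carriers consumed, and G3P exported. A minimal sketch using the standard textbook stoichiometry (the 9 ATP / 6 NADPH figures per exported G3P are textbook values, not stated explicitly in this article):

```python
def calvin_cycle_budget(g3p_exported):
    """Standard Calvin-cycle bookkeeping per exported G3P: 3 CO2 fixed,
    9 ATP and 6 NADPH consumed, and 5 of every 6 G3P made are recycled
    back into RuBP (assumed textbook values)."""
    return {
        "CO2 fixed":    3 * g3p_exported,
        "ATP used":     9 * g3p_exported,   # 6 reducing 3-PGA + 3 regenerating RuBP
        "NADPH used":   6 * g3p_exported,   # all in the reduction of 3-PGA to G3P
        "G3P made":     6 * g3p_exported,   # only one in six leaves the cycle
        "G3P recycled": 5 * g3p_exported,
    }

# Two exported G3P molecules are enough for one glucose equivalent:
print(calvin_cycle_budget(2))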
Sugars and starches
Glyceraldehyde-3-phosphate can double up to form larger sugar molecules like glucose and fructose. These molecules are processed, and from them, the still larger sucrose, a disaccharide commonly known as table sugar, is made, though this process takes place outside of the chloroplast, in the cytoplasm.
Alternatively, glucose monomers in the chloroplast can be linked together to make starch, which accumulates into the starch grains found in the chloroplast.
Under conditions such as high atmospheric CO2 concentrations, these starch grains may grow very large, distorting the grana and thylakoids. The starch granules displace the thylakoids, but leave them intact.
Waterlogged roots can also cause starch buildup in the chloroplasts, possibly due to less sucrose being exported out of the chloroplast (or more accurately, the plant cell). This depletes a plant's free phosphate supply, which indirectly stimulates chloroplast starch synthesis.
While linked to low photosynthesis rates, the starch grains themselves may not necessarily interfere significantly with the efficiency of photosynthesis, and might simply be a side effect of another photosynthesis-depressing factor.
Photorespiration
Photorespiration can occur when the oxygen concentration is too high. RuBisCO cannot distinguish between oxygen and carbon dioxide very well, so it can accidentally add O2 instead of CO2 to RuBP. This process reduces the efficiency of photosynthesis—it consumes ATP and oxygen, releases CO2, and produces no sugar. It can waste up to half the carbon fixed by the Calvin cycle. Several mechanisms have evolved in different lineages that raise the carbon dioxide concentration relative to oxygen within the chloroplast, increasing the efficiency of photosynthesis. These mechanisms are called carbon dioxide concentrating mechanisms, or CCMs. They include crassulacean acid metabolism, C4 carbon fixation, and pyrenoids. Chloroplasts in C4 plants are notable in that they exhibit a distinct chloroplast dimorphism.
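A small sketch of why raising CO2 relative to O2 pays off, using the usual rate relation (carboxylations per oxygenation = specificity factor × [CO2]/[O2]); the specificity factor and gas concentrations below are illustrative assumptions, not values from this article:

```python
def oxygenation_fraction(specificity, co2_um, o2_um):
    """Fraction of RuBisCO catalytic events that are (wasteful) oxygenations,
    from the carboxylation/oxygenation rate ratio = specificity * [CO2]/[O2]."""
    ratio = specificity * (co2_um / o2_um)
    return 1.0 / (1.0 + ratio)

# Illustrative values for a C3 chloroplast at 25 degrees C (assumed):
print(oxygenation_fraction(specificity=90, co2_um=8, o2_um=250))   # ~0.26

# Raising CO2 ten-fold, as CO2-concentrating mechanisms effectively do,
# almost abolishes oxygenation:
print(oxygenation_fraction(specificity=90, co2_um=80, o2_um=250))  # ~0.03
```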
pH
Because of the H+ gradient across the thylakoid membrane, the interior of the thylakoid is acidic, with a pH around 4, while the stroma is slightly basic, with a pH of around 8.
The optimal stroma pH for the Calvin cycle is 8.1, with the reaction nearly stopping when the pH falls below 7.3.
CO2 in water can form carbonic acid, which can disturb the pH of isolated chloroplasts, interfering with photosynthesis, even though CO2 is used in photosynthesis. However, chloroplasts in living plant cells are not affected by this as much.
Chloroplasts can pump K+ and H+ ions in and out of themselves using a poorly understood light-driven transport system.
In the presence of light, the pH of the thylakoid lumen can drop up to 1.5 pH units, while the pH of the stroma can rise by nearly one pH unit.
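Because pH is a logarithmic scale, the values quoted above translate into large concentration differences; the numbers are taken from the text, the conversion is mine:

```python
def h_concentration(pH):
    """Hydrogen-ion concentration in mol/L for a given pH."""
    return 10.0 ** (-pH)

lumen, stroma = h_concentration(4), h_concentration(8)
print(lumen / stroma)   # the lumen holds 10,000 times more H+ than the stroma

# The light-driven shifts mentioned above, expressed as fold changes in [H+]:
print(10.0 ** 1.5)      # lumen acidifies by up to ~32-fold
print(10.0 ** 1.0)      # stroma alkalinizes by ~10-fold
```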
Amino acid synthesis
Chloroplasts alone make almost all of a plant cell's amino acids in their stroma except the sulfur-containing ones like cysteine and methionine. Cysteine is made in the chloroplast (the proplastid too) but it is also synthesized in the cytosol and mitochondria, probably because it has trouble crossing membranes to get to where it is needed. The chloroplast is known to make the precursors to methionine but it is unclear whether the organelle carries out the last leg of the pathway or if it happens in the cytosol.
Other nitrogen compounds
Chloroplasts make all of a cell's purines and pyrimidines—the nitrogenous bases found in DNA and RNA. They also convert nitrite (NO2−) into ammonia (NH3) which supplies the plant with nitrogen to make its amino acids and nucleotides.
Other chemical products
The plastid is the site of diverse and complex lipid synthesis in plants.Buchanan BB, Gruissem W, Jones RL (Eds.). 2015. Biochemistry & Molecular Biology of Plants. Wiley Blackwell. The carbon used to form the majority of the lipid is from acetyl-CoA, which is the decarboxylation product of pyruvate. Pyruvate may enter the plastid from the cytosol by passive diffusion through the membrane after production in glycolysis. Pyruvate is also made in the plastid from phosphoenolpyruvate, a metabolite made in the cytosol from pyruvate or 3-PGA. Acetate in the cytosol is unavailable for lipid biosynthesis in the plastid.Bao X, Focke M, Pollard M, Ohlrogge J. 2000. Understanding in vivo carbon precursor supply for fatty acid synthesis in leaf tissue. Plant Journal 22, 39–50. The typical length of fatty acids produced in the plastid is 16 or 18 carbons, with 0–3 cis double bonds.
The biosynthesis of fatty acids from acetyl-CoA primarily requires two enzymes. Acetyl-CoA carboxylase creates malonyl-CoA, used in both the first step and the extension steps of synthesis. Fatty acid synthase (FAS) is a large complex of enzymes and cofactors including acyl carrier protein (ACP), which holds the acyl chain as it is synthesized. Synthesis begins with the condensation of malonyl-ACP with acetyl-CoA to produce ketobutyryl-ACP. Two reductions involving the use of NADPH and one dehydration create butyryl-ACP. Extension of the fatty acid then comes from repeated cycles of malonyl-ACP condensation, reduction, and dehydration.
Other lipids are derived from the methyl-erythritol phosphate (MEP) pathway and include gibberellins, sterols, abscisic acid, phytol, and innumerable secondary metabolites.
Location
Distribution in a plant
Not all cells in a multicellular plant contain chloroplasts. All green parts of a plant contain chloroplasts, since the green color comes from chlorophyll. The plant cells which contain chloroplasts are usually parenchyma cells, though chloroplasts can also be found in collenchyma tissue. A plant cell which contains chloroplasts is known as a chlorenchyma cell. A typical chlorenchyma cell of a land plant contains about 10 to 100 chloroplasts.
In some plants such as cacti, chloroplasts are found in the stems, though in most plants, chloroplasts are concentrated in the leaves. One square millimeter of leaf tissue can contain half a million chloroplasts. Within a leaf, chloroplasts are mainly found in the mesophyll layers and in the guard cells of stomata. Palisade mesophyll cells can contain 30–70 chloroplasts per cell, while stomatal guard cells contain only around 8–15 per cell, as well as much less chlorophyll. Chloroplasts can also be found in the bundle sheath cells of a leaf, especially in C4 plants, which carry out the Calvin cycle in their bundle sheath cells. They are often absent from the epidermis of a leaf.
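A quick consistency check on the counts quoted above (my arithmetic, illustrative only): dividing the half-million chloroplasts under a square millimetre of leaf by the per-cell counts gives the rough number of chlorenchyma cells stacked beneath that square millimetre.

```python
chloroplasts_per_mm2 = 500_000           # figure from the text
for per_cell in (30, 50, 70):            # palisade-cell counts from the text
    cells = chloroplasts_per_mm2 // per_cell
    print(f"{per_cell} chloroplasts/cell -> ~{cells:,} cells per mm^2 of leaf")
# i.e. on the order of 10^4 photosynthetic cells under each square millimetre
```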
Cellular location
Chloroplast movement
The chloroplasts of plant and algal cells can orient themselves to best suit the available light. In low-light conditions, they will spread out in a sheet—maximizing the surface area to absorb light. Under intense light, they will seek shelter by aligning in vertical columns along the plant cell's cell wall or turning sideways so that light strikes them edge-on. This reduces exposure and protects them from photooxidative damage. This ability to distribute chloroplasts so that they can take shelter behind each other or spread out may be the reason why land plants evolved to have many small chloroplasts instead of a few big ones.
Chloroplast movement is considered one of the most closely regulated stimulus-response systems that can be found in plants. Mitochondria have also been observed to follow chloroplasts as they move.
In higher plants, chloroplast movement is run by phototropins, blue light photoreceptors also responsible for plant phototropism. In some algae, mosses, ferns, and flowering plants, chloroplast movement is influenced by red light in addition to blue light, though very long red wavelengths inhibit movement rather than speeding it up. Blue light generally causes chloroplasts to seek shelter, while red light draws them out to maximize light absorption.
Studies of Vallisneria gigantea, an aquatic flowering plant, have shown that chloroplasts can begin moving within five minutes of light exposure, though they don't initially show any net directionality. They may move along microfilament tracks, and the fact that the microfilament mesh changes shape to form a honeycomb structure surrounding the chloroplasts after they have moved suggests that microfilaments may help to anchor chloroplasts in place.
Differentiation, replication, and inheritance
Chloroplasts are a special type of plant cell organelle called a plastid, though the two terms are sometimes used interchangeably. There are many other types of plastids, which carry out various functions. All chloroplasts in a plant are descended from undifferentiated proplastids found in the zygote, or fertilized egg. Proplastids are commonly found in an adult plant's apical meristems. Chloroplasts do not normally develop from proplastids in root tip meristems—instead, the formation of starch-storing amyloplasts is more common.
In shoots, proplastids from shoot apical meristems can gradually develop into chloroplasts in photosynthetic leaf tissues as the leaf matures, if exposed to the required light. This process involves invaginations of the inner plastid membrane, forming sheets of membrane that project into the internal stroma. These membrane sheets then fold to form thylakoids and grana.
If angiosperm shoots are not exposed to the required light for chloroplast formation, proplastids may develop into an etioplast stage before becoming chloroplasts. An etioplast is a plastid that lacks chlorophyll, and has inner membrane invaginations that form a lattice of tubes in their stroma, called a prolamellar body. While etioplasts lack chlorophyll, they have a yellow chlorophyll precursor stocked. Within a few minutes of light exposure, the prolamellar body begins to reorganize into stacks of thylakoids, and chlorophyll starts to be produced. This process, where the etioplast becomes a chloroplast, takes several hours. Gymnosperms do not require light to form chloroplasts.
Light, however, does not guarantee that a proplastid will develop into a chloroplast. Whether a proplastid develops into a chloroplast or some other kind of plastid is mostly controlled by the nucleus and is largely influenced by the kind of cell it resides in.
Plastid interconversion
Plastid differentiation is not permanent; in fact, many interconversions are possible. Chloroplasts may be converted to chromoplasts, which are pigment-filled plastids responsible for the bright colors seen in flowers and ripe fruit. Starch-storing amyloplasts can also be converted to chromoplasts, and it is possible for proplastids to develop straight into chromoplasts. Chromoplasts and amyloplasts can also become chloroplasts, as happens when a carrot or a potato is illuminated. If a plant is injured, or something else causes a plant cell to revert to a meristematic state, chloroplasts and other plastids can turn back into proplastids. The chloroplast, amyloplast, chromoplast, and proplastid states are not absolute; intermediate forms are common.
Division
Most chloroplasts in a photosynthetic cell do not develop directly from proplastids or etioplasts. In fact, a typical shoot meristematic plant cell contains only 7–20 proplastids. These proplastids differentiate into chloroplasts, which divide to create the 30–70 chloroplasts found in a mature photosynthetic plant cell. If the cell divides, chloroplast division provides the additional chloroplasts to partition between the two daughter cells.
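Read as doublings, the counts above imply only a few rounds of chloroplast division per cell (my arithmetic; the article gives only the start and end counts):

```python
import math

for proplastids, chloroplasts in [(7, 30), (20, 70), (7, 70)]:
    doublings = math.log2(chloroplasts / proplastids)
    print(f"{proplastids} proplastids -> {chloroplasts} chloroplasts: ~{doublings:.1f} doublings")
# roughly two to three rounds of division suffice
```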
In single-celled algae, chloroplast division is the only way new chloroplasts are formed. There is no proplastid differentiation—when an algal cell divides, its chloroplast divides along with it, and each daughter cell receives a mature chloroplast.
In a cell, almost all chloroplasts divide, rather than division being confined to a small, rapidly dividing subset. Chloroplasts have no definite S-phase—their DNA replication is not synchronized with or limited to that of their host cells.
Much of what we know about chloroplast division comes from studying organisms like Arabidopsis and the red alga Cyanidioschyzon merolæ.
The division process starts when the proteins FtsZ1 and FtsZ2 assemble into filaments, and with the help of a protein ARC6, form a structure called a Z-ring within the chloroplast's stroma. The Min system manages the placement of the Z-ring, ensuring that the chloroplast is cleaved more or less evenly. The protein MinD prevents FtsZ from linking up and forming filaments. Another protein ARC3 may also be involved, but it is not very well understood. These proteins are active at the poles of the chloroplast, preventing Z-ring formation there, but near the center of the chloroplast, MinE inhibits them, allowing the Z-ring to form.
Next, the two plastid-dividing rings, or PD rings, form. The inner plastid-dividing ring is located on the inner side of the chloroplast's inner membrane, and is formed first. The outer plastid-dividing ring is found wrapped around the outer chloroplast membrane. It consists of filaments about 5 nanometers across, arranged in rows 6.4 nanometers apart, and shrinks to squeeze the chloroplast. This is when chloroplast constriction begins. In a few species like Cyanidioschyzon merolæ, chloroplasts have a third plastid-dividing ring located in the chloroplast's intermembrane space.
Late into the constriction phase, dynamin proteins assemble around the outer plastid-dividing ring, helping provide force to squeeze the chloroplast. Meanwhile, the Z-ring and the inner plastid-dividing ring break down. During this stage, the many chloroplast DNA plasmids floating around in the stroma are partitioned and distributed to the two forming daughter chloroplasts.
Later, the dynamins migrate under the outer plastid-dividing ring, into direct contact with the chloroplast's outer membrane, and cleave the chloroplast into two daughter chloroplasts.
A remnant of the outer plastid dividing ring remains floating between the two daughter chloroplasts, and a remnant of the dynamin ring remains attached to one of the daughter chloroplasts.
Of the five or six rings involved in chloroplast division, only the outer plastid-dividing ring is present for the entire constriction and division phase—while the Z-ring forms first, constriction does not begin until the outer plastid-dividing ring forms.
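To make the ordering claim above explicit, here is a compact restatement of the sequence as data (the structure names come from the text; the grouping into four phases is my simplification):

```python
# Which division machinery is present at each phase (simplified from the text).
phases = [
    ("Z-ring assembly",      {"Z-ring"}),
    ("constriction begins",  {"Z-ring", "inner PD ring", "outer PD ring"}),
    ("late constriction",    {"outer PD ring", "dynamin ring"}),  # Z-ring and inner PD ring break down
    ("final cleavage",       {"outer PD ring", "dynamin ring"}),
]

# The only structure present from the start of constriction through cleavage:
print(set.intersection(*(machinery for _, machinery in phases[1:])))  # {'outer PD ring'}
```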
Regulation
In species of algae that contain a single chloroplast, regulation of chloroplast division is extremely important to ensure that each daughter cell receives a chloroplast—chloroplasts can't be made from scratch. In organisms like plants, whose cells contain multiple chloroplasts, coordination is looser and less important. It is likely that chloroplast and cell division are somewhat synchronized, though the mechanisms for it are mostly unknown.
Light has been shown to be a requirement for chloroplast division. Chloroplasts can grow and progress through some of the constriction stages under poor quality green light, but are slow to complete division—they require exposure to bright white light to complete division. Spinach leaves grown under green light have been observed to contain many large dumbbell-shaped chloroplasts. Exposure to white light can stimulate these chloroplasts to divide and reduce the population of dumbbell-shaped chloroplasts.
Chloroplast inheritance
Like mitochondria, chloroplasts are usually inherited from a single parent. Biparental chloroplast inheritance—where plastid genes are inherited from both parent plants—occurs in very low levels in some flowering plants.
Many mechanisms prevent biparental chloroplast DNA inheritance, including selective destruction of chloroplasts or their genes within the gamete or zygote, and chloroplasts from one parent being excluded from the embryo. Parental chloroplasts can be sorted so that only one type is present in each offspring.
Gymnosperms, such as pine trees, mostly pass on chloroplasts paternally, while flowering plants often inherit chloroplasts maternally. Flowering plants were once thought to only inherit chloroplasts maternally. However, there are now many documented cases of angiosperms inheriting chloroplasts paternally.
Angiosperms, which pass on chloroplasts maternally, have many ways to prevent paternal inheritance. Most of them produce sperm cells that do not contain any plastids. There are many other documented mechanisms that prevent paternal inheritance in these flowering plants, such as different rates of chloroplast replication within the embryo.
Among angiosperms, paternal chloroplast inheritance is observed more often in hybrids than in offspring from parents of the same species. This suggests that incompatible hybrid genes might interfere with the mechanisms that prevent paternal inheritance.
Transplastomic plants
Recently, chloroplasts have caught the attention of developers of genetically modified crops. Since, in most flowering plants, chloroplasts are not inherited from the male parent, transgenes in these plastids cannot be disseminated by pollen. This makes plastid transformation a valuable tool for the creation and cultivation of genetically modified plants that are biologically contained, and thus pose significantly lower environmental risks. This biological containment strategy is therefore suitable for establishing the coexistence of conventional and organic agriculture. While the reliability of this mechanism has not yet been studied for all relevant crop species, recent results in tobacco plants are promising, showing a failed containment rate of transplastomic plants of 3 in 1,000,000.
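For scale (my arithmetic; the planting sizes are hypothetical), the 3-in-1,000,000 figure translates into the following expected numbers of containment failures:

```python
failure_rate = 3 / 1_000_000   # failed-containment rate reported for transplastomic tobacco

for plants in (10_000, 1_000_000, 100_000_000):
    print(f"{plants:>11,} plants -> ~{failure_rate * plants:g} expected escape events")
```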
Footnotes
References
External links
Chloroplast – Cell Centered Database
Co-Extra research on chloroplast transformation
NCBI full chloroplast genome
Category:Photosynthesis
Category:Plastids
Category:Endosymbiotic events
|
biology
| 12,565
|
7012
|
Chagas disease
|
https://en.wikipedia.org/wiki/Chagas_disease
|
Chagas disease, also known as American trypanosomiasis, is a tropical parasitic disease caused by Trypanosoma cruzi. It is spread mostly by insects in the subfamily Triatominae, known as "kissing bugs". The symptoms change throughout the infection. In the early stage, symptoms are typically either not present or mild and may include fever, swollen lymph nodes, headaches, or swelling at the site of the bite. After four to eight weeks, untreated individuals enter the chronic phase of disease, which in most cases does not result in further symptoms. Up to 45% of people with chronic infections develop heart disease 10–30 years after the initial illness, which can lead to heart failure. Digestive complications, including an enlarged esophagus or an enlarged colon, may also occur in up to 21% of people, and up to 10% of people may experience nerve damage.
T. cruzi is commonly spread to humans and other mammals by the kissing bug's bite wound and the bug's infected feces. The disease may also be spread through blood transfusion, organ transplantation, consuming food or drink contaminated with the parasites, and vertical transmission (from a mother to her baby). Diagnosis of early disease is by finding the parasite in the blood using a microscope or detecting its DNA by polymerase chain reaction. Chronic disease is diagnosed by finding antibodies against T. cruzi in the blood.
Prevention focuses on eliminating kissing bugs and avoiding their bites. This may involve the use of insecticides or bed-nets. Other preventive efforts include screening blood used for transfusions. Early infections are treatable with the medications benznidazole or nifurtimox, which usually cure the disease if given shortly after the person is infected, but become less effective the longer a person has had Chagas disease. When used in chronic disease, medication may delay or prevent the development of end-stage symptoms. Benznidazole and nifurtimox often cause side effects, including skin disorders, digestive system irritation, and neurological symptoms, which can result in treatment being discontinued. New drugs for Chagas disease are under development, and while experimental vaccines have been studied in animal models, a human vaccine has not been developed.
It is estimated that 6.5 million people, mostly in Mexico, Central America and South America, have Chagas disease as of 2019, resulting in approximately 9,490 annual deaths. Most people with the disease are poor, and most do not realize they are infected. Large-scale population migrations have carried Chagas disease to new regions, which include the United States and many European countries. The disease affects more than 150 types of animals. From 2000–2018, 29 confirmed locally-acquired cases of Chagas disease were reported in eight US states, leading to calls to reclassify Chagas as endemic to the US.
The disease was first described in 1909 by Brazilian physician Carlos Chagas, after whom it is named. Chagas disease is classified as a neglected tropical disease.
Signs and symptoms
Chagas disease occurs in two stages: an acute stage, which develops one to two weeks after the insect bite, and a chronic stage, which develops over many years. The acute stage is often symptom-free. When present, the symptoms are typically minor and not specific to any particular disease. Signs and symptoms include fever, malaise, headache, and enlargement of the liver, spleen, and lymph nodes. Sometimes, people develop a swollen nodule at the site of infection, which is called "Romaña's sign" if it is on the eyelid, or a "chagoma" if it is elsewhere on the skin. In rare cases (less than 1–5%), infected individuals develop severe acute disease, which can involve inflammation of the heart muscle, fluid accumulation around the heart, and inflammation of the brain and surrounding tissues, and may be life-threatening. The acute phase typically lasts four to eight weeks and resolves without treatment.
Unless treated with antiparasitic drugs, individuals remain infected with T. cruzi after recovering from the acute phase. Most chronic infections are asymptomatic, which is referred to as indeterminate chronic Chagas disease. However, over decades with the disease, approximately 30–40% of people develop organ dysfunction (determinate chronic Chagas disease), which most often affects the heart or digestive system.
The most common long-term manifestation is heart disease, which occurs in 14–45% of people with chronic Chagas disease. People with Chagas heart disease often experience heart palpitations, and sometimes fainting, due to irregular heart function. By electrocardiogram, people with Chagas heart disease most frequently have arrhythmias. As the disease progresses, the heart's ventricles become enlarged (dilated cardiomyopathy), which reduces its ability to pump blood. In many cases, the first sign of Chagas heart disease is heart failure, thromboembolism, or chest pain associated with abnormalities in the microvasculature.
Also common in chronic Chagas disease is damage to the digestive system, which affects 10–21% of people. Enlargement of the esophagus or colon are the most common digestive issues. Those with enlarged esophagus often experience pain (odynophagia) or trouble swallowing (dysphagia), acid reflux, cough, and weight loss. Individuals with enlarged colon often experience constipation, and may develop severe blockage of the intestine or its blood supply. Up to 10% of chronically infected individuals develop nerve damage that can result in numbness and altered reflexes or movement. While chronic disease typically develops over decades, some individuals with Chagas disease (less than 10%) progress to heart damage directly after acute disease.
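Applied to a hypothetical cohort, the proportions above give a sense of scale (the cohort size and arithmetic are mine; the outcome categories can overlap, so the rows do not sum):

```python
cohort = 1_000   # hypothetical people with chronic T. cruzi infection
outcome_ranges = {
    "Chagas heart disease":       (0.14, 0.45),
    "digestive (mega)syndromes":  (0.10, 0.21),
    "peripheral nerve damage":    (0.00, 0.10),   # "up to 10%"
}
for outcome, (low, high) in outcome_ranges.items():
    print(f"{outcome}: {int(low * cohort)}-{int(high * cohort)} of {cohort}")
```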
Signs and symptoms differ for people infected with T. cruzi through less common routes. People infected through ingestion of parasites tend to develop severe disease within three weeks of consumption, with symptoms including fever, vomiting, shortness of breath, cough, and pain in the chest, abdomen, and muscles. Those infected congenitally typically have few to no symptoms, but can have mild non-specific symptoms, or severe symptoms such as jaundice, respiratory distress, and heart problems. People infected through organ transplant or blood transfusion tend to have symptoms similar to those of vector-borne disease, but the symptoms may not manifest for anywhere from a week to five months. Chronically infected individuals who become immunosuppressed due to HIV infection can have particularly severe and distinct disease, most commonly characterized by inflammation in the brain and surrounding tissue or brain abscesses. Symptoms vary widely based on the size and location of brain abscesses, but typically include fever, headaches, seizures, loss of sensation, or other neurological issues that indicate particular sites of nervous system damage. Occasionally, these individuals also experience acute heart inflammation, skin lesions, and disease of the stomach, intestine, or peritoneum.
Cause
Chagas disease is caused by infection with the protozoan parasite T. cruzi, which is typically introduced into humans through the bite of triatomine bugs, also called "kissing bugs". When the insect defecates at the bite site, motile forms called trypomastigotes enter the bloodstream and invade various host cells. Inside a host cell, the parasite transforms into a replicative form called an amastigote, which undergoes several rounds of replication. The replicated amastigotes transform back into trypomastigotes, which burst the host cell and are released into the bloodstream. Trypomastigotes then disseminate throughout the body to various tissues, where they invade cells and replicate. Over many years, cycles of parasite replication and immune response can severely damage these tissues, particularly the heart and digestive tract.
Transmission
T. cruzi can be transmitted by various triatomine bugs in the genera Triatoma, Panstrongylus, and Rhodnius. The primary vectors for human infection are the species of triatomine bugs that inhabit human dwellings, namely Triatoma infestans, Rhodnius prolixus, Triatoma dimidiata and Panstrongylus megistus. These insects are known by a number of local names, including vinchuca in Argentina, Bolivia, Chile and Paraguay, barbeiro (the barber) in Brazil, pito in Colombia, chinche in Central America, and chipo in Venezuela. The bugs tend to feed at night, preferring moist surfaces near the eyes or mouth. A triatomine bug can become infected with T. cruzi when it feeds on an infected host. T. cruzi replicates in the insect's intestinal tract and is shed in the bug's feces. When an infected triatomine feeds, it pierces the skin and takes in a blood meal, defecating at the same time to make room for the new meal. The bite is typically painless, but causes itching. Scratching at the bite introduces the T. cruzi-laden feces into the bite wound, initiating infection.
In addition to classical vector spread, Chagas disease can be transmitted through the consumption of food or drink contaminated with triatomine insects or their feces. Since heating or drying kills the parasites, drinks and especially fruit juices are the most frequent source of infection. This oral route of transmission has been implicated in several outbreaks, where it led to unusually severe symptoms, likely due to infection with a higher parasite load than from the bite of a triatomine bug—a single crushed triatomine harboring T. cruzi in a food or beverage can contain about 600,000 metacyclic trypomastigotes, while triatomine fecal matter contains 3,000–4,000 per μL.
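Comparing the two figures quoted above (my arithmetic): a single contaminated insect carries a parasite load equivalent to a very large volume of fecal material, which is consistent with the more severe disease seen in oral outbreaks.

```python
parasites_per_crushed_bug = 600_000
for per_ul in (3_000, 4_000):   # parasites per microlitre of triatomine feces
    print(f"one crushed bug ~ {parasites_per_crushed_bug / per_ul:.0f} uL of fecal matter")
# roughly 150-200 microlitres, versus the trace amounts deposited at a bite site
```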
T. cruzi can be transmitted independently of the triatomine bug during blood transfusion, following organ transplantation, or across the placenta during pregnancy. Transfusion with the blood of an infected donor infects the recipient 10–25% of the time. To prevent this, blood donations are screened for T. cruzi in many countries with endemic Chagas disease, as well as the United States. Similarly, transplantation of solid organs from an infected donor can transmit T. cruzi to the recipient. This is especially true for heart transplant, which transmits T. cruzi 75–100% of the time, and less so for transplantation of the liver (0–29%) or a kidney (0–19%). An infected mother can pass T. cruzi to her child through the placenta; this occurs in up to 15% of births by infected mothers. As of 2019, 22.5% of new infections occurred through congenital transmission.
Pathophysiology
In the acute phase of the disease, signs and symptoms are caused directly by the replication of T. cruzi and the immune system's response to it. During this phase, T. cruzi can be found in various tissues throughout the body and circulating in the blood. During the initial weeks of infection, parasite replication is brought under control by the production of antibodies and activation of the host's inflammatory response, particularly cells that target intracellular pathogens such as NK cells and macrophages, driven by inflammation-signaling molecules like TNF-α and IFN-γ.
During chronic Chagas disease, long-term organ damage develops over the years due to continued replication of the parasite and damage from the immune system. Early in the course of the disease, T. cruzi is found frequently in the striated muscle fibers of the heart. As disease progresses, the heart becomes generally enlarged, with substantial regions of cardiac muscle fiber replaced by scar tissue and fat. Areas of active inflammation are scattered throughout the heart, with each housing inflammatory immune cells, typically macrophages and T cells. Late in the disease, parasites are rarely detected in the heart, and may be present at only very low levels.
In the heart, colon, and esophagus, chronic disease leads to a massive loss of nerve endings. In the heart, this may contribute to arrhythmias and other cardiac dysfunction. In the colon and esophagus, loss of nervous system control is the major driver of organ dysfunction. Loss of nerves impairs the movement of food through the digestive tract, which can lead to blockage of the esophagus or colon and restriction of their blood supply.
The parasite can insert kinetoplast DNA into host cells, an example of horizontal gene transfer. Vertical inheritance of the inserted kDNA has been demonstrated in rabbits and birds. In chickens, offspring carrying inserted kDNA show symptoms of disease despite carrying no live trypanosomes. In 2010, integrated kDNA was found to be vertically transmitted in five human families.
Diagnosis
The presence of T. cruzi in the blood is diagnostic of Chagas disease. During the acute phase of infection, it can be detected by microscopic examination of fresh anticoagulated blood, or its buffy coat, for motile parasites; or by preparation of thin and thick blood smears stained with Giemsa, for direct visualization of parasites. Blood smear examination detects parasites in 34–85% of cases. The sensitivity increases if techniques such as microhematocrit centrifugation are used to concentrate the blood. On microscopic examination of stained blood smears, trypomastigotes appear as S- or U-shaped organisms with a flagellum connected to the body by an undulating membrane. A nucleus and a smaller structure called a kinetoplast are visible inside the parasite's body; the kinetoplast of T. cruzi is relatively large, which helps to distinguish it from other species of trypanosomes that infect humans.
Alternatively, T. cruzi DNA can be detected by polymerase chain reaction (PCR). In acute and congenital Chagas disease, PCR is more sensitive than microscopy, and it is more reliable than antibody-based tests for the diagnosis of congenital disease because it is not affected by the transfer of antibodies against T. cruzi from a mother to her baby (passive immunity). PCR is also used to monitor T. cruzi levels in organ transplant recipients and immunosuppressed people, which allows infection or reactivation to be detected at an early stage.
In chronic Chagas disease, the concentration of parasites in the blood is too low to be reliably detected by microscopy or PCR, so the diagnosis is usually made using serological tests, which detect immunoglobulin G antibodies against T. cruzi in the blood. Two positive serology results, using different test methods, are required to confirm the diagnosis. If the test results are inconclusive, additional testing methods such as Western blot can be used.
Various rapid diagnostic tests for Chagas disease are available. These tests are easily transported and can be performed by people without special training. They are useful for screening large numbers of people and testing people who cannot access healthcare facilities, but their sensitivity is relatively low, and it is recommended that a second method is used to confirm a positive result.
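The requirement for a second, independent positive result can be read as a serial-testing strategy: confirming a positive screen with a different method trades a little sensitivity for a large gain in specificity. A minimal sketch under an independence assumption; the test characteristics and prevalence below are hypothetical, not values from this article:

```python
def serial_confirmation(se1, sp1, se2, sp2):
    """Call a person positive only if two independent tests are both positive:
    sensitivity drops slightly, specificity rises sharply."""
    sensitivity = se1 * se2
    specificity = 1 - (1 - sp1) * (1 - sp2)
    return sensitivity, specificity

def positive_predictive_value(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

se, sp = serial_confirmation(se1=0.95, sp1=0.97, se2=0.99, sp2=0.99)
print(se, sp)                                              # ~0.94 sensitivity, ~0.9997 specificity
print(positive_predictive_value(se, sp, prevalence=0.01))  # ~0.97 at an assumed 1% prevalence
```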
T. cruzi parasites can be grown from blood samples by blood culture, xenodiagnosis, or by inoculating animals with the person's blood. In the blood culture method, the person's red blood cells are separated from the plasma and added to a specialized growth medium to encourage multiplication of the parasite. It can take up to six months to obtain the result. Xenodiagnosis involves feeding the blood to triatomine insects, and then examining their feces for the parasite 30 to 60 days later. These methods are not routinely used, as they are slow and have low sensitivity.
Prevention
Efforts to prevent Chagas disease have largely focused on vector control to limit exposure to triatomine bugs. Insecticide-spraying programs have been the mainstay of vector control, consisting of spraying homes and the surrounding areas with residual insecticides. This was originally done with organochlorine, organophosphate, and carbamate insecticides, which were supplanted in the 1980s with pyrethroids. These programs have drastically reduced transmission in Brazil and Chile, and eliminated major vectors from certain regions: Triatoma infestans from Brazil, Chile, Uruguay, and parts of Peru and Paraguay, as well as Rhodnius prolixus from Central America. Vector control in some regions has been hindered by the development of insecticide resistance among triatomine bugs. In response, vector control programs have implemented alternative insecticides (e.g. fenitrothion and bendiocarb in Argentina and Bolivia), treatment of domesticated animals (which are also fed on by triatomine bugs) with pesticides, pesticide-impregnated paints, and other experimental approaches. In areas with triatomine bugs, transmission of T. cruzi can be prevented by sleeping under bed nets and by housing improvements that prevent triatomine bugs from colonizing houses.
Blood transfusion was formerly the second-most common mode of transmission for Chagas disease. T. cruzi can survive in refrigerated stored blood, and can survive freezing and thawing, allowing it to persist in whole blood, packed red blood cells, granulocytes, cryoprecipitate, and platelets. The development and implementation of blood bank screening tests have dramatically reduced the risk of infection during a blood transfusion. Nearly all blood donations in Latin American countries undergo Chagas screening. Widespread screening is also common in non-endemic nations with significant populations of immigrants from endemic areas, including the United Kingdom (implemented in 1999), Spain (2005), the United States (2007), France and Sweden (2009), Switzerland (2012), and Belgium (2013). Serological tests, typically ELISAs, are used to detect antibodies against T. cruzi proteins in donor blood.
Other modes of transmission have been targeted by Chagas disease prevention programs. Treating T. cruzi-infected mothers during pregnancy reduces the risk of congenital transmission of the infection. To this end, many countries in Latin America have implemented routine screening of pregnant women and infants for T. cruzi infection, and the World Health Organization recommends screening all children born to infected mothers to prevent congenital infection from developing into chronic disease. As with blood transfusions, many countries with endemic Chagas disease screen organs for transplantation with serological tests.
There is no vaccine against Chagas disease. Several experimental vaccines have been tested in animals infected with T. cruzi and were able to reduce parasite numbers in the blood and heart, but no vaccine candidates had undergone clinical trials in humans as of 2016.
Management
Chagas disease is managed using antiparasitic drugs to eliminate T. cruzi from the body, and symptomatic treatment to address the effects of the infection. As of 2018, benznidazole and nifurtimox were the antiparasitic drugs of choice for treating Chagas disease, though benznidazole is the only drug available in most of Latin America. For either drug, treatment typically consists of two to three oral doses per day for 60 to 90 days. Antiparasitic treatment is most effective early in the course of infection: it eliminates T. cruzi from 50 to 80% of people in the acute phase (WHO: "nearly 100%"),WHO. (13 April 2022). "Chagas disease (also known as American trypanosomiasis)". Fact sheets. but only 20–60% of those in the chronic phase. Treatment of chronic disease is more effective in children than in adults, and the cure rate for congenital disease approaches 100% if treated in the first year of life. Antiparasitic treatment can also slow the progression of the disease and reduce the possibility of congenital transmission. Elimination of T. cruzi does not cure the cardiac and gastrointestinal damage caused by chronic Chagas disease, so these conditions must be treated separately. Antiparasitic treatment is not recommended for people who have already developed dilated cardiomyopathy.
Benznidazole is usually considered the first-line treatment because it has milder adverse effects than nifurtimox, and its efficacy is better understood. Both benznidazole and nifurtimox have common side effects that can result in treatment being discontinued. The most common side effects of benznidazole are skin rash, digestive problems, decreased appetite, weakness, headache, and sleeping problems. These side effects can sometimes be treated with antihistamines or corticosteroids, and are generally reversed when treatment is stopped. However, benznidazole is discontinued in up to 29% of cases. Nifurtimox has more frequent side effects, affecting up to 97.5% of individuals taking the drug. The most common side effects are loss of appetite, weight loss, nausea and vomiting, and various neurological disorders including mood changes, insomnia, paresthesia and peripheral neuropathy. Treatment is discontinued in up to 75% of cases. Both drugs are contraindicated for use in pregnant women and people with liver or kidney failure. As of 2019, resistance to these drugs has been reported.
Complications
In the chronic stage, treatment involves managing the clinical manifestations of the disease. The treatment of Chagas cardiomyopathy is similar to that of other forms of heart disease. Beta blockers and ACE inhibitors may be prescribed, but some people with Chagas disease may not be able to take the standard dose of these drugs because they have low blood pressure or a low heart rate. To manage irregular heartbeats, people may be prescribed anti-arrhythmic drugs such as amiodarone, or have a pacemaker implanted. Blood thinners may be used to prevent thromboembolism and stroke. Chronic heart disease caused by untreated T. cruzi infection is a common reason for heart transplantation surgery. Because transplant recipients take immunosuppressive drugs to prevent organ rejection, they are monitored using PCR to detect reactivation of the disease. People with Chagas disease who undergo heart transplantation have higher survival rates than the average heart transplant recipient.
Mild gastrointestinal disease may be treated symptomatically, such as by using laxatives for constipation or taking a prokinetic drug like metoclopramide before meals to relieve esophageal symptoms. Surgery to sever the muscles of the lower esophageal sphincter (cardiomyotomy) may be performed in more severe cases of esophageal disease, and surgical removal of the affected part of the organ may be required for advanced megacolon and megaesophagus.
Epidemiology
In 2019, an estimated 6.5 million people worldwide had Chagas disease, with approximately 173,000 new infections and 9,490 deaths each year. The disease resulted in a global annual economic burden estimated at US$7.2 billion in 2013, 86% of which is borne by endemic countries. Chagas disease results in the loss of over 800,000 disability-adjusted life years each year.
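Dividing the global figures above through by the number of infected people gives a rough per-case picture (my arithmetic; note that the burden estimate dates from 2013 while the other figures are from 2019, so this mixes years):

```python
infected   = 6_500_000
deaths     = 9_490
burden_usd = 7.2e9      # 2013 estimate
dalys      = 800_000

print(f"economic burden per infected person: ~${burden_usd / infected:,.0f} per year")
print(f"annual deaths per infected person:   ~{deaths / infected * 100:.2f}%")
print(f"DALYs lost per infected person:      ~{dalys / infected:.2f} per year")
```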
The endemic area of Chagas disease stretches from the southern United States to northern Chile and Argentina, with Bolivia (6.1%), Argentina (3.6%), and Paraguay (2.1%) exhibiting the highest prevalence of the disease. Within continental Latin America, Chagas disease is endemic to 21 countries: Argentina, Belize, Bolivia, Brazil, Chile, Colombia, Costa Rica, Ecuador, El Salvador, French Guiana, Guatemala, Guyana, Honduras, Mexico, Nicaragua, Panama, Paraguay, Peru, Suriname, Uruguay, and Venezuela. In endemic areas, due largely to vector control efforts and screening of blood donations, annual infections and deaths have fallen by 67% and more than 73% respectively from their peaks in the 1980s to 2010. Transmission by insect vector and blood transfusion has been completely interrupted in Uruguay (1997), Chile (1999), and Brazil (2006), and in Argentina, vectorial transmission had been interrupted in 13 of the 19 endemic provinces as of 2001. During Venezuela's humanitarian crisis, vectorial transmission has begun occurring in areas where it had previously been interrupted, and Chagas disease seroprevalence rates have increased. Transmission rates have also risen in the Gran Chaco region due to insecticide resistance and in the Amazon basin due to oral transmission.
While the rate of vector-transmitted Chagas disease has declined throughout most of Latin America, the rate of orally transmitted disease has risen, possibly due to increasing urbanization and deforestation bringing people into closer contact with triatomines and altering the distribution of triatomine species. Orally transmitted Chagas disease is of particular concern in Venezuela, where 16 outbreaks have been recorded between 2007 and 2018.
Chagas exists in two different ecological zones. In the Southern Cone region, the main vector lives in and around human homes. In Central America and Mexico, the main vector species lives both inside dwellings and in uninhabited areas. In both zones, Chagas occurs almost exclusively in rural areas, where T. cruzi also circulates in wild and domestic animals. T. cruzi commonly infects more than 100 species of mammals across Latin America, including opossums (Didelphis spp.), armadillos, marmosets, bats, various rodents, and dogs, all of which can be infected by the vectors or orally by eating triatomine bugs and other infected animals; for entomophagous (insect-eating) animals, this is a common mode of infection. Didelphis spp. are unique in that they do not require the triatomine for transmission, completing the life cycle through their own urine and feces. Transmission in animals also occurs vertically through the placenta, as well as through blood transfusion and organ transplants.
Non-endemic countries
Though Chagas is traditionally considered a disease of rural Latin America, international migration has dispersed those with the disease to numerous non-endemic countries, primarily in North America and Europe. As of 2020, approximately 300,000 infected people are living in the United States, and in 2018 it was estimated that 30,000 to 40,000 people in the United States had Chagas cardiomyopathy. The vast majority of cases in the United States occur in immigrants from Latin America, but local transmission is possible. Eleven triatomine species are native to the United States, and some southern states have persistent cycles of disease transmission between insect vectors and animal reservoirs, which include woodrats, possums, raccoons, armadillos and skunks. However, locally acquired infection is very rare: only 28 cases were documented from 1955 to 2015. Taking into account the local reservoir and transmission to humans, some scientists have proposed reclassifying Chagas disease as endemic to the US, specifically "hypoendemic" to reflect the low transmission.
As of 2013, the cost of treatment in the United States was estimated to be US$900 million annually (global cost $7 billion), which included hospitalization and medical devices such as pacemakers.
Chagas disease affected approximately 68,000 to 123,000 people in Europe as of 2019. Spain, which has a high rate of immigration from Latin America, has the highest prevalence of the disease. It is estimated that 50,000 to 70,000 people in Spain are living with Chagas disease, accounting for the majority of European cases. The prevalence varies widely within European countries due to differing immigration patterns. Italy has the second highest prevalence, followed by the Netherlands, the United Kingdom, and Germany.
History
T. cruzi likely circulated in South American mammals long before the arrival of humans on the continent. The parasite has been detected in ancient human remains across South America, from a 9000-year-old Chinchorro mummy in the Atacama Desert, to remains of various ages in Minas Gerais, to an 1100-year-old mummy as far north as the Chihuahuan Desert near the Rio Grande. Many early written accounts describe symptoms consistent with Chagas disease, with early descriptions of the disease sometimes attributed to Miguel Diaz Pimenta (1707), (1735), and Theodoro J. H. Langgaard (1842).
The formal description of Chagas disease was made by Carlos Chagas in 1909 after examining a two-year-old girl with fever, swollen lymph nodes, and an enlarged spleen and liver. Upon examination of her blood, Chagas saw trypanosomes identical to those he had recently identified from the hindgut of triatomine bugs and named Trypanosoma cruzi in honor of his mentor, Brazilian physician Oswaldo Cruz. He sent infected triatomine bugs to Cruz in Rio de Janeiro, who showed that the bite of the infected triatomine could transmit T. cruzi to marmoset monkeys as well. In just two years, 1908 and 1909, Chagas published descriptions of the disease, the organism that caused it, and the insect vector required for infection. (in Portuguese with German full translation as "Ueber eine neue Trypanosomiasis des Menschen.") Almost immediately thereafter, at the suggestion of Miguel Couto, then professor of the , the disease was widely referred to as "Chagas disease". Chagas' discovery brought him national and international renown, but in highlighting the inadequacies of the Brazilian government's response to the disease, Chagas attracted criticism to himself and to the disease that bore his name, stifling research on his discovery and likely frustrating his nomination for the Nobel Prize in 1921.
In the 1930s, Salvador Mazza rekindled Chagas disease research, describing over a thousand cases in Argentina's Chaco Province. In Argentina, the disease is known as mal de Chagas-Mazza in his honor. Serological tests for Chagas disease were introduced in the 1940s, demonstrating that infection with T. cruzi was widespread across Latin America. This, combined with successes eliminating the malaria vector through insecticide use, spurred the creation of public health campaigns focused on treating houses with insecticides to eradicate triatomine bugs. The 1950s saw the discovery that treating blood with crystal violet could eradicate the parasite, leading to its widespread use in transfusion screening programs in Latin America. Large-scale control programs began to take form in the 1960s, first in São Paulo, then in various locations in Argentina, then as national-level programs across Latin America. These programs received a major boost in the 1980s with the introduction of pyrethroid insecticides, which did not leave stains or odors after application and were longer-lasting and more cost-effective. Regional bodies dedicated to controlling Chagas disease arose through support of the Pan American Health Organization, with the Initiative of the Southern Cone for the Elimination of Chagas Diseases launching in 1991, followed by the Initiative of the Andean countries (1997), Initiative of the Central American countries (1997), and the Initiative of the Amazon countries (2004).
Research
Treatments
Fexinidazole, an antiparasitic drug approved for treating African trypanosomiasis, has shown activity against Chagas disease in animal models. As of 2019, it is undergoing phase II clinical trials for chronic Chagas disease in Spain. Other drug candidates include GNF6702, a proteasome inhibitor that is effective against Chagas disease in mice and is undergoing preliminary toxicity studies, and AN4169, which has had promising results in animal models.
Several experimental vaccines have been tested in animals. In addition to subunit vaccines, some approaches have involved vaccination with attenuated parasites or with organisms that express some of the same antigens as T. cruzi but do not cause human disease, such as Trypanosoma rangeli or Phytomonas serpens. DNA vaccination has also been explored. As of 2019, vaccine research has mainly been limited to small animal models.
Diagnostic tests
As of 2018, standard diagnostic tests for Chagas disease were limited in their ability to measure the effectiveness of antiparasitic treatment, as serological tests may remain positive for years after T. cruzi is eliminated from the body, and PCR may give false-negative results when the parasite concentration in the blood is low. Several potential biomarkers of treatment response are under investigation, such as immunoassays against specific T. cruzi antigens, flow cytometry testing to detect antibodies against different life stages of T. cruzi, and markers of physiological changes caused by the parasite, such as alterations in coagulation and lipid metabolism.
Another research area is the use of biomarkers to predict the progression of chronic disease. Serum levels of tumor necrosis factor alpha, brain and atrial natriuretic peptide, and angiotensin-converting enzyme 2 have been studied as indicators of the prognosis of Chagas cardiomyopathy.
T. cruzi shed acute-phase antigen (SAPA), which can be detected in blood using ELISA or Western blot, has been used as an indicator of early acute and congenital infection. An assay for T. cruzi antigens in urine has been developed to diagnose congenital disease.
See also
Drugs for Neglected Diseases Initiative
Chagas: Time to Treat campaign
References
External links
Chagas information at the U.S. Centers for Disease Control
Chagas information from the Drugs for Neglected Diseases initiative
Chagas disease information for travellers from the International Association for Medical Assistance to Travellers
Category:Parasitic infestations, stings, and bites of the skin
Category:Insect-borne diseases
Category:Protozoal diseases
Category:Tropical diseases
Category:Zoonoses
Chemotherapy
https://en.wikipedia.org/wiki/Chemotherapy
Chemotherapy (often abbreviated chemo, sometimes CTX and CTx) is a type of cancer treatment that uses one or more anti-cancer drugs (chemotherapeutic agents or alkylating agents) in a standard regimen. Chemotherapy may be given with a curative intent (which almost always involves combinations of drugs), or it may aim only to prolong life or to reduce symptoms (palliative chemotherapy). Chemotherapy is one of the major categories of the medical discipline specifically devoted to pharmacotherapy for cancer, which is called medical oncology.
The term chemotherapy now means the non-specific use of intracellular poisons to inhibit mitosis (cell division) or to induce DNA damage (which is why inhibition of DNA repair can augment chemotherapy). This meaning excludes the more-selective agents that block extracellular signals (signal transduction). Therapies with specific molecular or genetic targets, which inhibit growth-promoting signals from classic endocrine hormones (primarily estrogens for breast cancer and androgens for prostate cancer), are now called hormonal therapies. Other inhibitions of growth-signals, such as those associated with receptor tyrosine kinases, are targeted therapy.
The use of drugs (whether chemotherapy, hormonal therapy, or targeted therapy) is systemic therapy for cancer: they are introduced into the blood stream (the system) and therefore can treat cancer anywhere in the body. Systemic therapy is often used with other, local therapy (treatments that work only where they are applied), such as radiation, surgery, and hyperthermia.
Traditional chemotherapeutic agents are cytotoxic by means of interfering with cell division (mitosis), but cancer cells vary widely in their susceptibility to these agents. To a large extent, chemotherapy can be thought of as a way to damage or stress cells, which may then lead to cell death if apoptosis is initiated. Many of the side effects of chemotherapy can be traced to damage to normal cells that divide rapidly and are thus sensitive to anti-mitotic drugs: cells in the bone marrow, digestive tract and hair follicles. This results in the most common side-effects of chemotherapy: myelosuppression (decreased production of blood cells, and hence also immunosuppression), mucositis (inflammation of the lining of the digestive tract), and alopecia (hair loss). Because of the effect on immune cells (especially lymphocytes), chemotherapy drugs often find use in a host of diseases that result from harmful overactivity of the immune system against self (so-called autoimmunity). These include rheumatoid arthritis, systemic lupus erythematosus, multiple sclerosis, vasculitis and many others.
Treatment strategies
Common combination chemotherapy regimens, by cancer type, with the drugs used and the regimen acronym:
Breast cancer: cyclophosphamide, methotrexate, 5-fluorouracil, vinorelbine (CMF); doxorubicin, cyclophosphamide (AC)
Hodgkin's lymphoma: bleomycin, etoposide, doxorubicin, cyclophosphamide, vincristine, procarbazine, prednisone (BEACOPP); doxorubicin, bleomycin, vinblastine, dacarbazine (ABVD); mustine, vincristine, procarbazine, prednisolone (MOPP)
Non-Hodgkin's lymphoma: cyclophosphamide, doxorubicin, vincristine, prednisolone (CHOP, R-CVP)
Germ cell tumor: bleomycin, etoposide, cisplatin (BEP)
Stomach cancer: epirubicin, cisplatin, 5-fluorouracil (ECF); epirubicin, cisplatin, capecitabine (ECX)
Bladder cancer: methotrexate, vincristine, doxorubicin, cisplatin (MVAC)
Lung cancer: cyclophosphamide, doxorubicin, vincristine, vinorelbine (CAV)
Colorectal cancer: 5-fluorouracil, folinic acid, oxaliplatin (FOLFOX)
Pancreatic cancer: gemcitabine, 5-fluorouracil
Bone cancer: doxorubicin, cisplatin, methotrexate, ifosfamide, etoposide (MAP/MAPIE)
There are a number of strategies in the administration of chemotherapeutic drugs used today. Chemotherapy may be given with a curative intent or it may aim to prolong life or to palliate symptoms.
Induction chemotherapy is the first line treatment of cancer with a chemotherapeutic drug. This type of chemotherapy is used for curative intent.
Combined modality chemotherapy is the use of drugs with other cancer treatments, such as surgery, radiation therapy, or hyperthermia therapy.
Consolidation chemotherapy is given after remission in order to prolong the overall disease-free time and improve overall survival. The drug that is administered is the same as the drug that achieved remission.
Intensification chemotherapy is identical to consolidation chemotherapy but a different drug than the induction chemotherapy is used.
Combination chemotherapy involves treating a person with a number of different drugs simultaneously. The drugs differ in their mechanism and side-effects. The biggest advantage is minimising the chances of resistance developing to any one agent. Also, the drugs can often be used at lower doses, reducing toxicity.
Neoadjuvant chemotherapy is given prior to a local treatment such as surgery, and is designed to shrink the primary tumor. It is also given for cancers with a high risk of micrometastatic disease.
Adjuvant chemotherapy is given after a local treatment (radiotherapy or surgery). It can be used when there is little evidence of cancer present, but there is risk of recurrence. It is also useful in killing any cancerous cells that have spread to other parts of the body. These micrometastases can be treated with adjuvant chemotherapy and can reduce relapse rates caused by these disseminated cells.
Maintenance chemotherapy is a repeated low-dose treatment to prolong remission.
Salvage chemotherapy or palliative chemotherapy is given without curative intent, but simply to decrease tumor load and increase life expectancy. For these regimens, in general, a better toxicity profile is expected.
All chemotherapy regimens require that the recipient be capable of undergoing the treatment. Performance status is often used as a measure to determine whether a person can receive chemotherapy, or whether dose reduction is required. Because only a fraction of the cells in a tumor die with each treatment (fractional kill), repeated doses must be administered to continue to reduce the size of the tumor. Current chemotherapy regimens apply drug treatment in cycles, with the frequency and duration of treatments limited by toxicity.
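The fractional-kill idea can be made concrete with a small back-of-the-envelope calculation. The Python sketch below is illustrative only: the starting cell count and the 99% kill per cycle are assumed numbers, not figures from the text.

    # Illustrative fractional-kill arithmetic; all numbers are assumptions.
    initial_cells = 1e9      # assumed starting tumour burden
    kill_fraction = 0.99     # assumed fraction of cells killed by each cycle

    cells = initial_cells
    for cycle in range(1, 7):
        cells *= (1.0 - kill_fraction)   # each cycle removes the same fraction, not the same number
        print(f"after cycle {cycle}: roughly {cells:,.0f} cells remain")
    # A residual population can persist after several cycles, which is why
    # regimens repeat treatment rather than relying on a single dose.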
Effectiveness
The effectiveness of chemotherapy depends on the type of cancer and the stage. The overall effectiveness ranges from being curative for some cancers, such as some leukemias, to being ineffective, such as in some brain tumors, to being needless in others, like most non-melanoma skin cancers.
Dosage
Dosage of chemotherapy can be difficult: If the dose is too low, it will be ineffective against the tumor, whereas, at excessive doses, the toxicity (side-effects) will be intolerable to the person receiving it. The standard method of determining chemotherapy dosage is based on calculated body surface area (BSA). The BSA is usually calculated with a mathematical formula or a nomogram, using the recipient's weight and height, rather than by direct measurement of body area. This formula was originally derived in a 1916 study and attempted to translate medicinal doses established with laboratory animals to equivalent doses for humans. The study only included nine human subjects. When chemotherapy was introduced in the 1950s, the BSA formula was adopted as the official standard for chemotherapy dosing for lack of a better option.
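As a concrete illustration, one widely used way to estimate BSA from height and weight is the Mosteller formula, BSA = sqrt(height_cm x weight_kg / 3600). The Python sketch below applies it to convert a per-square-metre dose into an absolute dose; the 100 mg per square metre figure and the patient measurements are placeholder values, not recommendations.

    import math

    def body_surface_area_m2(height_cm: float, weight_kg: float) -> float:
        """Estimate body surface area with the Mosteller formula."""
        return math.sqrt(height_cm * weight_kg / 3600.0)

    # Placeholder example: a hypothetical agent dosed at 100 mg per square metre.
    bsa = body_surface_area_m2(height_cm=170, weight_kg=70)   # about 1.82 m^2
    dose_mg = 100 * bsa
    print(f"BSA ~ {bsa:.2f} m^2, dose ~ {dose_mg:.0f} mg")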
The validity of this method in calculating uniform doses has been questioned because the formula only takes into account the individual's weight and height. Drug absorption and clearance are influenced by multiple factors, including age, sex, metabolism, disease state, organ function, drug-to-drug interactions, genetics, and obesity, which have major impacts on the actual concentration of the drug in the person's bloodstream. As a result, there is high variability in the systemic chemotherapy drug concentration in people dosed by BSA, and this variability has been demonstrated to be more than ten-fold for many drugs. In other words, if two people receive the same dose of a given drug based on BSA, the concentration of that drug in the bloodstream of one person may be 10 times higher or lower compared to that of the other person. This variability is typical of many chemotherapy drugs dosed by BSA and has been demonstrated in a study of 14 common chemotherapy drugs.
The result of this pharmacokinetic variability among people is that many people do not receive the right dose to achieve optimal treatment effectiveness with minimized toxic side effects. Some people are overdosed while others are underdosed. For example, in a randomized clinical trial, investigators found 85% of metastatic colorectal cancer patients treated with 5-fluorouracil (5-FU) did not receive the optimal therapeutic dose when dosed by the BSA standard—68% were underdosed and 17% were overdosed.
There has been controversy over the use of BSA to calculate chemotherapy doses for people who are obese. Because of their higher BSA, clinicians often arbitrarily reduce the dose prescribed by the BSA formula for fear of overdosing. In many cases, this can result in sub-optimal treatment.
Several clinical studies have demonstrated that when chemotherapy dosing is individualized to achieve optimal systemic drug exposure, treatment outcomes are improved and toxic side effects are reduced. In the 5-FU clinical study cited above, people whose dose was adjusted to achieve a pre-determined target exposure realized an 84% improvement in treatment response rate and a six-month improvement in overall survival (OS) compared with those dosed by BSA.
In the same study, investigators compared the incidence of common 5-FU-associated grade 3/4 toxicities between the dose-adjusted people and people dosed per BSA. The incidence of debilitating grades of diarrhea was reduced from 18% in the BSA-dosed group to 4% in the dose-adjusted group and serious hematologic side effects were eliminated. Because of the reduced toxicity, dose-adjusted patients were able to be treated for longer periods of time. BSA-dosed people were treated for a total of 680 months while people in the dose-adjusted group were treated for a total of 791 months. Completing the course of treatment is an important factor in achieving better treatment outcomes.
Similar results were found in a study involving people with colorectal cancer who have been treated with the popular FOLFOX regimen. The incidence of serious diarrhea was reduced from 12% in the BSA-dosed group of patients to 1.7% in the dose-adjusted group, and the incidence of severe mucositis was reduced from 15% to 0.8%.
The FOLFOX study also demonstrated an improvement in treatment outcomes. Positive response increased from 46% in the BSA-dosed group to 70% in the dose-adjusted group. Median progression free survival (PFS) and overall survival (OS) both improved by six months in the dose adjusted group.
One approach that can help clinicians individualize chemotherapy dosing is to measure the drug levels in blood plasma over time and adjust dose according to a formula or algorithm to achieve optimal exposure. With an established target exposure for optimized treatment effectiveness with minimized toxicities, dosing can be personalized to achieve target exposure and optimal results for each person. Such an algorithm was used in the clinical trials cited above and resulted in significantly improved treatment outcomes.
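One very simple version of such an algorithm, assuming linear pharmacokinetics (exposure roughly proportional to dose), scales the next dose by the ratio of target to measured exposure and caps the per-cycle change. The Python sketch below shows only this general idea; the dosing algorithms used in the trials described above are drug-specific and more elaborate, and the cap of 25% is an assumption, not a protocol.

    def adjust_dose(current_dose_mg: float, measured_auc: float,
                    target_auc: float, max_change: float = 0.25) -> float:
        """Proportionally steer the next dose toward a target exposure (AUC).

        Assumes linear pharmacokinetics and limits each adjustment to
        +/- max_change; both are simplifying assumptions.
        """
        ratio = target_auc / measured_auc
        ratio = max(1.0 - max_change, min(1.0 + max_change, ratio))
        return current_dose_mg * ratio

    # Placeholder example: measured exposure fell short of target, so the
    # next dose is raised, but only by the capped 25%.
    print(adjust_dose(current_dose_mg=2000, measured_auc=18, target_auc=25))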
Oncologists are already individualizing dosing of some cancer drugs based on exposure. Carboplatin and busulfan dosing rely upon results from blood tests to calculate the optimal dose for each person. Simple blood tests are also available for dose optimization of methotrexate, 5-FU, paclitaxel, and docetaxel.
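Carboplatin is the clearest example of dosing by exposure rather than by BSA: the Calvert formula sets the absolute dose directly from a target AUC and the patient's renal function, dose (mg) = target AUC x (GFR + 25). The sketch below assumes the GFR is already known in mL/min; the target AUC of 5 is a placeholder value, not a recommendation.

    def carboplatin_dose_mg(target_auc: float, gfr_ml_min: float) -> float:
        """Calvert formula: dose (mg) = target AUC (mg/mL*min) x (GFR + 25)."""
        return target_auc * (gfr_ml_min + 25.0)

    # Placeholder example: target AUC 5 mg/mL*min and GFR 90 mL/min give 575 mg.
    print(carboplatin_dose_mg(target_auc=5, gfr_ml_min=90))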
The serum albumin level immediately prior to chemotherapy administration is an independent prognostic predictor of survival in various cancer types.
Types
Alkylating agents
Alkylating agents are the oldest group of chemotherapeutics in use today. Originally derived from mustard gas used in World War I, there are now many types of alkylating agents in use. They are so named because of their ability to alkylate many molecules, including proteins, RNA and DNA. This ability to bind covalently to DNA via their alkyl group is the primary cause for their anti-cancer effects. DNA is made of two strands and the molecules may either bind twice to one strand of DNA (intrastrand crosslink) or may bind once to both strands (interstrand crosslink). If the cell tries to replicate crosslinked DNA during cell division, or tries to repair it, the DNA strands can break. This leads to a form of programmed cell death called apoptosis. Alkylating agents will work at any point in the cell cycle and thus are known as cell cycle-independent drugs. For this reason, the effect on the cell is dose dependent; the fraction of cells that die is directly proportional to the dose of drug.
The subtypes of alkylating agents are the nitrogen mustards, nitrosoureas, tetrazines, aziridines, cisplatins and derivatives, and non-classical alkylating agents. Nitrogen mustards include mechlorethamine, cyclophosphamide, melphalan, chlorambucil, ifosfamide and busulfan. Nitrosoureas include N-Nitroso-N-methylurea (MNU), carmustine (BCNU), lomustine (CCNU) and semustine (MeCCNU), fotemustine and streptozotocin. Tetrazines include dacarbazine, mitozolomide and temozolomide. Aziridines include thiotepa, mitomycin and diaziquone (AZQ). Cisplatin and derivatives include cisplatin, carboplatin and oxaliplatin. They impair cell function by forming covalent bonds with the amino, carboxyl, sulfhydryl, and phosphate groups in biologically important molecules. Non-classical alkylating agents include procarbazine and hexamethylmelamine.
Antimetabolites
Anti-metabolites are a group of molecules that impede DNA and RNA synthesis. Many of them have a similar structure to the building blocks of DNA and RNA. The building blocks are nucleotides: molecules comprising a nucleobase, a sugar and a phosphate group. The nucleobases are divided into purines (guanine and adenine) and pyrimidines (cytosine, thymine and uracil). Anti-metabolites resemble either nucleobases or nucleosides (a nucleotide without the phosphate group), but have altered chemical groups. These drugs exert their effect by either blocking the enzymes required for DNA synthesis or becoming incorporated into DNA or RNA. By inhibiting the enzymes involved in DNA synthesis, they prevent mitosis because the DNA cannot duplicate itself. Also, after misincorporation of the molecules into DNA, DNA damage can occur and programmed cell death (apoptosis) is induced. Unlike alkylating agents, anti-metabolites are cell cycle dependent. This means that they only work during a specific part of the cell cycle, in this case S-phase (the DNA synthesis phase). For this reason, at a certain dose, the effect plateaus and proportionally no more cell death occurs with increased doses. Subtypes of the anti-metabolites are the anti-folates, fluoropyrimidines, deoxynucleoside analogues and thiopurines.
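The contrast drawn here with the alkylating agents, dose-proportional killing for cell cycle-independent drugs versus a plateau for cell cycle-dependent ones, can be pictured with two toy dose-response functions. Both models in the Python sketch below are simplified illustrations chosen for this purpose (a linear model and a saturating, Emax-style model); they are not pharmacological models of any particular drug, and every parameter value is assumed.

    def kill_cycle_independent(dose: float, slope: float = 0.02) -> float:
        """Toy model: the fraction of cells killed rises roughly in proportion to dose."""
        return min(1.0, slope * dose)

    def kill_cycle_dependent(dose: float, max_kill: float = 0.6, d50: float = 5.0) -> float:
        """Toy saturating model: only cells in S-phase are vulnerable, so the
        effect levels off at max_kill no matter how high the dose goes."""
        return max_kill * dose / (d50 + dose)

    # With increasing dose, the first model keeps rising while the second plateaus.
    for dose in (1, 5, 10, 20, 40):
        print(dose, round(kill_cycle_independent(dose), 2), round(kill_cycle_dependent(dose), 2))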
The anti-folates include methotrexate and pemetrexed. Methotrexate inhibits dihydrofolate reductase (DHFR), an enzyme that regenerates tetrahydrofolate from dihydrofolate. When the enzyme is inhibited by methotrexate, the cellular levels of folate coenzymes diminish. These are required for thymidylate and purine production, which are both essential for DNA synthesis and cell division. Pemetrexed is another anti-metabolite that affects purine and pyrimidine production, and therefore also inhibits DNA synthesis. It primarily inhibits the enzyme thymidylate synthase, but also has effects on DHFR, aminoimidazole carboxamide ribonucleotide formyltransferase and glycinamide ribonucleotide formyltransferase. The fluoropyrimidines include fluorouracil and capecitabine. Fluorouracil is a nucleobase analogue that is metabolised in cells to form at least two active products; 5-fluorouridine monophosphate (FUMP) and 5-fluoro-2'-deoxyuridine 5'-phosphate (fdUMP). FUMP becomes incorporated into RNA and fdUMP inhibits the enzyme thymidylate synthase; both of which lead to cell death. Capecitabine is a prodrug of 5-fluorouracil that is broken down in cells to produce the active drug. The deoxynucleoside analogues include cytarabine, gemcitabine, decitabine, azacitidine, fludarabine, nelarabine, cladribine, clofarabine, and pentostatin. The thiopurines include thioguanine and mercaptopurine.
Anti-microtubule agents
Anti-microtubule agents are plant-derived chemicals that block cell division by preventing microtubule function. Microtubules are an important cellular structure composed of two proteins, α-tubulin and β-tubulin. They are hollow, rod-shaped structures that are required for cell division, among other cellular functions. Microtubules are dynamic structures, which means that they are permanently in a state of assembly and disassembly. Vinca alkaloids and taxanes are the two main groups of anti-microtubule agents, and although both of these groups of drugs cause microtubule dysfunction, their mechanisms of action are completely opposite: Vinca alkaloids prevent the assembly of microtubules, whereas taxanes prevent their disassembly. By doing so, they can induce mitotic catastrophe in the cancer cells. Following this, cell cycle arrest occurs, which induces programmed cell death (apoptosis). These drugs can also affect blood vessel growth, an essential process that tumours utilise in order to grow and metastasise.
Vinca alkaloids are derived from the Madagascar periwinkle, Catharanthus roseus, formerly known as Vinca rosea. They bind to specific sites on tubulin, inhibiting the assembly of tubulin into microtubules. The original vinca alkaloids are natural products that include vincristine and vinblastine. Following the success of these drugs, semi-synthetic vinca alkaloids were produced: vinorelbine (used in the treatment of non-small-cell lung cancer), vindesine, and vinflunine. These drugs are cell cycle-specific. They bind to the tubulin molecules in S-phase and prevent proper microtubule formation required for M-phase.
Taxanes are natural and semi-synthetic drugs. The first drug of their class, paclitaxel, was originally extracted from Taxus brevifolia, the Pacific yew. Now this drug and another in this class, docetaxel, are produced semi-synthetically from a chemical found in the bark of another yew tree, Taxus baccata.
Podophyllotoxin is an antineoplastic lignan obtained primarily from the American mayapple (Podophyllum peltatum) and Himalayan mayapple (Sinopodophyllum hexandrum). It has anti-microtubule activity, and its mechanism is similar to that of vinca alkaloids in that they bind to tubulin, inhibiting microtubule formation. Podophyllotoxin is used to produce two other drugs with different mechanisms of action: etoposide and teniposide.
Topoisomerase inhibitors
Topoisomerase inhibitors are drugs that affect the activity of two enzymes: topoisomerase I and topoisomerase II. When the DNA double-strand helix is unwound, during DNA replication or transcription, for example, the adjacent unopened DNA winds tighter (supercoils), like opening the middle of a twisted rope. The stress caused by this effect is in part relieved by the topoisomerase enzymes. They produce single- or double-strand breaks into DNA, reducing the tension in the DNA strand. This allows the normal unwinding of DNA to occur during replication or transcription. Inhibition of topoisomerase I or II interferes with both of these processes.
Two topoisomerase I inhibitors, irinotecan and topotecan, are semi-synthetically derived from camptothecin, which is obtained from the Chinese ornamental tree Camptotheca acuminata. Drugs that target topoisomerase II can be divided into two groups. The topoisomerase II poisons cause increased levels of enzymes bound to DNA. This prevents DNA replication and transcription, causes DNA strand breaks, and leads to programmed cell death (apoptosis). These agents include etoposide, doxorubicin, mitoxantrone and teniposide. The second group, catalytic inhibitors, are drugs that block the activity of topoisomerase II, and therefore prevent DNA synthesis and translation because the DNA cannot unwind properly. This group includes novobiocin, merbarone, and aclarubicin, which also have other significant mechanisms of action.
Cytotoxic antibiotics
The cytotoxic antibiotics are a varied group of drugs that have various mechanisms of action. The common theme that they share in their chemotherapy indication is that they interrupt cell division. The most important subgroup is the anthracyclines and the bleomycins; other prominent examples include mitomycin C and actinomycin.
Among the anthracyclines, doxorubicin and daunorubicin were the first, and were obtained from the bacterium Streptomyces peucetius. Derivatives of these compounds include epirubicin and idarubicin. Other clinically used drugs in the anthracycline group are pirarubicin, aclarubicin, and mitoxantrone. The mechanisms of anthracyclines include DNA intercalation (molecules insert between the two strands of DNA), generation of highly reactive free radicals that damage intercellular molecules and topoisomerase inhibition.
Actinomycin is a complex molecule that intercalates DNA and prevents RNA synthesis.
Bleomycin, a glycopeptide isolated from Streptomyces verticillus, also intercalates DNA, but produces free radicals that damage DNA. This occurs when bleomycin binds to a metal ion, becomes chemically reduced and reacts with oxygen.
Mitomycin is a cytotoxic antibiotic with the ability to alkylate DNA.
Delivery
Most chemotherapy is delivered intravenously, although a number of agents can be administered orally (e.g., melphalan, busulfan, capecitabine). According to a 2016 systematic review, oral therapies present additional challenges for patients and care teams to maintain and support adherence to treatment plans.
There are many intravenous methods of drug delivery, known as vascular access devices. These include the winged infusion device, peripheral venous catheter, midline catheter, peripherally inserted central catheter (PICC), central venous catheter and implantable port. The devices have different applications regarding duration of chemotherapy treatment, method of delivery and types of chemotherapeutic agent.
Depending on the person, the cancer, the stage of cancer, the type of chemotherapy, and the dosage, intravenous chemotherapy may be given on either an inpatient or an outpatient basis. For continuous, frequent or prolonged intravenous chemotherapy administration, various systems may be surgically inserted into the vasculature to maintain access. Commonly used systems are the Hickman line, the Port-a-Cath, and the PICC line. These have a lower infection risk, are much less prone to phlebitis or extravasation, and eliminate the need for repeated insertion of peripheral cannulae.
Isolated limb perfusion (often used in melanoma), or isolated infusion of chemotherapy into the liver or the lung have been used to treat some tumors. The main purpose of these approaches is to deliver a very high dose of chemotherapy to tumor sites without causing overwhelming systemic damage. These approaches can help control solitary or limited metastases, but they are by definition not systemic, and, therefore, do not treat distributed metastases or micrometastases.
Topical chemotherapies, such as 5-fluorouracil, are used to treat some cases of non-melanoma skin cancer.
If the cancer has central nervous system involvement, or with meningeal disease, intrathecal chemotherapy may be administered.
Adverse effects
Chemotherapeutic techniques have a range of side effects that depend on the type of medications used. The most common medications affect mainly the fast-dividing cells of the body, such as blood cells and the cells lining the mouth, stomach, and intestines. Chemotherapy-related iatrogenic toxicities can occur acutely after administration, within hours or days, or chronically, from weeks to years.
Immunosuppression and myelosuppression
Virtually all chemotherapeutic regimens can cause depression of the immune system, often by paralysing the bone marrow and leading to a decrease of white blood cells, red blood cells, and platelets.
Anemia and thrombocytopenia may require blood transfusion. Neutropenia (a decrease of the neutrophil granulocyte count below 0.5 billion/litre) can be improved with synthetic G-CSF (granulocyte-colony-stimulating factor, e.g., filgrastim, lenograstim, efbemalenograstim alfa).
In very severe myelosuppression, which occurs in some regimens, almost all the bone marrow stem cells (cells that produce white and red blood cells) are destroyed, meaning allogenic or autologous bone marrow cell transplants are necessary. (In autologous BMTs, cells are removed from the person before the treatment, multiplied and then re-injected afterward; in allogenic BMTs, the source is a donor.) However, some people still develop diseases because of this interference with bone marrow.
Although people receiving chemotherapy are encouraged to wash their hands, avoid sick people, and take other infection-reducing steps, about 85% of infections are due to naturally occurring microorganisms in the person's own gastrointestinal tract (including oral cavity) and skin. Dental evaluation and treatment before cytotoxic chemotherapy is recommended for reducing the risk of oral and systemic infections during the neutropenic phase. This complication may manifest as systemic infections, such as sepsis, or as localized outbreaks, such as Herpes simplex, shingles, or other members of the Herpesviridae. The risk of illness and death can be reduced by taking common antibiotics such as quinolones or trimethoprim/sulfamethoxazole before any fever or sign of infection appears. Quinolones show effective prophylaxis mainly with hematological cancer. However, in general, for every five people who are immunosuppressed following chemotherapy who take an antibiotic, one fever can be prevented; for every 34 who take an antibiotic, one death can be prevented. Sometimes, chemotherapy treatments are postponed because the immune system is suppressed to a critically low level.
In Japan, the government has approved the use of some medicinal mushrooms like Trametes versicolor, to counteract depression of the immune system in people undergoing chemotherapy.
Trilaciclib is an inhibitor of cyclin-dependent kinase 4/6 approved for the prevention of myelosuppression caused by chemotherapy. The drug is given before chemotherapy to protect bone marrow function.
Neutropenic enterocolitis
Due to immune system suppression, neutropenic enterocolitis (typhlitis) is a "life-threatening gastrointestinal complication of chemotherapy." Typhlitis is an intestinal infection which may manifest itself through symptoms including nausea, vomiting, diarrhea, a distended abdomen, fever, chills, or abdominal pain and tenderness.
Typhlitis is a medical emergency. It has a very poor prognosis and is often fatal unless promptly recognized and aggressively treated. Successful treatment hinges on early diagnosis provided by a high index of suspicion and the use of CT scanning, nonoperative treatment for uncomplicated cases, and sometimes elective right hemicolectomy to prevent recurrence.
Gastrointestinal distress
Nausea, vomiting, anorexia, diarrhea, abdominal cramps, and constipation are common side-effects of chemotherapeutic medications that kill fast-dividing cells. Malnutrition and dehydration can result when the recipient does not eat or drink enough, or when the person vomits frequently, because of gastrointestinal damage. This can result in rapid weight loss, or occasionally in weight gain, if the person eats too much in an effort to allay nausea or heartburn. Weight gain can also be caused by some steroid medications. These side-effects can frequently be reduced or eliminated with antiemetic drugs. Low-certainty evidence also suggests that probiotics may help prevent and treat diarrhoea related to chemotherapy, whether given alone or with radiotherapy. However, a high index of suspicion is appropriate, since diarrhoea and bloating are also symptoms of typhlitis, a very serious and potentially life-threatening medical emergency that requires immediate treatment.
Anemia
Anemia can be a combined outcome caused by myelosuppressive chemotherapy, and possible cancer-related causes such as bleeding, blood cell destruction (hemolysis), hereditary disease, kidney dysfunction, nutritional deficiencies or anemia of chronic disease. Treatments to mitigate anemia include hormones to boost blood production (erythropoietin), iron supplements, and blood transfusions. Myelosuppressive therapy can cause a tendency to bleed easily, leading to anemia. Medications that kill rapidly dividing cells or blood cells can reduce the number of platelets in the blood, which can result in bruises and bleeding. Extremely low platelet counts may be temporarily boosted through platelet transfusions and new drugs to increase platelet counts during chemotherapy are being developed. Sometimes, chemotherapy treatments are postponed to allow platelet counts to recover.
Fatigue may be a consequence of the cancer or its treatment, and can last for months to years after treatment. One physiological cause of fatigue is anemia, which can be caused by chemotherapy, surgery, radiotherapy, primary and metastatic disease or nutritional depletion. Aerobic exercise has been found to be beneficial in reducing fatigue in people with solid tumours.
Nausea and vomiting
Nausea and vomiting are two of the most feared cancer treatment-related side-effects for people with cancer and their families. In 1983, Coates et al. found that people receiving chemotherapy ranked nausea and vomiting as the first and second most severe side-effects, respectively. Up to 20% of people receiving highly emetogenic agents in this era postponed, or even refused potentially curative treatments. Chemotherapy-induced nausea and vomiting (CINV) are common with many treatments and some forms of cancer. Since the 1990s, several novel classes of antiemetics have been developed and commercialized, becoming a nearly universal standard in chemotherapy regimens, and helping to successfully manage these symptoms in many people. Effective mediation of these unpleasant and sometimes debilitating symptoms results in increased quality of life for the recipient and more efficient treatment cycles, as patients are less likely to avoid or refuse treatment.
Hair loss
(Image: hair matting after a few sessions of chemotherapy.)
Hair loss (alopecia) can be caused by chemotherapy that kills rapidly dividing cells; other medications may cause hair to thin. These are most often temporary effects: hair usually starts to regrow a few weeks after the last treatment, but sometimes with a change in color, texture, thickness or style. Sometimes hair has a tendency to curl after regrowth, resulting in "chemo curls." Severe hair loss occurs most often with drugs such as doxorubicin, daunorubicin, paclitaxel, docetaxel, cyclophosphamide, ifosfamide and etoposide. Permanent thinning or hair loss can result from some standard chemotherapy regimens.
Chemotherapy induced hair loss occurs by a non-androgenic mechanism, and can manifest as alopecia totalis, telogen effluvium, or less often alopecia areata. It is usually associated with systemic treatment due to the high mitotic rate of hair follicles, and more reversible than androgenic hair loss, although permanent cases can occur. Chemotherapy induces hair loss in women more often than men.
Scalp cooling offers a means of preventing both permanent and temporary hair loss; however, concerns about this method have been raised.
Secondary neoplasm
Development of secondary neoplasia after successful chemotherapy or radiotherapy treatment can occur. The most common secondary neoplasm is secondary acute myeloid leukemia, which develops primarily after treatment with alkylating agents or topoisomerase inhibitors. Survivors of childhood cancer are more than 13 times as likely to get a secondary neoplasm during the 30 years after treatment than the general population. Not all of this increase can be attributed to chemotherapy.
Infertility
Some types of chemotherapy are gonadotoxic and may cause infertility. Chemotherapies with high risk include procarbazine and other alkylating drugs such as cyclophosphamide, ifosfamide, busulfan, melphalan, chlorambucil, and chlormethine. Drugs with medium risk include doxorubicin and platinum analogs such as cisplatin and carboplatin. On the other hand, therapies with low risk of gonadotoxicity include plant derivatives such as vincristine and vinblastine, antibiotics such as bleomycin and dactinomycin, and antimetabolites such as methotrexate, mercaptopurine, and 5-fluorouracil.
Female infertility by chemotherapy appears to be secondary to premature ovarian failure by loss of primordial follicles. This loss is not necessarily a direct effect of the chemotherapeutic agents, but could be due to an increased rate of growth initiation to replace damaged developing follicles.
People may choose between several methods of fertility preservation prior to chemotherapy, including cryopreservation of semen, ovarian tissue, oocytes, or embryos. As more than half of cancer patients are elderly, this adverse effect is only relevant for a minority of patients. A study in France between 1999 and 2011 came to the result that embryo freezing before administration of gonadotoxic agents to females caused a delay of treatment in 34% of cases, and a live birth in 27% of surviving cases who wanted to become pregnant, with the follow-up time varying between 1 and 13 years.
Potential protective or attenuating agents include GnRH analogs, where several studies have shown a protective effect in vivo in humans, but some studies show no such effect. Sphingosine-1-phosphate (S1P) has shown similar effect, but its mechanism of inhibiting the sphingomyelin apoptotic pathway may also interfere with the apoptosis action of chemotherapy drugs.
In chemotherapy as a conditioning regimen in hematopoietic stem cell transplantation, a study of people conditioned with cyclophosphamide alone for severe aplastic anemia found that ovarian recovery occurred in all women younger than 26 years at time of transplantation, but only in five of 16 women older than 26 years.
Teratogenicity
Chemotherapy is teratogenic during pregnancy, especially during the first trimester, to the extent that abortion usually is recommended if pregnancy in this period is found during chemotherapy. Second- and third-trimester exposure does not usually increase the teratogenic risk and adverse effects on cognitive development, but it may increase the risk of various complications of pregnancy and fetal myelosuppression.
Female patients of reproductive potential should use effective contraception during chemotherapy and for a few months after the last dose (e.g. six months for doxorubicin).
In males previously having undergone chemotherapy or radiotherapy, there appears to be no increase in genetic defects or congenital malformations in their children conceived after therapy. The use of assisted reproductive technologies and micromanipulation techniques might increase this risk. In females previously having undergone chemotherapy, miscarriage and congenital malformations are not increased in subsequent conceptions. However, when in vitro fertilization and embryo cryopreservation is practised between or shortly after treatment, possible genetic risks to the growing oocytes exist, and hence it has been recommended that the babies be screened.
Peripheral neuropathy
Between 30 and 40 percent of people undergoing chemotherapy experience chemotherapy-induced peripheral neuropathy (CIPN), a progressive, enduring, and often irreversible condition, causing pain, tingling, numbness and sensitivity to cold, beginning in the hands and feet and sometimes progressing to the arms and legs. Chemotherapy drugs associated with CIPN include thalidomide, epothilones, vinca alkaloids, taxanes, proteasome inhibitors, and the platinum-based drugs. Whether CIPN arises, and to what degree, is determined by the choice of drug, duration of use, the total amount consumed and whether the person already has peripheral neuropathy. Though the symptoms are mainly sensory, in some cases motor nerves and the autonomic nervous system are affected. CIPN often follows the first chemotherapy dose and increases in severity as treatment continues, but this progression usually levels off at completion of treatment. The platinum-based drugs are the exception; with these drugs, sensation may continue to deteriorate for several months after the end of treatment. Some CIPN appears to be irreversible. Pain can often be managed with drug or other treatment but the numbness is usually resistant to treatment.
Cognitive impairment
Some people receiving chemotherapy report fatigue or non-specific neurocognitive problems, such as an inability to concentrate; this is sometimes called post-chemotherapy cognitive impairment, referred to as "chemo brain" in popular and social media.
Tumor lysis syndrome
In particularly large tumors and cancers with high white cell counts, such as lymphomas, teratomas, and some leukemias, some people develop tumor lysis syndrome. The rapid breakdown of cancer cells causes the release of chemicals from the inside of the cells. Following this, high levels of uric acid, potassium and phosphate are found in the blood. High levels of phosphate induce secondary hypoparathyroidism, resulting in low levels of calcium in the blood. This causes kidney damage and the high levels of potassium can cause cardiac arrhythmia. Although prophylaxis is available and is often initiated in people with large tumors, this is a dangerous side-effect that can lead to death if left untreated.
Organ damage
Cardiotoxicity (heart damage) is especially prominent with the use of anthracycline drugs (doxorubicin, epirubicin, idarubicin, and liposomal doxorubicin). The cause of this is most likely due to the production of free radicals in the cell and subsequent DNA damage. Other chemotherapeutic agents that cause cardiotoxicity, but at a lower incidence, are cyclophosphamide, docetaxel and clofarabine.
Hepatotoxicity (liver damage) can be caused by many cytotoxic drugs. The susceptibility of an individual to liver damage can be altered by other factors such as the cancer itself, viral hepatitis, immunosuppression and nutritional deficiency. The liver damage can consist of damage to liver cells, hepatic sinusoidal syndrome (obstruction of the veins in the liver), cholestasis (where bile does not flow from the liver to the intestine) and liver fibrosis.
Nephrotoxicity (kidney damage) can be caused by tumor lysis syndrome and also by the direct effects of drugs as they are cleared by the kidneys. Different drugs will affect different parts of the kidney and the toxicity may be asymptomatic (only seen on blood or urine tests) or may cause acute kidney injury.
Ototoxicity (damage to the inner ear) is a common side effect of platinum based drugs that can produce symptoms such as dizziness and vertigo. Children treated with platinum analogues have been found to be at risk for developing hearing loss.
Other side-effects
Less common side-effects include red skin (erythema), dry skin, damaged fingernails, a dry mouth (xerostomia), water retention, and sexual impotence. Some medications can trigger allergic or pseudoallergic reactions.
Specific chemotherapeutic agents are associated with organ-specific toxicities, including cardiovascular disease (e.g., doxorubicin), interstitial lung disease (e.g., bleomycin) and occasionally secondary neoplasm (e.g., MOPP therapy for Hodgkin's disease).
Hand-foot syndrome is another side effect to cytotoxic chemotherapy.
Nutritional problems are also frequently seen in cancer patients at diagnosis and through chemotherapy treatment. Research suggests that in children and young people undergoing cancer treatment, parenteral nutrition may help with this leading to weight gain and increased calorie and protein intake, when compared to enteral nutrition.
Limitations
Chemotherapy does not always work, and even when it is useful, it may not completely destroy the cancer. People frequently fail to understand its limitations. In one study of people who had been newly diagnosed with incurable, stage 4 cancer, more than two-thirds of people with lung cancer and more than four-fifths of people with colorectal cancer still believed that chemotherapy was likely to cure their cancer.
The blood–brain barrier poses an obstacle to delivery of chemotherapy to the brain. This is because the brain has an extensive system in place to protect it from harmful chemicals. Drug transporters can pump out drugs from the brain and brain's blood vessel cells into the cerebrospinal fluid and blood circulation. These transporters pump out most chemotherapy drugs, which reduces their efficacy for treatment of brain tumors. Only small lipophilic alkylating agents such as lomustine or temozolomide are able to cross this blood–brain barrier.
Blood vessels in tumors are very different from those seen in normal tissues. As a tumor grows, tumor cells furthest away from the blood vessels become low in oxygen (hypoxic). To counteract this they then signal for new blood vessels to grow. The newly formed tumor vasculature is poorly formed and does not deliver an adequate blood supply to all areas of the tumor. This leads to issues with drug delivery because many drugs will be delivered to the tumor by the circulatory system.
Resistance
Resistance is a major cause of treatment failure in chemotherapeutic drugs. There are a few possible causes of resistance in cancer, one of which is the presence of small pumps on the surface of cancer cells that actively move chemotherapy from inside the cell to the outside. Cancer cells produce high amounts of these pumps, known as p-glycoprotein, in order to protect themselves from chemotherapeutics. Research on p-glycoprotein and other such chemotherapy efflux pumps is currently ongoing. Medications to inhibit the function of p-glycoprotein are undergoing investigation, but due to toxicities and interactions with anti-cancer drugs their development has been difficult. Another mechanism of resistance is gene amplification, a process in which multiple copies of a gene are produced by cancer cells. This overcomes the effect of drugs that reduce the expression of genes involved in replication. With more copies of the gene, the drug cannot prevent all expression of the gene and therefore the cell can restore its proliferative ability. Cancer cells can also cause defects in the cellular pathways of apoptosis (programmed cell death). As most chemotherapy drugs kill cancer cells in this manner, defective apoptosis allows survival of these cells, making them resistant. Many chemotherapy drugs also cause DNA damage, which can be repaired by enzymes in the cell that carry out DNA repair. Upregulation of these genes can overcome the DNA damage and prevent the induction of apoptosis. Mutations in genes that produce drug target proteins, such as tubulin, can occur which prevent the drugs from binding to the protein, leading to resistance to these types of drugs. Drugs used in chemotherapy can induce cell stress, which can kill a cancer cell; however, under certain conditions, cell stress can induce changes in gene expression that enable resistance to several types of drugs. In lung cancer, the transcription factor NFκB is thought to play a role in resistance to chemotherapy, via inflammatory pathways.
Cytotoxics and targeted therapies
Targeted therapies are a relatively new class of cancer drugs that can overcome many of the issues seen with the use of cytotoxics. They are divided into two groups: small molecules and antibodies. The massive toxicity seen with the use of cytotoxics is due to the lack of cell specificity of the drugs. They will kill any rapidly dividing cell, tumor or normal. Targeted therapies are designed to affect cellular proteins or processes that are utilised by the cancer cells. This allows a high dose to cancer tissues with a relatively low dose to other tissues. Although the side effects are often less severe than those seen with cytotoxic chemotherapeutics, life-threatening effects can occur. Initially, the targeted therapeutics were supposed to be solely selective for one protein. Now it is clear that there is often a range of protein targets that the drug can bind. An example target for targeted therapy is the BCR-ABL1 protein produced from the Philadelphia chromosome, a genetic lesion found commonly in chronic myelogenous leukemia and in some patients with acute lymphoblastic leukemia. This fusion protein has enzyme activity that can be inhibited by imatinib, a small molecule drug.
Mechanism of action
Cancer is the uncontrolled growth of cells coupled with malignant behaviour: invasion and metastasis (among other features). It is caused by the interaction between genetic susceptibility and environmental factors. These factors lead to accumulations of genetic mutations in oncogenes (genes that control the growth rate of cells) and tumor suppressor genes (genes that help to prevent cancer), which gives cancer cells their malignant characteristics, such as uncontrolled growth.
In the broad sense, most chemotherapeutic drugs work by impairing mitosis (cell division), effectively targeting fast-dividing cells. As these drugs cause damage to cells, they are termed cytotoxic. They prevent mitosis by various mechanisms including damaging DNA and inhibition of the cellular machinery involved in cell division. One theory as to why these drugs kill cancer cells is that they induce a programmed form of cell death known as apoptosis.
As chemotherapy affects cell division, tumors with high growth rates (such as acute myelogenous leukemia and the aggressive lymphomas, including Hodgkin's disease) are more sensitive to chemotherapy, as a larger proportion of the targeted cells are undergoing cell division at any time. Malignancies with slower growth rates, such as indolent lymphomas, tend to respond to chemotherapy much more modestly. Heterogeneic tumours may also display varying sensitivities to chemotherapy agents, depending on the subclonal populations within the tumor.
Cells from the immune system also make crucial contributions to the antitumor effects of chemotherapy. For example, the chemotherapeutic drugs oxaliplatin and cyclophosphamide can cause tumor cells to die in a way that is detectable by the immune system (called immunogenic cell death), which mobilizes immune cells with antitumor functions. Chemotherapeutic drugs that cause immunogenic tumor cell death can make unresponsive tumors sensitive to immune checkpoint therapy.
Other uses
Some chemotherapy drugs are used in diseases other than cancer, such as in autoimmune disorders, and noncancerous plasma cell dyscrasia. In some cases they are often used at lower doses, which means that the side effects are minimized, while in other cases doses similar to ones used to treat cancer are used. Methotrexate is used in the treatment of rheumatoid arthritis (RA), psoriasis, ankylosing spondylitis and multiple sclerosis. The anti-inflammatory response seen in RA is thought to be due to increases in adenosine, which causes immunosuppression; effects on immuno-regulatory cyclooxygenase-2 enzyme pathways; reduction in pro-inflammatory cytokines; and anti-proliferative properties. Although methotrexate is used to treat both multiple sclerosis and ankylosing spondylitis, its efficacy in these diseases is still uncertain. Cyclophosphamide is sometimes used to treat lupus nephritis, a common symptom of systemic lupus erythematosus. Dexamethasone along with either bortezomib or melphalan is commonly used as a treatment for AL amyloidosis. Recently, bortezomib in combination with cyclophosphamide and dexamethasone has also shown promise as a treatment for AL amyloidosis. Other drugs used to treat myeloma such as lenalidomide have shown promise in treating AL amyloidosis.
Chemotherapy drugs are also used in conditioning regimens prior to bone marrow transplant (hematopoietic stem cell transplant). Conditioning regimens are used to suppress the recipient's immune system in order to allow a transplant to engraft. Cyclophosphamide is a common cytotoxic drug used in this manner and is often used in conjunction with total body irradiation. Chemotherapeutic drugs may be used at high doses to permanently remove the recipient's bone marrow cells (myeloablative conditioning) or at lower doses that will prevent permanent bone marrow loss (non-myeloablative and reduced intensity conditioning). When used in non-cancer setting, the treatment is still called "chemotherapy", and is often done in the same treatment centers used for people with cancer.
Occupational exposure and safe handling
In the 1970s, antineoplastic (chemotherapy) drugs were identified as hazardous, and the American Society of Health-System Pharmacists (ASHP) has since then introduced the concept of hazardous drugs after publishing a recommendation in 1983 regarding handling hazardous drugs. The adaptation of federal regulations came when the U.S. Occupational Safety and Health Administration (OSHA) first released its guidelines in 1986 and then updated them in 1996, 1999, and, most recently, 2006.
The National Institute for Occupational Safety and Health (NIOSH) has been conducting an assessment in the workplace since then regarding these drugs. Occupational exposure to antineoplastic drugs has been linked to multiple health effects, including infertility and possible carcinogenic effects. A few cases have been reported by the NIOSH alert report, such as one in which a female pharmacist was diagnosed with papillary transitional cell carcinoma. Twelve years before the pharmacist was diagnosed with the condition, she had worked for 20 months in a hospital where she was responsible for preparing multiple antineoplastic drugs. The pharmacist did not have any other risk factor for cancer, and therefore, her cancer was attributed to the exposure to the antineoplastic drugs, although a cause-and-effect relationship has not been established in the literature. Another case occurred when a malfunction in biosafety cabinetry was believed to have exposed nursing personnel to antineoplastic drugs. Investigations revealed evidence of genotoxic biomarkers two and nine months after that exposure.
Routes of exposure
Antineoplastic drugs are usually given through intravenous, intramuscular, intrathecal, or subcutaneous administration. In most cases, before the medication is administered to the patient, it needs to be prepared and handled by several workers. Any worker who is involved in handling, preparing, or administering the drugs, or with cleaning objects that have come into contact with antineoplastic drugs, is potentially exposed to hazardous drugs. Health care workers are exposed to drugs in different circumstances, such as when pharmacists and pharmacy technicians prepare and handle antineoplastic drugs and when nurses and physicians administer the drugs to patients. Additionally, those who are responsible for disposing antineoplastic drugs in health care facilities are also at risk of exposure.
Dermal exposure is thought to be the main route of exposure due to the fact that significant amounts of the antineoplastic agents have been found in the gloves worn by healthcare workers who prepare, handle, and administer the agents. Another noteworthy route of exposure is inhalation of the drugs' vapors. Multiple studies have investigated inhalation as a route of exposure, and although air sampling has not shown any dangerous levels, it is still a potential route of exposure. Ingestion by hand to mouth is a route of exposure that is less likely compared to others because of the enforced hygienic standard in the health institutions. However, it is still a potential route, especially in the workplace, outside of a health institute. One can also be exposed to these hazardous drugs through injection by needle sticks. Research conducted in this area has established that occupational exposure occurs by examining evidence in multiple urine samples from health care workers.
Hazards
Hazardous drugs expose health care workers to serious health risks. Many studies show that antineoplastic drugs could have many side effects on the reproductive system, such as fetal loss, congenital malformation, and infertility. Health care workers who are exposed to antineoplastic drugs on many occasions have adverse reproductive outcomes such as spontaneous abortions, stillbirths, and congenital malformations. Moreover, studies have shown that exposure to these drugs leads to menstrual cycle irregularities. Antineoplastic drugs may also increase the risk of learning disabilities among children of health care workers who are exposed to these hazardous substances.
Moreover, these drugs have carcinogenic effects. In the past five decades, multiple studies have shown the carcinogenic effects of exposure to antineoplastic drugs. Similarly, there have been research studies that linked alkylating agents with humans developing leukemias. Studies have reported elevated risk of breast cancer, nonmelanoma skin cancer, and cancer of the rectum among nurses who are exposed to these drugs. Other investigations revealed that there is a potential genotoxic effect from anti-neoplastic drugs to workers in health care settings.
Safe handling in health care settings
As of 2018, there were no occupational exposure limits set for antineoplastic drugs; that is, neither OSHA nor the American Conference of Governmental Industrial Hygienists (ACGIH) had set workplace safety guidelines.
Preparation
NIOSH recommends using a ventilated cabinet that is designed to decrease worker exposure. Additionally, it recommends training of all staff, the use of cabinets, implementing an initial evaluation of the technique of the safety program, and wearing protective gloves and gowns when opening drug packaging, handling vials, or labeling. When wearing personal protective equipment, one should inspect gloves for physical defects before use and always wear double gloves and protective gowns. Health care workers are also required to wash their hands with water and soap before and after working with antineoplastic drugs, change gloves every 30 minutes or whenever punctured, and discard them immediately in a chemotherapy waste container.
The gowns used should be disposable gowns made of polyethylene-coated polypropylene. When wearing gowns, individuals should make sure that the gowns are closed and have long sleeves. When preparation is done, the final product should be completely sealed in a plastic bag.
The health care worker should also wipe all waste containers inside the ventilated cabinet before removing them from the cabinet. Finally, workers should remove all protective wear and put them in a bag for their disposal inside the ventilated cabinet.
Administration
Drugs should only be administered using protective medical devices such as needleless systems and closed systems, and using techniques such as priming of IV tubing by pharmacy personnel inside a ventilated cabinet. Workers should always wear personal protective equipment such as double gloves, goggles, and protective gowns when opening the outer bag and assembling the delivery system to deliver the drug to the patient, and when disposing of all material used in the administration of the drugs.
Hospital workers should never remove tubing from an IV bag that contains an antineoplastic drug, and when disconnecting the tubing in the system, they should make sure the tubing has been thoroughly flushed. After removing the IV bag, the workers should place it together with other disposable items directly in the yellow chemotherapy waste container with the lid closed. Protective equipment should be removed and put into a disposable chemotherapy waste container. After this has been done, one should double bag the chemotherapy waste before or after removing one's inner gloves. Moreover, one must always wash one's hands with soap and water before leaving the drug administration site.
Employee training
All employees whose jobs in health care facilities expose them to hazardous drugs must receive training. Training should include shipping and receiving personnel, housekeepers, pharmacists, assistants, and all individuals involved in the transportation and storage of antineoplastic drugs. These individuals should receive information and training to inform them of the hazards of the drugs present in their areas of work. They should be informed and trained on operations and procedures in their work areas where they can encounter hazards, different methods used to detect the presence of hazardous drugs and how the hazards are released, and the physical and health hazards of the drugs, including their reproductive and carcinogenic hazard potential. Additionally, they should be informed and trained on the measures they should take to avoid and protect themselves from these hazards. This information ought to be provided when health care workers come into contact with the drugs, that is, perform the initial assignment in a work area with hazardous drugs. Moreover, training should also be provided when new hazards emerge as well as when new drugs, procedures, or equipment are introduced.
Housekeeping and waste disposal
When performing cleaning and decontaminating the work area where antineoplastic drugs are used, one should make sure that there is sufficient ventilation to prevent the buildup of airborne drug concentrations. When cleaning the work surface, hospital workers should use deactivation and cleaning agents before and after each activity as well as at the end of their shifts. Cleaning should always be done using double protective gloves and disposable gowns. After employees finish up cleaning, they should dispose of the items used in the activity in a yellow chemotherapy waste container while still wearing protective gloves. After removing the gloves, they should thoroughly wash their hands with soap and water. Anything that comes into contact or has a trace of the antineoplastic drugs, such as needles, empty vials, syringes, gowns, and gloves, should be put in the chemotherapy waste container.
Spill control
A written policy needs to be in place in case of a spill of antineoplastic products. The policy should address the possibility of various sizes of spills as well as the procedure and personal protective equipment required for each size. A trained worker should handle a large spill and always dispose of all cleanup materials in the chemical waste container according to EPA regulations, not in a yellow chemotherapy waste container.
Occupational monitoring
A medical surveillance program must be established. In case of exposure, occupational health professionals need to ask for a detailed history and do a thorough physical exam. They should test the urine of the potentially exposed worker by doing a urine dipstick or microscopic examination, mainly looking for blood, as several antineoplastic drugs are known to cause bladder damage.
Urinary mutagenicity is a marker of exposure to antineoplastic drugs that was first used by Falck and colleagues in 1979 and uses bacterial mutagenicity assays. Apart from being nonspecific, the test can be influenced by extraneous factors such as dietary intake and smoking and is, therefore, used sparingly. However, the test played a significant role in changing the use of horizontal flow cabinets to vertical flow biological safety cabinets during the preparation of antineoplastic drugs because the former exposed health care workers to high levels of drugs. This changed the handling of drugs and effectively reduced workers' exposure to antineoplastic drugs.
Biomarkers of exposure to antineoplastic drugs commonly include urinary platinum, methotrexate, urinary cyclophosphamide and ifosfamide, and the urinary metabolite of 5-fluorouracil. In addition, other assays can measure the drugs directly in the urine, although they are rarely used. A measurement of these drugs directly in one's urine is a sign of high exposure levels and that an uptake of the drugs is happening either through inhalation or dermally.
Available agents
There is an extensive list of antineoplastic agents. Several classification schemes have been used to subdivide the medicines used for cancer into several different types.
History
The first use of small-molecule drugs to treat cancer was in the early 20th century, although the specific chemicals first used were not originally intended for that purpose. Mustard gas was used as a chemical warfare agent during World War I and was discovered to be a potent suppressor of hematopoiesis (blood production). A similar family of compounds known as nitrogen mustards were studied further during World War II at the Yale School of Medicine. It was reasoned that an agent that damaged the rapidly growing white blood cells might have a similar effect on cancer. Therefore, in December 1942, several people with advanced lymphomas (cancers of the lymphatic system and lymph nodes) were given the drug by vein, rather than by breathing the irritating gas. Their improvement, although temporary, was remarkable. Concurrently, during a military operation in World War II, following a German air raid on the Italian harbour of Bari, several hundred people were accidentally exposed to mustard gas, which had been transported there by the Allied forces to prepare for possible retaliation in the event of German use of chemical warfare. The survivors were later found to have very low white blood cell counts. After WWII was over and the reports declassified, the experiences converged and led researchers to look for other substances that might have similar effects against cancer. The first chemotherapy drug to be developed from this line of research was mustine. Since then, many other drugs have been developed to treat cancer, and drug development has exploded into a multibillion-dollar industry, although the principles and limitations of chemotherapy discovered by the early researchers still apply.
The term chemotherapy
The word chemotherapy without a modifier usually refers to cancer treatment, but its historical meaning was broader. The term was coined in the early 1900s by Paul Ehrlich as meaning any use of chemicals to treat any disease (chemo- + -therapy), such as the use of antibiotics (antibacterial chemotherapy). Ehrlich was not optimistic that effective chemotherapy drugs would be found for the treatment of cancer. The first modern chemotherapeutic agent was arsphenamine, an arsenic compound discovered in 1907 and used to treat syphilis. This was later followed by sulfonamides (sulfa drugs) and penicillin. In today's usage, the sense "any treatment of disease with drugs" is often expressed with the word pharmacotherapy.
Research
Targeted delivery vehicles
Specially targeted delivery vehicles aim to increase effective levels of chemotherapy for tumor cells while reducing effective levels for other cells. This should result in an increased tumor kill or reduced toxicity or both.
Antibody-drug conjugates
Antibody-drug conjugates (ADCs) comprise an antibody, drug and a linker between them. The antibody will be targeted at a preferentially expressed protein in the tumour cells (known as a tumor antigen) or on cells that the tumor can utilise, such as blood vessel endothelial cells. They bind to the tumor antigen and are internalised, where the linker releases the drug into the cell. These specially targeted delivery vehicles vary in their stability, selectivity, and choice of target, but, in essence, they all aim to increase the maximum effective dose that can be delivered to the tumor cells. Reduced systemic toxicity means that they can also be used in people who are sicker and that they can carry new chemotherapeutic agents that would have been far too toxic to deliver via traditional systemic approaches.
The first approved drug of this type was gemtuzumab ozogamicin (Mylotarg), released by Wyeth (now Pfizer). The drug was approved to treat acute myeloid leukemia. Two other drugs, trastuzumab emtansine and brentuximab vedotin, are both in late clinical trials, and the latter has been granted accelerated approval for the treatment of refractory Hodgkin's lymphoma and systemic anaplastic large cell lymphoma.
Nanoparticles
Nanoparticles are 1–1000 nanometer (nm) sized particles that can promote tumor selectivity and aid in delivering low-solubility drugs. Nanoparticles can be targeted passively or actively. Passive targeting exploits the difference between tumor blood vessels and normal blood vessels. Blood vessels in tumors are "leaky" because they have gaps from 200 to 2000 nm, which allow nanoparticles to escape into the tumor. Active targeting uses biological molecules (antibodies, proteins, DNA and receptor ligands) to preferentially target the nanoparticles to the tumor cells. There are many types of nanoparticle delivery systems, such as silica, polymers, liposomes and magnetic particles. Nanoparticles made of magnetic material can also be used to concentrate agents at tumor sites using an externally applied magnetic field. They have emerged as a useful vehicle in magnetic drug delivery for poorly soluble agents such as paclitaxel.
Electrochemotherapy
Electrochemotherapy is the combined treatment in which injection of a chemotherapeutic drug is followed by application of high-voltage electric pulses locally to the tumor. The treatment enables chemotherapeutic drugs that otherwise cannot pass, or can hardly pass, through the cell membrane (such as bleomycin and cisplatin) to enter the cancer cells. Hence, greater effectiveness of antitumor treatment is achieved.
Clinical electrochemotherapy has been successfully used for treatment of cutaneous and subcutaneous tumors irrespective of their histological origin. The method has been reported as safe, simple and highly effective in all reports on clinical use of electrochemotherapy. Through the ESOPE project (European Standard Operating Procedures of Electrochemotherapy), Standard Operating Procedures (SOP) for electrochemotherapy were prepared, based on the experience of the leading European cancer centres in electrochemotherapy. Recently, new electrochemotherapy modalities have been developed for treatment of internal tumors using surgical procedures, endoscopic routes or percutaneous approaches to gain access to the treatment area.
Hyperthermia therapy
Hyperthermia therapy is heat treatment for cancer that can be a powerful tool when used in combination with chemotherapy (thermochemotherapy) or radiation for the control of a variety of cancers. The heat can be applied locally to the tumor site, which will dilate blood vessels to the tumor, allowing more chemotherapeutic medication to enter the tumor. Additionally, the tumor cell membrane will become more porous, further allowing more of the chemotherapeutic medicine to enter the tumor cell.
Hyperthermia has also been shown to help prevent or reverse "chemo-resistance." Chemotherapy resistance sometimes develops over time as the tumors adapt and can overcome the toxicity of the chemo medication. "Overcoming chemoresistance has been extensively studied within the past, especially using CDDP-resistant cells. In regard to the potential benefit that drug-resistant cells can be recruited for effective therapy by combining chemotherapy with hyperthermia, it was important to show that chemoresistance against several anticancer drugs (e.g. mitomycin C, anthracyclines, BCNU, melphalan) including CDDP could be reversed at least partially by the addition of heat."
Other animals
Chemotherapy is used in veterinary medicine similar to how it is used in human medicine.
See also
References
External links
Chemotherapy, American Cancer Society
Hazardous Drug Exposures in Health Care, National Institute for Occupational Safety and Health
NIOSH List of Antineoplastic and Other Hazardous Drugs in Healthcare Settings, 2016, National Institute for Occupational Safety and Health
International Ototoxicity Management Group (IOMG) - Wikiversity
Category:Antineoplastic drugs
Category:Cancer treatments
Category:Occupational safety and health
Cosmic microwave background
https://en.wikipedia.org/wiki/Cosmic_microwave_background
The cosmic microwave background (CMB, CMBR), or relic radiation, is microwave radiation that fills all space in the observable universe. With a standard optical telescope, the background space between stars and galaxies is almost completely dark. However, a sufficiently sensitive radio telescope detects a faint background glow that is almost uniform and is not associated with any star, galaxy, or other object. This glow is strongest in the microwave region of the electromagnetic spectrum. Its energy density exceeds that of all the photons emitted by all the stars in the history of the universe. The accidental discovery of the CMB in 1964 by American radio astronomers Arno Allan Penzias and Robert Woodrow Wilson was the culmination of work initiated in the 1940s.
The CMB is landmark evidence of the Big Bang theory for the origin of the universe. In the Big Bang cosmological models, during the earliest periods, the universe was filled with an opaque fog of dense, hot plasma of sub-atomic particles. As the universe expanded, this plasma cooled to the point where protons and electrons combined to form neutral atoms of mostly hydrogen. Unlike the plasma, these atoms could not scatter thermal radiation by Thomson scattering, and so the universe became transparent. Known as the recombination epoch, this decoupling event released photons to travel freely through space. However, the photons have grown less energetic due to the cosmological redshift associated with the expansion of the universe. The surface of last scattering refers to a shell at the right distance in space so photons are now received that were originally emitted at the time of decoupling.
The CMB is very smooth and uniform, but maps by sensitive detectors detect small but important temperature variations. Ground and space-based experiments such as COBE, WMAP and Planck have been used to measure these temperature inhomogeneities. The anisotropy structure is influenced by various interactions of matter and photons up to the point of decoupling, which results in a characteristic pattern of tiny ripples that varies with angular scale. The distribution of the anisotropy across the sky has frequency components that can be represented by a power spectrum displaying a sequence of peaks and valleys. The peak values of this spectrum hold important information about the physical properties of the early universe: the first peak determines the overall curvature of the universe, while the second and third peak detail the density of normal matter and so-called dark matter, respectively. Extracting fine details from the CMB data can be challenging, since the emission has undergone modification by foreground features such as galaxy clusters.
Features
The cosmic microwave background radiation is an emission of uniform black body thermal energy coming from all directions. The intensity of the CMB is expressed in kelvin (K), the SI unit of temperature. The CMB has a thermal black body spectrum at a temperature of about 2.725 K. Variations in intensity are expressed as variations in temperature. The blackbody temperature uniquely characterizes the intensity of the radiation at all wavelengths; a measured brightness temperature at any wavelength can be converted to a blackbody temperature.
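As a rough, illustrative check (not drawn from the sources cited here) that a black body at this temperature does indeed peak in the microwave band, the frequency form of Wien's displacement law, νpeak ≈ 2.821 kB T / h, can be evaluated numerically; the short Python sketch below is only a back-of-the-envelope calculation.

    # Back-of-the-envelope check that a ~2.725 K black body peaks in the microwave band.
    # Wien's displacement law, frequency form: nu_peak ~ 2.821 * k_B * T / h.
    k_B = 1.380649e-23   # Boltzmann constant, J/K
    h = 6.62607015e-34   # Planck constant, J s
    T_cmb = 2.725        # CMB temperature used in this article, K

    nu_peak = 2.821 * k_B * T_cmb / h
    print(f"Peak frequency: {nu_peak / 1e9:.0f} GHz")   # about 160 GHz

The result, near 160 GHz (a wavelength of roughly 2 mm), lies squarely in the microwave region.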
The radiation is remarkably uniform across the sky, very unlike the almost point-like structure of stars or clumps of stars in galaxies. The radiation is isotropic to roughly one part in 25,000: the root mean square variations are just over 100 μK, after subtracting a dipole anisotropy from the Doppler shift of the background radiation. The latter is caused by the peculiar velocity of the Sun relative to the comoving cosmic rest frame as it moves at 369.82 ± 0.11 km/s towards the constellation Crater near its boundary with the constellation Leo. The CMB dipole and aberration at higher multipoles have been measured, consistent with galactic motion.
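The dipole amplitude quoted later in the article follows, to leading order, from the Doppler relation ΔT ≈ (v/c) T0 applied to the solar velocity given above; the Python sketch below is purely illustrative arithmetic, not a reproduction of the measurement.

    # Dipole amplitude implied by the Sun's motion relative to the CMB rest frame.
    c = 2.99792458e8     # speed of light, m/s
    v_sun = 369.82e3     # solar velocity quoted above, m/s
    T0 = 2.725           # mean CMB temperature, K

    dT_dipole = (v_sun / c) * T0
    print(f"Dipole amplitude: {dT_dipole * 1e3:.2f} mK")   # about 3.36 mK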
Despite the very small degree of anisotropy in the CMB, many aspects can be measured with high precision and such measurements are critical for cosmological theories.
In addition to temperature anisotropy, the CMB should have an angular variation in polarization. The polarisation at each direction in the sky has an orientation described in terms of E-mode and B-mode polarization. The E-mode signal is a factor of 10 less strong than the temperature anisotropy; it supplements the temperature data as they are correlated. The B-mode signal is even weaker but may contain additional cosmological data.
The anisotropy is related to the physical origin of the polarization. Excitation of an electron by linearly polarised light generates polarized light at 90 degrees to the incident direction. If the incoming radiation is isotropic, different incoming directions create polarizations that cancel out. If the incoming radiation has a quadrupole anisotropy, residual polarization will be seen.Hu, Wayne, and Martin White. "A CMB polarization primer." arXiv preprint astro-ph/9706147 (1997).
Other than the temperature and polarization anisotropy, the CMB frequency spectrum is expected to feature tiny departures from the black-body law known as spectral distortions. These are also at the focus of an active research effort with the hope of a first measurement within the forthcoming decades, as they contain a wealth of information about the primordial universe and the formation of structures at late time.
The CMB contains the vast majority of photons in the universe by a factor of 400 to 1; the number density of photons in the CMB is one billion times (109) the number density of matter in the universe. The present-day energy density of CMB photons greatly exceeds that of the photons emitted by all the stars over the history of the universe. Without the expansion of the universe to cause the cooling of the CMB, the night sky would shine as brightly as the Sun.K.A. Olive and J.A. Peacock (September 2017) "21. Big-Bang Cosmology", in S. Navas et al. (Particle Data Group), Phys. Rev. D 110, 030001 (2024). The energy density of the CMB is about 0.26 eV/cm3 (4.2 × 10−14 J/m3), corresponding to about 411 photons/cm3.
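The photon number density and energy density quoted here follow from the standard black-body relations nγ = (2ζ(3)/π²)(kB T/ħc)³ and u = aT⁴; the Python sketch below simply evaluates these textbook formulas at the measured temperature and is not taken from the cited reference.

    import math

    # Photon number density and energy density of a black body at the CMB temperature.
    k_B = 1.380649e-23      # Boltzmann constant, J/K
    hbar = 1.054571817e-34  # reduced Planck constant, J s
    c = 2.99792458e8        # speed of light, m/s
    zeta3 = 1.2020569       # Riemann zeta(3)
    T = 2.72548             # measured CMB temperature, K

    x = k_B * T / (hbar * c)                             # thermal wavenumber, 1/m
    n_gamma = 2 * zeta3 / math.pi**2 * x**3              # photons per m^3
    a_rad = math.pi**2 * k_B**4 / (15 * hbar**3 * c**3)  # radiation constant, J m^-3 K^-4
    u = a_rad * T**4                                     # energy density, J/m^3

    print(f"{n_gamma / 1e6:.0f} photons/cm^3")           # about 411
    print(f"{u / 1.602176634e-19 / 1e6:.2f} eV/cm^3")    # about 0.26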
History
Early speculations
In 1931, Georges Lemaître speculated that remnants of the early universe may be observable as radiation, but his candidate was cosmic rays. Richard C. Tolman showed in 1934 that expansion of the universe would cool blackbody radiation while maintaining a thermal spectrum.
The cosmic microwave background was first predicted in 1948 by Ralph Alpher and Robert Herman, in a correction they prepared for a paper by Alpher's PhD advisor George Gamow. Alpher and Herman were able to estimate the temperature of the cosmic microwave background to be 5 K.
Discovery
The first published recognition of the CMB radiation as a detectable phenomenon appeared in a brief paper by Soviet astrophysicists A. G. Doroshkevich and Igor Novikov, in the spring of 1964. In 1964, David Todd Wilkinson and Peter Roll, Robert H. Dicke's colleagues at Princeton University, began constructing a Dicke radiometer to measure the cosmic microwave background. This basic design for a radiometer has been used in most subsequent cosmic microwave background experiments. In 1964, Arno Penzias and Robert Woodrow Wilson at the Crawford Hill location of Bell Telephone Laboratories in nearby Holmdel Township, New Jersey had built a Dicke radiometer that they intended to use for radio astronomy and satellite communication experiments. The antenna was constructed in 1959 to support Project Echo—the National Aeronautics and Space Administration's passive communications satellites, which used large Earth orbiting aluminized plastic balloons as reflectors to bounce radio signals from one point on the Earth to another. On 20 May 1964 they made their first measurement clearly showing the presence of the microwave background, with their instrument having an excess 4.2K antenna temperature which they could not account for. After receiving a telephone call from Crawford Hill, Dicke said "Boys, we've been scooped." A meeting between the Princeton and Crawford Hill groups determined that the antenna temperature was indeed due to the microwave background. Penzias and Wilson received the 1978 Nobel Prize in Physics for their discovery.
Cosmic origin
The interpretation of the cosmic microwave background was a controversial issue in the late 1960s. Alternative explanations included energy from within the Solar System, from galaxies, from intergalactic plasma and from multiple extragalactic radio sources. Two requirements would show that the microwave radiation was truly "cosmic". First, the intensity vs frequency or spectrum needed to be shown to match a thermal or blackbody source. This was accomplished by 1968 in a series of measurements of the radiation temperature at higher and lower wavelengths. Second, the radiation needed to be shown to be isotropic, the same from all directions. This was also accomplished by 1970, demonstrating that this radiation was truly cosmic in origin.
Progress on theory
In the 1970s numerous studies showed that tiny deviations from isotropy in the CMB could result from events in the early universe.
Harrison, Peebles and Yu, and Zel'dovich realized that the early universe would require quantum inhomogeneities that would result in temperature anisotropy at the level of 10−4 or 10−5. Rashid Sunyaev, using the alternative name relic radiation, calculated the observable imprint that these inhomogeneities would have on the cosmic microwave background.
COBE
After a lull in the 1970s caused in part by the many experimental difficulties in measuring CMB at high precision,
increasingly stringent limits on the anisotropy of the cosmic microwave background were set by ground-based experiments during the 1980s. RELIKT-1, a Soviet cosmic microwave background anisotropy experiment on board the Prognoz 9 satellite (launched 1 July 1983), gave the first upper limits on the large-scale anisotropy.
The other key event in the 1980s was the proposal by Alan Guth for cosmic inflation. This theory of rapid spatial expansion gave an explanation for large-scale isotropy by allowing causal connection just before the epoch of last scattering. With this and similar theories, detailed prediction encouraged larger and more ambitious experiments.
The NASA Cosmic Background Explorer (COBE) satellite, which orbited Earth in 1989–1996, detected and quantified the large-scale anisotropies at the limit of its detection capabilities.
The NASA COBE mission clearly confirmed the primary anisotropy with the Differential Microwave Radiometer instrument, publishing their findings in 1992. The team received the Nobel Prize in physics for 2006 for this discovery.
Precision cosmology
Inspired by the COBE results, a series of ground and balloon-based experiments measured cosmic microwave background anisotropies on smaller angular scales over the next two decades. The sensitivity of the new experiments improved dramatically, with a reduction in internal noise by three orders of magnitude. The primary goal of these experiments was to measure the scale of the first acoustic peak, which COBE did not have sufficient resolution to resolve. This peak corresponds to large scale density variations in the early universe that are created by gravitational instabilities, resulting in acoustical oscillations in the plasma. The first peak in the anisotropy was tentatively detected by the MAT/TOCO experiment and the result was confirmed by the BOOMERanG and MAXIMA experiments. These measurements demonstrated that the geometry of the universe is approximately flat, rather than curved. They ruled out cosmic strings as a major component of cosmic structure formation and suggested cosmic inflation was the right theory of structure formation.
Observations after COBE
Inspired by the initial COBE results of an extremely isotropic and homogeneous background, a series of ground- and balloon-based experiments quantified CMB anisotropies on smaller angular scales over the next decade. The primary goal of these experiments was to measure the angular scale of the first acoustic peak, for which COBE did not have sufficient resolution. These measurements were able to rule out cosmic strings as the leading theory of cosmic structure formation, and suggested cosmic inflation was the right theory.
During the 1990s, the first peak was measured with increasing sensitivity and by 2000 the BOOMERanG experiment reported that the highest power fluctuations occur at scales of approximately one angular degree. Together with other cosmological data, these results implied that the geometry of the universe is flat. A number of ground-based interferometers provided measurements of the fluctuations with higher accuracy over the next three years, including the Very Small Array, Degree Angular Scale Interferometer (DASI), and the Cosmic Background Imager (CBI). DASI made the first detection of the polarization of the CMB and the CBI provided the first E-mode polarization spectrum with compelling evidence that it is out of phase with the T-mode spectrum.
Wilkinson Microwave Anisotropy Probe
In June 2001, NASA launched a second CMB space mission, WMAP, to make much more precise measurements of the large-scale anisotropies over the full sky. WMAP used symmetric, rapid-multi-modulated scanning, rapid switching radiometers at five frequencies to minimize non-sky signal noise. The data from the mission was released in five installments, the last being the nine-year summary.
The results are broadly consistent with Lambda-CDM models based on six free parameters, fitting into Big Bang cosmology with cosmic inflation.
Degree Angular Scale Interferometer
Atacama Cosmology Telescope
Planck Surveyor
A third space mission, the ESA (European Space Agency) Planck Surveyor, was launched in May 2009 and performed an even more detailed investigation until it was shut down in October 2013. Planck employed both HEMT radiometers and bolometer technology and measured the CMB at a smaller scale than WMAP. Its detectors were trialled in the Antarctic Viper telescope as ACBAR (Arcminute Cosmology Bolometer Array Receiver) experiment—which has produced the most precise measurements at small angular scales to date—and in the Archeops balloon telescope.
On 21 March 2013, the European-led research team behind the Planck cosmology probe released the mission's all-sky map of the cosmic microwave background. The map suggests the universe is slightly older than researchers expected. According to the map, subtle fluctuations in temperature were imprinted on the deep sky when the cosmos was about 370,000 years old. The imprint reflects ripples that arose as early in the existence of the universe as the first nonillionth (10−30) of a second. Apparently, these ripples gave rise to the present vast cosmic web of galaxy clusters and dark matter. Based on the 2013 data, the universe contains 4.9% ordinary matter, 26.8% dark matter and 68.3% dark energy. On 5 February 2015, new data was released by the Planck mission, according to which the age of the universe is 13.799 ± 0.021 billion years and the Hubble constant was measured to be 67.74 ± 0.46 (km/s)/Mpc.
South Pole Telescope
Theoretical models
The cosmic microwave background radiation and the cosmological redshift-distance relation are together regarded as the best available evidence for the Big Bang event. Measurements of the CMB have made the inflationary Big Bang model the Standard Cosmological Model. The discovery of the CMB in the mid-1960s curtailed interest in alternatives such as the steady state theory.
In the Big Bang model for the formation of the universe, inflationary cosmology predicts that after about 10−37 seconds the nascent universe underwent exponential growth that smoothed out nearly all irregularities. The remaining irregularities were caused by quantum fluctuations in the inflaton field that caused the inflation event. Long before the formation of stars and planets, the early universe was more compact, much hotter and, starting 10−6 seconds after the Big Bang, filled with a uniform glow from its white-hot fog of interacting plasma of photons, electrons, and baryons.
As the universe expanded, adiabatic cooling caused the energy density of the plasma to decrease until it became favorable for electrons to combine with protons, forming hydrogen atoms. This recombination event happened when the temperature was around 3000 K or when the universe was approximately 379,000 years old. As photons did not interact with these electrically neutral atoms, the former began to travel freely through space, resulting in the decoupling of matter and radiation.
The color temperature of the ensemble of decoupled photons has continued to diminish ever since; now down to about 2.726 K, it will continue to drop as the universe expands. The intensity of the radiation corresponds to black-body radiation at 2.726 K because red-shifted black-body radiation is just like black-body radiation at a lower temperature. According to the Big Bang model, the radiation from the sky we measure today comes from a spherical surface called the surface of last scattering. This represents the set of locations in space at which the decoupling event is estimated to have occurred and at a point in time such that the photons from that distance have just reached observers. Most of the radiation energy in the universe is in the cosmic microwave background, making up a fraction of roughly 6 × 10−5 of the total density of the universe.
Two of the greatest successes of the Big Bang theory are its prediction of the almost perfect black body spectrum and its detailed prediction of the anisotropies in the cosmic microwave background. The CMB spectrum has become the most precisely measured black body spectrum in nature.
Predictions based on the Big Bang model
In the late 1940s Alpher and Herman reasoned that if there was a Big Bang, the expansion of the universe would have stretched the high-energy radiation of the very early universe into the microwave region of the electromagnetic spectrum, and down to a temperature of about 5 K. They were slightly off with their estimate, but they had the right idea. They predicted the CMB. It took another 15 years for Penzias and Wilson to discover that the microwave background was actually there.
According to standard cosmology, the CMB gives a snapshot of the hot early universe at the point in time when the temperature dropped enough to allow electrons and protons to form hydrogen atoms. This event made the universe nearly transparent to radiation because light was no longer being scattered off free electrons. When this occurred some 380,000 years after the Big Bang, the temperature of the universe was about 3,000 K. This corresponds to an ambient energy of about 0.26 eV, which is much less than the 13.6 eV ionization energy of hydrogen. This epoch is generally known as the "time of last scattering" or the period of recombination or decoupling.
Since decoupling, the color temperature of the background radiation has dropped by an average factor of 1,089 due to the expansion of the universe. As the universe expands, the CMB photons are redshifted, causing them to decrease in energy. The color temperature of this radiation stays inversely proportional to a parameter that describes the relative expansion of the universe over time, known as the scale factor. The color temperature Tr of the CMB as a function of redshift, z, can be shown to be proportional to the color temperature of the CMB as observed in the present day (2.725 K or 0.2348 meV):
Tr = 2.725 K × (1 + z)
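For illustration, evaluating this relation at the redshift of last scattering used later in the article (z ≈ 1089) recovers the roughly 3,000 K decoupling temperature, and the corresponding thermal energy kB T is well below hydrogen's 13.6 eV ionization energy; the Python sketch below is only a numerical check of the formula.

    # Evaluate Tr = 2.725 K * (1 + z) at the redshift of last scattering.
    T0 = 2.725               # present-day CMB color temperature, K
    z_dec = 1089             # approximate redshift of decoupling

    T_dec = T0 * (1 + z_dec)
    print(f"T at decoupling: {T_dec:.0f} K")     # about 2970 K, i.e. roughly 3000 K

    k_B_eV = 8.617333262e-5  # Boltzmann constant, eV/K
    print(f"k_B * T: {k_B_eV * T_dec:.2f} eV")   # about 0.26 eV, far below 13.6 eV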
The high degree of uniformity throughout the observable universe and its faint but measured anisotropy lend strong support for the Big Bang model in general and the ΛCDM ("Lambda Cold Dark Matter") model in particular. Moreover, the fluctuations are coherent on angular scales that are larger than the apparent cosmological horizon at recombination. Either such coherence is acausally fine-tuned, or cosmic inflation occurred.
Primary anisotropy
The anisotropy, or directional dependency, of the cosmic microwave background is divided into two types: primary anisotropy, due to effects that occur at the surface of last scattering and before; and secondary anisotropy, due to effects such as interactions of the background radiation with intervening hot gas or gravitational potentials, which occur between the last scattering surface and the observer.
The structure of the cosmic microwave background anisotropies is principally determined by two effects: acoustic oscillations and diffusion damping (also called collisionless damping or Silk damping). The acoustic oscillations arise because of a conflict in the photon–baryon plasma in the early universe. The pressure of the photons tends to erase anisotropies, whereas the gravitational attraction of the baryons, moving at speeds much slower than light, makes them tend to collapse to form overdensities. These two effects compete to create acoustic oscillations, which give the microwave background its characteristic peak structure. The peaks correspond, roughly, to resonances in which the photons decouple when a particular mode is at its peak amplitude.
The peaks contain interesting physical signatures. The angular scale of the first peak determines the curvature of the universe (but not the topology of the universe). The next peak—ratio of the odd peaks to the even peaks—determines the reduced baryon density. The third peak can be used to get information about the dark-matter density.
The locations of the peaks give important information about the nature of the primordial density perturbations. There are two fundamental types of density perturbations called adiabatic and isocurvature. A general density perturbation is a mixture of both, and different theories that purport to explain the primordial density perturbation spectrum predict different mixtures.
Adiabatic density perturbationsIn an adiabatic density perturbation, the fractional additional number density of each type of particle (baryons, photons, etc.) is the same. That is, if at one place there is a 1% higher number density of baryons than average, then at that place there is a 1% higher number density of photons (and a 1% higher number density in neutrinos) than average. Cosmic inflation predicts that the primordial perturbations are adiabatic.
Isocurvature density perturbationsIn an isocurvature density perturbation, the sum (over different types of particle) of the fractional additional densities is zero. That is, a perturbation where at some spot there is 1% more energy in baryons than average, 1% more energy in photons than average, and 2% less energy in neutrinos than average, would be a pure isocurvature perturbation. Hypothetical cosmic strings would produce mostly isocurvature primordial perturbations.
The CMB spectrum can distinguish between these two because these two types of perturbations produce different peak locations. Isocurvature density perturbations produce a series of peaks whose angular scales (ℓ values of the peaks) are roughly in the ratio 1 : 3 : 5 : ..., while adiabatic density perturbations produce peaks whose locations are in the ratio 1 : 2 : 3 : ... Observations are consistent with the primordial density perturbations being entirely adiabatic, providing key support for inflation, and ruling out many models of structure formation involving, for example, cosmic strings.
Collisionless damping is caused by two effects, when the treatment of the primordial plasma as fluid begins to break down:
the increasing mean free path of the photons as the primordial plasma becomes increasingly rarefied in an expanding universe,
the finite depth of the last scattering surface (LSS), which causes the mean free path to increase rapidly during decoupling, even while some Compton scattering is still occurring.
These effects contribute about equally to the suppression of anisotropies at small scales and give rise to the characteristic exponential damping tail seen in the very small angular scale anisotropies.
The depth of the LSS refers to the fact that the decoupling of the photons and baryons does not happen instantaneously, but instead requires an appreciable fraction of the age of the universe up to that era. One method of quantifying how long this process took uses the photon visibility function (PVF). This function is defined so that, denoting the PVF by P(t), the probability that a CMB photon last scattered between time t and t + dt is given by P(t)dt.
The maximum of the PVF (the time when it is most likely that a given CMB photon last scattered) is known quite precisely. The first-year WMAP results put the time at which P(t) has a maximum as 372,000 years. This is often taken as the "time" at which the CMB formed. However, to figure out how long it took the photons and baryons to decouple, we need a measure of the width of the PVF. The WMAP team finds that the PVF is greater than half of its maximal value (the "full width at half maximum", or FWHM) over an interval of 115,000 years. By this measure, decoupling took place over roughly 115,000 years, and thus when it was complete, the universe was roughly 487,000 years old.
Late time anisotropy
Since the CMB came into existence, it has apparently been modified by several subsequent physical processes, which are collectively referred to as late-time anisotropy, or secondary anisotropy. When the CMB photons became free to travel unimpeded, ordinary matter in the universe was mostly in the form of neutral hydrogen and helium atoms. However, observations of galaxies today seem to indicate that most of the volume of the intergalactic medium (IGM) consists of ionized material (since there are few absorption lines due to hydrogen atoms). This implies a period of reionization during which some of the material of the universe was broken into hydrogen ions.
The CMB photons are scattered by free charges such as electrons that are not bound in atoms. In an ionized universe, such charged particles have been liberated from neutral atoms by ionizing (ultraviolet) radiation. Today these free charges are at sufficiently low density in most of the volume of the universe that they do not measurably affect the CMB. However, if the IGM was ionized at very early times when the universe was still denser, then there are two main effects on the CMB:
Small scale anisotropies are erased. (Just as when looking at an object through fog, details of the object appear fuzzy.)
The physics of how photons are scattered by free electrons (Thomson scattering) induces polarization anisotropies on large angular scales. This broad angle polarization is correlated with the broad angle temperature perturbation.
Both of these effects have been observed by the WMAP spacecraft, providing evidence that the universe was ionized at very early times, at a redshift around 10. The detailed provenance of this early ionizing radiation is still a matter of scientific debate. It may have included starlight from the very first population of stars (population III stars), supernovae when these first stars reached the end of their lives, or the ionizing radiation produced by the accretion disks of massive black holes.
The time following the emission of the cosmic microwave background—and before the observation of the first stars—is semi-humorously referred to by cosmologists as the Dark Age, and is a period which is under intense study by astronomers (see 21 centimeter radiation).
Two other effects which occurred between reionization and our observations of the cosmic microwave background, and which appear to cause anisotropies, are the Sunyaev–Zel'dovich effect, where a cloud of high-energy electrons scatters the radiation, transferring some of its energy to the CMB photons, and the Sachs–Wolfe effect, which causes photons from the Cosmic Microwave Background to be gravitationally redshifted or blueshifted due to changing gravitational fields.
Alternative theories
The standard cosmology that includes the Big Bang "enjoys considerable popularity among the practicing cosmologists".
However, there are challenges to the standard Big Bang framework for explaining CMB data. In particular, standard cosmology requires fine-tuning of some free parameters, with different values supported by different experimental data.
As an example of the fine-tuning issue, standard cosmology cannot predict the present temperature of the relic radiation, T0 ≈ 2.725 K. This value of T0 is one of the best results of experimental cosmology, and the steady state model can predict it.
However, alternative models have their own set of problems and they have only made post-facto explanations of existing observations. Nevertheless, these alternatives have played an important historic role in providing ideas for and challenges to the standard explanation.
Polarization
The cosmic microwave background is polarized at the level of a few microkelvin. There are two types of polarization, called E-mode (or gradient-mode) and B-mode (or curl mode). This is in analogy to electrostatics, in which the electric field (E-field) has a vanishing curl and the magnetic field (B-field) has a vanishing divergence.
E-modes
The E-modes arise from Thomson scattering in a heterogeneous plasma.
E-modes were first seen in 2002 by the Degree Angular Scale Interferometer (DASI).
B-modes
B-modes are expected to be an order of magnitude weaker than the E-modes. They are not produced by standard scalar-type perturbations, but are generated by gravitational waves during cosmic inflation shortly after the Big Bang.
However, gravitational lensing of the stronger E-modes can also produce B-mode polarization. Detecting the original B-mode signal requires analysis of the contamination caused by lensing of the relatively strong E-mode signal.
Primordial gravitational waves
Models of "slow-roll" cosmic inflation in the early universe predicts primordial gravitational waves that would impact the polarisation of the cosmic microwave background, creating a specific pattern of B-mode polarization. Detection of this pattern would support the theory of inflation and their strength can confirm and exclude different models of inflation.
Claims that this characteristic pattern of B-mode polarization had been measured by the BICEP2 instrument were later attributed to cosmic dust in light of new results from the Planck experiment; subsequent reanalysis compensating for foreground dust shows limits in agreement with results from Lambda-CDM models.
Gravitational lensing
The second type of B-modes was discovered in 2013 using the South Pole Telescope with help from the Herschel Space Observatory. In October 2014, a measurement of the B-mode polarization at 150 GHz was published by the POLARBEAR experiment. Compared to BICEP2, POLARBEAR focuses on a smaller patch of the sky and is less susceptible to dust effects. The team reported that POLARBEAR's measured B-mode polarization was of cosmological origin (and not just due to dust) at a 97.2% confidence level.
Multipole analysis
The CMB angular anisotropies are usually presented in terms of power per multipole. Cosmic Microwave Background review by Scott and Smoot.
The map of temperature across the sky, T(θ, φ), is written as coefficients of spherical harmonics,
T(θ, φ) = Σℓ Σm aℓm Yℓm(θ, φ),
where the term aℓm measures the strength of the angular oscillation in Yℓm(θ, φ), and ℓ is the multipole number while m is the azimuthal number.
The azimuthal variation is not significant and is removed by applying the angular correlation function, giving the power spectrum term Cℓ = ⟨|aℓm|²⟩. Increasing values of ℓ correspond to higher multipole moments of CMB, meaning more rapid variation with angle.
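In practice this decomposition is carried out numerically on pixelized sky maps. The Python sketch below assumes the third-party healpy package (a wrapper for the HEALPix library) and uses a made-up toy spectrum rather than a fitted cosmological one; it only illustrates the round trip from a power spectrum Cℓ to a simulated map and back.

    import numpy as np
    import healpy as hp   # HEALPix pixelization and spherical-harmonic transforms

    nside = 256                      # map resolution parameter
    lmax = 3 * nside - 1
    ell = np.arange(lmax + 1)

    # Toy angular power spectrum (illustrative only): flat in ell*(ell+1)*C_ell.
    cl_in = np.zeros(lmax + 1)
    cl_in[2:] = 1.0 / (ell[2:] * (ell[2:] + 1.0))

    sky_map = hp.synfast(cl_in, nside, lmax=lmax)   # Gaussian realization of the spectrum
    cl_out = hp.anafast(sky_map, lmax=lmax)         # estimated C_ell recovered from the map

    # cl_out scatters around cl_in; the scatter at each ell reflects the cosmic
    # variance of observing a single sky, which is why the lowest multipoles can
    # never be measured as precisely as the small angular scales.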
CMBR monopole term (ℓ = 0)
The monopole term, T = 2.7255 ± 0.0006 K, is the constant isotropic mean temperature of the CMB, quoted with one standard deviation confidence. This term must be measured with absolute temperature devices, such as the FIRAS instrument on the COBE satellite.
CMBR dipole anisotropy (ℓ = 1)
The CMB dipole represents the largest anisotropy, which is in the first spherical harmonic (ℓ = 1), a cosine function. The amplitude of the CMB dipole is around 3.36 mK. The CMB dipole moment is interpreted as the peculiar motion of the Earth relative to the CMB. Its amplitude depends on time due to the Earth's orbit about the barycenter of the solar system. This enables us to add a time-dependent term to the dipole expression. The modulation of this term has a period of one year, which fits the observations made by COBE FIRAS. The dipole moment does not encode any primordial information.
From the CMB data, it is seen that the Sun appears to be moving at 369.82 ± 0.11 km/s relative to the reference frame of the CMB (also called the CMB rest frame, or the frame of reference in which there is no motion through the CMB). The Local Group — the galaxy group that includes our own Milky Way galaxy — appears to be moving at about 620 km/s in the direction of galactic longitude ℓ ≈ 272°, galactic latitude b ≈ 30°. The dipole is now used to calibrate mapping studies.
Multipole (ℓ ≥ 2)
The temperature variation in the CMB temperature maps at higher multipoles, or ℓ ≥ 2, is considered to be the result of perturbations of the density in the early Universe, before the recombination epoch at a redshift of around 1100. Before recombination, the Universe consisted of a hot, dense plasma of electrons and baryons. In such a hot, dense environment, electrons and protons could not form any neutral atoms. The baryons in this early Universe remained highly ionized and so were tightly coupled with photons through the effect of Thomson scattering. These phenomena caused the pressure and gravitational effects to act against each other, and triggered fluctuations in the photon-baryon plasma. Shortly after the recombination epoch, the rapid expansion of the universe caused the plasma to cool down, and these fluctuations were "frozen into" the CMB maps we observe today.
Anomalies
With the increasingly precise data provided by WMAP, there have been a number of claims that the CMB exhibits anomalies, such as very large scale anisotropies, anomalous alignments, and non-Gaussian distributions. The most longstanding of these is the low-ℓ multipole controversy. Even in the COBE map, it was observed that the quadrupole (the ℓ = 2 spherical harmonic) has a low amplitude compared to the predictions of the Big Bang. In particular, the quadrupole and octupole (ℓ = 3) modes appear to have an unexplained alignment with each other and with both the ecliptic plane and equinoxes. A number of groups have suggested that this could be the signature of quantum corrections or new physics at the greatest observable scales; other groups suspect systematic errors in the data.
Ultimately, due to the foregrounds and the cosmic variance problem, the greatest modes will never be as well measured as the small angular scale modes. The analyses were performed on two maps that have had the foregrounds removed as far as possible: the "internal linear combination" map of the WMAP collaboration and a similar map prepared by Max Tegmark and others. This paper warns that "the statistics of this internal linear combination map are complex and inappropriate for most CMB analyses." This paper states, "Not surprisingly, the two most contaminated multipoles are [the quadrupole and octupole], which most closely trace the galactic plane morphology." Later analyses have pointed out that these are the modes most susceptible to foreground contamination from synchrotron, dust, and bremsstrahlung emission, and from experimental uncertainty in the monopole and dipole.
A full Bayesian analysis of the WMAP power spectrum demonstrates that the quadrupole prediction of Lambda-CDM cosmology is consistent with the data at the 10% level and that the observed octupole is not remarkable. Carefully accounting for the procedure used to remove the foregrounds from the full sky map further reduces the significance of the alignment by ~5%.
Recent observations with the Planck telescope, which is very much more sensitive than WMAP and has a larger angular resolution, record the same anomaly, and so instrumental error (but not foreground contamination) appears to be ruled out. Coincidence is a possible explanation: WMAP chief scientist Charles L. Bennett suggested that coincidence and human psychology were involved, "I do think there is a bit of a psychological effect; people want to find unusual things."
Measurements of the density of quasars based on Wide-field Infrared Survey Explorer data find a dipole significantly different from the one extracted from the CMB anisotropy. This difference is in conflict with the cosmological principle.
Future evolution
Assuming the universe keeps expanding and does not suffer a Big Crunch, a Big Rip, or another similar fate, the cosmic microwave background will continue redshifting until it is no longer detectable, and will be superseded first by the background produced by starlight, and later perhaps by the background radiation fields of processes that may take place in the far future of the universe, such as proton decay, evaporation of black holes, and positronium decay.
Timeline of prediction, discovery and interpretation
Thermal (non-microwave background) temperature predictions
1896 – Charles Édouard Guillaume estimates the "radiation of the stars" to be 5–6 K.Guillaume, C.-É., 1896, La Nature 24, series 2, p. 234
1926 – Sir Arthur Eddington estimates the non-thermal radiation of starlight in the galaxy "... by the formula the effective temperature corresponding to this density is 3.18° absolute ... black body".
1930s – Cosmologist Erich Regener calculates that the non-thermal spectrum of cosmic rays in the galaxy has an effective temperature of 2.8 K.
1931 – Term microwave first used in print: "When trials with wavelengths as low as 18 cm. were made known, there was undisguised surprise that the problem of the micro-wave had been solved so soon." Telegraph & Telephone Journal XVII. 179/1
1934 – Richard Tolman shows that black-body radiation in an expanding universe cools but remains thermal.
1946 – Robert Dicke predicts "... radiation from cosmic matter" at < 20 K, but did not refer to background radiation. "In 1946, Robert Dicke and coworkers at MIT tested equipment that could test a cosmic microwave background of intensity corresponding to about 20K in the microwave region. However, they did not refer to such a background, but only to 'radiation from cosmic matter'. Also, this work was unrelated to cosmology and is only mentioned because it suggests that by 1950, detection of the background radiation might have been technically possible, and also because of Dicke's later role in the discovery". See also
1946 – George Gamow calculates a temperature of 50 K (assuming a 3-billion year old universe),George Gamow, The Creation Of The Universe p.50 (Dover reprint of revised 1961 edition) commenting it "... is in reasonable agreement with the actual temperature of interstellar space", but does not mention background radiation.
1953 – Erwin Finlay-Freundlich in support of his tired light theory, derives a blackbody temperature for intergalactic space of 2.3 K and in the following year values of 1.9K and 6.0K.Erwin Finlay-Freundlich, "Ueber die Rotverschiebung der Spektrallinien" (1953) Contributions from the Observatory, University of St. Andrews; no. 4, p. 96–102. Finlay-Freundlich gave two extreme values of 1.9K and 6.0K in Finlay-Freundlich, E.: 1954, "Red shifts in the spectra of celestial bodies", Phil. Mag., Vol. 45, pp. 303–319.
Microwave background radiation predictions and measurements
1941 – Andrew McKellar detected a "rotational" temperature of 2.3 K for the interstellar medium by comparing the population of CN doublet lines measured by W. S. Adams in a B star.
1948 – Ralph Alpher and Robert Herman estimate "the temperature in the universe" at 5 K. Although they do not specifically mention microwave background radiation, it may be inferred.Helge Kragh, Cosmology and Controversy: The Historical Development of Two Theories of the Universe (1999) . "Alpher and Herman first calculated the present temperature of the decoupled primordial radiation in 1948, when they reported a value of 5 K. Although it was not mentioned either then or in later publications that the radiation is in the microwave region, this follows immediately from the temperature ... Alpher and Herman made it clear that what they had called "the temperature in the universe" the previous year referred to a blackbody distributed background radiation quite different from the starlight."
1953 – George Gamow estimates 7 K based on a model that does not rely on a free parameter
1955 – Émile Le Roux of the Nançay Radio Observatory, in a sky survey at λ = 33 cm, initially reported a near-isotropic background radiation of 3 kelvins, plus or minus 2; he did not recognize the cosmological significance and later revised the error bars to 20K.Delannoy, J., Denisse, J. F., Le Roux, E., & Morlet, B. (1957). Mesures absolues de faibles densités de flux de rayonnement à 900 MHz. Annales d'Astrophysique, Vol. 20, p. 222, 20, 222.
1957 – Tigran Shmaonov reports that "the absolute effective temperature of the radioemission background ... is 4±3 K", with a radiation intensity independent of either time or direction of observation. Although Shmaonov did not recognize it at the time, it is now clear that he did observe the cosmic microwave background at a wavelength of 3.2 cm.
1964 – A. G. Doroshkevich and Igor Dmitrievich Novikov publish a brief paper suggesting microwave searches for the black-body radiation predicted by Gamow, Alpher, and Herman, where they name the CMB radiation phenomenon as detectable.
1964–65 – Arno Penzias and Robert Woodrow Wilson measure the temperature to be approximately 3 K. Robert Dicke, James Peebles, P. G. Roll, and D. T. Wilkinson interpret this radiation as a signature of the Big Bang.
1966 – Rainer K. Sachs and Arthur M. Wolfe theoretically predict microwave background fluctuation amplitudes created by gravitational potential variations between observers and the last scattering surface (see Sachs–Wolfe effect).
1968 – Martin Rees and Dennis Sciama theoretically predict microwave background fluctuation amplitudes created by photons traversing time-dependent wells of potential.
1969 – R. A. Sunyaev and Yakov Zel'dovich study the inverse Compton scattering of microwave background photons by hot electrons (see Sunyaev–Zel'dovich effect).
1983 – Researchers from the Cambridge Radio Astronomy Group and the Owens Valley Radio Observatory first detect the Sunyaev–Zel'dovich effect from clusters of galaxies.
1983 – RELIKT-1 Soviet CMB anisotropy experiment was launched.
1990 – FIRAS on the Cosmic Background Explorer (COBE) satellite measures the black body form of the CMB spectrum with exquisite precision, and shows that the microwave background has a nearly perfect black-body spectrum with T = 2.73 K and thereby strongly constrains the density of the intergalactic medium.
January 1992 – Scientists that analysed data from the RELIKT-1 report the discovery of anisotropy in the cosmic microwave background at the Moscow astrophysical seminar.Nobel Prize In Physics: Russia's Missed Opportunities, RIA Novosti, Nov 21, 2006
1992 – Scientists that analysed data from COBE DMR report the discovery of anisotropy in the cosmic microwave background.
1995 – The Cosmic Anisotropy Telescope performs the first high resolution observations of the cosmic microwave background.
1999 – First measurements of acoustic oscillations in the CMB anisotropy angular power spectrum from the MAT/TOCO, BOOMERANG, and Maxima Experiments. The BOOMERanG experiment makes higher quality maps at intermediate resolution, and confirms that the universe is "flat".
2002 – Polarization discovered by DASI.
2003 – E-mode polarization spectrum obtained by the CBI. The CBI and the Very Small Array produce yet higher quality maps at high resolution (covering small areas of the sky).
2003 – The Wilkinson Microwave Anisotropy Probe spacecraft produces an even higher-quality map at low and intermediate resolution of the whole sky (WMAP provides no high-resolution data, but improves on the intermediate-resolution maps from BOOMERanG).
2004 – E-mode polarization spectrum obtained by the CBI.A. Readhead et al., "Polarization observations with the Cosmic Background Imager", Science 306, 836–844 (2004).
2004 – The Arcminute Cosmology Bolometer Array Receiver produces a higher quality map of the high resolution structure not mapped by WMAP.
2005 – The Arcminute Microkelvin Imager and the Sunyaev–Zel'dovich Array begin the first surveys for very high redshift clusters of galaxies using the Sunyaev–Zel'dovich effect.
2005 – Ralph A. Alpher is awarded the National Medal of Science for his groundbreaking work in nucleosynthesis and prediction that the universe expansion leaves behind background radiation, thus providing a model for the Big Bang theory.
2006 – The long-awaited three-year WMAP results are released, confirming previous analysis, correcting several points, and including polarization data.
2006 – Two of COBE's principal investigators, George Smoot and John Mather, received the Nobel Prize in Physics in 2006 for their work on precision measurement of the CMBR.
2006–2011 – Improved measurements from WMAP, new supernova surveys ESSENCE and SNLS, and baryon acoustic oscillations from SDSS and WiggleZ, continue to be consistent with the standard Lambda-CDM model.
2010 – The first all-sky map from the Planck telescope is released.
2013 – An improved all-sky map from the Planck telescope is released, improving the measurements of WMAP and extending them to much smaller scales.
2014 – On March 17, 2014, astrophysicists of the BICEP2 collaboration announced the detection of inflationary gravitational waves in the B-mode power spectrum, which, if confirmed, would provide clear experimental evidence for the theory of inflation. However, on 19 June 2014, the collaboration reported lowered confidence in confirming the cosmic inflation findings.
2015 – On January 30, 2015, the same team of astronomers from BICEP2 withdrew the claim made the previous year. Based on the combined data of BICEP2 and Planck, the European Space Agency announced that the signal can be entirely attributed to dust in the Milky Way.
2018 – The final data and maps from the Planck telescope are released, with improved measurements of the polarization on large scales.
2019 – Analyses of the final 2018 Planck data continue to be released.
In popular culture
In the Stargate Universe TV series (2009–2011), an ancient spaceship, Destiny, was built to study patterns in the CMBR, which in the series is a sentient message left over from the beginning of time.
In Wheelers, a novel (2000) by Ian Stewart & Jack Cohen, CMBR is explained as the encrypted transmissions of an ancient civilization. This allows the Jovian "blimps" to have a society older than the currently-observed age of the universe.
In The Three-Body Problem, a 2008 novel by Liu Cixin, a probe from an alien civilization compromises instruments monitoring the CMBR in order to deceive a character into believing the civilization has the power to manipulate the CMBR itself.
The 2017 issue of the Swiss 20-franc bill lists several astronomical objects with their distances – the CMB is mentioned with 430 · 10^15 light-seconds.
In the 2021 Marvel series WandaVision, a mysterious television broadcast is discovered within the Cosmic Microwave Background.
See also
Notes
References
Further reading
External links
Student Friendly Intro to the CMB – A pedagogic, step-by-step introduction to the cosmic microwave background power spectrum analysis suitable for those with an undergraduate physics background. More in depth than typical online sites. Less dense than cosmology texts.
CMBR Theme on arxiv.org
Audio: Fraser Cain and Dr. Pamela Gay – Astronomy Cast. The Big Bang and Cosmic Microwave Background – October 2006
Visualization of the CMB data from the Planck mission
|
physics
| 7,492
|
7543
|
Computational complexity theory
|
https://en.wikipedia.org/wiki/Computational_complexity_theory
|
In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and explores the relationships between these classifications. A computational problem is a task solved by a computer. A computational problem is solvable by mechanical application of mathematical steps, such as an algorithm.
A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e., the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do. The P versus NP problem, one of the seven Millennium Prize Problems, is part of the field of computational complexity.
Closely related fields in theoretical computer science are analysis of algorithms and computability theory. A key distinction between analysis of algorithms and computational complexity theory is that the former is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the latter asks a more general question about all possible algorithms that could be used to solve the same problem. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. In turn, imposing restrictions on the available resources is what distinguishes computational complexity from computability theory: the latter theory asks what kinds of problems can, in principle, be solved algorithmically.
Computational problems
Problem instances
A computational problem can be viewed as an infinite collection of instances together with a set (possibly empty) of solutions for every instance. The input string for a computational problem is referred to as a problem instance, and should not be confused with the problem itself. In computational complexity theory, a problem refers to the abstract question to be solved. In contrast, an instance of this problem is a rather concrete utterance, which can serve as the input for a decision problem. For example, consider the problem of primality testing. The instance is a number (e.g., 15) and the solution is "yes" if the number is prime and "no" otherwise (in this case, 15 is not prime and the answer is "no"). Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input.
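A minimal Python sketch (illustrative only, not part of the original text) makes the instance/answer distinction concrete by deciding the primality question for a single instance by trial division:

```python
def is_prime(n: int) -> bool:
    """Decide one instance of the primality-testing problem by trial division."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(15))  # False: the answer for the instance 15 is "no"
print(is_prime(13))  # True:  the answer for the instance 13 is "yes"
```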
To further highlight the difference between a problem and an instance, consider the following instance of the decision version of the travelling salesman problem: Is there a route of at most 2000 kilometres passing through all of Germany's 14 largest cities? The quantitative answer to this particular problem instance is of little use for solving other instances of the problem, such as asking for a round trip through 14 sites in Milan whose total length is at most 10 km. For this reason, complexity theory addresses computational problems and not particular problem instances.
Representing problem instances
When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet (i.e., the set {0,1}), and thus the strings are bitstrings. As in a real-world computer, mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices, or by encoding their adjacency lists in binary.
Even though some proofs of complexity-theoretic theorems regularly assume some concrete choice of input encoding, one tries to keep the discussion abstract enough to be independent of the choice of encoding. This can be achieved by ensuring that different representations can be transformed into each other efficiently.
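As a concrete illustration of such an encoding (a sketch; the function name is invented for this example), a graph instance can be turned into a bitstring by flattening its adjacency matrix row by row:

```python
def encode_graph(adj_matrix):
    """Encode a graph as a bitstring by concatenating the rows of its adjacency matrix."""
    return "".join(str(bit) for row in adj_matrix for bit in row)

# A triangle on 3 vertices
triangle = [
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
]
print(encode_graph(triangle))  # "011101110"
```

Any other efficiently computable encoding (for example, an adjacency list written in binary) would serve equally well, since the two representations can be converted into each other in polynomial time.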
Decision problems as formal languages
Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a type of computational problem where the answer is either yes or no (alternatively, 1 or 0). A decision problem can be viewed as a formal language, where the members of the language are instances whose output is yes, and the non-members are those instances whose output is no. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the formal language under consideration. If the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string, otherwise it is said to reject the input.
An example of a decision problem is the following. The input is an arbitrary graph. The problem consists in deciding whether the given graph is connected or not. The formal language associated with this decision problem is then the set of all connected graphs—to obtain a precise definition of this language, one has to decide how graphs are encoded as binary strings.
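A minimal sketch of a decider for this language is given below (for readability the instance is taken as an adjacency list rather than an encoded binary string); it accepts exactly the connected graphs:

```python
from collections import deque

def is_connected(adj):
    """Accept iff the undirected graph given as an adjacency list is connected."""
    if not adj:
        return True
    start = next(iter(adj))
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == len(adj)

print(is_connected({0: [1], 1: [0, 2], 2: [1]}))  # True  -> accept
print(is_connected({0: [1], 1: [0], 2: []}))      # False -> reject
```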
Function problems
A function problem is a computational problem where a single output (of a total function) is expected for every input, but the output is more complex than that of a decision problem—that is, the output is not just yes or no. Notable examples include the traveling salesman problem and the integer factorization problem.
It is tempting to think that the notion of function problems is much richer than the notion of decision problems. However, this is not really the case, since function problems can be recast as decision problems. For example, the multiplication of two integers can be expressed as the set of triples such that the relation holds. Deciding whether a given triple is a member of this set corresponds to solving the problem of multiplying two numbers.
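A small sketch of this recasting (illustrative; the helper names are invented): membership of a triple in the multiplication relation is a yes/no question, and repeated membership queries can recover the function value.

```python
def in_multiplication_language(a: int, b: int, c: int) -> bool:
    """Decision version: is the triple (a, b, c) in the relation a * b = c?"""
    return a * b == c

def multiply_via_decisions(a: int, b: int) -> int:
    """Recover the function value from the decision problem by searching over candidates.
    (Non-negative integers only; a linear search, purely illustrative and not efficient.)"""
    c = 0
    while not in_multiplication_language(a, b, c):
        c += 1
    return c

print(in_multiplication_language(3, 4, 12))  # True
print(multiply_via_decisions(3, 4))          # 12
```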
Measuring the size of an instance
To measure the difficulty of solving a computational problem, one may wish to see how much time the best algorithm requires to solve the problem. However, the running time may, in general, depend on the instance. In particular, larger instances will require more time to solve. Thus the time required to solve a problem (or the space required, or any measure of complexity) is calculated as a function of the size of the instance. The input size is typically measured in bits. Complexity theory studies how algorithms scale as input size increases. For instance, in the problem of finding whether a graph is connected, how much more time does it take to solve a problem for a graph with 2n vertices compared to the time taken for a graph with n vertices?
If the input size is n, the time taken can be expressed as a function of n. Since the time taken on different inputs of the same size can be different, the worst-case time complexity T(n) is defined to be the maximum time taken over all inputs of size n. If T(n) is a polynomial in n, then the algorithm is said to be a polynomial time algorithm. Cobham's thesis argues that a problem can be solved with a feasible amount of resources if it admits a polynomial-time algorithm.
Machine models and complexity measures
Turing machine
A Turing machine is a mathematical model of a general computing machine. It is a theoretical device that manipulates symbols contained on a strip of tape. Turing machines are not intended as a practical computing technology, but rather as a general model of a computing machine—anything from an advanced supercomputer to a mathematician with a pencil and paper. It is believed that if a problem can be solved by an algorithm, there exists a Turing machine that solves the problem. Indeed, this is the statement of the Church–Turing thesis. Furthermore, it is known that everything that can be computed on other models of computation known to us today, such as a RAM machine, Conway's Game of Life, cellular automata, lambda calculus or any programming language can be computed on a Turing machine. Since Turing machines are easy to analyze mathematically, and are believed to be as powerful as any other model of computation, the Turing machine is the most commonly used model in complexity theory.
Many types of Turing machines are used to define complexity classes, such as deterministic Turing machines, probabilistic Turing machines, non-deterministic Turing machines, quantum Turing machines, symmetric Turing machines and alternating Turing machines. They are all equally powerful in principle, but when resources (such as time or space) are bounded, some of these may be more powerful than others.
A deterministic Turing machine is the most basic Turing machine, which uses a fixed set of rules to determine its future actions. A probabilistic Turing machine is a deterministic Turing machine with an extra supply of random bits. The ability to make probabilistic decisions often helps algorithms solve problems more efficiently. Algorithms that use random bits are called randomized algorithms. A non-deterministic Turing machine is a deterministic Turing machine with an added feature of non-determinism, which allows a Turing machine to have multiple possible future actions from a given state. One way to view non-determinism is that the Turing machine branches into many possible computational paths at each step, and if it solves the problem in any of these branches, it is said to have solved the problem. Clearly, this model is not meant to be a physically realizable model, it is just a theoretically interesting abstract machine that gives rise to particularly interesting complexity classes. For examples, see non-deterministic algorithm.
Other machine models
Many machine models different from the standard multi-tape Turing machines have been proposed in the literature, for example random-access machines. Perhaps surprisingly, each of these models can be converted to another without providing any extra computational power. The time and memory consumption of these alternate models may vary. What all these models have in common is that the machines operate deterministically.
However, some computational problems are easier to analyze in terms of more unusual resources. For example, a non-deterministic Turing machine is a computational model that is allowed to branch out to check many different possibilities at once. The non-deterministic Turing machine has very little to do with how we physically want to compute algorithms, but its branching exactly captures many of the mathematical models we want to analyze, so that non-deterministic time is a very important resource in analyzing computational problems.
Complexity measures
For a precise definition of what it means to solve a problem using a given amount of time and space, a computational model such as the deterministic Turing machine is used. The time required by a deterministic Turing machine M on input x is the total number of state transitions, or steps, the machine makes before it halts and outputs the answer ("yes" or "no"). A Turing machine M is said to operate within time f(n) if the time required by M on each input of length n is at most f(n). A decision problem A can be solved in time f(n) if there exists a Turing machine operating in time f(n) that solves the problem. Since complexity theory is interested in classifying problems based on their difficulty, one defines sets of problems based on some criteria. For instance, the set of problems solvable within time f(n) on a deterministic Turing machine is then denoted by DTIME(f(n)).
Analogous definitions can be made for space requirements. Although time and space are the most well-known complexity resources, any complexity measure can be viewed as a computational resource. Complexity measures are very generally defined by the Blum complexity axioms. Other complexity measures used in complexity theory include communication complexity, circuit complexity, and decision tree complexity.
The complexity of an algorithm is often expressed using big O notation.
Best, worst and average case complexity
The best, worst and average case complexity refer to three different ways of measuring the time complexity (or any other complexity measure) of different inputs of the same size. Since some inputs of size n may be faster to solve than others, we define the following complexities:
Best-case complexity: This is the complexity of solving the problem for the best input of size n.
Average-case complexity: This is the complexity of solving the problem on an average. This complexity is only defined with respect to a probability distribution over the inputs. For instance, if all inputs of the same size are assumed to be equally likely to appear, the average case complexity can be defined with respect to the uniform distribution over all inputs of size n.
Amortized analysis: Amortized analysis considers both the costly and less costly operations together over the whole series of operations of the algorithm.
Worst-case complexity: This is the complexity of solving the problem for the worst input of size n.
The order from cheap to costly is: Best, average (of discrete uniform distribution), amortized, worst.
For example, the deterministic sorting algorithm quicksort addresses the problem of sorting a list of integers. The worst case is when the pivot is always the largest or smallest value in the list (so the list is never divided). In this case, the algorithm takes time O(n²). If we assume that all possible permutations of the input list are equally likely, the average time taken for sorting is O(n log n). The best case occurs when each pivoting divides the list in half, also needing O(n log n) time.
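The sketch below (an illustration, not from the original text) makes the contrast concrete: with a first-element pivot an already-sorted list never splits evenly, so the number of comparisons grows roughly quadratically, while a random pivot typically gives the O(n log n) behaviour.

```python
import random

def quicksort(xs, randomized=False):
    """Simple quicksort returning (sorted list, approximate comparison count)."""
    if len(xs) <= 1:
        return list(xs), 0
    pivot = random.choice(xs) if randomized else xs[0]
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    comparisons = len(xs)  # count one comparison per element for the partition pass
    sorted_less, c1 = quicksort(less, randomized)
    sorted_greater, c2 = quicksort(greater, randomized)
    return sorted_less + equal + sorted_greater, comparisons + c1 + c2

data = list(range(200))                        # already sorted: worst case for a first-element pivot
_, worst = quicksort(data)                     # on the order of n^2 comparisons
_, typical = quicksort(data, randomized=True)  # on the order of n log n comparisons on average
print(worst, typical)
```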
Upper and lower bounds on the complexity of problems
To classify the computation time (or similar resources, such as space consumption), it is helpful to demonstrate upper and lower bounds on the maximum amount of time required by the most efficient algorithm to solve a given problem. The complexity of an algorithm is usually taken to be its worst-case complexity unless specified otherwise. Analyzing a particular algorithm falls under the field of analysis of algorithms. To show an upper bound T(n) on the time complexity of a problem, one needs to show only that there is a particular algorithm with running time at most T(n). However, proving lower bounds is much more difficult, since lower bounds make a statement about all possible algorithms that solve a given problem. The phrase "all possible algorithms" includes not just the algorithms known today, but any algorithm that might be discovered in the future. To show a lower bound of T(n) for a problem requires showing that no algorithm can have time complexity lower than T(n).
Upper and lower bounds are usually stated using the big O notation, which hides constant factors and smaller terms. This makes the bounds independent of the specific details of the computational model used. For instance, if T(n) = 7n² + 15n + 40, in big O notation one would write T(n) = O(n²).
Complexity classes
Defining complexity classes
A complexity class is a set of problems of related complexity. Simpler complexity classes are defined by the following factors:
The type of computational problem: The most commonly used problems are decision problems. However, complexity classes can be defined based on function problems, counting problems, optimization problems, promise problems, etc.
The model of computation: The most common model of computation is the deterministic Turing machine, but many complexity classes are based on non-deterministic Turing machines, Boolean circuits, quantum Turing machines, monotone circuits, etc.
The resource (or resources) that is being bounded and the bound: These two properties are usually stated together, such as "polynomial time", "logarithmic space", "constant depth", etc.
Some complexity classes have complicated definitions that do not fit into this framework. Thus, a typical complexity class has a definition like the following:
The set of decision problems solvable by a deterministic Turing machine within time f(n). (This complexity class is known as DTIME(f(n)).)
But bounding the computation time above by some concrete function f(n) often yields complexity classes that depend on the chosen machine model. For instance, the language {xx | x is any binary string} can be solved in linear time on a multi-tape Turing machine, but necessarily requires quadratic time in the model of single-tape Turing machines. If we allow polynomial variations in running time, the Cobham–Edmonds thesis states that "the time complexities in any two reasonable and general models of computation are polynomially related". This forms the basis for the complexity class P, which is the set of decision problems solvable by a deterministic Turing machine within polynomial time. The corresponding set of function problems is FP.
Important complexity classes
Many important complexity classes can be defined by bounding the time or space used by the algorithm. Some important complexity classes of decision problems defined in this manner are the following:
Resource, determinism, and the corresponding complexity classes with their resource constraints:
Space, non-deterministic: NSPACE(f(n)) for space O(f(n)); NL for logarithmic space; NPSPACE for polynomial space; NEXPSPACE for exponential space
Space, deterministic: DSPACE(f(n)) for space O(f(n)); L for logarithmic space; PSPACE for polynomial space; EXPSPACE for exponential space
Time, non-deterministic: NTIME(f(n)) for time O(f(n)); NP for polynomial time; NEXPTIME for exponential time
Time, deterministic: DTIME(f(n)) for time O(f(n)); P for polynomial time; EXPTIME for exponential time
Logarithmic-space classes do not account for the space required to represent the problem.
It turns out that PSPACE = NPSPACE and EXPSPACE = NEXPSPACE by Savitch's theorem.
Other important complexity classes include BPP, ZPP and RP, which are defined using probabilistic Turing machines; AC and NC, which are defined using Boolean circuits; and BQP and QMA, which are defined using quantum Turing machines. #P is an important complexity class of counting problems (not decision problems). Classes like IP and AM are defined using Interactive proof systems. ALL is the class of all decision problems.
Hierarchy theorems
For the complexity classes defined in this way, it is desirable to prove that relaxing the requirements on (say) computation time indeed defines a bigger set of problems. In particular, although DTIME(n) is contained in DTIME(n²), it would be interesting to know if the inclusion is strict. For time and space requirements, the answer to such questions is given by the time and space hierarchy theorems respectively. They are called hierarchy theorems because they induce a proper hierarchy on the classes defined by constraining the respective resources. Thus there are pairs of complexity classes such that one is properly included in the other. Having deduced such proper set inclusions, we can proceed to make quantitative statements about how much more additional time or space is needed in order to increase the number of problems that can be solved.
More precisely, the time hierarchy theorem states that
DTIME(o(f(n))) ⊊ DTIME(f(n) · log f(n)).
The space hierarchy theorem states that
DSPACE(o(f(n))) ⊊ DSPACE(f(n)).
The time and space hierarchy theorems form the basis for most separation results of complexity classes. For instance, the time hierarchy theorem tells us that P is strictly contained in EXPTIME, and the space hierarchy theorem tells us that L is strictly contained in PSPACE.
Reduction
Many complexity classes are defined using the concept of a reduction. A reduction is a transformation of one problem into another problem. It captures the informal notion of a problem being at most as difficult as another problem. For instance, if a problem X can be solved using an algorithm for Y, X is no more difficult than Y, and we say that X reduces to Y. There are many different types of reductions, based on the method of reduction, such as Cook reductions, Karp reductions and Levin reductions, and the bound on the complexity of reductions, such as polynomial-time reductions or log-space reductions.
The most commonly used reduction is a polynomial-time reduction. This means that the reduction process takes polynomial time. For example, the problem of squaring an integer can be reduced to the problem of multiplying two integers. This means an algorithm for multiplying two integers can be used to square an integer. Indeed, this can be done by giving the same input to both inputs of the multiplication algorithm. Thus we see that squaring is not more difficult than multiplication, since squaring can be reduced to multiplication.
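A minimal sketch of this reduction (the function names are invented for illustration):

```python
def multiply(a: int, b: int) -> int:
    """Stand-in for any algorithm that multiplies two integers."""
    return a * b

def square(x: int) -> int:
    """Squaring reduces to multiplication: feed the same input to both arguments."""
    return multiply(x, x)

print(square(7))  # 49
```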
This motivates the concept of a problem being hard for a complexity class. A problem X is hard for a class of problems C if every problem in C can be reduced to X. Thus no problem in C is harder than X, since an algorithm for X allows us to solve any problem in C. The notion of hard problems depends on the type of reduction being used. For complexity classes larger than P, polynomial-time reductions are commonly used. In particular, the set of problems that are hard for NP is the set of NP-hard problems.
If a problem X is in C and hard for C, then X is said to be complete for C. This means that X is the hardest problem in C. (Since many problems could be equally hard, one might say that X is one of the hardest problems in C.) Thus the class of NP-complete problems contains the most difficult problems in NP, in the sense that they are the ones most likely not to be in P. Because the problem P = NP is not solved, being able to reduce a known NP-complete problem, Π2, to another problem, Π1, would indicate that there is no known polynomial-time solution for Π1. This is because a polynomial-time solution to Π1 would yield a polynomial-time solution to Π2. Similarly, because all NP problems can be reduced to the set, finding an NP-complete problem that can be solved in polynomial time would mean that P = NP.
Important open problems
P versus NP problem
The complexity class P is often seen as a mathematical abstraction modeling those computational tasks that admit an efficient algorithm. This hypothesis is called the Cobham–Edmonds thesis. The complexity class NP, on the other hand, contains many problems that people would like to solve efficiently, but for which no efficient algorithm is known, such as the Boolean satisfiability problem, the Hamiltonian path problem and the vertex cover problem. Since deterministic Turing machines are special non-deterministic Turing machines, it is easily observed that each problem in P is also a member of the class NP.
The question of whether P equals NP is one of the most important open questions in theoretical computer science because of the wide implications of a solution. If the answer is yes, many important problems can be shown to have more efficient solutions. These include various types of integer programming problems in operations research, many problems in logistics, protein structure prediction in biology, and the ability to find formal proofs of pure mathematics theorems. The P versus NP problem is one of the Millennium Prize Problems proposed by the Clay Mathematics Institute. There is a US$1,000,000 prize for resolving the problem.
Problems in NP not known to be in P or NP-complete
It was shown by Ladner that if P ≠ NP then there exist problems in NP that are neither in P nor NP-complete. Such problems are called NP-intermediate problems. The graph isomorphism problem, the discrete logarithm problem and the integer factorization problem are examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be in P or to be NP-complete.
The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is in P, NP-complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete. If graph isomorphism is NP-complete, the polynomial time hierarchy collapses to its second level. Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due to László Babai and Eugene Luks, has run time 2^(O(√(n log n))) for graphs with n vertices, although some recent work by Babai offers some potentially new perspectives on this.
The integer factorization problem is the computational problem of determining the prime factorization of a given integer. Phrased as a decision problem, it is the problem of deciding whether the input has a prime factor less than k. No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the RSA algorithm. The integer factorization problem is in NP and in co-NP (and even in UP and co-UP). If the problem is NP-complete, the polynomial time hierarchy will collapse to its first level (i.e., NP will equal co-NP). The best known algorithm for integer factorization is the general number field sieve, which takes heuristic time exp(((64/9)^(1/3) + o(1)) · (ln n)^(1/3) · (ln ln n)^(2/3)) to factor an odd integer n (Wolfram MathWorld: Number Field Sieve). However, the best known quantum algorithm for this problem, Shor's algorithm, does run in polynomial time. Unfortunately, this fact doesn't say much about where the problem lies with respect to non-quantum complexity classes.
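The decision formulation can be illustrated with a deliberately naive trial-division sketch (real factoring algorithms such as the general number field sieve are far more sophisticated; this only shows the shape of the decision problem):

```python
def has_factor_less_than(n: int, k: int) -> bool:
    """Decision version of factoring: does n have a nontrivial factor d with 1 < d < k?
    (The smallest nontrivial factor is automatically prime.)"""
    for d in range(2, min(k, n)):
        if n % d == 0:
            return True
    return False

# Binary search on k turns this decision problem into finding the smallest prime factor.
print(has_factor_less_than(91, 10))  # True  (7 divides 91)
print(has_factor_less_than(97, 97))  # False (97 is prime)
```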
Separations between other complexity classes
Many known complexity classes are suspected to be unequal, but this has not been proved. For instance P ⊆ NP ⊆ PP ⊆ PSPACE, but it is possible that P = PSPACE. If P is not equal to NP, then P is not equal to PSPACE either. Since there are many known complexity classes between P and PSPACE, such as RP, BPP, PP, BQP, MA, PH, etc., it is possible that all these complexity classes collapse to one class. Proving that any of these classes are unequal would be a major breakthrough in complexity theory.
Along the same lines, co-NP is the class containing the complement problems (i.e. problems with the yes/no answers reversed) of NP problems. It is believed (Boaz Barak's course on Computational Complexity, Lecture 2) that NP is not equal to co-NP; however, it has not yet been proven. It is clear that if these two complexity classes are not equal then P is not equal to NP, since P = co-P. Thus if P = NP we would have co-P = co-NP, whence NP = P = co-P = co-NP.
Similarly, it is not known if L (the set of all problems that can be solved in logarithmic space) is strictly contained in P or equal to P. Again, there are many complexity classes between the two, such as NL and NC, and it is not known if they are distinct or equal classes.
It is suspected that P and BPP are equal. However, it is currently open if BPP = NEXP.
Intractability
A problem that can theoretically be solved, but requires impractically large (though finite) resources (e.g., time) to do so, is known as an intractable problem (Hopcroft, J.E., Motwani, R. and Ullman, J.D. (2007) Introduction to Automata Theory, Languages, and Computation, Addison Wesley, Boston/San Francisco/New York, page 368). Conversely, a problem that can be solved in practice is called a tractable problem, literally "a problem that can be handled". The term infeasible (literally "cannot be done") is sometimes used interchangeably with intractable, though this risks confusion with a feasible solution in mathematical optimization.
Tractable problems are frequently identified with problems that have polynomial-time solutions (P, PTIME); this is known as the Cobham–Edmonds thesis. Problems that are known to be intractable in this sense include those that are EXPTIME-hard. If NP is not the same as P, then NP-hard problems are also intractable in this sense.
However, this identification is inexact: a polynomial-time solution with large degree or large leading coefficient grows quickly, and may be impractical for practical size problems; conversely, an exponential-time solution that grows slowly may be practical on realistic input, or a solution that takes a long time in the worst case may take a short time in most cases or the average case, and thus still be practical. Saying that a problem is not in P does not imply that all large cases of the problem are hard or even that most of them are. For example, the decision problem in Presburger arithmetic has been shown not to be in P, yet algorithms have been written that solve the problem in reasonable times in most cases. Similarly, algorithms can solve the NP-complete knapsack problem over a wide range of sizes in less than quadratic time and SAT solvers routinely handle large instances of the NP-complete Boolean satisfiability problem.
To see why exponential-time algorithms are generally unusable in practice, consider a program that makes 2^n operations before halting. For small n, say 100, and assuming for the sake of example that the computer does 10^12 operations each second, the program would run for about 4 × 10^10 years, which is the same order of magnitude as the age of the universe. Even with a much faster computer, the program would only be useful for very small instances and in that sense the intractability of a problem is somewhat independent of technological progress. However, an exponential-time algorithm that takes 1.0001^n operations is practical until n gets relatively large.
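The arithmetic behind this estimate is easy to check (a back-of-the-envelope sketch; the machine speed of 10^12 operations per second is just the illustrative assumption used above):

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

operations = 2 ** 100        # work done by the exponential-time program for n = 100
ops_per_second = 10 ** 12    # assumed machine speed
years = operations / ops_per_second / SECONDS_PER_YEAR
print(f"{years:.1e} years")  # roughly 4e10 years, comparable to the age of the universe

# By contrast, a slowly growing exponential stays modest for a long time:
print(1.0001 ** 10_000)      # about 2.7
```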
Similarly, a polynomial time algorithm is not always practical. If its running time is, say, n^15, it is unreasonable to consider it efficient and it is still useless except on small instances. Indeed, in practice even n^3 or n^2 algorithms are often impractical on realistic sizes of problems.
Continuous complexity theory
Continuous complexity theory can refer to complexity theory of problems that involve continuous functions that are approximated by discretizations, as studied in numerical analysis. One approach to complexity theory of numerical analysis is information based complexity.
Continuous complexity theory can also refer to complexity theory of the use of analog computation, which uses continuous dynamical systems and differential equations. Control theory can be considered a form of computation and differential equations are used in the modelling of continuous-time and hybrid discrete-continuous-time systems.
History
An early example of algorithm complexity analysis is the running time analysis of the Euclidean algorithm done by Gabriel Lamé in 1844.
Before the actual research explicitly devoted to the complexity of algorithmic problems started off, numerous foundations were laid out by various researchers. Most influential among these was the definition of Turing machines by Alan Turing in 1936, which turned out to be a very robust and flexible simplification of a computer.
The beginning of systematic studies in computational complexity is attributed to the seminal 1965 paper "On the Computational Complexity of Algorithms" by Juris Hartmanis and Richard E. Stearns, which laid out the definitions of time complexity and space complexity, and proved the hierarchy theorems. In addition, in 1965 Edmonds suggested considering a "good" algorithm to be one with running time bounded by a polynomial of the input size (Richard M. Karp, "Combinatorics, Complexity, and Randomness", 1985 Turing Award Lecture).
Earlier papers studying problems solvable by Turing machines with specific bounded resources include John Myhill's definition of linear bounded automata (Myhill 1960), Raymond Smullyan's study of rudimentary sets (1961), as well as Hisao Yamada's paper on real-time computations (1962). Somewhat earlier, Boris Trakhtenbrot (1956), a pioneer in the field from the USSR, studied another specific complexity measure (Trakhtenbrot, B.A.: Signalizing functions and tabular operators. Uchionnye Zapiski Penzenskogo Pedinstituta (Transactions of the Penza Pedagogical Institute) 4, 75–87 (1956), in Russian). As he remembers:
In 1967, Manuel Blum formulated a set of axioms (now known as Blum axioms) specifying desirable properties of complexity measures on the set of computable functions and proved an important result, the so-called speed-up theorem. The field began to flourish in 1971 when Stephen Cook and Leonid Levin proved the existence of practically relevant problems that are NP-complete. In 1972, Richard Karp took this idea a leap forward with his landmark paper, "Reducibility Among Combinatorial Problems", in which he showed that 21 diverse combinatorial and graph theoretical problems, each infamous for its computational intractability, are NP-complete.
See also
Computational complexity
Descriptive complexity theory
Game complexity
Leaf language
Limits of computation
List of complexity classes
List of computability and complexity topics
List of unsolved problems in computer science
Parameterized complexity
Proof complexity
Quantum complexity theory
Structural complexity theory
Transcomputational problem
Computational complexity of mathematical operations
Works on complexity
References
Citations
Textbooks
Surveys
External links
The Complexity Zoo
Scott Aaronson: Why Philosophers Should Care About Computational Complexity
|
computer_science
| 5,197
|
7783
|
Coriolis force
|
https://en.wikipedia.org/wiki/Coriolis_force
|
In physics, the Coriolis force is a pseudo force that acts on objects in motion within a frame of reference that rotates with respect to an inertial frame. In a reference frame with clockwise rotation, the force acts to the left of the motion of the object. In one with anticlockwise (or counterclockwise) rotation, the force acts to the right. Deflection of an object due to the Coriolis force is called the Coriolis effect. Though recognized previously by others, the mathematical expression for the Coriolis force appeared in an 1835 paper by French scientist Gaspard-Gustave de Coriolis, in connection with the theory of water wheels. Early in the 20th century, the term Coriolis force began to be used in connection with meteorology.
Newton's laws of motion describe the motion of an object in an inertial (non-accelerating) frame of reference. When Newton's laws are transformed to a rotating frame of reference, the Coriolis and centrifugal accelerations appear. When applied to objects with masses, the respective forces are proportional to their masses. The magnitude of the Coriolis force is proportional to the rotation rate, and the magnitude of the centrifugal force is proportional to the square of the rotation rate. The Coriolis force acts in a direction perpendicular to two quantities: the angular velocity of the rotating frame relative to the inertial frame and the velocity of the body relative to the rotating frame, and its magnitude is proportional to the object's speed in the rotating frame (more precisely, to the component of its velocity that is perpendicular to the axis of rotation). The centrifugal force acts outwards in the radial direction and is proportional to the distance of the body from the axis of the rotating frame. These additional forces are termed inertial forces, fictitious forces, or pseudo forces. By introducing these fictitious forces to a rotating frame of reference, Newton's laws of motion can be applied to the rotating system as though it were an inertial system; these forces are correction factors that are not required in a non-rotating system.
In popular (non-technical) usage of the term "Coriolis effect", the rotating reference frame implied is almost always the Earth. Because the Earth rotates, Earth-bound observers need to account for the Coriolis force to correctly analyze the motion of objects. The Earth completes one rotation for each sidereal day, so for motions of everyday objects the Coriolis force is imperceptible; its effects become noticeable only for motions occurring over large distances and long periods of time, such as large-scale movement of air in the atmosphere or water in the ocean, or where high precision is important, such as artillery or missile trajectories. Such motions are constrained by the surface of the Earth, so only the horizontal component of the Coriolis force is generally important. This force causes moving objects on the surface of the Earth to be deflected to the right (with respect to the direction of travel) in the Northern Hemisphere and to the left in the Southern Hemisphere. The horizontal deflection effect is greater near the poles, since the effective rotation rate about a local vertical axis is largest there, and decreases to zero at the equator. Rather than flowing directly from areas of high pressure to low pressure, as they would in a non-rotating system, winds and currents tend to flow to the right of this direction north of the equator ("clockwise") and to the left of this direction south of it ("anticlockwise"). This effect is responsible for the rotation and thus formation of cyclones .
History
Italian scientist Giovanni Battista Riccioli and his assistant Francesco Maria Grimaldi described the effect in connection with artillery in the 1651 Almagestum Novum, writing that rotation of the Earth should cause a cannonball fired to the north to deflect to the east. In 1674, Claude François Milliet Dechales described in his Cursus seu Mundus Mathematicus how the rotation of the Earth should cause a deflection in the trajectories of both falling bodies and projectiles aimed toward one of the planet's poles. Riccioli, Grimaldi, and Dechales all described the effect as part of an argument against the heliocentric system of Copernicus. In other words, they argued that the Earth's rotation should create the effect, and so failure to detect the effect was evidence for an immobile Earth. The Coriolis acceleration equation was derived by Euler in 1749,Truesdell, Clifford. Essays in the History of Mechanics. Springer Science & Business Media, 2012., p. 225Persson, A. "The Coriolis Effect: Four centuries of conflict between common sense and mathematics, Part I: A history to 1885." History of Meteorology 2 (2005): 1–24. and the effect was described in the tidal equations of Pierre-Simon Laplace in 1778.
Gaspard-Gustave de Coriolis published a paper in 1835 on the energy yield of machines with rotating parts, such as waterwheels. That paper considered the supplementary forces that are detected in a rotating frame of reference. Coriolis divided these supplementary forces into two categories. The second category contained a force that arises from the cross product of the angular velocity of a coordinate system and the projection of a particle's velocity into a plane perpendicular to the system's axis of rotation. Coriolis referred to this force as the "compound centrifugal force" due to its analogies with the centrifugal force already considered in category one.Dugas, René and J. R. Maddox (1988). A History of Mechanics. Courier Dover Publications: p. 374. The effect was known in the early 20th century as the "acceleration of Coriolis", and by 1920 as "Coriolis force".
In 1856, William Ferrel proposed the existence of a circulation cell in the mid-latitudes with air being deflected by the Coriolis force to create the prevailing westerly winds.
The understanding of the kinematics of how exactly the rotation of the Earth affects airflow was partial at first. Late in the 19th century, the full extent of the large scale interaction of pressure-gradient force and deflecting force that in the end causes air masses to move along isobars was understood.
Formula
In Newtonian mechanics, the equation of motion for an object in an inertial reference frame is:
F = m a
where F is the vector sum of the physical forces acting on the object, m is the mass of the object, and a is the acceleration of the object relative to the inertial reference frame.
Transforming this equation to a reference frame rotating about a fixed axis through the origin with angular velocity Ω having variable rotation rate, the equation takes the form:
F − m (dΩ/dt) × r′ − 2m Ω × v′ − m Ω × (Ω × r′) = m a′
where the prime (′) variables denote coordinates of the rotating reference frame (not a derivative) and:
F is the vector sum of the physical forces acting on the object
Ω is the angular velocity of the rotating reference frame relative to the inertial frame
r′ is the position vector of the object relative to the rotating reference frame
v′ is the velocity of the object relative to the rotating reference frame
a′ is the acceleration of the object relative to the rotating reference frame
The fictitious forces as they are perceived in the rotating frame act as additional forces that contribute to the apparent acceleration just like the real external forces (Taylor (2005), p. 329). The fictitious force terms of the equation are, reading from left to right:
Euler force: −m (dΩ/dt) × r′
Coriolis force: −2m Ω × v′
centrifugal force: −m Ω × (Ω × r′)
As seen in these formulas the Euler and centrifugal forces depend on the position vector of the object, while the Coriolis force depends on the object's velocity as measured in the rotating reference frame. As expected, for a non-rotating inertial frame of reference the Coriolis force and all other fictitious forces disappear.
Direction of Coriolis force for simple cases
As the Coriolis force is proportional to a cross product of two vectors, it is perpendicular to both vectors, in this case the object's velocity and the frame's rotation vector. It therefore follows that:
if the velocity is parallel to the rotation axis, the Coriolis force is zero. For example, on Earth, this situation occurs for a body at the equator moving north or south relative to the Earth's surface. (At any latitude other than the equator, however, the north–south motion would have a component perpendicular to the rotation axis and a force specified by the inward or outward cases mentioned below).
if the velocity is straight inward to the axis, the Coriolis force is in the direction of local rotation. For example, on Earth, this situation occurs for a body at the equator falling downward, as in the Dechales illustration above, where the falling ball travels further to the east than does the tower. Note also that heading north in the Northern Hemisphere would have a velocity component toward the rotation axis, resulting in a Coriolis force to the east (more pronounced the further north one is).
if the velocity is straight outward from the axis, the Coriolis force is against the direction of local rotation. In the tower example, a ball launched upward would move toward the west.
if the velocity is in the direction of rotation, the Coriolis force is outward from the axis. For example, on Earth, this situation occurs for a body at the equator moving east relative to Earth's surface. It would move upward as seen by an observer on the surface. This effect (see Eötvös effect below) was discussed by Galileo Galilei in 1632 and by Riccioli in 1651.
if the velocity is against the direction of rotation, the Coriolis force is inward to the axis. For example, on Earth, this situation occurs for a body at the equator moving west, which would deflect downward as seen by an observer.
Intuitive explanation
For an intuitive explanation of the origin of the Coriolis force, consider an object moving northward on the ground in the Northern Hemisphere. Viewed from outer space, the object does not appear to go due north, but has an eastward motion (it rotates around toward the right along with the surface of the Earth). The further north it travels, the smaller the "radius of its parallel (latitude)" (the minimum distance from the surface point to the axis of rotation, which is in a plane orthogonal to the axis), and so the slower the eastward motion of its surface. As the object moves north it has a tendency to maintain the eastward speed it started with (rather than slowing down to match the reduced eastward speed of local objects on the Earth's surface), so it veers east (i.e. to the right of its initial motion).
Though not obvious from this example, which considers northward motion, the horizontal deflection occurs equally for objects moving eastward or westward (or in any other direction). However, the theory that the effect determines the rotation of draining water in a household bathtub, sink or toilet has been repeatedly disproven by modern-day scientists; the force is negligibly small compared to the many other influences on the rotation.
Length scales and the Rossby number
The time, space, and velocity scales are important in determining the importance of the Coriolis force. Whether rotation is important in a system can be determined by its Rossby number (Ro), which is the ratio of the velocity, U, of a system to the product of the Coriolis parameter, , and the length scale, L, of the motion:
Hence, it is the ratio of inertial to Coriolis forces; a small Rossby number indicates a system is strongly affected by Coriolis forces, and a large Rossby number indicates a system in which inertial forces dominate. For example, in tornadoes, the Rossby number is large, so the Coriolis force is negligible; the balance is between pressure and centrifugal forces. In low-pressure systems the Rossby number is low, as the centrifugal force is negligible; the balance is between Coriolis and pressure forces. In oceanic systems the Rossby number is often around 1, with all three forces comparable.
An atmospheric system moving at U = 10 m/s and occupying a spatial distance of L = 1,000 km has a Rossby number of approximately 0.1.
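A quick numerical check of such estimates (a sketch; Earth's rotation rate and the standard definition of the Coriolis parameter are used, and the tornado figures are rough, representative values):

```python
import math

EARTH_OMEGA = 7.2921e-5  # Earth's rotation rate in rad/s

def rossby_number(U, L, latitude_deg):
    """Ro = U / (f L), with Coriolis parameter f = 2*omega*sin(latitude)."""
    f = 2 * EARTH_OMEGA * math.sin(math.radians(latitude_deg))
    return U / (f * L)

# Synoptic-scale weather system: U ~ 10 m/s, L ~ 1,000 km, mid-latitudes
print(rossby_number(10.0, 1.0e6, 45.0))  # ~0.1: Coriolis force important

# Tornado (representative values): U ~ 70 m/s, L ~ 500 m
print(rossby_number(70.0, 500.0, 45.0))  # ~10^3: Coriolis force negligible
```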
An unguided missile can travel far enough and be in the air long enough to experience the effect of the Coriolis force. Long-range shells fired in the Northern Hemisphere landed to the right of where they were aimed until the effect was noted (those fired in the Southern Hemisphere landed to the left). It was this effect that first drew the attention of Coriolis himself.
Simple cases
Tossed ball on a rotating carousel
Left figure: The trajectory of a ball thrown from the edge of a rotating disc, as seen by an external observer. Because of the rotation, the ball has both an initial tangential velocity and a radial velocity given by the thrower. These velocities bring it to the right of the center. Right figure: The trajectory of the same ball as seen by the thrower, a rotating observer; it deviates from a straight line.
The figures illustrate a ball tossed from 12:00 o'clock toward the center of a counter-clockwise rotating carousel. In the first figure, the ball is seen by a stationary observer above the carousel, and the ball travels in a straight line slightly to the right of the center, because it had an initial tangential velocity given by the rotation (blue arrow) and a radial velocity given by the thrower (green arrow). The resulting combined velocity is shown as a solid red line, and the trajectory is shown as a dotted red line. In the second figure, the ball is seen by an observer rotating with the carousel, so the ball-thrower appears to stay at 12:00 o'clock, and the ball trajectory has a slight curve.
Bounced ball
The figure describes a more complex situation where the tossed ball on a turntable bounces off the edge of the carousel and then returns to the tosser, who catches the ball. The effect of Coriolis force on its trajectory is shown again as seen by two observers: an observer (referred to as the "camera") that rotates with the carousel, and an inertial observer. The figure shows a bird's-eye view based upon the same ball speed on forward and return paths. Within each circle, plotted dots show the same time points. In the left panel, from the camera's viewpoint at the center of rotation, the tosser (smiley face) and the rail both are at fixed locations, and the ball makes a very considerable arc on its travel toward the rail, and takes a more direct route on the way back. From the ball tosser's viewpoint, the ball seems to return more quickly than it went (because the tosser is rotating toward the ball on the return flight).
On the carousel, instead of tossing the ball straight at a rail to bounce back, the tosser must throw the ball toward the right of the target and the ball then seems to the camera to bear continuously to the left of its direction of travel to hit the rail (left because the carousel is turning clockwise). The ball appears to bear to the left from direction of travel on both inward and return trajectories. The curved path requires this observer to recognize a leftward net force on the ball. (This force is "fictitious" because it disappears for a stationary observer, as is discussed shortly.) For some angles of launch, a path has portions where the trajectory is approximately radial, and Coriolis force is primarily responsible for the apparent deflection of the ball (centrifugal force is radial from the center of rotation, and causes little deflection on these segments). When a path curves away from radial, however, centrifugal force contributes significantly to deflection.
The ball's path through the air is straight when viewed by observers standing on the ground (right panel). In the right panel (stationary observer), the ball tosser (smiley face) is at 12 o'clock and the rail the ball bounces from is at position 1. From the inertial viewer's standpoint, positions 1, 2, and 3 are occupied in sequence. At position 2, the ball strikes the rail, and at position 3, the ball returns to the tosser. Straight-line paths are followed because the ball is in free flight, so this observer requires that no net force is applied.
Applied to the Earth
The acceleration affecting the motion of air "sliding" over the Earth's surface is the horizontal component of the Coriolis term −2 Ω × v.
This component is orthogonal to the velocity over the Earth's surface and is given by the expression 2 Ω v sin φ, where
Ω is the spin rate of the Earth
φ is the latitude, positive in the Northern Hemisphere and negative in the Southern Hemisphere
In the Northern Hemisphere, where the latitude is positive, this acceleration, as viewed from above, is to the right of the direction of motion. Conversely, it is to the left in the southern hemisphere.
Rotating sphere
Consider a location with latitude φ on a sphere that is rotating around the north–south axis. A local coordinate system is set up with the x axis horizontally due east, the y axis horizontally due north and the z axis vertically upwards. The rotation vector, velocity of movement and Coriolis acceleration expressed in this local coordinate system [listing components in the order east (e), north (n) and upward (u)] are:
Ω = ω (0, cos φ, sin φ),  v = (ve, vn, vu),
aC = −2 Ω × v = 2 ω (vn sin φ − vu cos φ, −ve sin φ, ve cos φ)
where ω is the rotation rate of the sphere.
When considering atmospheric or oceanic dynamics, the vertical velocity is small, and the vertical component of the Coriolis acceleration (2 ω ve cos φ) is small compared with the acceleration due to gravity (g, approximately 9.81 m/s² near Earth's surface). For such cases, only the horizontal (east and north) components matter. The restriction of the above to the horizontal plane is (setting vu = 0):
aC = (vn f, −ve f, 0)
where f = 2 ω sin φ is called the Coriolis parameter.
By setting vn = 0, it can be seen immediately that (for positive φ and ω) a movement due east results in an acceleration due south; similarly, setting ve = 0, it is seen that a movement due north results in an acceleration due east. In general, observed horizontally, looking along the direction of the movement causing the acceleration, the acceleration always is turned 90° to the right (for positive φ) and of the same size regardless of the horizontal orientation.
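These component formulas are straightforward to evaluate numerically; the sketch below (illustrative, using Earth's rotation rate) reproduces the rule that horizontal motion is deflected to the right in the Northern Hemisphere, along with the upward Eötvös term for eastward motion.

```python
import math

OMEGA = 7.2921e-5  # Earth's rotation rate in rad/s

def coriolis_acceleration(ve, vn, vu, latitude_deg):
    """Coriolis acceleration (east, north, up) components of -2 Omega x v
    for a velocity (ve, vn, vu) in the local east-north-up frame."""
    phi = math.radians(latitude_deg)
    ae = 2 * OMEGA * (vn * math.sin(phi) - vu * math.cos(phi))
    an = -2 * OMEGA * ve * math.sin(phi)
    au = 2 * OMEGA * ve * math.cos(phi)
    return ae, an, au

# Due north at 10 m/s at 45 N: acceleration points east, to the right of the motion
print(coriolis_acceleration(0.0, 10.0, 0.0, 45.0))
# Due east at 10 m/s at 45 N: southward (to the right) plus a small upward Eotvos term
print(coriolis_acceleration(10.0, 0.0, 0.0, 45.0))
```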
In the case of equatorial motion, setting φ = 0° yields:
aC = 2 ω (−vu, 0, ve).
Ω in this case is parallel to the north–south axis.
Accordingly, an eastward motion (that is, in the same direction as the rotation of the sphere) provides an upward acceleration known as the Eötvös effect, and an upward motion produces an acceleration due west.
Meteorology and oceanography
Perhaps the most important impact of the Coriolis effect is in the large-scale dynamics of the oceans and the atmosphere. In meteorology and oceanography, it is convenient to postulate a rotating frame of reference wherein the Earth is stationary. In accommodation of that provisional postulation, the centrifugal and Coriolis forces are introduced. Their relative importance is determined by the applicable Rossby numbers. Tornadoes have high Rossby numbers, so, while tornado-associated centrifugal forces are quite substantial, Coriolis forces associated with tornadoes are for practical purposes negligible.
Because surface ocean currents are driven by the movement of wind over the water's surface, the Coriolis force also affects the movement of ocean currents and cyclones. Many of the ocean's largest currents circulate around warm, high-pressure areas called gyres. Though the circulation is not as significant as that in the air, the deflection caused by the Coriolis effect is what creates the spiralling pattern in these gyres. A similar spiralling wind pattern around a developing storm helps a hurricane form: the stronger the force from the Coriolis effect, the faster the wind spins and picks up additional energy, increasing the strength of the hurricane.
Air within high-pressure systems rotates in a direction such that the Coriolis force is directed radially inwards, and nearly balanced by the outwardly radial pressure gradient. As a result, air travels clockwise around high pressure in the Northern Hemisphere and anticlockwise in the Southern Hemisphere. Air around low-pressure systems rotates in the opposite direction, so that the Coriolis force is directed radially outward and nearly balances an inwardly radial pressure gradient.
Flow around a low-pressure area
If a low-pressure area forms in the atmosphere, air tends to flow in towards it, but is deflected perpendicular to its velocity by the Coriolis force. A system of equilibrium can then establish itself creating circular movement, or a cyclonic flow. Because the Rossby number is low, the force balance is largely between the pressure-gradient force acting towards the low-pressure area and the Coriolis force acting away from the center of the low pressure.
Instead of flowing down the gradient, large scale motions in the atmosphere and ocean tend to occur perpendicular to the pressure gradient. This is known as geostrophic flow. On a non-rotating planet, fluid would flow along the straightest possible line, quickly eliminating pressure gradients. The geostrophic balance is thus very different from the case of "inertial motions" (see below), which explains why mid-latitude cyclones are larger by an order of magnitude than inertial circle flow would be.
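A back-of-the-envelope sketch of this balance (the pressure gradient, air density, and latitude are assumed example values) gives the geostrophic wind speed as v = |∇p| / (ρ f):

import math

def geostrophic_speed(dp_pa, dx_m, lat_deg, rho=1.2, omega=7.2921e-5):
    """Wind speed from geostrophic balance: v = (dp/dx) / (rho * f)."""
    f = 2.0 * omega * math.sin(math.radians(lat_deg))  # Coriolis parameter
    return (dp_pa / dx_m) / (rho * f)

# Assumed example: a 1 hPa pressure drop over 100 km at 45 degrees latitude.
print(f"{geostrophic_speed(100.0, 100e3, 45.0):.1f} m/s")

With these assumed numbers the balanced wind is of order 10 m/s, a typical mid-latitude value.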
This pattern of deflection, and the direction of movement, is called Buys-Ballot's law. In the atmosphere, the pattern of flow is called a cyclone. In the Northern Hemisphere the direction of movement around a low-pressure area is anticlockwise. In the Southern Hemisphere, the direction of movement is clockwise because the rotational dynamics is a mirror image there. At high altitudes, outward-spreading air rotates in the opposite direction. Cyclones rarely form along the equator due to the weak Coriolis effect present in this region.
Inertial circles
An air or water mass moving with speed v subject only to the Coriolis force travels in a circular trajectory called an inertial circle. Since the force is directed at right angles to the motion of the particle, it moves with a constant speed around a circle whose radius R is given by:
R = v / f
where f is the Coriolis parameter 2 ω sin φ, introduced above (where φ is the latitude). The time taken for the mass to complete a full circle is therefore 2π/f. The Coriolis parameter typically has a mid-latitude value of about 10⁻⁴ s⁻¹; hence for a typical atmospheric speed of 10 m/s, the radius is about 100 km, with a period of about 17 hours. For an ocean current with a typical speed of 10 cm/s, the radius of an inertial circle is about 1 km. These inertial circles are clockwise in the Northern Hemisphere (where trajectories are bent to the right) and anticlockwise in the Southern Hemisphere.
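The two relations above, R = v/f and period 2π/f, can be combined in a few lines; the speeds used are the typical values quoted in the text:

import math

def inertial_circle(speed_ms, lat_deg, omega=7.2921e-5):
    """Radius (m) and period (s) of an inertial circle: R = v/f, T = 2*pi/f."""
    f = 2.0 * omega * math.sin(math.radians(lat_deg))
    return speed_ms / f, 2.0 * math.pi / f

# Mid-latitude examples at 45 degrees: a 10 m/s wind and a 0.1 m/s ocean current.
for v in (10.0, 0.1):
    r, t = inertial_circle(v, 45.0)
    print(f"v = {v:5.1f} m/s -> radius {r / 1000:6.1f} km, period {t / 3600:.1f} h")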
If the rotating system is a parabolic turntable, then f is constant and the trajectories are exact circles. On a rotating planet, f varies with latitude and the paths of particles do not form exact circles. Since the parameter f varies as the sine of the latitude, the radius of the oscillations associated with a given speed is smallest at the poles (latitude of ±90°) and increases toward the equator.
Other terrestrial effects
The Coriolis effect strongly affects the large-scale oceanic and atmospheric circulation, leading to the formation of robust features like jet streams and western boundary currents. Such features are in geostrophic balance, meaning that the Coriolis and pressure gradient forces balance each other. Coriolis acceleration is also responsible for the propagation of many types of waves in the ocean and atmosphere, including Rossby waves and Kelvin waves. It is also instrumental in the so-called Ekman dynamics in the ocean, and in the establishment of the large-scale ocean flow pattern called the Sverdrup balance.
Eötvös effect
The practical impact of the "Coriolis effect" is mostly caused by the horizontal acceleration component produced by horizontal motion.
There are other components of the Coriolis effect. Westward-traveling objects are deflected downwards, while eastward-traveling objects are deflected upwards. This is known as the Eötvös effect. This aspect of the Coriolis effect is greatest near the equator. The force produced by the Eötvös effect is similar to the horizontal component, but the much larger vertical forces due to gravity and pressure suggest that it is unimportant in the hydrostatic equilibrium. However, in the atmosphere, winds are associated with small deviations of pressure from the hydrostatic equilibrium. In the tropical atmosphere, the order of magnitude of the pressure deviations is so small that the contribution of the Eötvös effect to the pressure deviations is considerable.
In addition, objects traveling upwards (i.e., out) or downwards (i.e., in) are deflected to the west or east respectively. This effect is also greatest near the equator. Since vertical movement is usually of limited extent and duration, the size of the effect is smaller and requires precise instruments to detect. For example, idealized numerical modeling studies suggest that this effect can directly affect the tropical large-scale wind field by roughly 10% given long-duration (2 weeks or more) heating or cooling in the atmosphere. Moreover, in the case of large changes of momentum, such as a spacecraft being launched into orbit, the effect becomes significant. The fastest and most fuel-efficient path to orbit is a launch from the equator that curves to a directly eastward heading.
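A small sketch of the vertical (Eötvös) component, 2 ω v cos φ for eastward velocity v (an expression consistent with the local-coordinate form given earlier; the vessel speed below is an assumed example), shows the sign reversal between eastward and westward motion:

import math

def eotvos_acceleration(east_speed_ms, lat_deg, omega=7.2921e-5):
    """Vertical (upward-positive) Coriolis component 2*omega*v_east*cos(phi)."""
    return 2.0 * omega * east_speed_ms * math.cos(math.radians(lat_deg))

# Assumed example: a vessel moving at 10 m/s along the equator.
print(f"eastward : {eotvos_acceleration(+10.0, 0.0):+.2e} m/s^2 (apparent weight decreases)")
print(f"westward : {eotvos_acceleration(-10.0, 0.0):+.2e} m/s^2 (apparent weight increases)")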
Intuitive example
Imagine a train that travels through a frictionless railway line along the equator. Assume that, when in motion, it moves at the necessary speed to complete a trip around the world in one day (465 m/s). The Coriolis effect can be considered in three cases: when the train travels west, when it is at rest, and when it travels east. In each case, the Coriolis effect can be calculated from the rotating frame of reference on Earth first, and then checked against a fixed inertial frame. The image below illustrates the three cases as viewed by an observer at rest in a (near) inertial frame from a fixed point above the North Pole along the Earth's axis of rotation; the train is denoted by a few red pixels, fixed at the left side in the leftmost picture, moving in the others.
The train travels toward the west: In that case, it moves against the direction of rotation. Therefore, in the Earth's rotating frame the Coriolis term is pointed inwards towards the axis of rotation (down). This additional force downwards should cause the train to be heavier while moving in that direction. If one looks at this train from a fixed non-rotating frame centred on the Earth, at that speed it remains stationary as the Earth spins beneath it. Hence, the only force acting on it is gravity and the reaction from the track. This force is greater (by 0.34%) than the force that the passengers and the train experience when at rest (rotating along with Earth). This difference is what the Coriolis effect accounts for in the rotating frame of reference.
The train comes to a stop: From the point of view of the Earth's rotating frame, the velocity of the train is zero, thus the Coriolis force is also zero and the train and its passengers regain their usual weight. From the fixed inertial frame of reference above Earth, the train now rotates along with the rest of the Earth. 0.34% of the force of gravity provides the centripetal force needed to achieve the circular motion on that frame of reference. The remaining force, as measured by a scale, makes the train and passengers "lighter" than in the previous case.
The train travels east: In this case, because it moves in the direction of Earth's rotating frame, the Coriolis term is directed outward from the axis of rotation (up). This upward force makes the train seem lighter still than when at rest. (Figure: graph of the force experienced by an object as a function of its speed moving along Earth's equator, as measured within the rotating frame; positive force is directed upward, positive speed is eastward and negative speed is westward.) From the fixed inertial frame of reference above Earth, the train traveling east now rotates at twice the rate it did when at rest, so the amount of centripetal force needed to cause that circular path increases, leaving less of the gravitational force to act on the track. This is what the Coriolis term accounts for in the previous paragraph. As a final check, one can imagine a frame of reference rotating along with the train. Such a frame would be rotating at twice the angular velocity of Earth's rotating frame. The resulting centrifugal force component for that imaginary frame would be greater. Since the train and its passengers are at rest, that would be the only component in that frame, explaining again why the train and the passengers are lighter than in the previous two cases.
This also explains why high-speed projectiles that travel west are deflected down, and those that travel east are deflected up. This vertical component of the Coriolis effect is called the Eötvös effect.
The above example can be used to explain why the Eötvös effect starts diminishing when an object is traveling westward as its tangential speed increases above Earth's rotation (465 m/s). If the westward train in the above example increases speed, part of the force of gravity that pushes against the track accounts for the centripetal force needed to keep it in circular motion in the inertial frame. Once the train doubles its westward speed, to 930 m/s, that centripetal force becomes equal to the force the train experiences when it stops. From the inertial frame, in both cases it rotates at the same speed but in opposite directions. Thus, the force is the same, cancelling the Eötvös effect completely. Any object that moves westward at a speed above 930 m/s experiences an upward force instead. In the figure, the Eötvös effect is illustrated for an object on the train at different speeds. The parabolic shape is because the centripetal force is proportional to the square of the tangential speed. In the inertial frame, the bottom of the parabola is centered at the origin. The offset is because this argument uses the Earth's rotating frame of reference. The graph shows that the Eötvös effect is not symmetrical, and that the resulting downward force experienced by an object that travels west at high velocity is less than the resulting upward force when it travels east at the same speed.
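The argument can be made numeric in the inertial frame. In the sketch below, the 10 kg mass, the value of gravity in the absence of rotation, and the equatorial radius are assumed round figures used only for illustration:

R_EQ = 6.378e6   # equatorial radius of the Earth, m
V_ROT = 465.1    # eastward speed of the equator due to Earth's rotation, m/s
G0 = 9.82        # assumed gravitational acceleration without the rotation term, m/s^2

def apparent_weight(mass_kg, ground_speed_ms):
    """Scale reading for a mass moving east (+) or west (-) along the equator.

    In the inertial frame the track must supply N = m * (G0 - (V_ROT + v)^2 / R_EQ),
    a parabola in the total speed with its extremum at v = -V_ROT.
    """
    total_speed = V_ROT + ground_speed_ms
    return mass_kg * (G0 - total_speed**2 / R_EQ)

# The reading at -930.2 m/s (twice the rotation speed, westward) matches the reading
# at rest, illustrating the cancellation described above.
for v in (-930.2, -465.1, 0.0, +465.1):
    print(f"v = {v:+7.1f} m/s -> scale reading {apparent_weight(10.0, v):.2f} N")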
Draining in bathtubs and toilets
Contrary to popular misconception, bathtubs, toilets, and other water receptacles do not drain in opposite directions in the Northern and Southern Hemispheres. This is because the magnitude of the Coriolis force is negligible at this scale. Forces determined by the initial conditions of the water (e.g. the geometry of the drain, the geometry of the receptacle, preexisting momentum of the water, etc.) are likely to be orders of magnitude greater than the Coriolis force and hence will determine the direction of water rotation, if any. For example, identical toilets flushed in both hemispheres drain in the same direction, and this direction is determined mostly by the shape of the toilet bowl.
Under real-world conditions, the Coriolis force does not influence the direction of water flow perceptibly. Only if the water is so still that the effective rotation rate of the Earth is faster than that of the water relative to its container, and if externally applied torques (such as might be caused by flow over an uneven bottom surface) are small enough, will the Coriolis effect determine the direction of the vortex. Without such careful preparation, the Coriolis effect will be much smaller than various other influences on drain direction, such as any residual rotation of the water and the geometry of the container.
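One way to see this quantitatively is through the Rossby number U/(fL) mentioned earlier: the larger it is, the less the Coriolis force matters. The sink and cyclone figures below are assumed order-of-magnitude values:

import math

def rossby_number(speed_ms, length_m, lat_deg, omega=7.2921e-5):
    """Rossby number U / (f * L); values much greater than 1 mean Coriolis effects are negligible."""
    f = 2.0 * omega * abs(math.sin(math.radians(lat_deg)))
    return speed_ms / (f * length_m)

# Assumed order-of-magnitude inputs at 45 degrees latitude.
print(f"kitchen sink    : Ro ~ {rossby_number(0.1, 0.1, 45.0):,.0f}")
print(f"mid-lat cyclone : Ro ~ {rossby_number(10.0, 1.0e6, 45.0):.2f}")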
Laboratory testing of draining water under atypical conditions
In 1962, Ascher Shapiro performed an experiment at MIT to test the Coriolis force on a large basin of water, with a small wooden cross above the plug hole to display the direction of rotation, covering it and waiting for at least 24 hours for the water to settle. Under these precise laboratory conditions, he demonstrated the effect and consistent counterclockwise rotation. The experiment required extreme precision, since the acceleration due to the Coriolis effect is only a tiny fraction of that due to gravity. The vortex was measured by a cross made of two slivers of wood pinned above the draining hole. The basin took 20 minutes to drain, and the cross started turning only after around 15 minutes; at the end it was turning at a rate of one rotation every 3 to 4 seconds.
Lloyd Trefethen reported clockwise rotation in the Southern Hemisphere at the University of Sydney in five tests with settling times of 18 h or more.
Ballistic trajectories
The Coriolis force is important in external ballistics for calculating the trajectories of very long-range artillery shells. The most famous historical example was the Paris gun, used by the Germans during World War I to bombard Paris from a range of about . The Coriolis force minutely changes the trajectory of a bullet, affecting accuracy at extremely long distances. It is adjusted for by accurate long-distance shooters, such as snipers. At the latitude of Sacramento, California, a northward shot would be deflected to the right. There is also a vertical component, explained in the Eötvös effect section above, which causes westward shots to hit low and eastward shots to hit high. The claim is made that in the Battle of the Falkland Islands in World War I, the British failed to correct their sights for the Southern Hemisphere, and so missed their targets. For the setup of the calculations, see Carlucci & Jacobson (2007), p. 225.
The effects of the Coriolis force on ballistic trajectories should not be confused with the curvature of the paths of missiles, satellites, and similar objects when the paths are plotted on two-dimensional (flat) maps, such as the Mercator projection. The projections of the three-dimensional curved surface of the Earth to a two-dimensional surface (the map) necessarily results in distorted features. The apparent curvature of the path is a consequence of the sphericity of the Earth and would occur even in a non-rotating frame.
The Coriolis force on a moving projectile depends on velocity components in all three directions, latitude, and azimuth. The directions are typically downrange (the direction that the gun is initially pointing), vertical, and cross-range.
A_X = 2 Ω (−V_Y cos L sin AZ − V_Z sin L)
A_Y = 2 Ω (V_X cos L sin AZ + V_Z cos L cos AZ)
A_Z = 2 Ω (V_X sin L − V_Y cos L cos AZ)
where
A_X, down-range acceleration.
A_Y, vertical acceleration with positive indicating acceleration upward.
A_Z, cross-range acceleration with positive indicating acceleration to the right.
V_X, down-range velocity.
V_Y, vertical velocity with positive indicating upward.
V_Z, cross-range velocity with positive indicating velocity to the right.
Ω = 0.00007292 rad/sec, angular velocity of the Earth (based on a sidereal day).
L, latitude with positive indicating Northern Hemisphere.
AZ, azimuth measured clockwise from due North.
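The components above can be evaluated directly. The following sketch uses assumed shot parameters purely for illustration:

import math

OMEGA = 7.292e-5  # angular velocity of the Earth, rad/s

def coriolis_ballistic(vx, vy, vz, lat_deg, az_deg):
    """Coriolis acceleration in down-range (X), vertical (Y) and cross-range (Z) axes."""
    L, AZ = math.radians(lat_deg), math.radians(az_deg)
    ax = -2.0 * OMEGA * (vy * math.cos(L) * math.sin(AZ) + vz * math.sin(L))
    ay = 2.0 * OMEGA * (vx * math.cos(L) * math.sin(AZ) + vz * math.cos(L) * math.cos(AZ))
    az = 2.0 * OMEGA * (vx * math.sin(L) - vy * math.cos(L) * math.cos(AZ))
    return ax, ay, az

# Assumed example: an 800 m/s shell fired due north (AZ = 0) at 50 degrees north;
# the cross-range term is positive, i.e. a deflection to the right.
print(coriolis_ballistic(800.0, 0.0, 0.0, 50.0, 0.0))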
Visualization
To demonstrate the Coriolis effect, a parabolic turntable can be used.
On a flat turntable, the inertia of a co-rotating object forces it off the edge. However, if the turntable surface has the correct paraboloid (parabolic bowl) shape (see the figure) and rotates at the corresponding rate, the force components shown in the figure make the component of gravity tangential to the bowl surface exactly equal to the centripetal force necessary to keep the object rotating at its velocity and radius of curvature (assuming no friction). (See banked turn.) This carefully contoured surface allows the Coriolis force to be displayed in isolation. When a container of fluid is rotating on a turntable, the surface of the fluid naturally assumes the correct parabolic shape. This fact may be exploited to make a parabolic turntable by using a fluid that sets after several hours, such as a synthetic resin. For a video of the Coriolis effect on such a parabolic surface, see the geophysical fluid dynamics lab demonstration by John Marshall, Massachusetts Institute of Technology. For a Java applet of the Coriolis effect on such a parabolic surface, see Brian Fiedler, School of Meteorology at the University of Oklahoma.
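As a sketch of the geometry involved (the rotation rate and dish size are assumed examples), equating the tangential component of gravity with the required centripetal acceleration gives a surface height h(r) = ω² r² / (2g) above the centre:

import math

G = 9.81  # m/s^2

def parabolic_height(radius_m, rpm):
    """Height of the equilibrium surface above its centre for a dish co-rotating at the given rate."""
    omega = rpm * 2.0 * math.pi / 60.0
    return omega**2 * radius_m**2 / (2.0 * G)

# Assumed example: a 1 m diameter dish spun at 10 revolutions per minute.
for r in (0.1, 0.3, 0.5):
    print(f"r = {r:.1f} m -> height {parabolic_height(r, 10.0) * 1000:.1f} mm")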
Discs cut from cylinders of dry ice can be used as pucks, moving around almost frictionlessly over the surface of the parabolic turntable, allowing effects of Coriolis on dynamic phenomena to show themselves. To get a view of the motions as seen from the reference frame rotating with the turntable, a video camera is attached to the turntable so as to co-rotate with the turntable, with results as shown in the figure. In the left panel of the figure, which is the viewpoint of a stationary observer, the gravitational force in the inertial frame pulling the object toward the center (bottom) of the dish is proportional to the distance of the object from the center. A centripetal force of this form causes the elliptical motion. In the right panel, which shows the viewpoint of the rotating frame, the inward gravitational force in the rotating frame (the same force as in the inertial frame) is balanced by the outward centrifugal force (present only in the rotating frame). With these two forces balanced, in the rotating frame the only unbalanced force is Coriolis (also present only in the rotating frame), and the motion is an inertial circle. Analysis and observation of circular motion in the rotating frame is a simplification compared with analysis and observation of elliptical motion in the inertial frame.
Because this reference frame rotates several times a minute rather than only once a day like the Earth, the Coriolis acceleration produced is many times larger and so easier to observe on small time and spatial scales than is the Coriolis acceleration caused by the rotation of the Earth.
In a manner of speaking, the Earth is analogous to such a turntable. The rotation has caused the planet to settle on a spheroid shape, such that the normal force, the gravitational force and the centrifugal force exactly balance each other on a "horizontal" surface. (See equatorial bulge.)
The Coriolis effect caused by the rotation of the Earth can be seen indirectly through the motion of a Foucault pendulum.
In other areas
Coriolis flow meter
A practical application of the Coriolis effect is the mass flow meter, an instrument that measures the mass flow rate and density of a fluid flowing through a tube. The operating principle involves inducing a vibration of the tube through which the fluid passes. The vibration, though not completely circular, provides the rotating reference frame that gives rise to the Coriolis effect. While specific methods vary according to the design of the flow meter, sensors monitor and analyze changes in frequency, phase shift, and amplitude of the vibrating flow tubes. The changes observed represent the mass flow rate and density of the fluid.
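As an idealized sketch (not the working of any particular commercial meter; the tube length, oscillation rate, and flow value are assumed), the net Coriolis force on fluid in a straight tube segment scales with the mass flow rate, which is the quantity the sensors ultimately infer:

def coriolis_force_on_tube(mass_flow_kg_s, angular_rate_rad_s, tube_length_m):
    """Net Coriolis force on a straight tube segment: F = 2 * mdot * omega * L.

    Simplified model: fluid with mass flow rate mdot moves along a tube that is
    instantaneously rotating at angular_rate about an axis through one end.
    """
    return 2.0 * mass_flow_kg_s * angular_rate_rad_s * tube_length_m

def inferred_mass_flow(force_n, angular_rate_rad_s, tube_length_m):
    """Invert the same relation: mdot = F / (2 * omega * L)."""
    return force_n / (2.0 * angular_rate_rad_s * tube_length_m)

# Assumed example values, purely illustrative.
f = coriolis_force_on_tube(0.5, 3.0, 0.2)
print(f"Coriolis force: {f:.2f} N, recovered flow: {inferred_mass_flow(f, 3.0, 0.2):.2f} kg/s")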
Molecular physics
In polyatomic molecules, the motion of the molecule can be described by a rigid-body rotation and internal vibrations of atoms about their equilibrium positions. As a result of the vibrations of the atoms, the atoms are in motion relative to the rotating coordinate system of the molecule. Coriolis effects are therefore present, and make the atoms move in a direction perpendicular to the original oscillations. This leads to a mixing in molecular spectra between the rotational and vibrational levels, from which Coriolis coupling constants can be determined.
Insect flight
Flies (Diptera) and some moths (Lepidoptera) exploit the Coriolis effect in flight with specialized appendages and organs that relay information about the angular velocity of their bodies. Coriolis forces resulting from linear motion of these appendages are detected within the rotating frame of reference of the insects' bodies. In the case of flies, their specialized appendages are dumbbell shaped organs located just behind their wings called "halteres".
The fly's halteres oscillate in a plane at the same beat frequency as the main wings so that any body rotation results in lateral deviation of the halteres from their plane of motion.
In moths, their antennae are known to be responsible for the sensing of Coriolis forces in a similar manner to the halteres in flies. In both flies and moths, a collection of mechanosensors at the base of the appendage are sensitive to deviations at the beat frequency, correlating to rotation in the pitch and roll planes, and at twice the beat frequency, correlating to rotation in the yaw plane.
Lagrangian point stability
In astronomy, Lagrangian points are five positions in the orbital plane of two large orbiting bodies where a small object affected only by gravity can maintain a stable position relative to the two large bodies. The first three Lagrangian points (L1, L2, L3) lie along the line connecting the two large bodies, while the last two points (L4 and L5) each form an equilateral triangle with the two large bodies. The L4 and L5 points, although they correspond to maxima of the effective potential in the coordinate frame that rotates with the two large bodies, are stable due to the Coriolis effect. The stability can result in orbits around just L4 or L5, known as tadpole orbits, where trojans can be found. It can also result in orbits that encircle L3, L4, and L5, known as horseshoe orbits.
See also
Analytical mechanics
Applied mechanics
Classical mechanics
Earth's rotation
Equatorial Rossby wave
Frenet–Serret formulas
Gyroscope
Kinetics (physics)
Reactive centrifugal force
Secondary flow
Statics
Uniform circular motion
Whirlpool
Physics and meteorology
Riccioli, G. B., 1651: Almagestum Novum, Bologna, pp. 425–427 (Original book [in Latin], scanned images of complete pages.)
Coriolis, G. G., 1832: "Mémoire sur le principe des forces vives dans les mouvements relatifs des machines." Journal de l'école Polytechnique, Vol 13, pp. 268–302. (Original article [in French], PDF file, 1.6 MB, scanned images of complete pages.)
Coriolis, G. G., 1835: "Mémoire sur les équations du mouvement relatif des systèmes de corps." Journal de l'école Polytechnique, Vol 15, pp. 142–154 (Original article [in French] PDF file, 400 KB, scanned images of complete pages.)
Gill, A. E. Atmosphere-Ocean dynamics, Academic Press, 1982.
Durran, D. R., 1993: Is the Coriolis force really responsible for the inertial oscillation?, Bull. Amer. Meteor. Soc., 74, pp. 2179–2184; Corrigenda. Bulletin of the American Meteorological Society, 75, p. 261
Durran, D. R., and S. K. Domonkos, 1996: An apparatus for demonstrating the inertial oscillation, Bulletin of the American Meteorological Society, 77, pp. 557–559.
Marion, Jerry B. 1970, Classical Dynamics of Particles and Systems, Academic Press.
Persson, A., 1998 How do we Understand the Coriolis Force? Bulletin of the American Meteorological Society 79, pp. 1373–1385.
Symon, Keith. 1971, Mechanics, Addison–Wesley
Akira Kageyama & Mamoru Hyodo: Eulerian derivation of the Coriolis force
James F. Price: A Coriolis tutorial Woods Hole Oceanographic Institute (2003)
Historical
Grattan-Guinness, I., Ed., 1994: Companion Encyclopedia of the History and Philosophy of the Mathematical Sciences. Vols. I and II. Routledge, 1840 pp.
Grattan-Guinness, I., 1997: The Fontana History of the Mathematical Sciences. Fontana, 817 pp.
Khrgian, A., 1970: Meteorology: A Historical Survey. Vol. 1. Keter Press, 387 pp.
Kuhn, T. S., 1977: Energy conservation as an example of simultaneous discovery. The Essential Tension, Selected Studies in Scientific Tradition and Change, University of Chicago Press, 66–104.
Kutzbach, G., 1979: The Thermal Theory of Cyclones. A History of Meteorological Thought in the Nineteenth Century. Amer. Meteor. Soc., 254 pp.
References
External links
The definition of the Coriolis effect from the Glossary of Meteorology
The Coriolis Effect — a conflict between common sense and mathematics PDF-file. 20 pages. A general discussion by Anders Persson of various aspects of the Coriolis effect, including Foucault's Pendulum and Taylor columns.
The Coriolis effect in meteorology PDF-file. 5 pages. A detailed explanation by Mats Rosengren of how the gravitational force and the rotation of the Earth affect the atmospheric motion over the Earth's surface. 2 figures.
10 Coriolis Effect Videos and Games, from the About.com Weather Page
Coriolis Force – from ScienceWorld
Coriolis Effect and Drains An article from the NEWTON web site hosted by the Argonne National Laboratory.
Catalog of Coriolis videos
Coriolis Effect: A graphical animation, a visual Earth animation with precise explanation
An introduction to fluid dynamics SPINLab Educational Film explains the Coriolis effect with the aid of lab experiments
Do bathtubs drain counterclockwise in the Northern Hemisphere? by Cecil Adams.
Bad Coriolis. An article uncovering misinformation about the Coriolis effect. By Alistair B. Fraser, emeritus professor of meteorology at Pennsylvania State University
The Coriolis Effect: A (Fairly) Simple Explanation, an explanation for the layperson
Observe an animation of the Coriolis effect over Earth's surface
Animation clip showing scenes as viewed from both an inertial frame and a rotating frame of reference, visualizing the Coriolis and centrifugal forces.
Vincent Mallette The Coriolis Force @ INWIT
NASA notes
Interactive Coriolis Fountain lets you control rotation speed, droplet speed and frame of reference to explore the Coriolis effect.
Rotating Co-ordinating Systems, transformation from inertial systems
Category:Classical mechanics
Category:Force
Category:Atmospheric dynamics
Category:Physical phenomena
Category:Fictitious forces
Category:Rotation
Chimpanzee
https://en.wikipedia.org/wiki/Chimpanzee
The chimpanzee (; Pan troglodytes), also simply known as the chimp, is a species of great ape native to the forests and savannahs of tropical Africa. It has four confirmed subspecies and a fifth proposed one. When its close relative, the bonobo, was more commonly known as the pygmy chimpanzee, this species was often called the common chimpanzee or the robust chimpanzee. The chimpanzee and the bonobo are the only species in the genus Pan. Evidence from fossils and DNA sequencing shows that Pan is a sister taxon to the human lineage and is thus humans' closest living relative.
The chimpanzee is covered in coarse black hair but has a bare face, fingers, toes, palms of the hands, and soles of the feet. It is larger and more robust than the bonobo, weighing for males and for females and standing .
The chimpanzee lives in groups that range in size from 15 to 150 members, although individuals travel and forage in much smaller groups during the day. The species lives in a strict male-dominated hierarchy, where disputes are generally settled without the need for violence. Nearly all chimpanzee populations have been recorded using tools, modifying sticks, rocks, grass and leaves and using them for hunting and acquiring honey, termites, ants, nuts and water. The species has also been found creating sharpened sticks to spear small mammals. Its gestation period is eight months. The infant is weaned at about three years old but usually maintains a close relationship with its mother for several years more.
The chimpanzee is listed on the IUCN Red List as an endangered species. Between 170,000 and 300,000 individuals are estimated across its range. The biggest threats to the chimpanzee are habitat loss, poaching, and disease. Chimpanzees appear in Western popular culture as stereotyped clown-figures and have featured in entertainments such as chimpanzees' tea parties, circus acts and stage shows. Although chimpanzees have been kept as pets, their strength, aggressiveness, and unpredictability make them dangerous in this role. Some hundreds have been kept in laboratories for research, especially in the United States. Many attempts have been made to teach languages such as American Sign Language to chimpanzees, with limited success.
Etymology
The English word chimpanzee is first recorded in 1738. It is derived from Vili ci-mpenze or Tshiluba language chimpenze, with a meaning of "ape" or "mockman". The colloquialism "chimp" was most likely coined some time in the late 1870s. The genus name Pan derives from the Greek god of that name, while the specific name troglodytes was taken from the Troglodytae, a mythical race of cave-dwellers.
Taxonomy
The first great ape known to Western science in the 17th century was the "orang-outang" (genus Pongo), the local Malay name being recorded in Java by the Dutch physician Jacobus Bontius. In 1641, the Dutch anatomist Nicolaes Tulp applied the name to a chimpanzee or bonobo brought to the Netherlands from Angola. Another Dutch anatomist, Peter Camper, dissected specimens from Central Africa and Southeast Asia in the 1770s, noting the differences between the African and Asian apes. The German naturalist Johann Friedrich Blumenbach classified the chimpanzee as Simia troglodytes by 1775. Another German naturalist, Lorenz Oken, coined the genus Pan in 1816. The bonobo was recognised as distinct from the chimpanzee by 1933.
Evolution
Despite a large number of Homo fossil finds, Pan fossils were not described until 2005. Existing chimpanzee populations in West and Central Africa do not overlap with the major human fossil sites in East Africa, but chimpanzee fossils have now been reported from Kenya. This indicates that both humans and members of the Pan clade were present in the East African Rift Valley during the Middle Pleistocene.
According to studies published in 2017 by researchers at George Washington University, bonobos, along with chimpanzees, split from the human line about 8 million years ago; then bonobos split from the common chimpanzee line about 2 million years ago. Another 2017 genetic study suggests ancient gene flow (introgression) between 200,000 and 550,000 years ago from the bonobo into the ancestors of central and eastern chimpanzees.
Subspecies and population status
Four subspecies of the chimpanzee have been recognised, with the possibility of a fifth:
Central chimpanzee or the tschego (Pan troglodytes troglodytes), found in Cameroon, the Central African Republic, Equatorial Guinea, Gabon, the Republic of the Congo, and the Democratic Republic of the Congo, with about 140,000 individuals existing in the wild.
Western chimpanzee (P. troglodytes verus), found in Ivory Coast, Guinea, Liberia, Mali, Sierra Leone, Guinea-Bissau, Senegal, and Ghana with about 52,800 individuals still in existence.
Nigeria-Cameroon chimpanzee (P. troglodytes ellioti, also known as P. t. vellerosus), which lives within forested areas across Nigeria and Cameroon, with 6,000–9,000 individuals still in existence.
Eastern chimpanzee (P. troglodytes schweinfurthii), found in the Central African Republic, South Sudan, the Democratic Republic of the Congo, Uganda, Rwanda, Burundi, Tanzania, and Zambia, with approximately 180,000–256,000 individuals still existing in the wild.
Southeastern chimpanzee, P. troglodytes marungensis, in Burundi, Rwanda, Tanzania, and Uganda. Colin Groves argues that this is a subspecies, created by enough variation between the northern and southern populations of P. t. schweinfurthii, but it is not recognised by the IUCN.
Genome
A draft version of the chimpanzee genome was published in 2005 and encodes 18,759 proteins (compared to 20,383 in the human proteome). The DNA sequences of humans and chimpanzees are very similar, and the difference in protein number mostly arises from incomplete sequences in the chimpanzee genome. Both species differ by about 35 million single-nucleotide changes, five million insertion/deletion events and various chromosomal rearrangements. Typical human and chimpanzee protein homologs differ in an average of only two amino acids. About 30% of all human proteins are identical in sequence to the corresponding chimpanzee protein. Duplications of small parts of chromosomes have been the major source of differences between human and chimpanzee genetic material; about 2.7% of the corresponding modern genomes represent differences, produced by gene duplications or deletions, since humans and chimpanzees diverged from their common evolutionary ancestor.
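As a rough worked check (the aligned genome length used here is an assumed round figure, not a value given in the text), the quoted 35 million single-nucleotide changes correspond to a divergence on the order of one percent:

# Assumed, approximate figures for illustration only.
single_nucleotide_changes = 35e6   # from the paragraph above
aligned_genome_bases = 2.9e9       # assumed approximate length of the aligned genomes

print(f"single-nucleotide divergence ~ {single_nucleotide_changes / aligned_genome_bases:.1%}")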
Characteristics
Adult chimpanzees have an average standing height of . Wild adult males weigh between , and females weigh between . In exceptional cases, certain individuals may considerably exceed these measurements, standing over on two legs and weighing up to in captivity.
The chimpanzee is more robustly built than the bonobo but less than the gorilla. The arms of a chimpanzee are longer than its legs and can reach below the knees. The hands have long fingers with short thumbs and flat fingernails. The feet are adapted for grasping, and the big toe is opposable. The pelvis is long with an extended ilium. A chimpanzee's head is rounded with a prominent and prognathous face and a pronounced brow ridge. It has forward-facing eyes, a small nose, rounded non-lobed ears and a long mobile upper lip. Additionally, adult males have sharp canine teeth. Like all great apes, it has a dental formula of 2.1.2.3, that is, two incisors, one canine, two premolars, and three molars on both halves of each jaw. Chimpanzees lack the prominent sagittal crest and associated head and neck musculature of gorillas.
Chimpanzee bodies are covered by coarse hair, except for the face, fingers, toes, palms of the hands, and soles of the feet. Chimpanzees lose more hair as they age and develop bald spots. The hair of a chimpanzee is typically black but can be brown or ginger. As they get older, white or grey patches may appear, particularly on the chin and lower region. Chimpanzee skin that is covered with body hair is white, while exposed areas vary: white which ages into a dark muddy colour in eastern chimpanzees, freckled on white which ages to a heavily mottled muddy colour in central chimpanzees, and black with a butterfly-shaped white mask that darkens with age in western chimpanzees. Facial pigmentation increases with age and exposure to ultraviolet light. Females develop swelling pink skin when in oestrus. Like bonobos, male chimpanzees have a long filiform penis with a small baculum, but without a glans.
Chimpanzees are adapted for both arboreal and terrestrial locomotion. Arboreal locomotion consists of vertical climbing and brachiation. On the ground, chimpanzees move both quadrupedally and bipedally. These movements appear to have similar energy costs. As with bonobos and gorillas, chimpanzees move quadrupedally by knuckle-walking, which probably evolved independently in Pan and Gorilla. Their muscles are 50% stronger per weight than those of humans due to higher content of fast twitch muscle fibres, one of the chimpanzee's adaptations for climbing and swinging. According to Japan's Asahiyama Zoo, the grip strength of an adult chimpanzee is estimated to be , while other sources claim figures of up to .
Ecology
The chimpanzee is a highly adaptable species. It lives in a variety of habitats, including dry savanna, evergreen rainforest, montane forest, swamp forest, and dry woodland-savanna mosaic. In Gombe, the chimpanzee mostly uses semideciduous and evergreen forest as well as open woodland. At Bossou, the chimpanzee inhabits multistage secondary deciduous forest, which has grown after shifting cultivation, as well as primary forest and grassland. At Taï, it is found in the last remaining tropical rain forest in Ivory Coast. The chimpanzee has an advanced cognitive map of its home range and can repeatedly find food. The chimpanzee builds a sleeping nest in a tree in a different location each night, never using the same nest more than once. Chimpanzees sleep alone in separate nests except for infants or juvenile chimpanzees, which sleep with their mothers.
Diet
The chimpanzee is an omnivorous frugivore. It prefers fruit above all other food, but it also eats leaves, leaf buds, seeds, blossoms, stems, pith, bark, and resin. A study in Budongo Forest, Uganda, found that 64.5% of their feeding time was concentrated on fruits (84.6% of which were ripe), particularly those from two species of Ficus, Maesopsis eminii, and Celtis gomphophylla. In addition, 19% of feeding time was spent on arboreal leaves, mostly Broussonetia papyrifera and Celtis mildbraedii. While the chimpanzee is mostly herbivorous, it does eat honey, soil, insects, birds and their eggs, and small to medium-sized mammals, including other primates. Insect species consumed include the weaver ant Oecophylla longinoda, Macrotermes termites, and honey bees. The red colobus ranks at the top of preferred mammal prey. Other mammalian prey include red-tailed monkeys, infant and juvenile yellow baboons, bush babies, blue duikers, bushbucks, and common warthogs.
Despite the fact that chimpanzees are known to hunt and to collect both insects and other invertebrates, such food actually makes up a very small portion of their diet, from as little as 2% yearly to as much as 65 grams of animal flesh per day for each adult chimpanzee in peak hunting seasons. This also varies from troop to troop and year to year. However, in all cases, the majority of their diet consists of fruits, leaves, roots, and other plant matter. Female chimpanzees appear to consume much less animal flesh than males, according to several studies. Jane Goodall documented many occasions within Gombe Stream National Park of chimpanzees and western red colobus monkeys ignoring each other despite close proximity.
Chimpanzees do not appear to directly compete with gorillas in areas where they overlap. When fruit is abundant, gorilla and chimpanzee diets converge, but when fruit is scarce gorillas resort to vegetation. The two apes may also feed on different species, whether fruit or insects. Interactions between them can range from friendly and even stable social bonding, to avoidance, to aggression and even predation of infants on the part of chimpanzees.
Mortality and health
The average lifespan of a wild chimpanzee is relatively short. They usually live less than 15 years, although individuals that reach 12 years may live an additional 15 years. On rare occasions, wild chimpanzees may live nearly 60 years. Captive chimpanzees tend to live longer than most wild ones, with median lifespans of 31.7 years for males and 38.7 years for females. The oldest-known male captive chimpanzee to have been documented lived to 66 years, and the oldest female, Little Mama, was nearly 80 years old.
Leopards prey on chimpanzees in some areas. It is possible that much of the mortality caused by leopards can be attributed to individuals that have specialised in killing chimpanzees. Chimpanzees may react to a leopard's presence with loud vocalising, branch shaking, and throwing objects. There is at least one record of chimpanzees killing a leopard cub after mobbing it and its mother in their den. Four chimpanzees could have fallen prey to lions at Mahale Mountains National Park. Although no other instances of lion predation on chimpanzees have been recorded, lions likely do kill chimpanzees occasionally, and the larger group sizes of savanna chimpanzees may have developed as a response to threats from these big cats. Chimpanzees may react to lions by fleeing up trees, vocalising, or hiding in silence.
Chimpanzees and humans share only 50% of their parasite and microbe species. This is due to the differences in environmental and dietary adaptations; human internal parasite species overlap more with omnivorous, savanna-dwelling baboons. The chimpanzee is host to the louse species Pediculus schaeffi, a close relative of P. humanus, which infests human head and body hair. By contrast, the human pubic louse Pthirus pubis is closely related to Pthirus gorillae, which infests gorillas. A 2017 study of gastrointestinal parasites of wild chimpanzees in degraded forest in Uganda found nine species of protozoa, five nematodes, one cestode, and one trematode. The most prevalent species was the protozoan Troglodytella abrassarti.
Behaviour
Recent studies have suggested that human observers influence chimpanzee behaviour. One suggestion is that drones, camera traps, and remote microphones should be used to record and monitor chimpanzees rather than direct human observation.
Group structure
Chimpanzees live in communities that typically range from around 15 to more than 150 members but spend most of their time traveling in small, temporary groups consisting of a few individuals. These groups may consist of any combination of age and sexes. Both males and females sometimes travel alone. This fission–fusion society may include groups of four types: all-male, adult females and offspring, adults of both sexes, or one female and her offspring. These smaller groups emerge in a variety of types, for a variety of purposes. For example, an all-male troop may be organised to hunt for meat, while a group consisting of lactating females serves to act as a "nursery group" for the young.
At the core of social structures are males, which patrol the territory, protect group members, and search for food. Males remain in their natal communities, while females generally emigrate at adolescence. Males in a community are more likely to be related to one another than females are to each other. Among males, there is generally a dominance hierarchy, and males are dominant over females. However, this unusual fission-fusion social structure, "in which portions of the parent group may on a regular basis separate from and then rejoin the rest," is highly variable in terms of which particular individual chimpanzees congregate at a given time. This is caused mainly by the large measure of individual autonomy that individuals have within their fission-fusion social groups. As a result, individual chimpanzees often forage for food alone, or in smaller groups, as opposed to the much larger "parent" group, which encompasses all the chimpanzees which regularly come into contact with each other and congregate into parties in a particular area.
Male chimpanzees exist in a linear dominance hierarchy. Top-ranking males tend to be aggressive even during dominance stability. This is probably due to the chimpanzee's fission-fusion society, with male chimpanzees leaving groups and returning after extended periods of time. With this, a dominant male is unsure if any "political maneuvering" has occurred in his absence and must re-establish his dominance. Thus, a large amount of aggression occurs within five to fifteen minutes after a reunion. During these encounters, displays of aggression are generally preferred over physical attacks.
Males maintain and improve their social ranks by forming coalitions, which have been characterised as "exploitative" and based on an individual's influence in agonistic interactions. Being in a coalition allows males to dominate a third individual when they could not by themselves, as politically apt chimpanzees can exert power over aggressive interactions regardless of their rank. Coalitions can also give an individual male the confidence to challenge a dominant or larger male. The more allies a male has, the better his chance of becoming dominant. However, most changes in hierarchical rank are caused by dyadic interactions. Chimpanzee alliances can be very fickle, and one member may suddenly turn on another if it is to his advantage.
Low-ranking males frequently switch sides in disputes between more dominant individuals. Low-ranking males benefit from an unstable hierarchy and often find increased sexual opportunities if a dispute or conflict occurs. In addition, conflicts between dominant males cause them to focus on each other rather than the lower-ranking males. Social hierarchies among adult females tend to be weaker. Nevertheless, the status of an adult female may be important for her offspring. Females in Taï have also been recorded to form alliances. While chimpanzee social structure is often referred to as patriarchal, it is not entirely unheard of for females to forge coalitions against males. There is also at least one recorded case of females securing a dominant position over males in their respective troop, albeit in a captive environment. Social grooming appears to be important in the formation and maintenance of coalitions. It is more common among adult males than either between adult females or between males and females.
Chimpanzees have been described as highly territorial and will frequently kill other chimpanzees, although Margaret Power wrote in her 1991 book The Egalitarians that the field studies from which the aggressive data came, Gombe and Mahale, used artificial feeding systems that increased aggression in the chimpanzee populations studied. Thus, the behaviour may not reflect innate characteristics of the species as a whole. In the years following her artificial feeding conditions at Gombe, Jane Goodall described groups of male chimpanzees patrolling the borders of their territory, brutally attacking chimpanzees that had split off from the Gombe group. A study published in 2010 found that the chimpanzees wage wars over territory, not mates. Patrols from smaller groups are more likely to avoid contact with their neighbours. Patrols from large groups even take over a smaller group's territory, gaining access to more resources, food, and females. While it was traditionally accepted that only female chimpanzees immigrate and males remain in their natal troop for life, there are confirmed cases of adult males safely integrating themselves into new communities among West African chimpanzees, suggesting they are less territorial than other subspecies. West African chimpanzee males are also less aggressive with female chimpanzees in general.
Mating and parenting
Chimpanzees mate throughout the year, although the number of females in oestrus varies seasonally in a group. Female chimpanzees are more likely to come into oestrus when food is readily available. Oestrous females exhibit sexual swellings. Chimpanzees are promiscuous: during oestrus, females mate with several males in their community, while males have large testicles for sperm competition. Other forms of mating also exist. A community's dominant males sometimes restrict reproductive access to females. A male and female can form a consortship and mate outside their community. In addition, females sometimes leave their community and mate with males from neighboring communities. These alternative mating strategies give females more mating opportunities without losing the support of the males in their community. Infanticide has been recorded in chimpanzee communities in some areas, and the victims are often consumed. Male chimpanzees practice infanticide on unrelated young to shorten the interbirth intervals in the females. Females sometimes practice infanticide. This may be related to the dominance hierarchy in females or may simply be pathological.
Inbreeding was studied in a relatively undisturbed eastern chimpanzee community that displayed substantial bisexual philopatry. Despite an increased inbreeding risk incurred by females who do not disperse before reaching reproductive age, these females were still able to avoid producing inbred offspring.
Copulation is brief, lasting approximately seven seconds. The gestation period is eight months. Care for the young is provided mostly by their mothers. The survival and emotional health of the young is dependent on maternal care. Mothers provide their young with food, warmth, and protection, and teach them certain skills. In addition, a chimpanzee's future rank may be dependent on its mother's status. Male chimpanzees continue to associate with the females they impregnated and interact with and support their offspring. Newborn chimpanzees are helpless. For example, their grasping reflex is not strong enough to support them for more than a few seconds. For their first 30 days, infants cling to their mother's bellies. Infants are unable to support their own weight for their first two months and need their mothers' support.
Wild chimps are seen to exhibit both "secure" and "insecure" attachment styles, with the offspring looking to the mother for comfort in the former and more independent offspring in the latter. However, wild chimps rarely demonstrate "disorganized" attachment styles (maladaptive parent-offspring bonds caused by abuse or neglect); researchers note such attachment styles are mostly observed in captive chimps raised around humans.
When they reach five to six months, infants ride on their mothers' backs. They remain in continual contact for the rest of their first year. When they reach two years of age, they are able to move and sit independently and start moving beyond the arms' reach of their mothers. By four to six years, chimpanzees are weaned and infancy ends. The juvenile period for chimpanzees lasts from their sixth to ninth years. Juveniles remain close to their mothers, but interact an increasing amount with other members of their community. Adolescent females move between groups and are supported by their mothers in agonistic encounters. Adolescent males spend time with adult males in social activities like hunting and boundary patrolling. A captive study suggests males can safely immigrate to a new group if accompanied by immigrant females who have an existing relationship with this male. This gives the resident males reproductive advantages with these females, as they are more inclined to remain in the group if their male friend is also accepted.
Orphaned chimpanzees are occasionally adopted by adult males who will be "as protective as any mother" and tend to their needs.
Communication
Chimpanzees use facial expressions, postures, and sounds to communicate with each other. Chimpanzees have expressive faces that are important in close-up communications. When frightened, a "full closed grin" causes nearby individuals to be fearful as well. Playful chimpanzees display an open-mouthed grin. Chimpanzees may also express themselves with the "pout", which is made in distress, the "sneer", which is made when threatening or fearful, and the "compressed-lips face", which is a type of display. When submitting to a dominant individual, a chimpanzee crouches, bobs, and extends a hand. When in an aggressive mode, a chimpanzee swaggers bipedally, hunched over and arms waving, in an attempt to exaggerate its size. While travelling, chimpanzees keep in contact by beating their hands and feet against the trunks of large trees, an act that is known as "drumming". They also do this when encountering individuals from other communities.
Vocalisations are also important in chimpanzee communication. The most common call in adults is the "pant-hoot", which may signal social rank and bond along with keeping groups together. Pant-hoots are made of four parts, starting with soft "hoos", the introduction; that gets louder and louder, the build-up; and climax into screams and sometimes barks; these die down back to soft "hoos" during the letdown phase as the call ends. Grunting is made in situations like feeding and greeting. Submissive individuals make "pant-grunts" towards their superiors. Whimpering is made by young chimpanzees as a form of begging or when lost from the group. Chimpanzees use distance calls to draw attention to danger, food sources, or other community members. "Barks" may be made as "short barks" when hunting and "tonal barks" when sighting large snakes.
Hunting
When hunting small monkeys such as the red colobus, chimpanzees hunt where the forest canopy is interrupted or irregular. This allows them to easily corner the monkeys when chasing them in the appropriate direction. Chimpanzees may also hunt as a coordinated team, so that they can corner their prey even in a continuous canopy. During an arboreal hunt, each chimpanzee in the hunting groups has a role. "Drivers" serve to keep the prey running in a certain direction and follow them without attempting to make a catch. "Blockers" are stationed at the bottom of the trees and climb up to block prey that takes off in a different direction. "Chasers" move quickly and try to make a catch. Finally, "ambushers" hide and rush out when a monkey nears. While both adults and infants are taken, adult male colobus monkeys will attack the hunting chimps. When caught and killed, the meal is distributed to all hunting party members and even bystanders.
Male chimpanzees hunt in groups more than females do; female chimpanzees tend to hunt solitarily. If a female chimpanzee participates in the hunting group and catches a red colobus, the prey is likely to be taken immediately by an adult male. Female chimpanzees are estimated to take roughly 10–15% of the vertebrate prey caught by a community.
Intelligence
Chimpanzees display numerous signs of intelligence, from the ability to remember symbols to cooperation, tool use, and varied language capabilities. They are among species that have passed the mirror test, suggesting self-awareness. In one study, two young chimpanzees showed retention of mirror self-recognition after one year without access to mirrors. Chimpanzees have been observed to use insects to treat their own wounds and those of others. They catch them and apply them directly to the injury. Chimpanzees also display signs of culture among groups, with the learning and transmission of variations in grooming, tool use and foraging techniques leading to localized traditions.
A 30-year study at Kyoto University's Primate Research Institute has shown that chimpanzees are able to learn to recognise the numbers 1 to 9 and their values. The chimpanzees further show an aptitude for eidetic memory, demonstrated in experiments in which the jumbled digits are flashed onto a computer screen for less than a quarter of a second. One chimpanzee, Ayumu, was able to correctly and quickly point to the positions where they appeared in ascending order. Ayumu performed better than human adults who were given the same test.
In controlled experiments on cooperation, chimpanzees show a basic understanding of cooperation, and recruit the best collaborators. In a group setting with a device that delivered food rewards only to cooperating chimpanzees, cooperation first increased, then, due to competitive behaviour, decreased, before finally increasing to the highest level through punishment and other arbitrage behaviours.
Great apes show laughter-like vocalisations in response to physical contact, such as wrestling, play chasing, or tickling. This is documented in wild and captive chimpanzees. Chimpanzee laughter is not readily recognisable to humans as such, because it is generated by alternating inhalations and exhalations that sound more like breathing and panting. Instances in which nonhuman primates have expressed joy have been reported. Humans and chimpanzees share similar ticklish areas of the body, such as the armpits and belly. The enjoyment of tickling in chimpanzees does not diminish with age.
A 2022 study reported that chimpanzees crushed and applied insects to their own wounds and the wounds of other chimpanzees. Chimpanzees have displayed different behaviours in response to a dying or dead group member. When witnessing a sudden death, the other group members act in frenzy, with vocalisations, aggressive displays, and touching of the corpse. In one case chimpanzees cared for a dying elder, then attended and cleaned the corpse. Afterward, they avoided the spot where the elder died and behaved in a more subdued manner. Mothers have been reported to carry around and groom their dead infants for several days.
Experimenters now and then witness behaviour that cannot be readily reconciled with chimpanzee intelligence or theory of mind. Wolfgang Köhler, for instance, reported insightful behaviour in chimpanzees, but he likewise often observed that they experienced "special difficulty" in solving simple problems. Researchers also reported that, when faced with a choice between two persons, chimpanzees were just as likely to beg food from a person who could see the begging gesture as from a person who could not, thereby raising the possibility that chimpanzees lack theory of mind. By contrast, Hare, Call, and Tomasello found that subordinate chimpanzees were able to use the knowledge state of dominant rival chimpanzees to determine which container of hidden food they approached.
Tool use
Nearly all chimpanzee populations have been recorded using tools. They modify sticks, rocks, grass, and leaves and use them when foraging for termites and ants, nuts, honey, algae or water. Despite the lack of complexity, forethought and skill are apparent in making these tools. Chimpanzees have used stone tools since at least 4,300 years ago.
A chimpanzee from the Kasakela chimpanzee community was the first nonhuman animal reported making a tool, by modifying a twig to use as an instrument for extracting termites from their mound. At Taï, chimpanzees simply use their hands to extract termites. When foraging for honey, chimpanzees use modified short sticks to scoop the honey out of the hive if the bees are stingless. For hives of the dangerous African honeybees, chimpanzees use longer and thinner sticks to extract the honey.
Chimpanzees also fish for ants using the same tactic. Ant dipping is difficult and some chimpanzees never master it. West African chimpanzees crack open hard nuts with stones or branches. Some forethought in this activity is apparent, as these tools are not found together or where the nuts are collected. Nut cracking is also difficult and must be learned. Chimpanzees also use leaves as sponges or spoons to drink water.
West African chimpanzees in Senegal were found to sharpen sticks with their teeth, which were then used to spear Senegal bushbabies out of small holes in trees. An eastern chimpanzee has been observed using a modified branch as a tool to capture a squirrel. Chimpanzees living in Tanzania were found to deliberately choose plants that provide materials that produce more flexible tools for termite fishing.
While experimental studies on captive chimpanzees have found that many of their species-typical tool-use behaviours can be learnt individually by chimpanzees, a 2021 study on their ability to make and use stone flakes, in a way similar to that hypothesised for early hominins, did not find this behaviour in either of two populations of chimpanzees, suggesting that it lies outside the chimpanzee species-typical range.
Language
Scientists have attempted to teach human language to several species of great ape. One early attempt by Allen and Beatrix Gardner in the 1960s involved spending 51 months teaching American Sign Language to a chimpanzee named Washoe. The Gardners reported that Washoe learned 151 signs, and had spontaneously taught them to other chimpanzees, including her adopted son, Loulis. Over a longer period of time, Washoe was reported to have learned over 350 signs.
Debate is ongoing among scientists such as David Premack about chimpanzees' ability to learn language. Since the early reports on Washoe, numerous other studies have been conducted, with varying levels of success. One involved a chimpanzee jokingly named Nim Chimpsky (in allusion to the theorist of language Noam Chomsky), trained by Herbert Terrace of Columbia University. Although his initial reports were quite positive, in November 1979, Terrace and his team, including psycholinguist Thomas Bever, re-evaluated the videotapes of Nim with his trainers, analyzing them frame by frame for signs, as well as for exact context (what was happening both before and after Nim's signs). In the reanalysis, Terrace and Bever concluded that Nim's utterances could be explained merely as prompting on the part of the experimenters, as well as mistakes in reporting the data. "Much of the apes' behaviour is pure drill", he said. "Language still stands as an important definition of the human species." In this reversal, Terrace now argued Nim's use of ASL was not like human language acquisition. Nim never initiated conversations himself, rarely introduced new words, and mostly imitated what the humans did. More importantly, Nim's word strings varied in their ordering, suggesting that he was incapable of syntax. Nim's sentences also did not grow in length, unlike human children whose vocabulary and sentence length show a strong positive correlation.
Human relations
In culture
Chimpanzees are rarely represented in African culture, as people find them "too close for comfort". The Gio people of Liberia and the Hemba people of the Congo make chimpanzee masks. Gio masks are crude and blocky, and worn when teaching young people how not to behave. The Hemba masks have a smile that suggests drunken anger, insanity or horror and are worn during rituals at funerals, representing the "awful reality of death". The masks may also serve to guard households and protect both human and plant fertility. Stories have been told of chimpanzees kidnapping and raping women.
In Western popular culture, chimpanzees have occasionally been stereotyped as childlike companions, sidekicks or clowns. They are especially suited for the latter role on account of their prominent facial features, long limbs and fast movements, which humans often find amusing. Accordingly, entertainment acts featuring chimpanzees dressed up as humans with lip-synchronised human voices have been traditional staples of circuses, stage shows and TV shows like Lancelot Link, Secret Chimp (1970–1972) and The Chimp Channel (1999). From 1926 until 1972, London Zoo, followed by several other zoos around the world, held a chimpanzees' tea party daily, inspiring a long-running series of advertisements for PG Tips tea featuring such a party. Animal rights groups have urged a stop to such acts, considering them abusive.
Chimpanzees in media include Judy on the television series Daktari in the 1960s and Darwin on The Wild Thornberrys in the 1990s. In contrast to the fictional depictions of other animals, such as dogs (as in Lassie), dolphins (Flipper), horses (Black Beauty) or even other great apes (King Kong), chimpanzee characters and actions are rarely relevant to the plot. Depictions of chimpanzees as individuals rather than stock characters, and as central rather than incidental to the plot can be found in science fiction. Robert A. Heinlein's 1947 short story "Jerry Was a Man" concerns a genetically enhanced chimpanzee suing for better treatment. The 1972 film Conquest of the Planet of the Apes, the third sequel of the 1968 film Planet of the Apes, portrays a futuristic revolt of enslaved apes led by the only talking chimpanzee, Caesar, against their human masters.
As pets
Chimpanzees have traditionally been kept as pets in a few African villages, especially in the Democratic Republic of Congo. In Virunga National Park in the east of the country, the park authorities regularly seize chimpanzees from people keeping them as pets. Outside their range, chimpanzees are popular as exotic pets despite their strength and aggression. Even in places where keeping non-human primates as pets is illegal, the exotic pet trade continues to prosper, leading to injuries from attacks.
Use in research
Hundreds of chimpanzees have been kept in laboratories for research. Most such laboratories either conduct or make the animals available for invasive research, defined as "inoculation with an infectious agent, surgery or biopsy conducted for the sake of research and not for the sake of the chimpanzee, and/or drug testing". Research chimpanzees tend to be used repeatedly over decades, for up to 40 years, unlike most other laboratory animals. Two federally funded American laboratories use chimpanzees: the Yerkes National Primate Research Center at Emory University in Atlanta, Georgia, and the Southwest National Primate Center in San Antonio, Texas. Five hundred chimpanzees have been retired from laboratory use in the US and live in animal sanctuaries in the US or Canada.
A five-year moratorium on breeding was imposed by the US National Institutes of Health in 1996, because too many chimpanzees had been bred for HIV research, and it has been extended annually since 2001. With the publication of the chimpanzee genome, interest in expanding the use of chimpanzees in American research reportedly grew in 2006, with some scientists arguing that the federal moratorium on breeding chimpanzees for research should be lifted. However, in 2007, the NIH made the moratorium permanent.
Other researchers argue that chimpanzees either should not be used in research, or should be treated differently, for instance with legal status as persons. Pascal Gagneux, an evolutionary biologist and primate expert at the University of California, San Diego, argues that, given chimpanzees' sense of self, tool use, and genetic similarity to human beings, studies using chimpanzees should follow the ethical guidelines used for human subjects unable to give consent. A recent study suggests that chimpanzees retired from laboratories exhibit a form of post-traumatic stress disorder. Stuart Zola, director of the Yerkes laboratory, disagrees. He told National Geographic: "I don't think we should make a distinction between our obligation to treat humanely any species, whether it's a rat or a monkey or a chimpanzee. No matter how much we may wish it, chimps are not human."
Only one European laboratory, the Biomedical Primate Research Centre in Rijswijk, the Netherlands, used chimpanzees in research. It formerly held 108 chimpanzees among 1,300 non-human primates. The Dutch ministry of science decided to phase out research at the centre from 2001, although trials already under way were allowed to run their course. Chimpanzees, including the female Ai, have been studied at the Primate Research Institute of Kyoto University, Japan, formerly directed by Tetsuro Matsuzawa, since 1978. Twelve chimpanzees are currently held at the facility.
Two chimpanzees have been sent into outer space as NASA research subjects. Ham, the first great ape in space, was launched in the Mercury-Redstone 2 capsule on 31 January 1961, and survived the suborbital flight. Enos, the third primate to orbit Earth after Soviet cosmonauts Yuri Gagarin and Gherman Titov, flew on Mercury-Atlas 5 on 29 November of the same year.
Field study
Jane Goodall undertook the first long-term field study of the chimpanzee, begun in Tanzania at Gombe Stream National Park in 1960. Other long-term studies begun in the 1960s include Adriaan Kortlandt's in the eastern Democratic Republic of the Congo and Toshisada Nishida's in Mahale Mountains National Park in Tanzania. Current understanding of the species' typical behaviours and social organisation has been formed largely from Goodall's ongoing 60-year Gombe research study.
Attacks
Chimpanzees have attacked humans. In Uganda, several attacks on children have happened, some of them fatal. Some of these attacks may have been due to the chimpanzees being intoxicated (from alcohol obtained from rural brewing operations) and becoming aggressive towards humans. Human interactions with chimpanzees may be especially dangerous if the chimpanzees perceive humans as potential rivals. At least six cases of chimpanzees snatching and eating human babies are documented.
A chimpanzee's strength and sharp teeth mean that attacks, even on adult humans, can cause severe injuries. This was evident after the attack on and near death of former NASCAR driver St. James Davis, who was mauled by two escaped chimpanzees while he and his wife were celebrating the birthday of their former pet chimpanzee. Another example of chimpanzee aggression toward humans occurred in 2009 in Stamford, Connecticut, when a 13-year-old pet chimpanzee named Travis attacked his owner's friend, who lost her hands, eyes, nose, and part of her maxilla in the attack.
Human immunodeficiency virus
Two primary classes of human immunodeficiency virus (HIV) infect humans: HIV-1 and HIV-2. HIV-1 is the more virulent and easily transmitted, and is the source of the majority of HIV infections throughout the world; HIV-2 occurs mostly in west Africa. Both types originated in west and central Africa, jumping from other primates to humans. HIV-1 has evolved from a simian immunodeficiency virus (SIVcpz) found in the subspecies P. t. troglodytes of southern Cameroon. Kinshasa, in the Democratic Republic of Congo, has the greatest genetic diversity of HIV-1 so far discovered, suggesting the virus has been there longer than anywhere else. HIV-2 crossed species from a different strain of simian immunodeficiency virus, found in sooty mangabey monkeys in Guinea-Bissau.
Conservation
The chimpanzee is on the IUCN Red List as an endangered species. Chimpanzees are legally protected in most of their range and are found both in and outside national parks. Between 172,700 and 299,700 individuals are thought to be living in the wild, a decrease from about a million chimpanzees in the early 1900s. Chimpanzees are listed in Appendix I of the Convention on International Trade in Endangered Species (CITES), meaning that commercial international trade in wild-sourced specimens is prohibited and all other international trade (including in parts and derivatives) is regulated by the CITES permitting system.
The biggest threats to the chimpanzee are habitat destruction, poaching, and disease. Chimpanzee habitats have been limited by deforestation in both West and Central Africa. Road building has caused habitat degradation and fragmentation of chimpanzee populations and may allow poachers more access to areas that had not been seriously affected by humans. Although deforestation rates are low in western Central Africa, selective logging may take place outside national parks.
Chimpanzees are a common target for poachers. In Ivory Coast, chimpanzees make up 1–3% of bushmeat sold in urban markets. They are also taken, often illegally, for the pet trade and are hunted for medicinal purposes in some areas. Farmers sometimes kill chimpanzees that threaten their crops; others are unintentionally maimed or killed by snares meant for other animals.
Infectious diseases are a main cause of death for chimpanzees. They succumb to many diseases that afflict humans because the two species are so similar. As the human population grows, so does the risk of disease transmission between humans and chimpanzees.
See also
Chimpanzee, 2012 documentary
Chimp Crazy, 2024 TV docuseries about chimps in the U.S. pet trade
Chimp Empire, 2023 documentary
Great Ape Project
International Primate Day
One Small Step: The Story of the Space Chimps, 2008 documentary
Primate archaeology
External links
Chimpanzee Genome resources
Primate Info Net Pan troglodytes Factsheets
U.S. Fish & Wildlife Service Species Profile
View the Pan troglodytes genome in Ensembl
Genome of Pan troglodytes (version Clint_PTRv2/panTro6), via UCSC Genome Browser
Data of the genome of Pan troglodytes, via NCBI
Data of the genome assembly of Pan troglodytes Clint_PTRv2/panTro6, via NCBI
Human Timeline (Interactive) – Smithsonian, National Museum of Natural History (August 2016).
Category:Chimpanzees
Category:Tool-using mammals
Category:Primates of Africa
Category:Extant Pliocene first appearances
Category:Fauna of Sub-Saharan Africa
Category:Mammals described in 1775
Category:National symbols of Sierra Leone
Category:Taxa named by Johann Friedrich Blumenbach
Category:Articles containing video clips
DNA
https://en.wikipedia.org/wiki/DNA
Deoxyribonucleic acid (DNA) is a polymer composed of two polynucleotide chains that coil around each other to form a double helix. The polymer carries genetic instructions for the development, functioning, growth and reproduction of all known organisms and many viruses. DNA and ribonucleic acid (RNA) are nucleic acids. Alongside proteins, lipids and complex carbohydrates (polysaccharides), nucleic acids are one of the four major types of macromolecules that are essential for all known forms of life.
The two DNA strands are known as polynucleotides as they are composed of simpler monomeric units called nucleotides. Each nucleotide is composed of one of four nitrogen-containing nucleobases (cytosine [C], guanine [G], adenine [A] or thymine [T]), a sugar called deoxyribose, and a phosphate group. The nucleotides are joined to one another in a chain by covalent bonds (known as the phosphodiester linkage) between the sugar of one nucleotide and the phosphate of the next, resulting in an alternating sugar-phosphate backbone. The nitrogenous bases of the two separate polynucleotide strands are bound together, according to base pairing rules (A with T and C with G), with hydrogen bonds to make double-stranded DNA. The complementary nitrogenous bases are divided into two groups, the single-ringed pyrimidines and the double-ringed purines. In DNA, the pyrimidines are thymine and cytosine; the purines are adenine and guanine.
Both strands of double-stranded DNA store the same biological information. This information is replicated when the two strands separate. A large part of DNA (more than 98% for humans) is non-coding, meaning that these sections do not serve as patterns for protein sequences. The two strands of DNA run in opposite directions to each other and are thus antiparallel. Attached to each sugar is one of four types of nucleobases (or bases). It is the sequence of these four nucleobases along the backbone that encodes genetic information. RNA strands are created using DNA strands as a template in a process called transcription, where DNA bases are exchanged for their corresponding bases except in the case of thymine (T), for which RNA substitutes uracil (U). Under the genetic code, these RNA strands specify the sequence of amino acids within proteins in a process called translation.
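To make the base-pairing rules and the thymine-to-uracil substitution concrete, here is a minimal Python sketch; the sequence and function names are invented for illustration and are not part of any standard library:

```python
# Minimal sketch of Watson-Crick base pairing and DNA -> RNA transcription.
# The example sequence is invented purely for illustration.

PAIRING = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(dna):
    """Return the complementary strand, base for base (A<->T, C<->G)."""
    return "".join(PAIRING[base] for base in dna)

def transcribe(coding_strand):
    """mRNA carries the same sequence as the coding (sense) strand,
    with uracil (U) in place of thymine (T)."""
    return coding_strand.replace("T", "U")

seq = "ATGGCATTC"
print(complement(seq))   # TACCGTAAG
print(transcribe(seq))   # AUGGCAUUC
```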
Within eukaryotic cells, DNA is organized into long structures called chromosomes. Before typical cell division, these chromosomes are duplicated in the process of DNA replication, providing a complete set of chromosomes for each daughter cell. Eukaryotic organisms (animals, plants, fungi and protists) store most of their DNA inside the cell nucleus as nuclear DNA, and some in the mitochondria as mitochondrial DNA or in chloroplasts as chloroplast DNA. In contrast, prokaryotes (bacteria and archaea) store their DNA only in the cytoplasm, in circular chromosomes. Within eukaryotic chromosomes, chromatin proteins, such as histones, compact and organize DNA. These compacting structures guide the interactions between DNA and other proteins, helping control which parts of the DNA are transcribed.
Properties
DNA is a long polymer made from repeating units called nucleotides. The structure of DNA is dynamic along its length, being capable of coiling into tight loops and other shapes. In all species it is composed of two helical chains, bound to each other by hydrogen bonds. Both chains are coiled around the same axis, and have the same pitch of 34 ångströms (3.4 nm). The pair of chains has a radius of 10 Å (1.0 nm). According to another study, when measured in a different solution, the DNA chain measured 22–26 Å (2.2–2.6 nm) wide, and one nucleotide unit measured 3.3 Å (0.33 nm) long. The buoyant density of most DNA is 1.7 g/cm³.
DNA does not usually exist as a single strand, but instead as a pair of strands that are held tightly together. These two long strands coil around each other, in the shape of a double helix. The nucleotide contains both a segment of the backbone of the molecule (which holds the chain together) and a nucleobase (which interacts with the other DNA strand in the helix). A nucleobase linked to a sugar is called a nucleoside, and a base linked to a sugar and to one or more phosphate groups is called a nucleotide. A biopolymer comprising multiple linked nucleotides (as in DNA) is called a polynucleotide.
The backbone of the DNA strand is made from alternating phosphate and sugar groups. The sugar in DNA is 2-deoxyribose, which is a pentose (five-carbon) sugar. The sugars are joined by phosphate groups that form phosphodiester bonds between the third and fifth carbon atoms of adjacent sugar rings. These are known as the 3′-end (three prime end), and 5′-end (five prime end) carbons, the prime symbol being used to distinguish these carbon atoms from those of the base to which the deoxyribose forms a glycosidic bond.
Therefore, any DNA strand normally has one end at which there is a phosphate group attached to the 5′ carbon of a ribose (the 5′ phosphoryl) and another end at which there is a free hydroxyl group attached to the 3′ carbon of a ribose (the 3′ hydroxyl). The orientation of the 3′ and 5′ carbons along the sugar-phosphate backbone confers directionality (sometimes called polarity) to each DNA strand. In a nucleic acid double helix, the direction of the nucleotides in one strand is opposite to their direction in the other strand: the strands are antiparallel. The asymmetric ends of DNA strands are said to have a directionality of five prime end (5′ ), and three prime end (3′), with the 5′ end having a terminal phosphate group and the 3′ end a terminal hydroxyl group. One major difference between DNA and RNA is the sugar, with the 2-deoxyribose in DNA being replaced by the related pentose sugar ribose in RNA.
The DNA double helix is stabilized primarily by two forces: hydrogen bonds between nucleotides and base-stacking interactions among aromatic nucleobases. The four bases found in DNA are adenine (A), cytosine (C), guanine (G) and thymine (T). These four bases are attached to the sugar-phosphate to form the complete nucleotide, as shown for adenosine monophosphate. Adenine pairs with thymine and guanine pairs with cytosine, forming A-T and G-C base pairs.
Nucleobase classification
The nucleobases are classified into two types: the purines, adenine (A) and guanine (G), which are fused five- and six-membered heterocyclic compounds, and the pyrimidines, the six-membered rings cytosine (C) and thymine (T). A fifth pyrimidine nucleobase, uracil (U), usually takes the place of thymine in RNA and differs from thymine by lacking a methyl group on its ring. In addition to RNA and DNA, many artificial nucleic acid analogues have been created to study the properties of nucleic acids, or for use in biotechnology.
Non-canonical bases
Modified bases occur in DNA. The first of these to be recognized was 5-methylcytosine, which was found in the genome of Mycobacterium tuberculosis in 1925. In bacterial viruses (bacteriophages), noncanonical bases serve to evade the restriction enzymes present in bacteria, an enzyme system that acts at least in part as a molecular immune system protecting bacteria from infection by viruses. Modifications of the bases cytosine and adenine, the most commonly modified DNA bases, play vital roles in the epigenetic control of gene expression in plants and animals.
A number of noncanonical bases are known to occur in DNA. Most of these are modifications of the canonical bases plus uracil.
Modified Adenine
N6-carbamoyl-methyladenine
N6-methyladenine
Modified Guanine
7-Deazaguanine
7-Methylguanine
Modified Cytosine
N4-Methylcytosine
5-Carboxylcytosine
5-Formylcytosine
5-Glycosylhydroxymethylcytosine
5-Hydroxycytosine
5-Methylcytosine
Modified Thymidine
α-Glutamylthymidine
α-Putrescinylthymine
Uracil and modifications
Base J
Uracil
5-Dihydroxypentauracil
5-Hydroxymethyldeoxyuracil
Others
Deoxyarchaeosine
2,6-Diaminopurine (2-Aminoadenine)
Grooves
Twin helical strands form the DNA backbone. Another double helix may be found tracing the spaces, or grooves, between the strands. These voids are adjacent to the base pairs and may provide a binding site. As the strands are not symmetrically located with respect to each other, the grooves are unequally sized. The major groove is 22 Å (2.2 nm) wide, while the minor groove is 12 Å (1.2 nm) wide. Due to the larger width of the major groove, the edges of the bases are more accessible in the major groove than in the minor groove. As a result, proteins such as transcription factors that can bind to specific sequences in double-stranded DNA usually make contact with the sides of the bases exposed in the major groove. This situation varies in unusual conformations of DNA within the cell (see below), but the major and minor grooves are always named to reflect the differences in width that would be seen if the DNA were twisted back into the ordinary B form.
Base pairing
Top, a GC base pair with three hydrogen bonds. Bottom, an AT base pair with two hydrogen bonds. Non-covalent hydrogen bonds between the pairs are shown as dashed lines.
In a DNA double helix, each type of nucleobase on one strand bonds with just one type of nucleobase on the other strand. This is called complementary base pairing. Purines form hydrogen bonds to pyrimidines, with adenine bonding only to thymine in two hydrogen bonds, and cytosine bonding only to guanine in three hydrogen bonds. This arrangement of two nucleotides binding together across the double helix (from six-carbon ring to six-carbon ring) is called a Watson-Crick base pair. DNA with high GC-content is more stable than DNA with low GC-content. A Hoogsteen base pair (hydrogen bonding the 6-carbon ring to the 5-carbon ring) is a rare variation of base-pairing. As hydrogen bonds are not covalent, they can be broken and rejoined relatively easily. The two strands of DNA in a double helix can thus be pulled apart like a zipper, either by a mechanical force or high temperature. As a result of this base pair complementarity, all the information in the double-stranded sequence of a DNA helix is duplicated on each strand, which is vital in DNA replication. This reversible and specific interaction between complementary base pairs is critical for all the functions of DNA in organisms.
ssDNA vs. dsDNA
Most DNA molecules are actually two polymer strands, bound together in a helical fashion by noncovalent bonds; this double-stranded (dsDNA) structure is maintained largely by the intrastrand base stacking interactions, which are strongest for stacks of G,C bases. The two strands can come apart (a process known as melting) to form two single-stranded DNA (ssDNA) molecules. Melting occurs at high temperatures, low salt and high pH (low pH also melts DNA, but since DNA is unstable due to acid depurination, low pH is rarely used).
The stability of the dsDNA form depends not only on the GC-content (% G,C base pairs) but also on sequence (since stacking is sequence specific) and also length (longer molecules are more stable). The stability can be measured in various ways; a common way is the melting temperature (also called the Tm value), which is the temperature at which 50% of the double-strand molecules are converted to single-strand molecules; melting temperature is dependent on ionic strength and the concentration of DNA. As a result, it is both the percentage of GC base pairs and the overall length of a DNA double helix that determines the strength of the association between the two strands of DNA. Long DNA helices with a high GC-content have more strongly interacting strands, while short helices with high AT content have more weakly interacting strands. In biology, parts of the DNA double helix that need to separate easily, such as the Pribnow box in some promoters, tend to have a high AT content, making the strands easier to pull apart.
In the laboratory, the strength of this interaction can be measured by finding the melting temperature Tm necessary to break half of the hydrogen bonds. When all the base pairs in a DNA double helix melt, the strands separate and exist in solution as two entirely independent molecules. These single-stranded DNA molecules have no single common shape, but some conformations are more stable than others.
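One way to get an intuitive feel for how GC-content and length feed into duplex stability is the Wallace rule, a rough rule-of-thumb estimate of melting temperature for short oligonucleotides. The sketch below assumes that approximation (about 2 °C per A/T pair and 4 °C per G/C pair) and ignores ionic strength, DNA concentration and sequence context, so it is illustrative only; the probe sequence is invented.

```python
# Rule-of-thumb melting-temperature estimate for short oligonucleotides
# (Wallace rule: Tm ~ 2 C per A/T + 4 C per G/C). Illustrative only; real Tm
# also depends on salt concentration, DNA concentration and sequence context.

def gc_content(seq):
    """Fraction of G and C bases in the sequence."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq):
    """Wallace-rule estimate of melting temperature in degrees Celsius."""
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2.0 * at + 4.0 * gc

probe = "ATGCGCGCTAATGCGC"   # invented 16-mer
print(f"GC content: {gc_content(probe):.2f}")
print(f"Estimated Tm: {wallace_tm(probe):.0f} C")   # 52 C for this probe
```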
Amount
In humans, the total female diploid nuclear genome per cell extends for 6.37 gigabase pairs (Gbp), is 208.23 cm long and weighs 6.51 picograms (pg). Male values are 6.27 Gbp, 205.00 cm, 6.41 pg. Each DNA polymer can contain hundreds of millions of nucleotides, such as in chromosome 1. Chromosome 1 is the largest human chromosome, with approximately 220 million base pairs, and, at roughly 0.34 nm of helix length per base pair, would be several centimetres long if straightened.
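As a back-of-the-envelope check on figures like those above, contour length can be estimated from roughly 0.34 nm of helix rise per base pair, and mass from an average of about 650 daltons per base pair; both constants are approximations, so the results below only roughly match the measured values quoted in this section.

```python
# Back-of-the-envelope length and mass of a DNA molecule.
# Assumes ~0.34 nm rise per base pair and ~650 Da average mass per base pair;
# both are approximations, so results differ slightly from measured values.

AVOGADRO = 6.022e23          # molecules per mole
RISE_PER_BP_NM = 0.34        # nm of helix length per base pair (B-DNA)
MASS_PER_BP_DA = 650.0       # average daltons (g/mol) per base pair

def dna_length_cm(base_pairs):
    return base_pairs * RISE_PER_BP_NM * 1e-7   # nm -> cm

def dna_mass_pg(base_pairs):
    grams = base_pairs * MASS_PER_BP_DA / AVOGADRO
    return grams * 1e12                         # g -> pg

female_diploid_bp = 6.37e9   # ~6.37 Gbp, as quoted above
chr1_bp = 220e6              # ~220 million bp, human chromosome 1

print(f"Diploid genome: ~{dna_length_cm(female_diploid_bp):.0f} cm, "
      f"~{dna_mass_pg(female_diploid_bp):.1f} pg")   # ~217 cm, ~6.9 pg
print(f"Chromosome 1:   ~{dna_length_cm(chr1_bp):.1f} cm")   # ~7.5 cm
```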
In eukaryotes, in addition to nuclear DNA, there is also mitochondrial DNA (mtDNA) which encodes certain proteins used by the mitochondria. The mtDNA is usually relatively small in comparison to the nuclear DNA. For example, the human mitochondrial DNA forms closed circular molecules, each of which contains 16,569 DNA base pairs, with each such molecule normally containing a full set of the mitochondrial genes. Each human mitochondrion contains, on average, approximately 5 such mtDNA molecules. Each human cell contains approximately 100 mitochondria, giving a total number of mtDNA molecules per human cell of approximately 500. However, the number of mitochondria per cell also varies by cell type, and an egg cell can contain 100,000 mitochondria, corresponding to up to 1,500,000 copies of the mitochondrial genome (constituting up to 90% of the DNA of the cell).
Sense and antisense
A DNA sequence is called a "sense" sequence if it is the same as that of a messenger RNA copy that is translated into protein (Designation of the two strands of DNA, JCBN/NC-IUB Newsletter, 1989; retrieved 7 May 2008). The sequence on the opposite strand is called the "antisense" sequence. Both sense and antisense sequences can exist on different parts of the same strand of DNA (i.e. both strands can contain both sense and antisense sequences). In both prokaryotes and eukaryotes, antisense RNA sequences are produced, but the functions of these RNAs are not entirely clear. One proposal is that antisense RNAs are involved in regulating gene expression through RNA-RNA base pairing.
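Because the two strands are antiparallel, the antisense sequence read 5′ to 3′ is the reverse complement of the sense sequence. A minimal sketch, with an invented example sequence:

```python
# The antisense strand, read 5' -> 3', is the reverse complement of the
# sense strand. The example sequence is invented for illustration.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(sense_5to3):
    """Complement every base, then reverse to restore 5' -> 3' orientation."""
    return sense_5to3.translate(COMPLEMENT)[::-1]

sense = "ATGGCCATT"
print(reverse_complement(sense))   # AATGGCCAT
```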
A few DNA sequences in prokaryotes and eukaryotes, and more in plasmids and viruses, blur the distinction between sense and antisense strands by having overlapping genes. In these cases, some DNA sequences do double duty, encoding one protein when read along one strand, and a second protein when read in the opposite direction along the other strand. In bacteria, this overlap may be involved in the regulation of gene transcription, while in viruses, overlapping genes increase the amount of information that can be encoded within the small viral genome.
Supercoiling
DNA can be twisted like a rope in a process called DNA supercoiling. With DNA in its "relaxed" state, a strand usually circles the axis of the double helix once every 10.4 base pairs, but if the DNA is twisted the strands become more tightly or more loosely wound. If the DNA is twisted in the direction of the helix, this is positive supercoiling, and the bases are held more tightly together. If they are twisted in the opposite direction, this is negative supercoiling, and the bases come apart more easily. In nature, most DNA has slight negative supercoiling that is introduced by enzymes called topoisomerases. These enzymes are also needed to relieve the twisting stresses introduced into DNA strands during processes such as transcription and DNA replication.
Alternative DNA structures
DNA exists in many possible conformations that include A-DNA, B-DNA, and Z-DNA forms, although only B-DNA and Z-DNA have been directly observed in functional organisms. The conformation that DNA adopts depends on the hydration level, DNA sequence, the amount and direction of supercoiling, chemical modifications of the bases, the type and concentration of metal ions, and the presence of polyamines in solution.
The first published reports of A-DNA X-ray diffraction patterns—and also B-DNA—used analyses based on Patterson functions that provided only a limited amount of structural information for oriented fibers of DNA.
An alternative analysis was proposed by Wilkins et al. in 1953 for the in vivo B-DNA X-ray diffraction-scattering patterns of highly hydrated DNA fibers in terms of squares of Bessel functions. In the same journal, James Watson and Francis Crick presented their molecular modeling analysis of the DNA X-ray diffraction patterns to suggest that the structure was a double helix.
Although the B-DNA form is most common under the conditions found in cells, it is not a well-defined conformation but a family of related DNA conformations that occur at the high hydration levels present in cells. Their corresponding X-ray diffraction and scattering patterns are characteristic of molecular paracrystals with a significant degree of disorder.
Compared to B-DNA, the A-DNA form is a wider right-handed spiral, with a shallow, wide minor groove and a narrower, deeper major groove. The A form occurs under non-physiological conditions in partly dehydrated samples of DNA, while in the cell it may be produced in hybrid pairings of DNA and RNA strands, and in enzyme-DNA complexes. Segments of DNA where the bases have been chemically modified by methylation may undergo a larger change in conformation and adopt the Z form. Here, the strands turn about the helical axis in a left-handed spiral, the opposite of the more common B form. These unusual structures can be recognized by specific Z-DNA binding proteins and may be involved in the regulation of transcription.
Alternative DNA chemistry
For many years, exobiologists have proposed the existence of a shadow biosphere, a postulated microbial biosphere of Earth that uses radically different biochemical and molecular processes than currently known life. One of the proposals was the existence of lifeforms that use arsenic instead of phosphorus in DNA. A report in 2010 of the possibility in the bacterium GFAJ-1 was announced, though the research was disputed, and evidence suggests the bacterium actively prevents the incorporation of arsenic into the DNA backbone and other biomolecules.
Quadruplex structures
At the ends of the linear chromosomes are specialized regions of DNA called telomeres. The main function of these regions is to allow the cell to replicate chromosome ends using the enzyme telomerase, as the enzymes that normally replicate DNA cannot copy the extreme 3′ ends of chromosomes. These specialized chromosome caps also help protect the DNA ends, and stop the DNA repair systems in the cell from treating them as damage to be corrected. In human cells, telomeres are usually lengths of single-stranded DNA containing several thousand repeats of a simple TTAGGG sequence.
These guanine-rich sequences may stabilize chromosome ends by forming structures of stacked sets of four-base units, rather than the usual base pairs found in other DNA molecules. Here, four guanine bases, known as a guanine tetrad, form a flat plate. These flat four-base units then stack on top of each other to form a stable G-quadruplex structure. These structures are stabilized by hydrogen bonding between the edges of the bases and chelation of a metal ion in the centre of each four-base unit. Other structures can also be formed, with the central set of four bases coming from either a single strand folded around the bases, or several different parallel strands, each contributing one base to the central structure.
In addition to these stacked structures, telomeres also form large loop structures called telomere loops, or T-loops. Here, the single-stranded DNA curls around in a long circle stabilized by telomere-binding proteins. At the very end of the T-loop, the single-stranded telomere DNA is held onto a region of double-stranded DNA by the telomere strand disrupting the double-helical DNA and base pairing to one of the two strands. This triple-stranded structure is called a displacement loop or D-loop.
Branched DNA
Branched DNA can form networks containing multiple branches.
In DNA, fraying occurs when non-complementary regions exist at the end of an otherwise complementary double-strand of DNA. However, branched DNA can occur if a third strand of DNA is introduced and contains adjoining regions able to hybridize with the frayed regions of the pre-existing double-strand. Although the simplest example of branched DNA involves only three strands of DNA, complexes involving additional strands and multiple branches are also possible. Branched DNA can be used in nanotechnology to construct geometric shapes, see the section on uses in technology below.
Artificial bases
Several artificial nucleobases have been synthesized, and successfully incorporated in the eight-base DNA analogue named Hachimoji DNA. Dubbed S, B, P, and Z, these artificial bases are capable of bonding with each other in a predictable way (S–B and P–Z), maintaining the double helix structure of DNA, and being transcribed to RNA. Their existence could be seen as an indication that there is nothing special about the four natural nucleobases that evolved on Earth. On the other hand, DNA is closely related to RNA, which not only acts as a transcript of DNA but also performs many tasks in cells as a molecular machine, for which purpose it has to fold into a structure. It has been shown that at least four bases are required for the corresponding RNA to be able to form all possible structures; a higher number is also possible but would go against the natural principle of least effort.
Acidity
The phosphate groups of DNA give it acidic properties similar to those of phosphoric acid, and it can be considered a strong acid. It will be fully ionized at a normal cellular pH, releasing protons which leave behind negative charges on the phosphate groups. These negative charges protect DNA from breakdown by hydrolysis by repelling nucleophiles which could hydrolyze it.
Macroscopic appearance
Pure DNA extracted from cells forms white, stringy clumps.
Chemical modifications and altered DNA packaging
Base modifications and DNA packaging
Structure of cytosine with and without the 5-methyl group. Deamination converts 5-methylcytosine into thymine.
The expression of genes is influenced by how the DNA is packaged in chromosomes, in a structure called chromatin. Base modifications can be involved in packaging, with regions that have low or no gene expression usually containing high levels of methylation of cytosine bases. DNA packaging and its influence on gene expression can also occur by covalent modifications of the histone protein core around which DNA is wrapped in the chromatin structure or else by remodeling carried out by chromatin remodeling complexes (see Chromatin remodeling). There is, further, crosstalk between DNA methylation and histone modification, so they can coordinately affect chromatin and gene expression.
For one example, cytosine methylation produces 5-methylcytosine, which is important for X-inactivation of chromosomes. The average level of methylation varies between organisms—the worm Caenorhabditis elegans lacks cytosine methylation, while vertebrates have higher levels, with up to 1% of their DNA containing 5-methylcytosine. Despite the importance of 5-methylcytosine, it can deaminate to leave a thymine base, so methylated cytosines are particularly prone to mutations. Other base modifications include adenine methylation in bacteria, the presence of 5-hydroxymethylcytosine in the brain, and the glycosylation of uracil to produce the "J-base" in kinetoplastids.
Damage
DNA can be damaged by many sorts of mutagens, which change the DNA sequence. Mutagens include oxidizing agents, alkylating agents and also high-energy electromagnetic radiation such as ultraviolet light and X-rays. The type of DNA damage produced depends on the type of mutagen. For example, UV light can damage DNA by producing thymine dimers, which are cross-links between pyrimidine bases. On the other hand, oxidants such as free radicals or hydrogen peroxide produce multiple forms of damage, including base modifications, particularly of guanosine, and double-strand breaks. A typical human cell contains about 150,000 bases that have suffered oxidative damage. Of these oxidative lesions, the most dangerous are double-strand breaks, as these are difficult to repair and can produce point mutations, insertions, deletions from the DNA sequence, and chromosomal translocations. These mutations can cause cancer. Because of inherent limits in the DNA repair mechanisms, if humans lived long enough, they would all eventually develop cancer. DNA damages that are naturally occurring, due to normal cellular processes that produce reactive oxygen species, the hydrolytic activities of cellular water, etc., also occur frequently. Although most of these damages are repaired, in any cell some DNA damage may remain despite the action of repair processes. These remaining DNA damages accumulate with age in mammalian postmitotic tissues. This accumulation appears to be an important underlying cause of aging.
Many mutagens fit into the space between two adjacent base pairs; this is called intercalation. Most intercalators are aromatic and planar molecules; examples include ethidium bromide, acridines, daunomycin, and doxorubicin. For an intercalator to fit between base pairs, the bases must separate, distorting the DNA strands by unwinding of the double helix. This inhibits both transcription and DNA replication, causing toxicity and mutations. As a result, DNA intercalators may be carcinogens, and in the case of thalidomide, a teratogen. Others such as benzo[a]pyrene diol epoxide and aflatoxin form DNA adducts that induce errors in replication. Nevertheless, due to their ability to inhibit DNA transcription and replication, other similar toxins are also used in chemotherapy to inhibit rapidly growing cancer cells.
Biological functions
DNA usually occurs as linear chromosomes in eukaryotes, and circular chromosomes in prokaryotes. The set of chromosomes in a cell makes up its genome; the human genome has approximately 3 billion base pairs of DNA arranged into 46 chromosomes. The information carried by DNA is held in the sequence of pieces of DNA called genes. Transmission of genetic information in genes is achieved via complementary base pairing. For example, in transcription, when a cell uses the information in a gene, the DNA sequence is copied into a complementary RNA sequence through the attraction between the DNA and the correct RNA nucleotides. Usually, this RNA copy is then used to make a matching protein sequence in a process called translation, which depends on the same interaction between RNA nucleotides. In an alternative fashion, a cell may copy its genetic information in a process called DNA replication. The details of these functions are covered in other articles; here the focus is on the interactions between DNA and other molecules that mediate the function of the genome.
Genes and genomes
Genomic DNA is tightly and orderly packed in the process called DNA condensation, to fit the small available volumes of the cell. In eukaryotes, DNA is located in the cell nucleus, with small amounts in mitochondria and chloroplasts. In prokaryotes, the DNA is held within an irregularly shaped body in the cytoplasm called the nucleoid. The genetic information in a genome is held within genes, and the complete set of this information in an organism is called its genotype. A gene is a unit of heredity and is a region of DNA that influences a particular characteristic in an organism. Genes contain an open reading frame that can be transcribed, and regulatory sequences such as promoters and enhancers, which control transcription of the open reading frame.
In many species, only a small fraction of the total sequence of the genome encodes protein. For example, only about 1.5% of the human genome consists of protein-coding exons, with over 50% of human DNA consisting of non-coding repetitive sequences. The reasons for the presence of so much noncoding DNA in eukaryotic genomes and the extraordinary differences in genome size, or C-value, among species, represent a long-standing puzzle known as the "C-value enigma". However, some DNA sequences that do not code protein may still encode functional non-coding RNA molecules, which are involved in the regulation of gene expression.
Some noncoding DNA sequences play structural roles in chromosomes. Telomeres and centromeres typically contain few genes but are important for the function and stability of chromosomes. An abundant form of noncoding DNA in humans are pseudogenes, which are copies of genes that have been disabled by mutation. These sequences are usually just molecular fossils, although they can occasionally serve as raw genetic material for the creation of new genes through the process of gene duplication and divergence.
Transcription and translation
A gene is a sequence of DNA that contains genetic information and can influence the phenotype of an organism. Within a gene, the sequence of bases along a DNA strand defines a messenger RNA sequence, which then defines one or more protein sequences. The relationship between the nucleotide sequences of genes and the amino-acid sequences of proteins is determined by the rules of translation, known collectively as the genetic code. The genetic code consists of three-letter 'words' called codons formed from a sequence of three nucleotides (e.g., ACT, CAG, TTT).
In transcription, the codons of a gene are copied into messenger RNA by RNA polymerase. This RNA copy is then decoded by a ribosome that reads the RNA sequence by base-pairing the messenger RNA to transfer RNA, which carries amino acids. Since there are 4 bases read in 3-letter combinations, there are 64 possible codons (4³ combinations). These encode the twenty standard amino acids, giving most amino acids more than one possible codon. There are also three 'stop' or 'nonsense' codons signifying the end of the coding region; these are the TAG, TAA, and TGA codons (UAG, UAA, and UGA on the mRNA).
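As a sketch of how codons are decoded, the following translates an invented mRNA reading frame using a deliberately tiny subset of the 64-entry standard codon table; a real implementation would include all 64 codons.

```python
# Translation of an mRNA reading frame into amino acids using the genetic code.
# Only a small subset of the 64-codon standard table is included, enough for
# the invented example below; '*' marks a stop codon.

CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GCA": "Ala", "GAA": "Glu",
    "UGG": "Trp", "UAA": "*", "UAG": "*", "UGA": "*",
}

def translate(mrna):
    protein = []
    for i in range(0, len(mrna) - 2, 3):     # read codon by codon
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "*":                # stop codon ends translation
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGUUUGCAGAAUGGUAA"))   # ['Met', 'Phe', 'Ala', 'Glu', 'Trp']
```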
Replication
Cell division is essential for an organism to grow, but, when a cell divides, it must replicate the DNA in its genome so that the two daughter cells have the same genetic information as their parent. The double-stranded structure of DNA provides a simple mechanism for DNA replication. Here, the two strands are separated and then each strand's complementary DNA sequence is recreated by an enzyme called DNA polymerase. This enzyme makes the complementary strand by finding the correct base through complementary base pairing and bonding it onto the original strand. As DNA polymerases can only extend a DNA strand in a 5′ to 3′ direction, different mechanisms are used to copy the antiparallel strands of the double helix. In this way, the base on the old strand dictates which base appears on the new strand, and the cell ends up with a perfect copy of its DNA.
Extracellular nucleic acids
Naked extracellular DNA (eDNA), most of it released by cell death, is nearly ubiquitous in the environment. Its concentration in soil may be as high as 2 μg/L, and its concentration in natural aquatic environments may be as high as 88 μg/L. Various possible functions have been proposed for eDNA: it may be involved in horizontal gene transfer; it may provide nutrients; and it may act as a buffer to recruit or titrate ions or antibiotics. Extracellular DNA acts as a functional extracellular matrix component in the biofilms of several bacterial species. It may act as a recognition factor to regulate the attachment and dispersal of specific cell types in the biofilm; it may contribute to biofilm formation; and it may contribute to the biofilm's physical strength and resistance to biological stress.
Cell-free fetal DNA is found in the blood of the mother, and can be sequenced to determine a great deal of information about the developing fetus.
Under the name of environmental DNA eDNA has seen increased use in the natural sciences as a survey tool for ecology, monitoring the movements and presence of species in water, air, or on land, and assessing an area's biodiversity.
Interactions with proteins
All the functions of DNA depend on interactions with proteins. These protein interactions can be non-specific, or the protein can bind specifically to a single DNA sequence. Enzymes can also bind to DNA and of these, the polymerases that copy the DNA base sequence in transcription and DNA replication are particularly important.
DNA-binding proteins
Structural proteins that bind DNA are well-understood examples of non-specific DNA-protein interactions. Within chromosomes, DNA is held in complexes with structural proteins. These proteins organize the DNA into a compact structure called chromatin. In eukaryotes, this structure involves DNA binding to a complex of small basic proteins called histones, while in prokaryotes multiple types of proteins are involved. The histones form a disk-shaped complex called a nucleosome, which contains two complete turns of double-stranded DNA wrapped around its surface. These non-specific interactions are formed through basic residues in the histones, making ionic bonds to the acidic sugar-phosphate backbone of the DNA, and are thus largely independent of the base sequence. Chemical modifications of these basic amino acid residues include methylation, phosphorylation, and acetylation. These chemical changes alter the strength of the interaction between the DNA and the histones, making the DNA more or less accessible to transcription factors and changing the rate of transcription. Other non-specific DNA-binding proteins in chromatin include the high-mobility group proteins, which bind to bent or distorted DNA. These proteins are important in bending arrays of nucleosomes and arranging them into the larger structures that make up chromosomes.
A distinct group of DNA-binding proteins is the DNA-binding proteins that specifically bind single-stranded DNA. In humans, replication protein A is the best-understood member of this family and is used in processes where the double helix is separated, including DNA replication, recombination, and DNA repair. These binding proteins seem to stabilize single-stranded DNA and protect it from forming stem-loops or being degraded by nucleases.
In contrast, other proteins have evolved to bind to particular DNA sequences. The most intensively studied of these are the various transcription factors, which are proteins that regulate transcription. Each transcription factor binds to one particular set of DNA sequences and activates or inhibits the transcription of genes that have these sequences close to their promoters. The transcription factors do this in two ways. Firstly, they can bind the RNA polymerase responsible for transcription, either directly or through other mediator proteins; this locates the polymerase at the promoter and allows it to begin transcription. Alternatively, transcription factors can bind enzymes that modify the histones at the promoter. This changes the accessibility of the DNA template to the polymerase.
As these DNA targets can occur throughout an organism's genome, changes in the activity of one type of transcription factor can affect thousands of genes. Consequently, these proteins are often the targets of the signal transduction processes that control responses to environmental changes or cellular differentiation and development. The specificity of these transcription factors' interactions with DNA come from the proteins making multiple contacts to the edges of the DNA bases, allowing them to "read" the DNA sequence. Most of these base-interactions are made in the major groove, where the bases are most accessible.
DNA-modifying enzymes
Nucleases and ligases
Nucleases are enzymes that cut DNA strands by catalyzing the hydrolysis of the phosphodiester bonds. Nucleases that hydrolyse nucleotides from the ends of DNA strands are called exonucleases, while endonucleases cut within strands. The most frequently used nucleases in molecular biology are the restriction endonucleases, which cut DNA at specific sequences. For instance, the EcoRV enzyme recognizes the 6-base sequence 5′-GATATC-3′ and makes a blunt cut at its centre. In nature, these enzymes protect bacteria against phage infection by digesting the phage DNA when it enters the bacterial cell, acting as part of the restriction modification system. In technology, these sequence-specific nucleases are used in molecular cloning and DNA fingerprinting.
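As an illustration of sequence-specific cutting, the sketch below finds EcoRV recognition sites (5′-GATATC-3′) in an invented sequence and cuts each site in the middle, which for EcoRV yields blunt ends:

```python
# Cutting a DNA sequence at EcoRV recognition sites (5'-GATATC-3').
# EcoRV cuts bluntly in the middle of its site (GAT|ATC).
# The input sequence is invented for illustration.

SITE = "GATATC"
CUT_OFFSET = 3   # blunt cut between the third and fourth base of the site

def ecorv_digest(dna):
    """Return the list of fragments produced by cutting at every site."""
    fragments, start, pos = [], 0, dna.find(SITE)
    while pos != -1:
        cut = pos + CUT_OFFSET
        fragments.append(dna[start:cut])
        start = cut
        pos = dna.find(SITE, pos + 1)
    fragments.append(dna[start:])
    return fragments

plasmid = "TTGATATCAAACCCGATATCGG"
print(ecorv_digest(plasmid))   # ['TTGAT', 'ATCAAACCCGAT', 'ATCGG']
```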
Enzymes called DNA ligases can rejoin cut or broken DNA strands. Ligases are particularly important in lagging strand DNA replication, as they join the short segments of DNA produced at the replication fork into a complete copy of the DNA template. They are also used in DNA repair and genetic recombination.
Topoisomerases and helicases
Topoisomerases are enzymes with both nuclease and ligase activity. These proteins change the amount of supercoiling in DNA. Some of these enzymes work by cutting the DNA helix and allowing one section to rotate, thereby reducing its level of supercoiling; the enzyme then seals the DNA break. Other types of these enzymes are capable of cutting one DNA helix and then passing a second strand of DNA through this break, before rejoining the helix. Topoisomerases are required for many processes involving DNA, such as DNA replication and transcription.
Helicases are proteins that are a type of molecular motor. They use the chemical energy in nucleoside triphosphates, predominantly adenosine triphosphate (ATP), to break hydrogen bonds between bases and unwind the DNA double helix into single strands. These enzymes are essential for most processes where enzymes need to access the DNA bases.
Polymerases
Polymerases are enzymes that synthesize polynucleotide chains from nucleoside triphosphates. The sequence of their products is created based on existing polynucleotide chains—which are called templates. These enzymes function by repeatedly adding a nucleotide to the 3′ hydroxyl group at the end of the growing polynucleotide chain. As a consequence, all polymerases work in a 5′ to 3′ direction. In the active site of these enzymes, the incoming nucleoside triphosphate base-pairs to the template: this allows polymerases to accurately synthesize the complementary strand of their template. Polymerases are classified according to the type of template that they use.
In DNA replication, DNA-dependent DNA polymerases make copies of DNA polynucleotide chains. To preserve biological information, it is essential that the sequence of bases in each copy are precisely complementary to the sequence of bases in the template strand. Many DNA polymerases have a proofreading activity. Here, the polymerase recognizes the occasional mistakes in the synthesis reaction by the lack of base pairing between the mismatched nucleotides. If a mismatch is detected, a 3′ to 5′ exonuclease activity is activated and the incorrect base removed. In most organisms, DNA polymerases function in a large complex called the replisome that contains multiple accessory subunits, such as the DNA clamp or helicases.
RNA-dependent DNA polymerases are a specialized class of polymerases that copy the sequence of an RNA strand into DNA. They include reverse transcriptase, which is a viral enzyme involved in the infection of cells by retroviruses, and telomerase, which is required for the replication of telomeres. For example, HIV reverse transcriptase is an enzyme essential for the replication of the AIDS virus. Telomerase is an unusual polymerase because it contains its own RNA template as part of its structure. It synthesizes telomeres at the ends of chromosomes. Telomeres prevent fusion of the ends of neighboring chromosomes and protect chromosome ends from damage.
Transcription is carried out by a DNA-dependent RNA polymerase that copies the sequence of a DNA strand into RNA. To begin transcribing a gene, the RNA polymerase binds to a sequence of DNA called a promoter and separates the DNA strands. It then copies the gene sequence into a messenger RNA transcript until it reaches a region of DNA called the terminator, where it halts and detaches from the DNA. As with human DNA-dependent DNA polymerases, RNA polymerase II, the enzyme that transcribes most of the genes in the human genome, operates as part of a large protein complex with multiple regulatory and accessory subunits.
Genetic recombination
Structure of the Holliday junction intermediate in genetic recombination. The four separate DNA strands are coloured red, blue, green and yellow.
A DNA helix usually does not interact with other segments of DNA, and in human cells, the different chromosomes even occupy separate areas in the nucleus called "chromosome territories". This physical separation of different chromosomes is important for the ability of DNA to function as a stable repository for information, as one of the few times chromosomes interact is in chromosomal crossover which occurs during sexual reproduction, when genetic recombination occurs. Chromosomal crossover is when two DNA helices break, swap a section and then rejoin.
Recombination allows chromosomes to exchange genetic information and produces new combinations of genes, which increases the efficiency of natural selection and can be important in the rapid evolution of new proteins. Genetic recombination can also be involved in DNA repair, particularly in the cell's response to double-strand breaks.
The most common form of chromosomal crossover is homologous recombination, where the two chromosomes involved share very similar sequences. Non-homologous recombination can be damaging to cells, as it can produce chromosomal translocations and genetic abnormalities. The recombination reaction is catalyzed by enzymes known as recombinases, such as RAD51. The first step in recombination is a double-stranded break caused by either an endonuclease or damage to the DNA. A series of steps catalyzed in part by the recombinase then leads to joining of the two helices by at least one Holliday junction, in which a segment of a single strand in each helix is annealed to the complementary strand in the other helix. The Holliday junction is a tetrahedral junction structure that can be moved along the pair of chromosomes, swapping one strand for another. The recombination reaction is then halted by cleavage of the junction and re-ligation of the released DNA. Only strands of like polarity exchange DNA during recombination. There are two types of cleavage: east–west cleavage and north–south cleavage. The north–south cleavage nicks both strands of DNA, while the east–west cleavage leaves one strand of DNA intact. The formation of a Holliday junction during recombination makes possible genetic diversity, the exchange of genes between chromosomes, and the expression of wild-type viral genomes.
Evolution
DNA contains the genetic information that allows all forms of life to function, grow and reproduce. However, it is unclear how long in the 4-billion-year history of life DNA has performed this function, as it has been proposed that the earliest forms of life may have used RNA as their genetic material. RNA may have acted as the central part of early cell metabolism as it can both transmit genetic information and carry out catalysis as part of ribozymes. This ancient RNA world, where nucleic acid would have been used for both catalysis and genetics, may have influenced the evolution of the current genetic code based on four nucleotide bases. In such an organism, the number of different bases reflects a trade-off: a small number of bases increases replication accuracy, while a large number increases the catalytic efficiency of ribozymes. However, there is no direct evidence of ancient genetic systems, as recovery of DNA from most fossils is impossible because DNA survives in the environment for less than one million years and slowly degrades into short fragments in solution. Claims for older DNA have been made, most notably a report of the isolation of a viable bacterium from a salt crystal 250 million years old, but these claims are controversial.
Building blocks of DNA (adenine, guanine, and related organic molecules) may have been formed extraterrestrially in outer space. Complex DNA and RNA organic compounds of life, including uracil, cytosine, and thymine, have also been formed in the laboratory under conditions mimicking those found in outer space, using starting chemicals, such as pyrimidine, found in meteorites. Pyrimidine, like polycyclic aromatic hydrocarbons (PAHs), the most carbon-rich chemical found in the universe, may have been formed in red giants or in interstellar cosmic dust and gas clouds.
Ancient DNA has been recovered from ancient organisms at a timescale where genome evolution can be directly observed, including from extinct organisms up to millions of years old, such as the woolly mammoth.
Uses in technology
Genetic engineering
Methods have been developed to purify DNA from organisms, such as phenol-chloroform extraction, and to manipulate it in the laboratory, such as restriction digests and the polymerase chain reaction. Modern biology and biochemistry make intensive use of these techniques in recombinant DNA technology. Recombinant DNA is a man-made DNA sequence that has been assembled from other DNA sequences. Such sequences can be introduced into organisms in the form of plasmids or, in the appropriate format, by using a viral vector. The genetically modified organisms produced can be used to produce products such as recombinant proteins, used in medical research, or be grown in agriculture.
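As a rough sketch of the kind of sequence manipulation a restriction digest performs, the example below splits a hypothetical plasmid sequence at an EcoRI-style recognition site (GAATTC); it is an in-silico toy only and ignores sticky ends and all laboratory detail.

```python
# Toy "restriction digest": split a DNA string wherever a recognition site occurs.
# Real enzymes cut at a defined position within the site and leave overhangs,
# which this sketch ignores. The plasmid sequence below is hypothetical.

def digest(sequence: str, site: str = "GAATTC") -> list[str]:
    """Split the sequence at every occurrence of the recognition site."""
    fragments, start = [], 0
    pos = sequence.find(site, start)
    while pos != -1:
        fragments.append(sequence[start:pos])
        start = pos + len(site)
        pos = sequence.find(site, start)
    fragments.append(sequence[start:])
    return fragments

plasmid = "ATGCCGAATTCGGTTAAGAATTCCTA"
print(digest(plasmid))   # ['ATGCC', 'GGTTAA', 'CTA']
```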
DNA profiling
Forensic scientists can use DNA in blood, semen, skin, saliva or hair found at a crime scene to identify the DNA of an individual, such as a perpetrator. This process is formally termed DNA profiling, also called DNA fingerprinting. In DNA profiling, the lengths of variable sections of repetitive DNA, such as short tandem repeats and minisatellites, are compared between people. This method is usually an extremely reliable technique for identifying a match. However, identification can be complicated if the scene is contaminated with DNA from several people. DNA profiling was developed in 1984 by British geneticist Sir Alec Jeffreys, and first used in forensic science to convict Colin Pitchfork in the 1988 Enderby murders case.
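A hypothetical sketch of the comparison step: each profile is represented as the number of repeats recorded at a few STR markers, and two profiles match only if every shared marker agrees. The marker names and repeat counts are invented for illustration; real profiles record two alleles per marker across many more loci.

```python
# Minimal sketch of STR profile matching with hypothetical markers and counts.

def profiles_match(profile_a: dict, profile_b: dict) -> bool:
    """Two profiles match if they report the same repeat counts at every shared marker."""
    shared = set(profile_a) & set(profile_b)
    return bool(shared) and all(profile_a[m] == profile_b[m] for m in shared)

crime_scene = {"marker1": 12, "marker2": 9, "marker3": 15}
suspect     = {"marker1": 12, "marker2": 9, "marker3": 15}
print(profiles_match(crime_scene, suspect))   # True
```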
The development of forensic science and the ability to obtain genetic matching on minute samples of blood, skin, saliva, or hair has led to the re-examination of many cases. Evidence can now be uncovered that was scientifically impossible to obtain at the time of the original examination. Combined with the removal of the double jeopardy law in some places, this can allow cases to be reopened where prior trials have failed to produce sufficient evidence to convince a jury. People charged with serious crimes may be required to provide a sample of DNA for matching purposes. The most obvious defense to DNA matches obtained forensically is to claim that cross-contamination of evidence has occurred. This has resulted in meticulously strict handling procedures in new cases of serious crime.
DNA profiling is also used successfully to positively identify victims of mass casualty incidents, bodies or body parts in serious accidents, and individual victims in mass war graves, via matching to family members.
DNA profiling is also used in DNA paternity testing to determine whether someone is the biological parent or grandparent of a child; the probability of parentage is typically 99.99% when the alleged parent is biologically related to the child. DNA testing is usually carried out after birth, but newer methods make it possible to test paternity while the mother is still pregnant.
DNA enzymes or catalytic DNA
Deoxyribozymes, also called DNAzymes or catalytic DNA, were first discovered in 1994. They are mostly single-stranded DNA sequences isolated from a large pool of random DNA sequences through a combinatorial approach called in vitro selection or systematic evolution of ligands by exponential enrichment (SELEX). DNAzymes catalyze a variety of chemical reactions, including RNA and DNA cleavage, RNA and DNA ligation, amino acid phosphorylation and dephosphorylation, and carbon-carbon bond formation. DNAzymes can enhance the catalytic rate of chemical reactions up to 100,000,000,000-fold over the uncatalyzed reaction. The most extensively studied class of DNAzymes is the RNA-cleaving type, which has been used to detect different metal ions and to design therapeutic agents. Several metal-specific DNAzymes have been reported, including the GR-5 DNAzyme (lead-specific), the CA1-3 DNAzymes (copper-specific), the 39E DNAzyme (uranyl-specific) and the NaA43 DNAzyme (sodium-specific). The NaA43 DNAzyme, which is reported to be more than 10,000-fold selective for sodium over other metal ions, was used to make a real-time sodium sensor in cells.
Bioinformatics
Bioinformatics involves the development of techniques to store, data mine, search and manipulate biological data, including DNA nucleic acid sequence data. These have led to widely applied advances in computer science, especially string searching algorithms, machine learning, and database theory. String searching or matching algorithms, which find an occurrence of a sequence of letters inside a larger sequence of letters, were developed to search for specific sequences of nucleotides. The DNA sequence may be aligned with other DNA sequences to identify homologous sequences and locate the specific mutations that make them distinct. These techniques, especially multiple sequence alignment, are used in studying phylogenetic relationships and protein function. Data sets representing entire genomes' worth of DNA sequences, such as those produced by the Human Genome Project, are difficult to use without the annotations that identify the locations of genes and regulatory elements on each chromosome. Regions of DNA sequence that have the characteristic patterns associated with protein- or RNA-coding genes can be identified by gene finding algorithms, which allow researchers to predict the presence of particular gene products and their possible functions in an organism even before they have been isolated experimentally. Entire genomes may also be compared, which can shed light on the evolutionary history of particular organisms and permit the examination of complex evolutionary events.
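As a toy example of the string-searching problem described above, the sketch below reports every position at which a short motif occurs in a longer sequence; both sequences are hypothetical, and real tools use far more efficient algorithms that also tolerate mismatches.

```python
# Naive string search: report every index at which a motif occurs in a sequence.

def find_motif(sequence: str, motif: str) -> list[int]:
    """Return the 0-based start positions of every (possibly overlapping) occurrence."""
    return [i for i in range(len(sequence) - len(motif) + 1)
            if sequence[i:i + len(motif)] == motif]

genome_fragment = "ATGCGATATATCGATATA"       # hypothetical sequence
print(find_motif(genome_fragment, "ATAT"))   # [5, 7, 13]
```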
DNA nanotechnology
DNA nanotechnology uses the unique molecular recognition properties of DNA and other nucleic acids to create self-assembling branched DNA complexes with useful properties. DNA is thus used as a structural material rather than as a carrier of biological information. This has led to the creation of two-dimensional periodic lattices (both tile-based and using the DNA origami method) and three-dimensional structures in the shapes of polyhedra. Nanomechanical devices and algorithmic self-assembly have also been demonstrated, and these DNA structures have been used to template the arrangement of other molecules such as gold nanoparticles and streptavidin proteins. DNA and other nucleic acids are the basis of aptamers, synthetic oligonucleotide ligands for specific target molecules used in a range of biotechnology and biomedical applications.
History and anthropology
Because DNA collects mutations over time, which are then inherited, it contains historical information, and, by comparing DNA sequences, geneticists can infer the evolutionary history of organisms, their phylogeny. This field of phylogenetics is a powerful tool in evolutionary biology. If DNA sequences within a species are compared, population geneticists can learn the history of particular populations. This can be used in studies ranging from ecological genetics to anthropology.
Information storage
DNA has enormous potential as an information storage medium because it has a much higher storage density than electronic devices. However, high costs, slow read and write times (memory latency), and insufficient reliability have so far prevented its practical use.
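The storage-density idea rests on the fact that each base can represent two bits. The sketch below shows one simple, hypothetical encoding of bytes into bases and back; practical schemes add error correction and avoid problematic base runs, none of which is modeled here.

```python
# Minimal sketch of DNA data storage: map every 2 bits of a message to one base.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

message = b"DNA"
strand = encode(message)
print(strand)                    # CACACATGCAAC (4 bases per byte)
assert decode(strand) == message
```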
History
DNA was first isolated by the Swiss physician Friedrich Miescher who, in 1869, discovered a microscopic substance in the pus of discarded surgical bandages. As it resided in the nuclei of cells, he called it "nuclein". In 1878, Albrecht Kossel isolated the non-protein component of "nuclein", nucleic acid, and later isolated its five primary nucleobases.
In 1909, Phoebus Levene identified the base, sugar, and phosphate nucleotide unit of RNA (then named "yeast nucleic acid"). In 1929, Levene identified deoxyribose sugar in "thymus nucleic acid" (DNA). Levene suggested that DNA consisted of a string of four nucleotide units linked together through the phosphate groups ("tetranucleotide hypothesis"). Levene thought the chain was short and the bases repeated in a fixed order. In 1927, Nikolai Koltsov proposed that inherited traits would be transmitted via a "giant hereditary molecule" made up of "two mirror strands that would replicate in a semi-conservative fashion using each strand as a template". (In his original proposal, Koltsov suggested that this genetic information was encoded in a long chain of amino acids.)
In 1934, Koltsov further contended that the proteins containing a cell's genetic information replicate. In 1928, Frederick Griffith discovered in his experiment that traits of the "smooth" form of Pneumococcus could be transferred to the "rough" form of the same bacteria by mixing killed "smooth" bacteria with the live "rough" form. This system provided the first clear suggestion that DNA carries genetic information.
In 1933, while studying virgin sea urchin eggs, Jean Brachet suggested that DNA is found in the cell nucleus and that RNA is present exclusively in the cytoplasm. At the time, "yeast nucleic acid" (RNA) was thought to occur only in plants, while "thymus nucleic acid" (DNA) only in animals. The latter was thought to be a tetramer, with the function of buffering cellular pH. In 1937, William Astbury produced the first X-ray diffraction patterns that showed that DNA had a regular structure.
In 1943, Oswald Avery, along with co-workers Colin MacLeod and Maclyn McCarty, identified DNA as the transforming principle, supporting Griffith's suggestion (Avery–MacLeod–McCarty experiment). Erwin Chargaff developed and published observations now known as Chargaff's rules, stating that in DNA from any species of any organism, the amount of guanine should be equal to cytosine and the amount of adenine should be equal to thymine.
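As a toy illustration of why these rules hold for double-stranded DNA, the sketch below counts bases across both strands of a short hypothetical duplex; because every A pairs with a T and every G with a C, the totals necessarily come out equal.

```python
# Toy check of Chargaff's rules on a hypothetical double-stranded sequence:
# counting bases over both strands gives equal amounts of A and T, and of G and C.
from collections import Counter

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

strand = "ATGCGGCTA"                                       # hypothetical strand
duplex = strand + "".join(COMPLEMENT[b] for b in strand)   # pool both strands
counts = Counter(duplex)
print(counts)   # A count equals T count, G count equals C count
assert counts["A"] == counts["T"] and counts["G"] == counts["C"]
```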
By 1951, Alexander Todd and collaborators at the University of Cambridge had determined by biochemical methods how the backbone of DNA is structured via the successive linking of carbon atoms 3 and 5 of the sugar to phosphates. This would help to corroborate Watson and Crick's later X-ray structural work. Todd was later awarded the 1957 Nobel Prize in Chemistry for this and other discoveries related to DNA.
Late in 1951, Francis Crick started working with James Watson at the Cavendish Laboratory within the University of Cambridge. DNA's role in heredity was confirmed in 1952 when Alfred Hershey and Martha Chase in the Hershey–Chase experiment showed that DNA is the genetic material of the enterobacteria phage T2.
In May 1952, Raymond Gosling, a graduate student working under the supervision of Rosalind Franklin, took an X-ray diffraction image, labeled as "Photo 51", at high hydration levels of DNA. This photo was given to Watson and Crick by Maurice Wilkins and was critical to their obtaining the correct structure of DNA. Franklin told Crick and Watson that the backbones had to be on the outside. Before then, Linus Pauling, and Watson and Crick, had erroneous models with the chains inside and the bases pointing outwards. Franklin's identification of the space group for DNA crystals proved her correct. In February 1953, Linus Pauling and Robert Corey proposed a model for nucleic acids containing three intertwined chains, with the phosphates near the axis, and the bases on the outside. Watson and Crick completed their model, which is now accepted as the first correct model of the double helix of DNA. On 28 February 1953 Crick interrupted patrons' lunchtime at The Eagle pub in Cambridge, England to announce that he and Watson had "discovered the secret of life".
The 25 April 1953 issue of the journal Nature published a series of five articles presenting the Watson and Crick double-helix structure of DNA and the evidence supporting it. The structure was reported in a letter titled "Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid", in which they said, "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material." This letter was followed by a letter from Franklin and Gosling, which was the first publication of their own X-ray diffraction data and of their original analysis method. Then followed a letter by Wilkins and two of his colleagues, which contained an analysis of in vivo B-DNA X-ray patterns, and which supported the presence in vivo of the Watson and Crick structure.
In April 2023, scientists, based on new evidence, concluded that Rosalind Franklin was a contributor and "equal player" in the discovery of the structure of DNA, rather than the lesser figure she was often portrayed as in accounts written after the discovery. In 1962, after Franklin's death, Watson, Crick, and Wilkins jointly received the Nobel Prize in Physiology or Medicine; Nobel Prizes are awarded only to living recipients. A debate continues about who should receive credit for the discovery.
In an influential presentation in 1957, Crick laid out the central dogma of molecular biology, which foretold the relationship between DNA, RNA, and proteins, and articulated the "adaptor hypothesis". Final confirmation of the replication mechanism that was implied by the double-helical structure followed in 1958 through the Meselson–Stahl experiment. Further work by Crick and co-workers showed that the genetic code was based on non-overlapping triplets of bases, called codons, allowing Har Gobind Khorana, Robert W. Holley, and Marshall Warren Nirenberg to decipher the genetic code. These findings represent the birth of molecular biology.
In 1986, DNA analysis was first used in a criminal investigation when police in the UK asked Alec Jeffreys of the University of Leicester to prove or disprove the involvement of a suspect who claimed innocence in a particular case. Although the suspect had already confessed to committing a recent rape-murder, he denied any involvement in a similar crime committed three years earlier. Yet the details of the two cases were so alike that the police concluded both crimes had been committed by the same person. However, all charges against the suspect were dropped when Jeffreys' DNA testing exonerated him of both the earlier murder and the one to which he had confessed. Further DNA profiling led to the positive identification of another suspect, Colin Pitchfork, who in 1988 was found guilty of both rape-murders.
See also
References
Further reading
External links
DNA binding site prediction on protein
DNA the Double Helix Game From the official Nobel Prize web site
DNA under electron microscope
Dolan DNA Learning Center
Double Helix: 50 years of DNA, Nature
ENCODE threads explorer ENCODE home page at Nature
Double Helix 1953–2003 National Centre for Biotechnology Education
Genetic Education Modules for Teachers – DNA from the Beginning Study Guide
"Clue to chemistry of heredity found". The New York Times, June 1953. First American newspaper coverage of the discovery of the DNA structure
DNA from the Beginning Another DNA Learning Center site on DNA, genes, and heredity from Mendel to the human genome project.
The Register of Francis Crick Personal Papers 1938 – 2007 at Mandeville Special Collections Library, University of California, San Diego
Seven-page, handwritten letter that Crick sent to his 12-year-old son Michael in 1953 describing the structure of DNA. See Crick's medal goes under the hammer, Nature, 5 April 2013.
Down syndrome
https://en.wikipedia.org/wiki/Down_syndrome
Down syndrome or Down's syndrome, also known as trisomy 21, is a genetic disorder caused by the presence of all or part of a third copy of chromosome 21. It is usually associated with developmental delays, mild to moderate intellectual disability, and characteristic physical features.
The parents of the affected individual are usually genetically normal. The incidence of the syndrome increases with the age of the mother, from less than 0.1% for 20-year-old mothers to 3% for those of age 45. It is believed to occur by chance, with no known behavioral activity or environmental factor that changes the probability. Three different genetic forms have been identified. The most common, trisomy 21, involves an extra copy of chromosome 21 in all cells; the extra chromosome arises at conception as the egg and sperm combine. Translocation Down syndrome involves the attachment of extra chromosome 21 material to another chromosome. In 1–2% of cases, the additional chromosome arises at the embryo stage and only affects some of the cells in the body; this is known as mosaic Down syndrome.
Down syndrome can be identified during pregnancy by prenatal screening, followed by diagnostic testing, or after birth by direct observation and genetic testing. Since the introduction of screening, Down syndrome pregnancies are often aborted (rates varying from 50 to 85% depending on maternal age, gestational age, and maternal race/ethnicity).
There is no cure for Down syndrome. Education and proper care have been shown to provide better quality of life. Some children with Down syndrome are educated in typical school classes, while others require more specialized education. Some individuals with Down syndrome graduate from high school, and a few attend post-secondary education. In adulthood, about 20% in the United States do some paid work, with many requiring a sheltered work environment. Caregiver support in financial and legal matters is often needed. Life expectancy is around 50 to 60 years in the developed world, with proper health care. Regular screening for health issues common in Down syndrome is recommended throughout the person's life.
Down syndrome is the most common chromosomal abnormality, occurring in about 1 in 1,000 babies born worldwide, and one in 700 in the US. In 2015, there were 5.4 million people with Down syndrome globally, of whom 27,000 died, down from 43,000 deaths in 1990. The syndrome is named after British physician John Langdon Down, who dedicated his medical practice to the cause. Some aspects were described earlier by French psychiatrist Jean-Étienne Dominique Esquirol in 1838 and French physician Édouard Séguin in 1844. The genetic cause was discovered in 1959.
Signs and symptoms
A boy from Somalia with Down syndrome
Those with Down syndrome nearly always have physical and intellectual disabilities. As adults, their mental abilities are typically similar to those of an 8- or 9-year-old. At the same time, their emotional and social awareness is very high. They can have poor immune function and generally reach developmental milestones at a later age. They have an increased risk of a number of health concerns, such as congenital heart defect, epilepsy, leukemia, and thyroid diseases.
Characteristic and approximate frequency:
Mental impairment: 99%
Stunted growth: 90%
Umbilical hernia: 90%
Increased skin on back of neck: 80%
Low muscle tone: 80%
Narrow roof of mouth: 76%
Flat head: 75%
Flexible ligaments: 75%
Proportionally large tongue: 75%
Abnormal outer ears: 70%
Flattened nose: 68%
Separation of first and second toes: 68%
Abnormal teeth: 60%
Slanted eyes: 60%
Shortened hands: 60%
Short neck: 60%
Obstructive sleep apnea: 60%
Bent fifth finger tip: 57%
Brushfield spots in the iris: 56%
Single transverse palmar crease: 53%
Protruding tongue: 47%
Congenital heart disease: 40%
Strabismus: ≈35%
Undescended testicles: 20%
Physical
People with Down syndrome may have these physical characteristics: a small chin, epicanthic folds, low muscle tone, a flat nasal bridge, and a protruding tongue. A protruding tongue is caused by low tone and weak facial muscles, and often corrected with myofunctional exercises. Some characteristic airway features can lead to obstructive sleep apnea in around half of those with Down syndrome. Other common features include: excessive joint flexibility, extra space between big toe and second toe, a single crease of the palm, and short fingers.
Instability of the atlantoaxial joint occurs in about 1–2%. Atlantoaxial instability may cause myelopathy due to cervical spinal cord compression later in life; this often manifests as new-onset weakness, problems with coordination, bowel or bladder incontinence, and gait dysfunction. Serial imaging cannot reliably predict future cervical cord compression, but changes can be seen on neurological exam. The condition is corrected with spine surgery.
Growth in height is slower, resulting in adults who tend to have short stature—the average height for men is , and for women is . Individuals with Down syndrome are at increased risk for obesity as they age due to hypothyroidism, other medical issues and lifestyle. Growth charts have been developed specifically for children with Down syndrome.
Neurological
A boy with Down syndrome using a cordless drill to assemble a bookcase
This syndrome causes about a third of cases of intellectual disability. Many developmental milestones are delayed, with the ability to crawl typically occurring around 8–22 months rather than 6–12 months, and the ability to walk independently typically occurring around 1–4 years rather than 9–18 months. Half of children acquire walking only after 24 months.
Most individuals with Down syndrome have mild (IQ: 50–69) or moderate (IQ: 35–50) intellectual disability, with some cases having severe (IQ: 20–35) difficulties. Those with mosaic Down syndrome typically have IQ scores 10–30 points higher than those with full trisomy 21. As they age, the gap tends to widen between people with Down syndrome and their same-age peers.
Commonly, individuals with Down syndrome have better language understanding than ability to speak. Babbling typically emerges around 15 months on average. 10–45% of those with Down syndrome have either a stutter or rapid and irregular speech, making it difficult to understand them. After reaching 30 years of age, some may lose their ability to speak.
They typically do fairly well with social skills. Behavior problems are not generally as great an issue as in other syndromes associated with intellectual disability. In children with Down syndrome, mental illness occurs in nearly 30% with autism occurring in 5–10%. People with Down syndrome experience a wide range of emotions. While people with Down syndrome are generally happy, symptoms of depression and anxiety may develop in early adulthood.
Children and adults with Down syndrome are at increased risk of epileptic seizures, which occur in 5–10% of children and up to 50% of adults. This includes an increased risk of a specific type of seizure called infantile spasms. Many (15%) who live 40 years or longer develop Alzheimer's disease. In those who reach 60 years of age, 50–70% have the disease.
Down syndrome regression disorder is a sudden regression with neuropsychiatric symptoms such as catatonia, possibly caused by an autoimmune disease. It primarily appears in teenagers and younger adults.
Senses
Hearing and vision disorders occur in more than half of people with Down syndrome.
Ocular findings
Brushfield spots (small white or grayish/brown spots on the periphery of the iris), upward slanting palpebral fissures (the opening between the upper and lower lids) and epicanthal folds (folds of skin between the upper eyelid and the nose) are clinical signs at birth suggesting the diagnosis of Down syndrome especially in the Western World. None of these requires treatment.
Visually significant congenital cataracts (clouding of the lens of the eye) occur more frequently with Down syndrome. Neonates with Down syndrome should be screened for cataract because early recognition and referral reduce the risk of vision loss from amblyopia. Dot-like opacities in the cortex of the lens (cerulean cataract) are present in up to 50% of people with Down syndrome, but may be followed without treatment if they are not visually significant.
Strabismus, nystagmus and nasolacrimal duct obstruction occur more frequently in children with Down syndrome. Screening for these diagnoses should begin within six months of birth. Strabismus is more often acquired than congenital. Early diagnosis and treatment of strabismus reduces the risk of vision loss from amblyopia. In Down syndrome, the presence of epicanthal folds may give the false impression of strabismus, referred to as pseudostrabismus. Nasolacrimal duct obstruction, which causes tearing (epiphora), is more frequently bilateral and multifactorial than in children without Down syndrome.
Refractive error is more common with Down syndrome, though the rate may not differ until after twelve months of age compared to children without Down syndrome. Early screening is recommended to identify and treat significant refractive error with glasses or contact lenses. Poor accommodation (ability to focus on close objects) is associated with Down syndrome, which may mean bifocals are indicated.
In keratoconus, the cornea progressively thins and bulges into a cone shape, causing visual blurring or distortion. Keratoconus first presents in the teen years and progresses into the thirties. Down syndrome is a strong risk factor for developing keratoconus, and onset may occur at a younger age than in those without Down syndrome. Eye rubbing is also a risk factor for developing keratoconus. It is speculated that chronic eye irritation from blepharitis may increase eye rubbing in Down syndrome, contributing to the increased prevalence of keratoconus.
An association between glaucoma and Down syndrome is often cited. Glaucoma in children with Down syndrome is uncommon, with a prevalence of less than 1%. It is currently unclear if the prevalence of glaucoma in those with Down syndrome differs from that in the absence of Down syndrome.
Estimates of the prevalence of ocular findings in Down syndrome vary widely depending on the study. Some prevalence estimates follow. Vision problems have been observed in 38–80% of cases. Brushfield spots are present in 38–85% of individuals. Between 20 and 50% have strabismus. Cataracts occur in 15%, and may be present at birth. Keratoconus may occur in as many as 21–30%.
Hearing loss
Hearing problems are found in 50–90% of children with Down syndrome. This is often the result of otitis media with effusion which occurs in 50–70% and chronic ear infections which occur in 40–60%. Ear infections often begin in the first year of life and are partly due to poor eustachian tube function. Excessive ear wax can also cause hearing loss due to obstruction of the outer ear canal. Even a mild degree of hearing loss can have negative consequences for speech, language understanding, and academics. It is important to rule out hearing loss as a factor in social and cognitive deterioration. Age-related hearing loss of the sensorineural type occurs at a much earlier age and affects 10–70% of people with Down syndrome.
Heart
The rate of congenital heart disease in newborns with Down syndrome is around 40%. Of those with heart disease, about 80% have an atrial septal defect or ventricular septal defect, with the former being more common. Congenital heart disease can also put individuals at a higher risk of pulmonary hypertension, in which arteries in the lungs narrow and cause inadequate blood oxygenation. Some of the genetic contributions to pulmonary hypertension in individuals with Down syndrome are abnormal lung development, endothelial dysfunction, and proinflammatory genes. Mitral valve problems become common as people age, even in those without heart problems at birth. Other problems that may occur include tetralogy of Fallot and patent ductus arteriosus. People with Down syndrome have a lower risk of hardening of the arteries.
Cancer
Although the overall risk of cancer in Down syndrome is not changed, the risk of testicular cancer and certain blood cancers, including acute lymphoblastic leukemia (ALL) and acute megakaryoblastic leukemia (AMKL) is increased while the risk of other non-blood cancers is decreased. People with Down syndrome are believed to have an increased risk of developing cancers derived from germ cells whether these cancers are blood- or non-blood-related. In 2008, the World Health Organization (WHO) introduced a distinct classification for myeloid proliferation in individuals with Down syndrome.
Blood cancers
Leukemia is 10 to 15 times more common in children with Down syndrome. In particular, acute lymphoblastic leukemia is 20 times more common, and the megakaryoblastic form of acute myeloid leukemia (acute megakaryoblastic leukemia) is 500 times more common. Acute megakaryoblastic leukemia (AMKL) is a leukemia of megakaryoblasts, the precursor cells to megakaryocytes, which form blood platelets. Acute lymphoblastic leukemia in Down syndrome accounts for 1–3% of all childhood cases of ALL. It occurs most often in those older than nine years or having a white blood cell count greater than 50,000 per microliter and is rare in those younger than one year old. ALL in Down syndrome tends to have poorer outcomes than ALL in people without Down syndrome. In short, the likelihood of developing acute myeloid leukemia (AML) and acute lymphoblastic leukemia (ALL) is higher in children with Down syndrome than in those without Down syndrome.
Myeloid leukemia in Down syndrome is typically preceded by a condition known as transient abnormal myelopoiesis (TAM), which generally disrupts the differentiation of megakaryocytes and erythrocytes. In Down syndrome, AMKL is typically preceded by this transient myeloproliferative disease (TMD), a disorder of blood cell production in which non-cancerous megakaryoblasts with a mutation in the GATA1 gene rapidly divide during the later period of pregnancy. GATA1 mutations combined with trisomy 21 contribute to a predisposition to TAM. In trisomy 21, the process of leukemogenesis starts in early fetal life, with genetic factors, including GATA1 mutations, contributing to the development of TAM on the preleukemic pathway. The condition affects 3–10% of babies with Down syndrome. While it often spontaneously resolves within three months of birth, it can cause serious blood, liver, or other complications. In about 10% of cases, TMD progresses to AMKL during the three months to five years following its resolution.
Non-blood cancers
People with Down syndrome have a lower risk of all major solid cancers, including those of lung, breast, and cervix, with the lowest relative rates occurring in those aged 50 years or older. This low risk is thought to be due to an increase in the expression of tumor suppressor genes present on chromosome 21. One exception is testicular germ cell cancer which occurs at a higher rate in Down syndrome.
Endocrine
Problems of the thyroid gland occur in 20–50% of individuals with Down syndrome. Low thyroid is the most common form, occurring in almost half of all individuals. Thyroid problems can be due to a poorly or nonfunctioning thyroid at birth (known as congenital hypothyroidism) which occurs in 1% or can develop later due to an attack on the thyroid by the immune system resulting in Graves' disease or autoimmune hypothyroidism. Type 1 diabetes mellitus is also more common.
Gastrointestinal
Constipation occurs in nearly half of people with Down syndrome and may result in changes in behavior. One potential cause is Hirschsprung's disease, occurring in 2–15%, which is due to a lack of nerve cells controlling the colon. Other congenital problems can include duodenal atresia, imperforate anus and gastroesophageal reflux disease. Celiac disease affects about 7–20%.
Teeth
People with Down syndrome tend to be more susceptible to gingivitis as well as early, severe periodontal disease, necrotising ulcerative gingivitis, and early tooth loss, especially in the lower front teeth. While plaque and poor oral hygiene are contributing factors, the severity of these periodontal diseases cannot be explained solely by external factors. Research suggests that the severity is likely a result of a weakened immune system. The weakened immune system also contributes to increased incidence of yeast infections in the mouth (from Candida albicans).
People with Down syndrome also tend to have a more alkaline saliva resulting in a greater resistance to tooth decay, despite decreased quantities of saliva, less effective oral hygiene habits, and higher plaque indexes.
Higher rates of tooth wear and bruxism are also common. Other common oral manifestations of Down syndrome include enlarged hypotonic tongue, crusted and hypotonic lips, mouth breathing, narrow palate with crowded teeth, class III malocclusion with an underdeveloped maxilla and posterior crossbite, delayed exfoliation of baby teeth and delayed eruption of adult teeth, shorter roots on teeth, and often missing and malformed (usually smaller) teeth. Less common manifestations include cleft lip and palate and enamel hypocalcification (20% prevalence).
Taurodontism, an elongation of the pulp chamber, has a high prevalence in people with Down syndrome.
Fertility
Males with Down syndrome usually do not father children, while females have lower rates of fertility relative to those who are unaffected. Fertility is estimated to be present in 30–50% of females. Menopause usually occurs at an earlier age. The poor fertility in males is thought to be due to problems with sperm development; however, it may also be related to not being sexually active. Without assisted reproductive technologies, around half of the children of someone with Down syndrome will also have the syndrome.
Cause
Down syndrome is caused by having three copies of the genes on chromosome 21, rather than the usual two. The parents of the affected individual are typically genetically normal. Those who have one child with Down syndrome have about a 1% possibility of having a second child with the syndrome, if both parents are found to have normal karyotypes.
There are three types of causes for Down syndrome:
Trisomy 21: in about 94% of cases, Down syndrome is caused by an extra copy of chromosome 21 in all cells.
Translocation: in about 4% of cases, extra chromosome 21 material is attached to another chromosome.
Mosaic: in about 2% of cases, only some of the body's cells carry the extra chromosome 21.
The most common cause (about 92–95% of cases) is a complete extra copy of chromosome 21, resulting in trisomy 21. In 1–2.5% of cases, some of the cells in the body are normal and others have trisomy 21, known as mosaic Down syndrome. The other common mechanisms that can give rise to Down syndrome include: a Robertsonian translocation, isochromosome, or ring chromosome. These contain additional material from chromosome 21 and occur in about 2.5% of cases. An isochromosome results when the two long arms of a chromosome separate together rather than the long and short arm separating together during egg or sperm development.
Trisomy 21
The trisomy 21 version of Down syndrome (also known by the karyotype 47,XX,+21 for females and 47,XY,+21 for males) is mostly caused by a failure of the 21st chromosome to separate during egg or sperm development, known as nondisjunction. As a result, a sperm or egg cell is produced with an extra copy of chromosome 21; this cell thus has 24 chromosomes. When combined with a normal cell from the other parent, the baby has 47 chromosomes, with three copies of chromosome 21. About 88% of cases of trisomy 21 result from nonseparation of the chromosomes in the mother, 8% from nonseparation in the father, and 3% after the egg and sperm have merged.
The root cause of the extra full or partial chromosome is still unknown. Most of the time, the extra chromosome results from a random mistake in cell division during early development of the fetus. The mechanism is not inherited. There is no scientific research which shows that environmental factors or the parents' activities contribute to Down syndrome. The only factor that has been linked to the increased chance of having a baby with Down syndrome is advanced parental age. This is mostly associated with advanced maternal age, but about 10 per cent of cases are associated with advanced paternal age.
Karyotype for Down syndrome (trisomy 21) showing the three copies of chromosome 21
Translocation Down syndrome
The extra chromosome 21 material may also occur due to a Robertsonian translocation in 2–4% of cases. In this translocation Down syndrome, the long arm of chromosome 21 is attached to another chromosome, often chromosome 14. In a male affected with Down syndrome, it results in a karyotype of 46XY,t(14q21q). This may be a new mutation or previously present in one of the parents. The parent with such a translocation is usually normal physically and mentally; however, during production of egg or sperm cells, a higher chance of creating reproductive cells with extra chromosome 21 material exists. This results in a 15% chance of having a child with Down syndrome when the mother carries the translocation and a less than 5% probability when the father is the carrier. The probability of this type of Down syndrome is not related to the mother's age. Some children without Down syndrome may inherit the translocation and have a higher probability of having children of their own with Down syndrome. In this case it is sometimes known as familial Down syndrome.
Mosaic Down syndrome
Mosaic Down syndrome is diagnosed when there is a mixture of two types of cells: some cells have three copies of chromosome 21 but some cells have the typical two copies of chromosome 21. This type is the least common form of Down syndrome and accounts for only about 1% of all cases. Children with mosaic Down syndrome may have the same features as other children with Down syndrome. However, they may have fewer characteristics of the condition due to the presence of some (or many) cells with a typical number of chromosomes.
Mechanism
The extra genetic material present in Down syndrome results in overexpression of a portion of the 310 genes located on chromosome 21. This overexpression has been estimated at around 50%, since three copies of the chromosome rather than two correspond to a 1.5-fold increase in gene dosage. Some research has suggested the Down syndrome critical region is located at bands 21q22.1–q22.3, with this area including genes for the amyloid precursor protein, superoxide dismutase, and likely the ETS2 proto-oncogene. Other research, however, has not confirmed these findings. MicroRNAs are also proposed to be involved.
The dementia that occurs in Down syndrome is due to an excess of amyloid beta peptide produced in the brain and is similar to Alzheimer's disease, which also involves amyloid beta build-up. Amyloid beta is processed from amyloid precursor protein, the gene for which is located on chromosome 21. Senile plaques and neurofibrillary tangles are present in nearly all affected individuals by 35 years of age, though dementia may not be present. It is hypothesized that people with Down syndrome have fewer lymphocytes than normal and produce fewer antibodies, which contributes to their increased risk of infection.
Epigenetics
Down syndrome is associated with an increased risk of some chronic diseases that are typically associated with older age, such as Alzheimer's disease. It is believed that accelerated aging occurs and increases the biological age of tissues, but molecular evidence for this hypothesis is sparse. According to the epigenetic clock, a biomarker of tissue age, trisomy 21 is hypothesized to increase the biological age of blood and brain tissue by an average of 6.6 years.
Diagnosis
Screening before birth
Guidelines recommend screening for Down syndrome to be offered to all pregnant women, regardless of age. A number of tests are used, with varying levels of accuracy. They are typically used in combination to increase the detection rate. None can be definitive; thus, if screening predicts a high possibility of Down syndrome, either amniocentesis or chorionic villus sampling is required to confirm the diagnosis.
Ultrasound
Prenatal ultrasound can be used to screen for Down syndrome. Findings that indicate increased chances when seen at 14 to 24 weeks of gestation include a small or absent nasal bone, large ventricles, increased nuchal fold thickness, and an abnormal right subclavian artery, among others. Assessing the presence or absence of multiple markers together is more accurate than relying on a single marker. Increased fetal nuchal translucency (NT) indicates an increased possibility of Down syndrome, picking up 75–80% of cases with a 6% false-positive rate.
Blood tests
Several blood markers can be measured to predict the chances of Down syndrome during the first or second trimester. Testing in both trimesters is sometimes recommended, and test results are often combined with ultrasound results. In the second trimester, two or three of the following markers are often used in combination: α-fetoprotein, unconjugated estriol, total hCG, and free βhCG, detecting about 60–70% of cases.
Testing of the mother's blood for fetal DNA is being studied and appears promising in the first trimester. The International Society for Prenatal Diagnosis considers it a reasonable screening option for those women whose pregnancies are at a high likelihood of trisomy 21. Accuracy has been reported at 98.6% in the first trimester of pregnancy. Confirmatory testing by invasive techniques (amniocentesis, CVS) is still required to confirm the screening result.
Combinations
First- and second-trimester screening:
Combined test: performed at 10–13.5 weeks; detection rate 82–87%; false-positive rate 5%. Uses ultrasound to measure nuchal translucency in addition to blood tests for free or total beta-hCG and PAPP-A.
Quad screen: performed at 15–20 weeks; detection rate 81%; false-positive rate 5%. Measures maternal serum alpha-fetoprotein, unconjugated estriol, hCG, and inhibin-A.
Integrated test: performed at 15–20 weeks; detection rate 94–96%; false-positive rate 5%. A combination of the quad screen, PAPP-A, and NT.
Cell-free fetal DNA: performed from 10 weeks; detection rate 96–100%; false-positive rate 0.3%. A blood sample is taken from the mother by venipuncture and sent for DNA analysis.
Efficacy
For combinations of ultrasonography and non-genetic blood tests, screening in both the first and second trimesters is better than just screening in the first trimester. Common screening techniques in use are able to pick up 90–95% of cases, with a false-positive rate of 2–5%. If Down syndrome occurs in one in 500 pregnancies with a 90% detection rate and the test used has a 5% false-positive rate, of 28 women who test positive on screening, only one will have a fetus with Down syndrome confirmed. If the screening test has a 2% false-positive rate, this means of 11 women who test positive on screening, only one will have a fetus with Down syndrome.
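The counts quoted above follow from a simple positive-predictive-value calculation. The sketch below reproduces them under the stated assumptions (a prevalence of 1 in 500, 90% detection, and a 5% or 2% false-positive rate); the small differences from the quoted figures come down to rounding.

```python
# Out of 500 pregnancies with one affected fetus, how many women screen positive
# for each true case detected, given a 90% detection rate?

def screen_positive_breakdown(pregnancies, affected, detection, false_positive):
    true_pos = affected * detection
    false_pos = (pregnancies - affected) * false_positive
    return true_pos, false_pos, (true_pos + false_pos) / true_pos

for fpr in (0.05, 0.02):
    tp, fp, per_case = screen_positive_breakdown(500, 1, 0.90, fpr)
    print(f"FPR {fpr:.0%}: {tp + fp:.1f} positives, ~{per_case:.0f} per confirmed case")
# FPR 5%: 25.9 positives, ~29 per confirmed case (the text quotes roughly 28)
# FPR 2%: 10.9 positives, ~12 per confirmed case (the text quotes roughly 11)
```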
Invasive genetic testing
Amniocentesis and chorionic villus sampling are more reliable tests, but they increase the risk of miscarriage by 0.5 to 1%. The risk of limb problems may be increased in the offspring if chorionic villus sampling is performed before 10 weeks.
The risk from the procedure is greater the earlier it is performed, thus amniocentesis is not recommended before 15 weeks gestational age and chorionic villus sampling before 10 weeks gestational age.
Abortion rates
About 92% of pregnancies in Europe with a diagnosis of Down syndrome are terminated. As a result, there is almost no one with Down syndrome in Iceland and Denmark, where screening is commonplace. In the United States, the termination rate after diagnosis is around 75%, but varies from 61 to 93% depending on the population surveyed. Rates are lower among women who are younger and have decreased over time. When women were asked if they would have a termination if their fetus tested positive, 23–33% said yes; when high-risk pregnant women were asked, 46–86% said yes; and when women who had screened positive were asked, 89–97% said yes.
After birth
A diagnosis can often be suspected based on the child's physical appearance at birth. An analysis of the child's chromosomes is needed to confirm the diagnosis, and to determine if a translocation is present, as this may help determine the chances of the child's parents having further children with Down syndrome.
Management
Efforts such as early childhood intervention, therapies, screening for common medical issues, a good family environment, and work-related training can improve the development of children with Down syndrome and provide good quality of life. Common therapies utilized include physical therapy, occupational therapy and speech therapy. Education and proper care can provide a positive quality of life. Typical childhood vaccinations are recommended.
Health screening
Recommended screening:
Hearing: children at 6 months, 12 months, then yearly; adults every 3–5 years
T4 and TSH: children at 6 months, then yearly
Eyes: children at 6 months, then yearly; adults every 3–5 years
Teeth: children at 2 years, then every 6 months
Celiac disease: children between 2 and 3 years of age, or earlier if symptoms occur
Sleep study: children at 3 to 4 years, or earlier if symptoms of obstructive sleep apnea occur
Neck X-rays: children between 3 and 5 years of age
A number of health organizations have issued recommendations for screening those with Down syndrome for particular diseases. This is recommended to be done systematically.
At birth, all children should get an electrocardiogram and ultrasound of the heart. Surgical repair of heart problems may be required as early as three months of age. Heart valve problems may occur in young adults, and further ultrasound evaluation may be needed in adolescents and in early adulthood. Due to the elevated risk of testicular cancer, some recommend checking the person's testicles yearly.
Cognitive development
Some people with Down syndrome experience hearing loss. In this instance, hearing aids or other amplification devices can be useful for language learning. Speech therapy may be useful and is recommended to be started around nine months of age. As those with Down syndrome typically have good hand-eye coordination, learning sign language is a helpful communication tool. Augmentative and alternative communication methods, such as pointing, body language, objects, or pictures, are often used to help with communication. Behavioral issues and mental illness are typically managed with counseling or medications.
Education programs before reaching school age may be useful. School-age children with Down syndrome may benefit from inclusive education (whereby students of differing abilities are placed in classes with their peers of the same age), provided some adjustments are made to the curriculum. In the United States, the Individuals with Disabilities Education Act of 1975 requires public schools generally to allow attendance by students with Down syndrome.
Individuals with Down syndrome may learn better visually. Drawing may help with language, speech, and reading skills. Children with Down syndrome still often have difficulty with sentence structure and grammar, as well as developing the ability to speak clearly. Several types of early intervention can help with cognitive development. Efforts to develop motor skills include physical therapy, speech and language therapy, and occupational therapy. Physical therapy focuses specifically on motor development and teaching children to interact with their environment. Speech and language therapy can help prepare for later language. Lastly, occupational therapy can help with skills needed for later independence.
Other
Tympanostomy tubes are often needed and often more than one set during the person's childhood. Tonsillectomy is also often done to help with sleep apnea and throat infections. Surgery does not correct every instance of sleep apnea and a continuous positive airway pressure (CPAP) machine may be useful in those cases.
Efforts to prevent respiratory syncytial virus (RSV) infection with human monoclonal antibodies should be considered, especially in those with heart problems. In those who develop dementia, there is no evidence of benefit from memantine, donepezil, rivastigmine, or galantamine.
Prognosis
Between 5–15% of children with Down syndrome in Sweden attend regular school. Some graduate from high school; however, most do not. Of those with intellectual disability in the United States who attended high school about 40% graduated. Many learn to read and write and some are able to do paid work. In adulthood about 20% in the United States do paid work in some capacity. In Sweden, however, less than 1% have regular jobs. Many are able to live semi-independently, but they often require help with financial, medical, and legal matters. Those with mosaic Down syndrome usually have better outcomes.
Individuals with Down syndrome have a higher risk of early death than the general population. This is most often from heart problems or infections. Following improved medical care, particularly for heart and gastrointestinal problems, life expectancy has increased: from 12 years in 1912, to 25 years in the 1980s, to 50 to 60 years in the developed world in the 2000s. Data collected between 1985 and 2003 showed that 4–12% of infants with Down syndrome die in the first year of life. The probability of long-term survival is partly determined by the presence of heart problems. Research around the turn of the century that tracked those with congenital heart problems found that 60% survived to at least 10 years and 50% survived to at least 30 years of age; the study did not follow participants beyond 30 years. In those without heart problems, 85% of those studied survived to at least 10 years and 80% survived to at least 30 years of age. It is estimated that 10% lived to 70 years of age in the early 2000s. Much of this data is outdated, and life expectancy has improved markedly with more equitable healthcare and continuous advancement of surgical practice. The National Down Syndrome Society provides information regarding raising a child with Down syndrome.
Epidemiology
Down syndrome is the most common chromosomal abnormality in humans. Globally, Down syndrome occurs in about 1 per 1,000 births and results in about 17,000 deaths. More children are born with Down syndrome in countries where abortion is not allowed and in countries where pregnancy more commonly occurs at a later age. About 1.4 per 1,000 live births in the United States and 1.1 per 1,000 live births in Norway are affected. In the 1950s, in the United States, it occurred in 2 per 1,000 live births, with the decrease since then due to prenatal screening and abortions. The number of pregnancies with Down syndrome is more than two times greater, with many spontaneously aborting. It is the cause of 8% of all congenital disorders.
Maternal age affects the chances of having a pregnancy with Down syndrome. At age 20, the chance is 1 in 1,441; at age 30, it is 1 in 959; at age 40, it is 1 in 84; and at age 50 it is 1 in 44. Although the probability increases with maternal age, 70% of children with Down syndrome are born to women 35 years of age and younger, because younger people have more children. The father's older age is also a risk factor in women older than 35, but not in women younger than 35, and may partly explain the increase in risk as women age.
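The last point is a weighting effect: the per-birth risk rises steeply with maternal age, but far more births occur at younger ages. The sketch below illustrates it using the per-age risks quoted above and a purely hypothetical age distribution of births, chosen only to show how a roughly 70/30 split can arise.

```python
# Why most affected children are born to younger mothers despite lower per-birth risk.
# Risk figures are the ones quoted above; the birth shares are hypothetical.
risk_per_birth  = {"age 20": 1 / 1441, "age 30": 1 / 959, "age 40": 1 / 84, "age 50": 1 / 44}
share_of_births = {"age 20": 0.40, "age 30": 0.57, "age 40": 0.028, "age 50": 0.002}

expected = {age: share_of_births[age] * risk_per_birth[age] for age in risk_per_birth}
total = sum(expected.values())
for age, value in expected.items():
    print(f"{age}: {value / total:.0%} of affected births")
# age 20: 22%, age 30: 48%, age 40: 27%, age 50: 4%  ->  ~70% to younger mothers
```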
History
The English physician John Langdon Down first described Down syndrome in 1862, recognizing it as a distinct type of mental disability, and again in a more widely published report in 1866. Édouard Séguin described it as separate from cretinism in 1844. By the 20th century, Down syndrome had become the most recognizable form of mental disability.
Due to his perception that children with Down syndrome shared facial similarities with those of Blumenbach's Mongoloid race, John Langdon Down used the term "mongoloid". He felt that the existence of Down syndrome confirmed that all peoples were genetically related. In the 1950s with discovery of the underlying cause as being related to chromosomes, concerns about the race-based nature of the name increased.
In 1961, a group of nineteen scientists suggested that "mongolism" had "misleading connotations" and had become "an embarrassing term". The World Health Organization (WHO) dropped the term in 1965 after a request by the delegation from the Mongolian People's Republic. While this terminology continued to be used until the late twentieth century, it is now considered unacceptable and is no longer in common use.
In antiquity, many infants with disabilities were either killed or abandoned.
In June 2020, the earliest incidence of Down syndrome was found in genomic evidence from an infant that was buried before 3200 BC at Poulnabrone dolmen in Ireland.
Researchers believe that a number of historical pieces of art portray Down syndrome, including pottery from the pre-Columbian Tumaco-La Tolita culture in present-day Colombia and Ecuador, and the 16th-century painting The Adoration of the Christ Child.
In the 20th century, many individuals with Down syndrome were institutionalized, few of the associated medical problems were treated, and most people died in infancy or early adulthood. With the rise of the eugenics movement, 33 of the then 48 U.S. states and several countries began programs of forced sterilization of individuals with Down syndrome and comparable degrees of disability. In Nazi Germany, the Action T4 program made the systematic murder of people with Down syndrome public policy.
With the discovery of karyotype techniques in the 1950s it became possible to identify abnormalities of chromosomal number or shape. In 1959 Jérôme Lejeune reported the discovery that Down syndrome resulted from an extra chromosome. However, Lejeune's claim to the discovery has been disputed, and in 2014 the Scientific Council of the French Federation of Human Genetics unanimously awarded its Grand Prize to his colleague Marthe Gautier for her role in this discovery. The discovery took place in the laboratory of Raymond Turpin at the Hôpital Trousseau in Paris, France. Jérôme Lejeune and Marthe Gautier were both his students.
As a result of this discovery, the condition became known as trisomy 21. Even before the discovery of its cause, the presence of the syndrome in all races, its association with older maternal age, and its rarity of recurrence had been noticed. Medical texts had assumed it was caused by a combination of inheritable factors that had not been identified. Other theories had focused on injuries sustained during birth.
Society and culture
Name
Down syndrome is named after John Langdon Down, the first person to provide an accurate description of the syndrome. His research, published in 1866, earned him recognition as the father of the syndrome. While others had previously recognized components of the condition, John Langdon Down described the syndrome as a distinct, unique medical condition.
In 1975, the United States National Institutes of Health (NIH) convened a conference to standardize the naming and recommended replacing the possessive form, "Down's syndrome", with "Down syndrome". However, both the possessive and nonpossessive forms remain in use by the general population, and in the United Kingdom the NHS uses the term Down's syndrome in its patient-oriented information. The term "trisomy 21" is also commonly used.
Ethics
Obstetricians routinely offer antenatal screenings for various conditions, including Down syndrome. When results from testing become available, it is considered an ethical requirement to share the results with the patient.
Some bioethicists deem it reasonable for parents to select a child who would have the highest well-being. One criticism of this reasoning is that it often values those with disabilities less. Some parents argue that Down syndrome should not be prevented or cured and that eliminating Down syndrome amounts to genocide. The disability rights movement does not have a position on screening, although some members consider testing and abortion discriminatory. Some in the United States who are anti-abortion support abortion if the fetus is disabled, while others do not. Of a group of 40 mothers in the United States who have had one child with Down syndrome, half agreed to screening in the next pregnancy.
Within the US, some Protestant denominations see abortion as acceptable when a fetus has Down syndrome while Orthodox Christianity and Roman Catholicism do not. Women may face disapproval whether they choose abortion or not. Some of those against screening refer to it as a form of eugenics.
Advocacy groups
Advocacy groups for individuals with Down syndrome began to be formed after the Second World War. These were organizations advocating for the inclusion of people with Down syndrome into the general school system and for a greater understanding of the condition among the general population, as well as groups providing support for families with children living with Down syndrome. Before this, individuals with Down syndrome were often placed in mental hospitals or asylums. Organizations included the Royal Society for Handicapped Children and Adults founded in the UK in 1946 by Judy Fryd, Kobato Kai founded in Japan in 1964, the National Down Syndrome Congress founded in the United States in 1973 by Kathryn McGee and others, and the National Down Syndrome Society founded in 1979 in the United States. The first Roman Catholic order of nuns for women with Down syndrome, Little Sisters Disciples of the Lamb, was founded in 1985 in France.
The first World Down Syndrome Day was held on 21 March 2006. The date, the 21st day of the 3rd month, was chosen to signify the trisomy of chromosome 21. It was recognized by the United Nations General Assembly in 2011.
Special21.org, founded in 2015, advocates for a specific classification category that would enable swimmers with Down syndrome to qualify and compete at the Paralympic Games. The project began when the international Down syndrome swimmer Filipe Santos broke the world record in the 50m butterfly event but was unable to compete at the Paralympic Games.
Paralympic swimming
International Paralympic Committee (IPC) para-swimming classification codes are based on a single impairment, whereas individuals with Down syndrome have both physical and intellectual impairments.
Although swimmers with Down syndrome may compete in the Paralympic S14 intellectual impairment category (provided they meet its IQ-based eligibility criteria), they are often outmatched by the superior physicality of their opponents.
At present there is no designated Paralympic category for swimmers with Down syndrome, so they must compete solely as intellectually impaired athletes, a classification that disregards their physical disabilities.
A number of advocacy groups around the world have lobbied for the inclusion of a distinct classification category for Down syndrome swimmers within the IPC classification codes framework.
Despite ongoing advocacy, the issue remains unresolved, and swimmers with Down syndrome continue to face challenges in accessing appropriate classification pathways.
Research
The additional copy of chromosome 21 affects the regulation of other genes, creating a complex set of changes. Mechanisms connecting the genetic defect to pathology remain unclear. While applying gene therapy seems like a promising approach, tailored treatments may be required.
Gene therapy delivered via stem cells has been proposed as a tool for studying the syndrome and as an approach to therapy. Other methods being studied include the use of antioxidants, gamma secretase inhibition, adrenergic agonists, and memantine. Research is often carried out on an animal model, the Ts65Dn mouse.
Some research also seeks to develop screening tools that could help determine which treatment strategies would be appropriate, should such therapies prove successful.
Other hominids
Down syndrome may also occur in hominids other than humans. In great apes chromosome 22 corresponds to the human chromosome 21 and thus trisomy 22 causes Down syndrome in apes. The condition was observed in a common chimpanzee in 1969 and a Bornean orangutan in 1979, but neither lived very long. The common chimpanzee Kanako, born around 1993 in Japan, was genetically tested and found to have chimpanzee trisomy 22 in 2011. Kanako had some of the same features that are common in human Down syndrome. It is unknown how common this condition is in chimps, but it is plausible it could be roughly as common as Down syndrome is in humans. Kanako was blind, relatively small, and targeted by aggressive group mates. Kanako died in the Kumamoto Sanctuary at Kyoto University in 2020.
Fossilized remains of a Neanderthal child approximately six years old at death were described in 2024. The child, nicknamed Tina, suffered from a malformation of the inner ear that only occurs in people with Down syndrome, and would have caused hearing loss and disabling vertigo. The fact that a Neanderthal with such a condition survived to such an age was taken as evidence of compassion and extra-maternal care among Neanderthals.
In popular culture
Chris Burke, an actor with Down syndrome, born in 1965
Individuals
Jamie Brewer is an American actress and model. She is best known for her roles in the FX horror anthology television series American Horror Story. In its first season, Murder House, she portrayed Adelaide "Addie" Langdon; in the third season, Coven, she portrayed Nan, an enigmatic and clairvoyant witch; in the fourth season Freak Show, she portrayed Chester Creb's vision of his doll, Marjorie; in the seventh season Cult, she portrayed Hedda, a member of the 'SCUM' crew, led by feminist Valerie Solanas; and she also returned to her role as Nan in the eighth season, Apocalypse. In February 2015, Brewer became the first woman with Down syndrome to walk the red carpet at New York Fashion Week, for designer Carrie Hammer.
Tommy Jessop is a British actor, author and activist. He starred in the BAFTA-nominated dramas Coming Down the Mountain and Line of Duty, and was the first person with Down syndrome to play Hamlet professionally as part of a touring production with Blue Apple Theatre. Jessop is a prominent campaigner in the UK and was a key figure in the creation of the Down Syndrome Act 2022. In 2023 Headline Publishing Group published Jessop's autobiography A Life Worth Living: Acting, Activism and Everything Else.
Sofía Jirau is a Puerto Rican model with Down syndrome, working with designers and media outlets such as Vogue Mexico, People, and Hola!. In February 2020, Jirau made her debut at New York Fashion Week. Then in February 2022, she became the first model with Down syndrome to be hired by the American retail company Victoria's Secret. She walked the LA Fashion Week runway in 2022. In 2021, Jirau launched a campaign called Sin Límites (No Limits), which she says "seeks to make visible the challenges facing the Down syndrome community, demonstrate our ability to achieve our goals, and raise awareness about the condition throughout the world."
Chris Nikic is the first person with Down syndrome to finish an Ironman Triathlon. He was awarded the Jimmy V Award for Perseverance at the 2021 ESPY Awards. Nikic continues to run races around the world, using his platform to promote his 1% Better message and bring awareness to the endless possibilities for people with Down syndrome.
Grace Strobel is an American model and the first person with Down syndrome to represent an American skin-care brand. She first joined Obagi in 2020 and continues to be an ambassador for the brand as of 2022. She walked the runway representing Tommy Hilfiger for Runway of Dreams at New York Fashion Week 2020 and at Atlantic City Fashion Week. Strobel has been featured in Forbes, on The Today Show and Good Morning America, and by Rihanna's Fenty Beauty and Lady Gaga's Kindness Channel, among others. She is also a public speaker and gives a presentation called #TheGraceEffect about what it is like to live with Down syndrome.
Television and film
Life Goes On is an American drama television series that aired on ABC from September 12, 1989, to May 23, 1993. The show centers on the Thatcher family living in suburban Chicago: Drew, his wife Libby, and their children Paige, Rebecca and Charles. Charles, called Corky on the show and portrayed by Chris Burke, was the first major character on a television series with Down syndrome. Burke's revolutionary role conveyed a realistic portrayal of people with Down syndrome and changed the way audiences viewed people with disabilities.
Champions (2023) is a film starring four main actors with Down syndrome: Madison Tevlin, Kevin Iannucci, Matthew Von Der Ahe and James Day Keith. It is an American sports comedy film directed by Bobby Farrelly in his solo directorial debut, from a screenplay written by Mark Rizzo. The film stars Woody Harrelson as a temperamental minor-league basketball coach who after an arrest must coach a team of players with intellectual disabilities as community service; Kaitlin Olson, Ernie Hudson, and Cheech Marin also star.
Born This Way is an American reality television series produced by Bunim/Murray Productions featuring seven adults with Down syndrome who work hard to achieve goals and overcome obstacles. The show received a Television Academy Honor in 2016.
The Peanut Butter Falcon is a 2019 American comedy-drama film written and directed by Tyler Nilson and Michael Schwartz, in their directorial film debut, and starring Zack Gottsagen, Shia LaBeouf, Dakota Johnson and John Hawkes. The plot follows a young man with Down syndrome who escapes from an assisted living facility, in order to follow his dream of being a wrestler, and befriends a wayward fisherman on the run. As the two men form a rapid bond, a social worker attempts to track them.
Music
The Devo song "Mongoloid" is about someone with Down syndrome.
The Amateur Transplants song "Your Baby" is about a fetus with Down syndrome.
Toys
In 2023, Mattel released a Barbie doll with characteristics of a person having Down syndrome as a way to promote diversity.
See also
List of syndromes
Characteristics of syndromic ASD conditions
Notes
References
Further reading
External links
Down's syndrome by the UK National Health Service
Category:Autosomal trisomies
Category:Genetic syndromes
Category:Syndromes with intellectual disability
Category:Syndromes affecting the gastrointestinal tract
Category:Syndromes affecting the heart
Category:Syndromes affecting the nervous system
Category:Syndromes with craniofacial abnormalities
Category:Syndromic autism
Category:Diseases named after discoverers
Evolution
https://en.wikipedia.org/wiki/Evolution
Evolution is the change in the heritable characteristics of biological populations over successive generations. It occurs when evolutionary processes such as genetic drift and natural selection act on genetic variation, resulting in certain characteristics becoming more or less common within a population over successive generations. The process of evolution has given rise to biodiversity at every level of biological organisation.
The scientific theory of evolution by natural selection was conceived independently by two British naturalists, Charles Darwin and Alfred Russel Wallace, in the mid-19th century as an explanation for why organisms are adapted to their physical and biological environments. The theory was first set out in detail in Darwin's book On the Origin of Species. Evolution by natural selection is established by observable facts about living organisms: (1) more offspring are often produced than can possibly survive; (2) traits vary among individuals with respect to their morphology, physiology, and behaviour; (3) different traits confer different rates of survival and reproduction (differential fitness); and (4) traits can be passed from generation to generation (heritability of fitness). In successive generations, members of a population are therefore more likely to be replaced by the offspring of parents with favourable characteristics for that environment.
In the early 20th century, competing ideas of evolution were refuted and evolution was combined with Mendelian inheritance and population genetics to give rise to modern evolutionary theory. In this synthesis the basis for heredity is in DNA molecules that pass information from generation to generation. The processes that change DNA in a population include natural selection, genetic drift, mutation, and gene flow.
All life on Earth—including humanity—shares a last universal common ancestor (LUCA), which lived approximately 3.5–3.8 billion years ago. The fossil record includes a progression from early biogenic graphite to microbial mat fossils to fossilised multicellular organisms. Existing patterns of biodiversity have been shaped by repeated formations of new species (speciation), changes within species (anagenesis), and loss of species (extinction) throughout the evolutionary history of life on Earth. Morphological and biochemical traits tend to be more similar among species that share a more recent common ancestor, which historically was used to reconstruct phylogenetic trees, although direct comparison of genetic sequences is a more common method today (NAS 2008, p. 17).
Evolutionary biologists have continued to study various aspects of evolution by forming and testing hypotheses as well as constructing theories based on evidence from the field or laboratory and on data generated by the methods of mathematical and theoretical biology. Their discoveries have influenced not just the development of biology but also other fields including agriculture, medicine, and computer science.
Heredity
Evolution in organisms occurs through changes in heritable characteristics—the inherited characteristics of an organism. In humans, for example, eye colour is an inherited characteristic and an individual might inherit the "brown-eye trait" from one of their parents. Inherited traits are controlled by genes and the complete set of genes within an organism's genome (genetic material) is called its genotype.
The complete set of observable traits that make up the structure and behaviour of an organism is called its phenotype. Some of these traits come from the interaction of its genotype with the environment while others are neutral. Some observable characteristics are not inherited. For example, suntanned skin comes from the interaction between a person's genotype and sunlight; thus, suntans are not passed on to people's children. The phenotype is the ability of the skin to tan when exposed to sunlight. However, some people tan more easily than others, due to differences in genotypic variation; a striking example are people with the inherited trait of albinism, who do not tan at all and are very sensitive to sunburn.
Heritable characteristics are passed from one generation to the next via DNA, a molecule that encodes genetic information. DNA is a long biopolymer composed of four types of bases. The sequence of bases along a particular DNA molecule specifies the genetic information, in a manner similar to a sequence of letters spelling out a sentence. Before a cell divides, the DNA is copied, so that each of the resulting two cells will inherit the DNA sequence. Portions of a DNA molecule that specify a single functional unit are called genes; different genes have different sequences of bases. Within cells, each long strand of DNA is called a chromosome. The specific location of a DNA sequence within a chromosome is known as a locus. If the DNA sequence at a locus varies between individuals, the different forms of this sequence are called alleles. DNA sequences can change through mutations, producing new alleles. If a mutation occurs within a gene, the new allele may affect the trait that the gene controls, altering the phenotype of the organism. However, while this simple correspondence between an allele and a trait works in some cases, most traits are influenced by multiple genes in a quantitative or epistatic manner.
Sources of variation
Evolution can occur if there is genetic variation within a population. Variation comes from mutations in the genome, reshuffling of genes through sexual reproduction and migration between populations (gene flow). Despite the constant introduction of new variation through mutation and gene flow, most of the genome of a species is very similar among all individuals of that species. However, discoveries in the field of evolutionary developmental biology have demonstrated that even relatively small differences in genotype can lead to dramatic differences in phenotype both within and between species.
An individual organism's phenotype results from both its genotype and the influence of the environment it has lived in. The modern evolutionary synthesis defines evolution as the change over time in this genetic variation. The frequency of one particular allele will become more or less prevalent relative to other forms of that gene. Variation disappears when a new allele reaches the point of fixation—when it either disappears from the population or replaces the ancestral allele entirely.
Mutation
Mutations are changes in the DNA sequence of a cell's genome and are the ultimate source of genetic variation in all organisms. When mutations occur, they may alter the product of a gene, or prevent the gene from functioning, or have no effect.
About half of the mutations in the coding regions of protein-coding genes are deleterious — the other half are neutral. A small percentage of the total mutations in this region confer a fitness benefit. Some of the mutations in other parts of the genome are deleterious but the vast majority are neutral. A few are beneficial.
Mutations can involve large sections of a chromosome becoming duplicated (usually by genetic recombination), which can introduce extra copies of a gene into a genome. Extra copies of genes are a major source of the raw material needed for new genes to evolve. This is important because most new genes evolve within gene families from pre-existing genes that share common ancestors. For example, the human eye uses four genes to make structures that sense light: three for colour vision and one for night vision; all four are descended from a single ancestral gene.
New genes can be generated from an ancestral gene when a duplicate copy mutates and acquires a new function. This process is easier once a gene has been duplicated because it increases the redundancy of the system; one gene in the pair can acquire a new function while the other copy continues to perform its original function. Other types of mutations can even generate entirely new genes from previously noncoding DNA, a phenomenon termed de novo gene birth.
The generation of new genes can also involve small parts of several genes being duplicated, with these fragments then recombining to form new combinations with new functions (exon shuffling). When new genes are assembled from shuffling pre-existing parts, domains act as modules with simple independent functions, which can be mixed together to produce new combinations with new and complex functions. For example, polyketide synthases are large enzymes that make antibiotics; they contain up to 100 independent domains that each catalyse one step in the overall process, like a step in an assembly line.
One example of the phenotypic effect of mutation is seen in wild boar piglets. They are camouflage coloured and show a characteristic pattern of dark and light longitudinal stripes. However, mutations in the melanocortin 1 receptor (MC1R) disrupt the pattern. The majority of pig breeds carry MC1R mutations disrupting wild-type colour and different mutations causing dominant black colouring.
Sex and recombination
In asexual organisms, genes are inherited together, or linked, as they cannot mix with genes of other organisms during reproduction. In contrast, the offspring of sexual organisms contain random mixtures of their parents' chromosomes that are produced through independent assortment. In a related process called homologous recombination, sexual organisms exchange DNA between two matching chromosomes. Recombination and reassortment do not alter allele frequencies, but instead change which alleles are associated with each other, producing offspring with new combinations of alleles. Sex usually increases genetic variation and may increase the rate of evolution.
The two-fold cost of sex was first described by John Maynard Smith. The first cost is that in sexually dimorphic species only one of the two sexes can bear young. This cost does not apply to hermaphroditic species, like most plants and many invertebrates. The second cost is that any individual who reproduces sexually can only pass on 50% of its genes to any individual offspring, with even less passed on as each new generation passes. Yet sexual reproduction is the more common means of reproduction among eukaryotes and multicellular organisms. The Red Queen hypothesis has been used to explain the significance of sexual reproduction as a means to enable continual evolution and adaptation in response to coevolution with other species in an ever-changing environment. Another hypothesis is that sexual reproduction is primarily an adaptation for promoting accurate recombinational repair of damage in germline DNA, and that increased diversity is a byproduct of this process that may sometimes be adaptively beneficial (Bernstein H, Byerly HC, Hopf FA, Michod RE. "Genetic damage, mutation, and the evolution of sex." Science. 1985;229(4719):1277–81. PMID 3898363; Bernstein H, Hopf FA, Michod RE. "The molecular basis of the evolution of sex." Adv Genet. 1987;24:323–70. PMID 3324702).
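A minimal way to see the arithmetic behind the "two-fold" label, under the simplifying assumptions of equal fecundity k for sexual and asexual females and a 1:1 sex ratio (assumptions made for this sketch, not stated in the text above):

```latex
% Sketch of the two-fold cost, assuming equal fecundity k and a 1:1 sex ratio.
\begin{align*}
\text{Asexual female:} &\quad k \text{ offspring, each carrying } 100\% \text{ of her genome} \\
\text{Sexual female:}  &\quad k \text{ offspring, each carrying } 50\% \text{ of her genome} \\
\text{Genome copies transmitted per generation:} &\quad k \ \text{(asexual)} \ \text{versus} \ \tfrac{k}{2} \ \text{(sexual)}
\end{align*}
```

On this accounting an asexual lineage transmits its genes at twice the per-generation rate, which is why the cost is described as two-fold.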
Gene flow
Gene flow is the exchange of genes between populations and between species. It can therefore be a source of variation that is new to a population or to a species. Gene flow can be caused by the movement of individuals between separate populations of organisms, as might be caused by the movement of mice between inland and coastal populations, or the movement of pollen between heavy-metal-tolerant and heavy-metal-sensitive populations of grasses.
Gene transfer between species includes the formation of hybrid organisms and horizontal gene transfer. Horizontal gene transfer is the transfer of genetic material from one organism to another organism that is not its offspring; this is most common among bacteria. In medicine, this contributes to the spread of antibiotic resistance, as when one bacterium acquires resistance genes it can rapidly transfer them to other species. Horizontal transfer of genes from bacteria to eukaryotes such as the yeast Saccharomyces cerevisiae and the adzuki bean weevil Callosobruchus chinensis has occurred. An example of larger-scale transfers are the eukaryotic bdelloid rotifers, which have received a range of genes from bacteria, fungi and plants. Viruses can also carry DNA between organisms, allowing transfer of genes even across biological domains.
Large-scale gene transfer has also occurred between the ancestors of eukaryotic cells and bacteria, during the acquisition of chloroplasts and mitochondria. It is possible that eukaryotes themselves originated from horizontal gene transfers between bacteria and archaea.
Epigenetics
Some heritable changes cannot be explained by changes to the sequence of nucleotides in the DNA. These phenomena are classed as epigenetic inheritance systems. DNA methylation marking chromatin, self-sustaining metabolic loops, gene silencing by RNA interference and the three-dimensional conformation of proteins (such as prions) are areas where epigenetic inheritance systems have been discovered at the organismic level. Developmental biologists suggest that complex interactions in genetic networks and communication among cells can lead to heritable variations that may underlie some of the mechanics in developmental plasticity and canalisation. Heritability may also occur at even larger scales. For example, ecological inheritance through the process of niche construction is defined by the regular and repeated activities of organisms in their environment. This generates a legacy of effects that modify and feed back into the selection regime of subsequent generations. Other examples of heritability in evolution that are not under the direct control of genes include the inheritance of cultural traits and symbiogenesis.
Evolutionary forces
From a neo-Darwinian perspective, evolution occurs when there are changes in the frequencies of alleles within a population of interbreeding organisms, for example, the allele for black colour in a population of moths becoming more common. Mechanisms that can lead to changes in allele frequencies include natural selection, genetic drift, and mutation bias.
Natural selection
Evolution by natural selection is the process by which traits that enhance survival and reproduction become more common in successive generations of a population. It embodies three principles:
Variation exists within populations of organisms with respect to morphology, physiology and behaviour (phenotypic variation).
Different traits confer different rates of survival and reproduction (differential fitness).
These traits can be passed from generation to generation (heritability of fitness).
More offspring are produced than can possibly survive, and these conditions produce competition between organisms for survival and reproduction. Consequently, organisms with traits that give them an advantage over their competitors are more likely to pass on their traits to the next generation than those with traits that do not confer an advantage. This teleonomy is the quality whereby the process of natural selection creates and preserves traits that are seemingly fitted for the functional roles they perform. Consequences of selection include nonrandom mating and genetic hitchhiking.
The central concept of natural selection is the evolutionary fitness of an organism. Fitness is measured by an organism's ability to survive and reproduce, which determines the size of its genetic contribution to the next generation. However, fitness is not the same as the total number of offspring: instead fitness is indicated by the proportion of subsequent generations that carry an organism's genes. For example, if an organism could survive well and reproduce rapidly, but its offspring were all too small and weak to survive, this organism would make little genetic contribution to future generations and would thus have low fitness.
If an allele increases fitness more than the other alleles of that gene, then with each generation this allele has a higher probability of becoming common within the population. These traits are said to be selected for. Examples of traits that can increase fitness are enhanced survival and increased fecundity. Conversely, the lower fitness caused by having a less beneficial or deleterious allele results in this allele likely becoming rarer—they are selected against.
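As a rough illustration of how a fitness advantage translates into an allele becoming more common, the sketch below iterates the standard one-locus haploid selection recursion; the starting frequency of 0.01 and the selection coefficient of 0.01 are arbitrary example values chosen for this sketch, not figures from the article.

```python
# Minimal sketch: deterministic change in allele frequency under selection.
# One-locus haploid model with relative fitnesses w_A = 1 + s and w_a = 1.
def select(p, s, generations):
    """Return the frequency of allele A after the given number of generations."""
    for _ in range(generations):
        mean_fitness = p * (1 + s) + (1 - p) * 1.0
        p = p * (1 + s) / mean_fitness  # allele A's share of the next generation
    return p

# Example: a 1% fitness advantage starting from a frequency of 1%.
for g in (0, 100, 500, 1000, 2000):
    print(g, round(select(0.01, 0.01, g), 3))
```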
Importantly, the fitness of an allele is not a fixed characteristic; if the environment changes, previously neutral or harmful traits may become beneficial and previously beneficial traits become harmful. However, even if the direction of selection does reverse in this way, traits that were lost in the past may not re-evolve in an identical form. That said, re-activation of dormant genes, as long as they have not been eliminated from the genome and were only suppressed, perhaps for hundreds of generations, can lead to the re-occurrence of traits thought to be lost, such as hind legs in dolphins, teeth in chickens, wings in wingless stick insects, and tails and additional nipples in humans. "Throwbacks" such as these are known as atavisms.
Charts of the three modes of selection on a trait: directional selection, in which a single extreme phenotype is favoured; stabilising selection, in which the intermediate phenotype is favoured over the extreme traits; and disruptive selection, in which the extreme phenotypes are favoured over the intermediate.
Natural selection within a population for a trait that can vary across a range of values, such as height, can be categorised into three different types. The first is directional selection, which is a shift in the average value of a trait over time—for example, organisms slowly getting taller. Secondly, disruptive selection is selection for extreme trait values and often results in two different values becoming most common, with selection against the average value. This would be when either short or tall organisms had an advantage, but not those of medium height. Finally, in stabilising selection there is selection against extreme trait values on both ends, which causes a decrease in variance around the average value and less diversity. This would, for example, cause organisms to eventually have a similar height.
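For a more concrete picture, these three modes are often illustrated in quantitative-genetics textbooks with simple fitness functions of a trait value z; the forms below are conventional illustrative choices, with optimum θ and selection width ω, and are not equations taken from this article.

```latex
% Illustrative fitness functions w(z) for a trait z with optimum \theta
% and selection width \omega (conventional textbook forms, assumed here):
\begin{align*}
\text{Directional:} &\quad w(z) \propto e^{\beta z} && \text{(fitness rises monotonically with } z\text{)} \\
\text{Stabilising:} &\quad w(z) = e^{-\frac{(z-\theta)^2}{2\omega^2}} && \text{(intermediate values favoured)} \\
\text{Disruptive:}  &\quad w(z) = 1 - e^{-\frac{(z-\theta)^2}{2\omega^2}} && \text{(extreme values favoured)}
\end{align*}
```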
Natural selection most generally makes nature the measure against which individuals, and individual traits, are more or less likely to survive. "Nature" in this sense refers to an ecosystem, that is, a system in which organisms interact with every other element, physical as well as biological, in their local environment. Eugene Odum, a founder of ecology, defined an ecosystem as: "Any unit that includes all of the organisms...in a given area interacting with the physical environment so that a flow of energy leads to clearly defined trophic structure, biotic diversity, and material cycles (i.e., exchange of materials between living and nonliving parts) within the system...." Each population within an ecosystem occupies a distinct niche, or position, with distinct relationships to other parts of the system. These relationships involve the life history of the organism, its position in the food chain and its geographic range. This broad understanding of nature enables scientists to delineate specific forces which, together, comprise natural selection.
Natural selection can act at different levels of organisation, such as genes, cells, individual organisms, groups of organisms and species. Selection can act at multiple levels simultaneously. An example of selection occurring below the level of the individual organism are genes called transposons, which can replicate and spread throughout a genome. Selection at a level above the individual, such as group selection, may allow the evolution of cooperation.
Genetic drift
Genetic drift is the random fluctuation of allele frequencies within a population from one generation to the next. When selective forces are absent or relatively weak, allele frequencies are equally likely to drift upward or downward in each successive generation because the alleles are subject to sampling error. This drift halts when an allele eventually becomes fixed, either by disappearing from the population or by replacing the other alleles entirely. Genetic drift may therefore eliminate some alleles from a population due to chance alone. Even in the absence of selective forces, genetic drift can cause two separate populations that begin with the same genetic structure to drift apart into two divergent populations with different sets of alleles.
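The sampling error described above can be made concrete with a small simulation in the spirit of the Wright-Fisher model, in which each generation's 2N gene copies are drawn at random from the previous generation's allele frequency; the population size and starting frequency below are arbitrary illustrative choices.

```python
import random

# Minimal sketch of neutral genetic drift (Wright-Fisher-style sampling).
def drift(N, p, max_generations=100_000):
    """Track a neutral allele until it is lost (frequency 0) or fixed (frequency 1)."""
    copies = int(2 * N * p)
    for generation in range(max_generations):
        if copies == 0 or copies == 2 * N:
            return copies / (2 * N), generation
        # Each of the 2N gene copies in the next generation is drawn at random
        # from the current generation, so the count fluctuates by chance alone.
        freq = copies / (2 * N)
        copies = sum(random.random() < freq for _ in range(2 * N))
    return copies / (2 * N), max_generations

# Example: identical starting populations drift to different fates.
random.seed(1)
for replicate in range(5):
    outcome, generations = drift(N=100, p=0.5)
    print(f"replicate {replicate}: {'fixed' if outcome == 1.0 else 'lost'} after {generations} generations")
```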
According to the neutral theory of molecular evolution most evolutionary changes are the result of the fixation of neutral mutations by genetic drift. In this model, most genetic changes in a population are thus the result of constant mutation pressure and genetic drift. This form of the neutral theory has been debated since it does not seem to fit some genetic variation seen in nature. A better-supported version of this model is the nearly neutral theory, according to which a mutation that would be effectively neutral in a small population is not necessarily neutral in a large population. Other theories propose that genetic drift is dwarfed by other stochastic forces in evolution, such as genetic hitchhiking, also known as genetic draft. Another concept is constructive neutral evolution (CNE), which explains that complex systems can emerge and spread into a population through neutral transitions due to the principles of excess capacity, presuppression, and ratcheting, and it has been applied in areas ranging from the origins of the spliceosome to the complex interdependence of microbial communities.
The time it takes a neutral allele to become fixed by genetic drift depends on population size; fixation is more rapid in smaller populations. The number of individuals in a population is not critical, but instead a measure known as the effective population size. The effective population is usually smaller than the total population since it takes into account factors such as the level of inbreeding and the stage of the lifecycle in which the population is the smallest. The effective population size may not be the same for every gene in the same population.
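For reference, the classical diffusion-approximation results for a neutral allele make this dependence on population size explicit; these are standard population-genetics results rather than statements from the text above, written for a diploid population of effective size N_e.

```latex
% Classical results for a neutral allele (diffusion approximation):
\begin{align*}
\Pr(\text{fixation of a neutral allele at frequency } p) &= p \\
\Pr(\text{fixation of a new neutral mutation among } 2N \text{ gene copies}) &= \frac{1}{2N} \\
\bar{t}_{\mathrm{fix}} \ (\text{mean time to fixation, given fixation}) &\approx 4N_e \ \text{generations}
\end{align*}
```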
It is usually difficult to measure the relative importance of selection and neutral processes, including drift. The comparative importance of adaptive and non-adaptive forces in driving evolutionary change is an area of current research.
Mutation bias
Mutation bias is usually conceived as a difference in expected rates for two different kinds of mutation, e.g., transition-transversion bias, GC-AT bias, deletion-insertion bias. This is related to the idea of developmental bias. J. B. S. Haldane and Ronald Fisher argued that, because mutation is a weak pressure easily overcome by selection, tendencies of mutation would be ineffectual except under conditions of neutral evolution or extraordinarily high mutation rates. This opposing-pressures argument was long used to dismiss the possibility of internal tendencies in evolution, until the molecular era prompted renewed interest in neutral evolution.
Noboru Sueoka and Ernst Freese proposed that systematic biases in mutation might be responsible for systematic differences in genomic GC composition between species. The identification of a GC-biased E. coli mutator strain in 1967, along with the proposal of the neutral theory, established the plausibility of mutational explanations for molecular patterns, which are now common in the molecular evolution literature.
For instance, mutation biases are frequently invoked in models of codon usage. Such models also include effects of selection, following the mutation-selection-drift model, which allows both for mutation biases and differential selection based on effects on translation. Hypotheses of mutation bias have played an important role in the development of thinking about the evolution of genome composition, including isochores. Different insertion vs. deletion biases in different taxa can lead to the evolution of different genome sizes. The hypothesis of Lynch regarding genome size relies on mutational biases toward increase or decrease in genome size.
However, mutational hypotheses for the evolution of composition suffered a reduction in scope when it was discovered that (1) GC-biased gene conversion makes an important contribution to composition in diploid organisms such as mammals and (2) bacterial genomes frequently have AT-biased mutation.
Contemporary thinking about the role of mutation biases reflects a different theory from that of Haldane and Fisher. More recent work showed that the original "pressures" theory assumes that evolution is based on standing variation: when evolution depends on events of mutation that introduce new alleles, mutational and developmental biases in the introduction of variation (arrival biases) can impose biases on evolution without requiring neutral evolution or high mutation rates.
Several studies report that the mutations implicated in adaptation reflect common mutation biases, though others dispute this interpretation.
Genetic hitchhiking
Recombination allows alleles on the same strand of DNA to become separated. However, the rate of recombination is low (approximately two events per chromosome per generation). As a result, genes close together on a chromosome may not always be shuffled away from each other and genes that are close together tend to be inherited together, a phenomenon known as linkage. This tendency is measured by finding how often two alleles occur together on a single chromosome compared to expectations, which is called their linkage disequilibrium. A set of alleles that is usually inherited in a group is called a haplotype. This can be important when one allele in a particular haplotype is strongly beneficial: natural selection can drive a selective sweep that will also cause the other alleles in the haplotype to become more common in the population; this effect is called genetic hitchhiking or genetic draft. Genetic draft caused by the fact that some neutral genes are genetically linked to others that are under selection can be partially captured by an appropriate effective population size.
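The linkage disequilibrium measure mentioned here has a simple standard definition, and recombination erodes it at a predictable rate; both formulas below are textbook results, with p_AB the frequency of chromosomes carrying both alleles A and B, and r the recombination fraction between the two loci.

```latex
% Standard two-locus linkage disequilibrium and its decay under recombination:
\begin{align*}
D &= p_{AB} - p_A \, p_B \\
D_t &= D_0 \, (1 - r)^t
\end{align*}
```

When D = 0 the alleles at the two loci are associated at random; genetic hitchhiking is strongest where r is small, because D is then slow to decay.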
Sexual selection
A special case of natural selection is sexual selection, which is selection for any trait that increases mating success by increasing the attractiveness of an organism to potential mates. Traits that evolved through sexual selection are particularly prominent among males of several animal species. Although sexually favoured, traits such as cumbersome antlers, mating calls, large body size and bright colours often attract predation, which compromises the survival of individual males. This survival disadvantage is balanced by higher reproductive success in males that show these hard-to-fake, sexually selected traits.
Natural outcomes
Evolution influences every aspect of the form and behaviour of organisms. Most prominent are the specific behavioural and physical adaptations that are the outcome of natural selection. These adaptations increase fitness by aiding activities such as finding food, avoiding predators or attracting mates. Organisms can also respond to selection by cooperating with each other, usually by aiding their relatives or engaging in mutually beneficial symbiosis. In the longer term, evolution produces new species through splitting ancestral populations of organisms into new groups that cannot or will not interbreed. These outcomes of evolution are distinguished based on time scale as macroevolution versus microevolution. Macroevolution refers to evolution that occurs at or above the level of species, in particular speciation and extinction, whereas microevolution refers to smaller evolutionary changes within a species or population, in particular shifts in allele frequency and adaptation. Macroevolution is the outcome of long periods of microevolution. Thus, the distinction between micro- and macroevolution is not a fundamental one—the difference is simply the time involved. However, in macroevolution, the traits of the entire species may be important. For instance, a large amount of variation among individuals allows a species to rapidly adapt to new habitats, lessening the chance of it going extinct, while a wide geographic range increases the chance of speciation, by making it more likely that part of the population will become isolated. In this sense, microevolution and macroevolution might involve selection at different levels—with microevolution acting on genes and organisms, versus macroevolutionary processes such as species selection acting on entire species and affecting their rates of speciation and extinction.
A common misconception is that evolution has goals, long-term plans, or an innate tendency for "progress", as expressed in beliefs such as orthogenesis and evolutionism; realistically, however, evolution has no long-term goal and does not necessarily produce greater complexity. Although complex species have evolved, they occur as a side effect of the overall number of organisms increasing, and simple forms of life still remain more common in the biosphere. For example, the overwhelming majority of species are microscopic prokaryotes, which form about half the world's biomass despite their small size and constitute the vast majority of Earth's biodiversity. Simple organisms have therefore been the dominant form of life on Earth throughout its history and continue to be the main form of life up to the present day, with complex life only appearing more diverse because it is more noticeable. Indeed, the evolution of microorganisms is particularly important to evolutionary research since their rapid reproduction allows the study of experimental evolution and the observation of evolution and adaptation in real time.
Adaptation
Adaptation is the process that makes organisms better suited to their habitat: "Adaptation... could no longer be considered a static condition, a product of a creative past and became instead a continuing dynamic process." The sixth edition of the Oxford Dictionary of Science (2010) defines adaptation as "Any change in the structure or functioning of successive generations of a population that makes it better suited to its environment." The term adaptation may also refer to a trait that is important for an organism's survival, for example the adaptation of horses' teeth to the grinding of grass. By using the term adaptation for the evolutionary process and adaptive trait for the product (the bodily part or function), the two senses of the word may be distinguished. Adaptations are produced by natural selection. The following definitions are due to Theodosius Dobzhansky:
Adaptation is the evolutionary process whereby an organism becomes better able to live in its habitat or habitats.
Adaptedness is the state of being adapted: the degree to which an organism is able to live and reproduce in a given set of habitats.
An adaptive trait is an aspect of the developmental pattern of the organism which enables or enhances the probability of that organism surviving and reproducing.
Adaptation may cause either the gain of a new feature, or the loss of an ancestral feature. An example that shows both types of change is bacterial adaptation to antibiotic selection, with genetic changes causing antibiotic resistance by both modifying the target of the drug, or increasing the activity of transporters that pump the drug out of the cell. Other striking examples are the bacteria Escherichia coli evolving the ability to use citric acid as a nutrient in a long-term laboratory experiment, Flavobacterium evolving a novel enzyme that allows these bacteria to grow on the by-products of nylon manufacturing, and the soil bacterium Sphingobium evolving an entirely new metabolic pathway that degrades the synthetic pesticide pentachlorophenol. An interesting but still controversial idea is that some adaptations might increase the ability of organisms to generate genetic diversity and adapt by natural selection (increasing organisms' evolvability).
A baleen whale skeleton. Letters a and b label flipper bones, which were adapted from front leg bones, while c indicates vestigial leg bones, both suggesting an adaptation from land to sea.
Adaptation occurs through the gradual modification of existing structures. Consequently, structures with similar internal organisation may have different functions in related organisms. This is the result of a single ancestral structure being adapted to function in different ways. The bones within bat wings, for example, are very similar to those in mice feet and primate hands, due to the descent of all these structures from a common mammalian ancestor. However, since all living organisms are related to some extent, even organs that appear to have little or no structural similarity, such as arthropod, squid and vertebrate eyes, or the limbs and wings of arthropods and vertebrates, can depend on a common set of homologous genes that control their assembly and function; this is called deep homology.
During evolution, some structures may lose their original function and become vestigial structures. Such structures may have little or no function in a current species, yet have a clear function in ancestral species, or other closely related species. Examples include pseudogenes, the non-functional remains of eyes in blind cave-dwelling fish, wings in flightless birds, the presence of hip bones in whales and snakes, and sexual traits in organisms that reproduce via asexual reproduction. Examples of vestigial structures in humans include wisdom teeth, the coccyx, the vermiform appendix, and other behavioural vestiges such as goose bumps and primitive reflexes.
However, many traits that appear to be simple adaptations are in fact exaptations: structures originally adapted for one function, but which coincidentally became somewhat useful for some other function in the process. One example is the African lizard Holaspis guentheri, which developed an extremely flat head for hiding in crevices, as can be seen by looking at its near relatives. However, in this species, the head has become so flattened that it assists in gliding from tree to tree—an exaptation. Within cells, molecular machines such as the bacterial flagella and protein sorting machinery evolved by the recruitment of several pre-existing proteins that previously had different functions. Another example is the recruitment of enzymes from glycolysis and xenobiotic metabolism to serve as structural proteins called crystallins within the lenses of organisms' eyes.
An area of current investigation in evolutionary developmental biology is the developmental basis of adaptations and exaptations. This research addresses the origin and evolution of embryonic development and how modifications of development and developmental processes produce novel features. These studies have shown that evolution can alter development to produce new structures, such as embryonic bone structures that develop into the jaw in other animals instead forming part of the middle ear in mammals. It is also possible for structures that have been lost in evolution to reappear due to changes in developmental genes, such as a mutation in chickens causing embryos to grow teeth similar to those of crocodiles. It is now becoming clear that most alterations in the form of organisms are due to changes in a small set of conserved genes.
Coevolution
Interactions between organisms can produce both conflict and cooperation. When the interaction is between pairs of species, such as a pathogen and a host, or a predator and its prey, these species can develop matched sets of adaptations. Here, the evolution of one species causes adaptations in a second species. These changes in the second species then, in turn, cause new adaptations in the first species. This cycle of selection and response is called coevolution. An example is the production of tetrodotoxin in the rough-skinned newt and the evolution of tetrodotoxin resistance in its predator, the common garter snake. In this predator-prey pair, an evolutionary arms race has produced high levels of toxin in the newt and correspondingly high levels of toxin resistance in the snake.
Cooperation
Not all co-evolved interactions between species involve conflict. Many cases of mutually beneficial interactions have evolved. For instance, extreme cooperation exists between plants and the mycorrhizal fungi that grow on their roots and aid the plant in absorbing nutrients from the soil. This is a reciprocal relationship as the plants provide the fungi with sugars from photosynthesis. Here, the fungi actually grow inside plant cells, allowing them to exchange nutrients with their hosts, while sending signals that suppress the plant immune system.
Coalitions between organisms of the same species have also evolved. An extreme case is the eusociality found in social insects, such as bees, termites and ants, where sterile insects feed and guard the small number of organisms in a colony that are able to reproduce. On an even smaller scale, the somatic cells that make up the body of an animal limit their reproduction so they can maintain a stable organism, which then supports a small number of the animal's germ cells to produce offspring. Here, somatic cells respond to specific signals that instruct them whether to grow, remain as they are, or die. If cells ignore these signals and multiply inappropriately, their uncontrolled growth causes cancer.
Such cooperation within species may have evolved through the process of kin selection, which is where one organism acts to help raise a relative's offspring. This activity is selected for because if the helping individual contains alleles which promote the helping activity, it is likely that its kin will also contain these alleles and thus those alleles will be passed on. Other processes that may promote cooperation include group selection, where cooperation provides benefits to a group of organisms.
Speciation
Speciation is the process where a species diverges into two or more descendant species.
There are multiple ways to define the concept of "species". The choice of definition is dependent on the particularities of the species concerned. For example, some species concepts apply more readily toward sexually reproducing organisms while others lend themselves better toward asexual organisms. Despite the diversity of various species concepts, these various concepts can be placed into one of three broad philosophical approaches: interbreeding, ecological and phylogenetic. The Biological Species Concept (BSC) is a classic example of the interbreeding approach. Defined by evolutionary biologist Ernst Mayr in 1942, the BSC states that "species are groups of actually or potentially interbreeding natural populations, which are reproductively isolated from other such groups." Despite its wide and long-term use, the BSC like other species concepts is not without controversy, for example, because genetic recombination among prokaryotes is not an intrinsic aspect of reproduction; this is called the species problem. Some researchers have attempted a unifying monistic definition of species, while others adopt a pluralistic approach and suggest that there may be different ways to logically interpret the definition of a species.
Barriers to reproduction between two diverging sexual populations are required for the populations to become new species. Gene flow may slow this process by spreading the new genetic variants also to the other populations. Depending on how far two species have diverged since their most recent common ancestor, it may still be possible for them to produce offspring, as with horses and donkeys mating to produce mules. Such hybrids are generally infertile. In this case, closely related species may regularly interbreed, but hybrids will be selected against and the species will remain distinct. However, viable hybrids are occasionally formed and these new species can either have properties intermediate between their parent species, or possess a totally new phenotype. The importance of hybridisation in producing new species of animals is unclear, although cases have been seen in many types of animals, with the grey tree frog being a particularly well-studied example.
Speciation has been observed multiple times under both controlled laboratory conditions and in nature.
In sexually reproducing organisms, speciation results from reproductive isolation followed by genealogical divergence. There are four primary geographic modes of speciation. The most common in animals is allopatric speciation, which occurs in populations initially isolated geographically, such as by habitat fragmentation or migration. Selection under these conditions can produce very rapid changes in the appearance and behaviour of organisms. As selection and drift act independently on populations isolated from the rest of their species, separation may eventually produce organisms that cannot interbreed.
The second mode of speciation is peripatric speciation, which occurs when small populations of organisms become isolated in a new environment. This differs from allopatric speciation in that the isolated populations are numerically much smaller than the parental population. Here, the founder effect causes rapid speciation after an increase in inbreeding increases selection on homozygotes, leading to rapid genetic change.
The third mode is parapatric speciation. This is similar to peripatric speciation in that a small population enters a new habitat, but differs in that there is no physical separation between these two populations. Instead, speciation results from the evolution of mechanisms that reduce gene flow between the two populations. Generally this occurs when there has been a drastic change in the environment within the parental species' habitat. One example is the grass Anthoxanthum odoratum, which can undergo parapatric speciation in response to localised metal pollution from mines. Here, plants evolve that have resistance to high levels of metals in the soil. Selection against interbreeding with the metal-sensitive parental population produced a gradual change in the flowering time of the metal-resistant plants, which eventually produced complete reproductive isolation. Selection against hybrids between the two populations may cause reinforcement, which is the evolution of traits that promote mating within a species, as well as character displacement, which is when two species become more distinct in appearance.
Finally, in sympatric speciation species diverge without geographic isolation or changes in habitat. This form is rare since even a small amount of gene flow may remove genetic differences between parts of a population. Generally, sympatric speciation in animals requires the evolution of both genetic differences and nonrandom mating, to allow reproductive isolation to evolve.
One type of sympatric speciation involves crossbreeding of two related species to produce a new hybrid species. This is not common in animals as animal hybrids are usually sterile. This is because during meiosis the homologous chromosomes from each parent are from different species and cannot successfully pair. However, it is more common in plants because plants often double their number of chromosomes, to form polyploids. This allows the chromosomes from each parental species to form matching pairs during meiosis, since each parent's chromosomes are represented by a pair already. An example of such a speciation event is when the plant species Arabidopsis thaliana and Arabidopsis arenosa crossbred to give the new species Arabidopsis suecica. This happened about 20,000 years ago, and the speciation process has been repeated in the laboratory, which allows the study of the genetic mechanisms involved in this process. Indeed, chromosome doubling within a species may be a common cause of reproductive isolation, as half the doubled chromosomes will be unmatched when breeding with undoubled organisms.
Speciation events are important in the theory of punctuated equilibrium, which accounts for the pattern in the fossil record of short "bursts" of evolution interspersed with relatively long periods of stasis, where species remain relatively unchanged. In this theory, speciation and rapid evolution are linked, with natural selection and genetic drift acting most strongly on organisms undergoing speciation in novel habitats or small populations. As a result, the periods of stasis in the fossil record correspond to the parental population, while the organisms undergoing speciation and rapid evolution are found in small populations or geographically restricted habitats and are therefore rarely preserved as fossils.
Extinction
Extinction is the disappearance of an entire species. Extinction is not an unusual event, as species regularly appear through speciation and disappear through extinction. Nearly all animal and plant species that have lived on Earth are now extinct, and extinction appears to be the ultimate fate of all species. These extinctions have happened continuously throughout the history of life, although the rate of extinction spikes in occasional mass extinction events. The Cretaceous–Paleogene extinction event, during which the non-avian dinosaurs became extinct, is the most well-known, but the earlier Permian–Triassic extinction event was even more severe, with approximately 96% of all marine species driven to extinction. The Holocene extinction event is an ongoing mass extinction associated with humanity's expansion across the globe over the past few thousand years. Present-day extinction rates are 100–1000 times greater than the background rate and up to 30% of current species may be extinct by the mid 21st century. Human activities are now the primary cause of the ongoing extinction event; global warming may further accelerate it in the future. Despite the estimated extinction of more than 99% of all species that ever lived on Earth, about 1 trillion species are estimated to be on Earth currently with only one-thousandth of 1% described.
The role of extinction in evolution is not very well understood and may depend on which type of extinction is considered. The causes of the continuous "low-level" extinction events, which form the majority of extinctions, may be the result of competition between species for limited resources (the competitive exclusion principle). If one species can out-compete another, this could produce species selection, with the fitter species surviving and the other species being driven to extinction. The intermittent mass extinctions are also important, but instead of acting as a selective force, they drastically reduce diversity in a nonspecific manner and promote bursts of rapid evolution and speciation in survivors.
Applications
Concepts and models used in evolutionary biology, such as natural selection, have many applications.
Artificial selection is the intentional selection of traits in a population of organisms. This has been used for thousands of years in the domestication of plants and animals. More recently, such selection has become a vital part of genetic engineering, with selectable markers such as antibiotic resistance genes being used to manipulate DNA. Proteins with valuable properties have evolved by repeated rounds of mutation and selection (for example modified enzymes and new antibodies) in a process called directed evolution.
Understanding the changes that have occurred during an organism's evolution can reveal the genes needed to construct parts of the body, genes which may be involved in human genetic disorders. For example, the Mexican tetra is an albino cavefish that lost its eyesight during evolution. Breeding together different populations of this blind fish produced some offspring with functional eyes, since different mutations had occurred in the isolated populations that had evolved in different caves. This helped identify genes required for vision and pigmentation.
Evolutionary theory has many applications in medicine. Many human diseases are not static phenomena, but capable of evolution. Viruses, bacteria, fungi and cancers evolve to be resistant to host immune defences, as well as to pharmaceutical drugs. These same problems occur in agriculture with pesticide and herbicide resistance. It is possible that we are facing the end of the effective life of most available antibiotics; predicting the evolution and evolvability of pathogens, and devising strategies to slow or circumvent them, requires deeper knowledge of the complex forces driving evolution at the molecular level.
In computer science, simulations of evolution using evolutionary algorithms and artificial life started in the 1960s and were extended with simulation of artificial selection. Artificial evolution became a widely recognised optimisation method as a result of the work of Ingo Rechenberg in the 1960s. He used evolution strategies to solve complex engineering problems. Genetic algorithms in particular became popular through the writing of John Henry Holland. Practical applications also include automatic evolution of computer programmes. Evolutionary algorithms are now used to solve multi-dimensional problems more efficiently than software produced by human designers and also to optimise the design of systems.
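To make the idea of an evolutionary algorithm concrete, here is a minimal, hedged sketch in Python of a genetic algorithm evolving a bit string toward a simple fitness target. The population size, mutation rate, and fitness function are arbitrary illustrative choices, not taken from any system mentioned in the text.

```python
import random

# Toy genetic algorithm: evolve a bit string toward all ones.
# Population size, mutation rate, and fitness function are arbitrary
# illustrative choices, not drawn from any particular system.
TARGET_LEN = 20
POP_SIZE = 50
MUTATION_RATE = 0.02
GENERATIONS = 100

def fitness(individual):
    # Fitness = number of 1-bits; higher is better.
    return sum(individual)

def mutate(individual):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in individual]

def crossover(a, b):
    point = random.randrange(1, TARGET_LEN)
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Selection: keep the fitter half, then refill by crossover + mutation.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)))
```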
Evolutionary history of life
Origin of life
The Earth is about 4.54 billion years old. The earliest undisputed evidence of life on Earth dates from at least 3.5 billion years ago, during the Eoarchean Era after a geological crust started to solidify following the earlier molten Hadean Eon. Microbial mat fossils have been found in 3.48 billion-year-old sandstone in Western Australia. Other early physical evidence of a biogenic substance is graphite in 3.7 billion-year-old metasedimentary rocks discovered in Western Greenland as well as "remains of biotic life" found in 4.1 billion-year-old rocks in Western Australia. Commenting on the Australian findings, Stephen Blair Hedges wrote: "If life arose relatively quickly on Earth, then it could be common in the universe." In July 2016, scientists reported identifying a set of 355 genes from the last universal common ancestor (LUCA) of all organisms living on Earth.
More than 99% of all species that ever lived on Earth, amounting to over five billion species, are estimated to be extinct. Estimates on the number of Earth's current species range from 10 million to 14 million, of which about 1.9 million are estimated to have been named and 1.6 million documented in a central database to date, leaving at least 80% not yet described.
Highly energetic chemistry is thought to have produced a self-replicating molecule around 4 billion years ago, and half a billion years later the last common ancestor of all life existed. The current scientific consensus is that the complex biochemistry that makes up life came from simpler chemical reactions. The beginning of life may have included self-replicating molecules such as RNA and the assembly of simple cells.
Common descent
All organisms on Earth are descended from a common ancestor or ancestral gene pool. Current species are a stage in the process of evolution, with their diversity the product of a long series of speciation and extinction events. The common descent of organisms was first deduced from four simple facts about organisms: First, they have geographic distributions that cannot be explained by local adaptation. Second, the diversity of life is not a set of completely unique organisms, but organisms that share morphological similarities. Third, vestigial traits with no clear purpose resemble functional ancestral traits. Fourth, organisms can be classified using these similarities into a hierarchy of nested groups, similar to a family tree.
The hominoids are descendants of a common ancestor.
Due to horizontal gene transfer, this "tree of life" may be more complicated than a simple branching tree, since some genes have spread independently between distantly related species. To solve this problem and others, some authors prefer to use the "Coral of life" as a metaphor or a mathematical model to illustrate the evolution of life. This view dates back to an idea briefly mentioned by Darwin but later abandoned.
Past species have also left records of their evolutionary history. Fossils, along with the comparative anatomy of present-day organisms, constitute the morphological, or anatomical, record. By comparing the anatomies of both modern and extinct species, palaeontologists can infer the lineages of those species. However, this approach is most successful for organisms that had hard body parts, such as shells, bones or teeth. Further, as prokaryotes such as bacteria and archaea share a limited set of common morphologies, their fossils do not provide information on their ancestry.
More recently, evidence for common descent has come from the study of biochemical similarities between organisms. For example, all living cells use the same basic set of nucleotides and amino acids. The development of molecular genetics has revealed the record of evolution left in organisms' genomes: dating when species diverged through the molecular clock produced by mutations. For example, these DNA sequence comparisons have revealed that humans and chimpanzees share 98% of their genomes and analysing the few areas where they differ helps shed light on when the common ancestor of these species existed.
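As a rough illustration of the molecular-clock reasoning described above, the sketch below estimates a divergence time from an observed sequence difference and an assumed substitution rate. Both numbers are illustrative assumptions, not measurements from the text.

```python
# Hedged sketch of a molecular-clock estimate: divergence time is roughly the
# observed substitutions per site divided by twice the per-lineage rate
# (both lineages accumulate changes after the split). The rate and the
# observed divergence below are illustrative assumptions, not measured values.

def divergence_time(substitutions_per_site, rate_per_site_per_year):
    return substitutions_per_site / (2.0 * rate_per_site_per_year)

# Example: 1.2% sequence divergence and an assumed neutral rate of
# 1e-9 substitutions per site per year.
t = divergence_time(0.012, 1e-9)
print(f"estimated divergence: {t / 1e6:.1f} million years ago")
```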
Evolution of life
Prokaryotes inhabited the Earth from approximately 3–4 billion years ago. No obvious changes in morphology or cellular organisation occurred in these organisms over the next few billion years. The eukaryotic cells emerged between 1.6 and 2.7 billion years ago. The next major change in cell structure came when bacteria were engulfed by eukaryotic cells, in a cooperative association called endosymbiosis. The engulfed bacteria and the host cell then underwent coevolution, with the bacteria evolving into either mitochondria or hydrogenosomes. Another engulfment of cyanobacterial-like organisms led to the formation of chloroplasts in algae and plants.
The history of life was that of the unicellular eukaryotes, prokaryotes and archaea until around 1.7 billion years ago, when multicellular organisms began to appear, with differentiated cells performing specialised functions. The evolution of multicellularity occurred in multiple independent events, in organisms as diverse as sponges, brown algae, cyanobacteria, slime moulds and myxobacteria. In January 2016, scientists reported that, about 800 million years ago, a minor genetic change in a single molecule called GK-PID may have allowed organisms to go from a single cell organism to one of many cells.
Approximately 538.8 million years ago, a remarkable amount of biological diversity appeared over a span of around 10 million years in what is called the Cambrian explosion. Here, the majority of types of modern animals appeared in the fossil record, as well as unique lineages that subsequently became extinct. Various triggers for the Cambrian explosion have been proposed, including the accumulation of oxygen in the atmosphere from photosynthesis.
About 500 million years ago, plants and fungi colonised the land and were soon followed by arthropods and other animals. Insects were particularly successful and even today make up the majority of animal species. Amphibians first appeared around 364 million years ago, followed by early amniotes and birds around 155 million years ago (both from "reptile"-like lineages), mammals around 129 million years ago, Homininae around 10 million years ago and modern humans around 250,000 years ago. However, despite the evolution of these large animals, smaller organisms similar to the types that evolved early in this process continue to be highly successful and dominate the Earth, with the majority of both biomass and species being prokaryotes.
History of evolutionary thought
Classical antiquity
The proposal that one type of organism could descend from another type goes back to some of the first pre-Socratic Greek philosophers, such as Anaximander and Empedocles. Such proposals survived into Roman times. The poet and philosopher Lucretius followed Empedocles in his masterwork De rerum natura ().
Middle Ages
In contrast to these materialistic views, Aristotelianism had considered all natural things as actualisations of fixed natural possibilities, known as forms. This became part of a medieval teleological understanding of nature in which all things have an intended role to play in a divine cosmic order. Variations of this idea became the standard understanding of the Middle Ages and were integrated into Christian learning, but Aristotle did not demand that real types of organisms always correspond one-for-one with exact metaphysical forms and specifically gave examples of how new types of living things could come to be.
A number of Arab Muslim scholars wrote about evolution, most notably Ibn Khaldun, who wrote the book Muqaddimah in 1377, in which he asserted that humans developed from "the world of the monkeys", in a process by which "species become more numerous".Kiros, Teodros. Explorations in African Political Thought. 2001, page 55
Pre-Darwinian
The "New Science" of the 17th century rejected the Aristotelian approach. It sought to explain natural phenomena in terms of physical laws that were the same for all visible things and that did not require the existence of any fixed natural categories or divine cosmic order. However, this new approach was slow to take root in the biological sciences: the last bastion of the concept of fixed natural types. John Ray applied one of the previously more general terms for fixed natural types, "species", to plant and animal types, but he strictly identified each type of living thing as a species and proposed that each species could be defined by the features that perpetuated themselves generation after generation. The biological classification introduced by Carl Linnaeus in 1735 explicitly recognised the hierarchical nature of species relationships, but still viewed species as fixed according to a divine plan.
Other naturalists of this time speculated on the evolutionary change of species over time according to natural laws. In 1751, Pierre Louis Maupertuis wrote of natural modifications occurring during reproduction and accumulating over many generations to produce new species. Georges-Louis Leclerc, Comte de Buffon, suggested that species could degenerate into different organisms, and Erasmus Darwin proposed that all warm-blooded animals could have descended from a single microorganism (or "filament"). The first full-fledged evolutionary scheme was Jean-Baptiste Lamarck's "transmutation" theory of 1809, which envisaged spontaneous generation continually producing simple forms of life that developed greater complexity in parallel lineages with an inherent progressive tendency, and postulated that on a local level, these lineages adapted to the environment by inheriting changes caused by their use or disuse in parents. (The latter process was later called Lamarckism.) These ideas were condemned by established naturalists as speculation lacking empirical support. In particular, Georges Cuvier insisted that species were unrelated and fixed, their similarities reflecting divine design for functional needs. In the meantime, Ray's ideas of benevolent design had been developed by William Paley into the Natural Theology or Evidences of the Existence and Attributes of the Deity (1802), which proposed complex adaptations as evidence of divine design and which was admired by Charles Darwin. Letter 2532, 22 November 1859.
Darwinian revolution
The crucial break from the concept of constant typological classes or types in biology came with the theory of evolution through natural selection, which was formulated by Charles Darwin and Alfred Wallace in terms of variable populations. Darwin used the expression descent with modification rather than evolution. Partly influenced by An Essay on the Principle of Population (1798) by Thomas Robert Malthus, Darwin noted that population growth would lead to a "struggle for existence" in which favourable variations prevailed as others perished. In each generation, many offspring fail to survive to an age of reproduction because of limited resources. This could explain the diversity of plants and animals from a common ancestry through the working of natural laws in the same way for all types of organism. Darwin developed his theory of "natural selection" from 1838 onwards and was writing up his "big book" on the subject when Alfred Russel Wallace sent him a version of virtually the same theory in 1858. Their separate papers were presented together at an 1858 meeting of the Linnean Society of London. At the end of 1859, Darwin's publication of his "abstract" as On the Origin of Species explained natural selection in detail and in a way that led to an increasingly wide acceptance of Darwin's concepts of evolution at the expense of alternative theories. Thomas Henry Huxley applied Darwin's ideas to humans, using palaeontology and comparative anatomy to provide strong evidence that humans and apes shared a common ancestry. Some were disturbed by this since it implied that humans did not have a special place in the universe.
Othniel C. Marsh, America's first palaeontologist, was the first to provide solid fossil evidence to support Darwin's theory of evolution by unearthing the ancestors of the modern horse.Plate, Robert. The Dinosaur Hunters: Othniel C. Marsh and Edward D. Cope, pp. 69, 203–205, David McKay, New York, 1964. In 1877, Marsh delivered a very influential speech before the annual meeting of the American Association for the Advancement of Science, providing a demonstrative argument for evolution. For the first time, Marsh traced the evolution of vertebrates from fish all the way through humans. Sparing no detail, he listed a wealth of fossil examples of past life forms. The significance of this speech was immediately recognised by the scientific community, and it was printed in its entirety in several scientific journals.McCarren, Mark J. The Scientific Contributions of Othniel Charles Marsh, pp. 37–39, Peabody Museum of Natural History, Yale University, New Haven, Connecticut, 1993. Plate, Robert. The Dinosaur Hunters: Othniel C. Marsh and Edward D. Cope, pp. 188–189, David McKay, New York, 1964.
In 1880, Marsh caught the attention of the scientific world with the publication of Odontornithes: a Monograph on Extinct Birds of North America, which included his discoveries of birds with teeth. These skeletons helped bridge the gap between dinosaurs and birds, and provided invaluable support for Darwin's theory of evolution. Darwin wrote to Marsh saying, "Your work on these old birds & on the many fossil animals of N. America has afforded the best support to the theory of evolution, which has appeared within the last 20 years" (since Darwin's publication of Origin of Species).Plate, Robert. The Dinosaur Hunters: Othniel C. Marsh and Edward D. Cope, pp. 210–211, David McKay, New York, 1964.Cianfaglione, Paul. "O.C. Marsh Odontornithes Monograph Still Relevant Today", 20 July 2016, Avian Musings: "going beyond the field mark."
Pangenesis and heredity
The mechanisms of reproductive heritability and the origin of new traits remained a mystery. Towards this end, Darwin developed his provisional theory of pangenesis. In 1865, Gregor Mendel reported that traits were inherited in a predictable manner through the independent assortment and segregation of elements (later known as genes). Mendel's laws of inheritance eventually supplanted most of Darwin's pangenesis theory. August Weismann made the important distinction between germ cells that give rise to gametes (such as sperm and egg cells) and the somatic cells of the body, demonstrating that heredity passes through the germ line only. Hugo de Vries connected Darwin's pangenesis theory to Weismann's germ/soma cell distinction and proposed that Darwin's pangenes were concentrated in the cell nucleus and when expressed they could move into the cytoplasm to change the cell's structure. De Vries was also one of the researchers who made Mendel's work well known, believing that Mendelian traits corresponded to the transfer of heritable variations along the germline. To explain how new variants originate, de Vries developed a mutation theory that led to a temporary rift between those who accepted Darwinian evolution and biometricians who allied with de Vries. In the 1930s, pioneers in the field of population genetics, such as Ronald Fisher, Sewall Wright and J. B. S. Haldane set the foundations of evolution onto a robust statistical philosophy. The false contradiction between Darwin's theory, genetic mutations, and Mendelian inheritance was thus reconciled.
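To illustrate the "predictable manner" of Mendelian segregation mentioned above, here is a small Python sketch of a monohybrid cross between two heterozygotes, reproducing the familiar 1:2:1 genotype and 3:1 phenotype ratios; the allele labels are arbitrary.

```python
from collections import Counter
from itertools import product

# Hedged illustration of Mendelian segregation in a monohybrid cross:
# two Aa heterozygotes each pass one allele at random, giving the familiar
# 1 AA : 2 Aa : 1 aa genotype ratio (3:1 dominant:recessive phenotypes).
parent1, parent2 = "Aa", "Aa"
offspring = Counter("".join(sorted(pair)) for pair in product(parent1, parent2))
print(offspring)                      # Counter({'Aa': 2, 'AA': 1, 'aa': 1})

phenotypes = Counter("dominant" if "A" in genotype else "recessive"
                     for genotype in offspring.elements())
print(phenotypes)                     # 3 dominant : 1 recessive
```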
The 'modern synthesis'
In the 1920s and 1930s, the modern synthesis connected natural selection and population genetics, based on Mendelian inheritance, into a unified theory that included random genetic drift, mutation, and gene flow. This new version of evolutionary theory focused on changes in allele frequencies in populations. It explained patterns observed across species in populations and through fossil transitions in palaeontology.
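A minimal sketch of one ingredient of that theory, random genetic drift, is given below as a Wright–Fisher-style simulation in Python; the population size and starting allele frequency are arbitrary values chosen for illustration.

```python
import random

# Minimal Wright-Fisher sketch of random genetic drift: each generation,
# 2N allele copies are drawn from the previous generation's frequency.
# Population size and starting frequency are arbitrary illustrative values.
N = 100          # diploid individuals, so 2N allele copies
p = 0.5          # starting frequency of allele A
GENERATIONS = 200

for gen in range(GENERATIONS):
    copies_of_A = sum(1 for _ in range(2 * N) if random.random() < p)
    p = copies_of_A / (2 * N)
    if p in (0.0, 1.0):
        print(f"allele {'fixed' if p == 1.0 else 'lost'} at generation {gen}")
        break
else:
    print(f"frequency after {GENERATIONS} generations: {p:.2f}")
```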
Further syntheses
Since then, further syntheses have extended evolution's explanatory power in the light of numerous discoveries, to cover biological phenomena across the whole of the biological hierarchy from genes to populations.
The publication of the structure of DNA by James Watson and Francis Crick with contribution of Rosalind Franklin in 1953 demonstrated a physical mechanism for inheritance. Molecular biology improved understanding of the relationship between genotype and phenotype. Advances were also made in phylogenetic systematics, mapping the transition of traits into a comparative and testable framework through the publication and use of evolutionary trees. In 1973, evolutionary biologist Theodosius Dobzhansky penned that "nothing in biology makes sense except in the light of evolution", because it has brought to light the relations of what first seemed disjointed facts in natural history into a coherent explanatory body of knowledge that describes and predicts many observable facts about life on this planet.
One extension, known as evolutionary developmental biology and informally called "evo-devo", emphasises how changes between generations (evolution) act on patterns of change within individual organisms (development). Since the beginning of the 21st century, some biologists have argued for an extended evolutionary synthesis, which would account for the effects of non-genetic inheritance modes, such as epigenetics, parental effects, ecological inheritance and cultural inheritance, and evolvability.
Social and cultural responses
In the 19th century, particularly after the publication of On the Origin of Species in 1859, the idea that life had evolved was an active source of academic debate centred on the philosophical, social and religious implications of evolution. Today, the modern evolutionary synthesis is accepted by a vast majority of scientists. However, evolution remains a contentious concept for some theists.For an overview of the philosophical, religious and cosmological controversies, see:
For the scientific and social reception of evolution in the 19th and early 20th centuries, see:
While various religions and denominations have reconciled their beliefs with evolution through concepts such as theistic evolution, there are creationists who believe that evolution is contradicted by the creation myths found in their religions and who raise various objections to evolution. As had been demonstrated by responses to the publication of Vestiges of the Natural History of Creation in 1844, the most controversial aspect of evolutionary biology is the implication of human evolution that humans share common ancestry with apes and that the mental and moral faculties of humanity have the same types of natural causes as other inherited traits in animals. In some countries, notably the United States, these tensions between science and religion have fuelled the current creation–evolution controversy, a religious conflict focusing on politics and public education. While other scientific fields such as cosmology and Earth science also conflict with literal interpretations of many religious texts, evolutionary biology experiences significantly more opposition from religious literalists.
The teaching of evolution in American secondary school biology classes was uncommon in most of the first half of the 20th century. The Scopes trial decision of 1925 caused the subject to become very rare in American secondary biology textbooks for a generation, but it was gradually re-introduced later and became legally protected with the 1968 Epperson v. Arkansas decision. Since then, the competing religious belief of creationism was legally disallowed in secondary school curricula in various decisions in the 1970s and 1980s, but it returned in pseudoscientific form as intelligent design (ID), to be excluded once again in the 2005 Kitzmiller v. Dover Area School District case. The debate over Darwin's ideas did not generate significant controversy in China.
See also
Chronospecies
References
Further reading
Introductory reading
Advanced reading
External links
General information
"History of Evolution in the United States". Salon. Retrieved 2021-08-24.
Experiments
Online lectures
Endocrine system
https://en.wikipedia.org/wiki/Endocrine_system
The endocrine system is a messenger system in an organism comprising feedback loops of hormones that are released by internal glands directly into the circulatory system and that target and regulate distant organs. In vertebrates, the hypothalamus is the neural control center for all endocrine systems.
In humans, the major endocrine glands are the thyroid, parathyroid, pituitary, pineal, and adrenal glands, and the (male) testes and (female) ovaries. The hypothalamus, pancreas, and thymus also function as endocrine glands, among other functions. (The hypothalamus and pituitary glands are organs of the neuroendocrine system. One of the most important functions of the hypothalamus, which is located in the brain adjacent to the pituitary gland, is to link the endocrine system to the nervous system via the pituitary gland.) Other organs, such as the kidneys, also have roles within the endocrine system by secreting certain hormones. The study of the endocrine system and its disorders is known as endocrinology.
The thyroid secretes thyroxine, the pituitary secretes growth hormone, the pineal secretes melatonin, the testis secretes testosterone, and the ovaries secrete estrogen and progesterone.
Glands that signal each other in sequence are often referred to as an axis, such as the hypothalamic–pituitary–adrenal axis. In addition to the specialized endocrine organs mentioned above, many other organs that are part of other body systems have secondary endocrine functions, including bone, kidneys, liver, heart and gonads. For example, the kidney secretes the endocrine hormone erythropoietin. Hormones can be amino acid complexes, steroids, eicosanoids, leukotrienes, or prostaglandins.
The endocrine system is contrasted both to exocrine glands, which secrete their products through ducts to the outside of the body rather than releasing hormones into the blood, and to the system known as paracrine signalling between cells over a relatively short distance. Endocrine glands have no ducts, are vascular, and commonly have intracellular vacuoles or granules that store their hormones. In contrast, exocrine glands, such as salivary glands, mammary glands, and submucosal glands within the gastrointestinal tract, tend to be much less vascular and have ducts or a hollow lumen.
Endocrinology is a branch of internal medicine.
Structure
Major endocrine systems
The human endocrine system consists of several systems that operate via feedback loops. Several important feedback systems are mediated via the hypothalamus and pituitary.
TRH – TSH – T3/T4
GnRH – LH/FSH – sex hormones
CRH – ACTH – cortisol
Renin – angiotensin – aldosterone
Leptin vs. ghrelin
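The axes listed above share the same negative-feedback logic: the downstream hormone suppresses the upstream stimulus. The following toy simulation, loosely inspired by the TRH–TSH–T3/T4 axis, is a hedged sketch only; the rate constants and set point are invented for illustration and are not physiological values.

```python
# Toy negative-feedback loop: pituitary output rises when the circulating
# hormone is below the set point and falls when it is above, while the gland's
# secretion is driven by that output. All constants are invented.
set_point = 1.0      # desired hormone level (arbitrary units)
hormone = 0.2        # circulating hormone (e.g., T4)
stimulus = 1.0       # pituitary output (e.g., TSH)

for step in range(50):
    # Negative feedback: the error (set point minus hormone) drives the stimulus.
    stimulus += 0.3 * (set_point - hormone)
    stimulus = max(stimulus, 0.0)
    # Hormone is secreted in proportion to the stimulus and cleared over time.
    hormone += 0.2 * stimulus - 0.1 * hormone

print(f"hormone {hormone:.2f}, stimulus {stimulus:.2f} after 50 steps")
```

With these constants the loop settles near the set point, which is the qualitative behaviour the axes above are describing; changing the clearance or secretion rates shifts where it settles.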
Glands
Endocrine glands are glands of the endocrine system that secrete their products, hormones, directly into interstitial spaces where they are absorbed into blood rather than through a duct. The major glands of the endocrine system include the pineal gland, pituitary gland, pancreas, ovaries, testes, thyroid gland, parathyroid gland, hypothalamus and adrenal glands. The hypothalamus and pituitary gland are neuroendocrine organs.
The hypothalamus and the anterior pituitary are two of the three endocrine glands of the hypothalamic–pituitary–adrenal (HPA) axis, the third being the adrenal gland. The HPA axis plays an important role in signaling between the endocrine and nervous systems.
Hypothalamus: The hypothalamus is a key regulator of the autonomic nervous system. It has three sets of endocrine outputs: the magnocellular system, the parvocellular system, and autonomic innervation. The magnocellular system is involved in the release of oxytocin and vasopressin. The parvocellular system controls the secretion of hormones from the anterior pituitary.
Anterior Pituitary: The main role of the anterior pituitary gland is to produce and secrete tropic hormones. Some examples of tropic hormones secreted by the anterior pituitary gland include TSH, ACTH, GH, LH, and FSH.
Endocrine cells
There are many types of cells that make up the endocrine system and these cells typically make up larger tissues and organs that function within and outside of the endocrine system.
Hypothalamus
Anterior pituitary gland
Pineal gland
Posterior pituitary gland
The posterior pituitary gland is a section of the pituitary gland. This organ does not produce any hormones itself but stores and secretes hormones such as antidiuretic hormone (ADH), which is synthesized by the supraoptic nucleus of the hypothalamus, and oxytocin, which is synthesized by the paraventricular nucleus of the hypothalamus. ADH helps the body retain water; this is important in maintaining a homeostatic balance between blood solutes and water. Oxytocin functions to induce uterine contractions, stimulate lactation, and allow for ejaculation.
Thyroid gland
Follicular cells of the thyroid gland produce and secrete T3 and T4 in response to elevated levels of TRH, produced by the hypothalamus, and the subsequent elevated levels of TSH, produced by the anterior pituitary gland. T3 and T4 in turn regulate the metabolic activity and rate of all cells, including cell growth and tissue differentiation.
Parathyroid gland
Epithelial cells of the parathyroid glands are richly supplied with blood from the inferior and superior thyroid arteries and secrete parathyroid hormone (PTH). PTH acts on bone, the kidneys, and the GI tract to increase calcium reabsorption and phosphate excretion. In addition, PTH stimulates the conversion of Vitamin D to its most active variant, 1,25-dihydroxyvitamin D3, which further stimulates calcium absorption in the GI tract.
Thymus Gland
Adrenal glands
Adrenal cortex
Adrenal medulla
Pancreas
The pancreas contains roughly 1 to 2 million islets of Langerhans (clusters of cells that secrete hormones) as well as acini, which secrete digestive enzymes.
Alpha cells
The alpha cells of the pancreas secrete hormones that help maintain homeostatic blood sugar. Glucagon, the hormone produced by the alpha cells, is secreted in response to low blood sugar levels; glucagon stimulates glycogen stores in the liver to release sugar into the bloodstream to raise blood sugar to normal levels. Insulin, produced by the beta cells rather than the alpha cells, has the opposite effect and lowers blood sugar to normal levels.
Beta cells
About 60% of the cells present in the islets of Langerhans are beta cells, which secrete insulin. Along with glucagon, insulin helps maintain glucose levels in the body: insulin decreases the blood glucose level (it is a hypoglycemic hormone), whereas glucagon increases it.
Delta cells
F Cells
Ovaries
Granulosa cells
Testis
Leydig cells
Development
The fetal endocrine system is one of the first systems to develop during prenatal development.
Adrenal glands
The fetal adrenal cortex can be identified within four weeks of gestation. The adrenal cortex originates from the thickening of the intermediate mesoderm. At five to six weeks of gestation, the mesonephros differentiates into a tissue known as the genital ridge. The genital ridge produces the steroidogenic cells for both the gonads and the adrenal cortex. The adrenal medulla is derived from ectodermal cells. Cells that will become adrenal tissue move retroperitoneally to the upper portion of the mesonephros. At seven weeks of gestation, the adrenal cells are joined by sympathetic cells that originate from the neural crest to form the adrenal medulla. At the end of the eighth week, the adrenal glands have been encapsulated and have formed a distinct organ above the developing kidneys. At birth, the adrenal glands weigh approximately eight to nine grams (twice that of the adult adrenal glands) and are 0.5% of the total body weight. At 25 weeks, the adult adrenal cortex zone develops and is responsible for the primary synthesis of steroids during the early postnatal weeks.
Thyroid gland
The thyroid gland develops from two different clusterings of embryonic cells. One part is from the thickening of the pharyngeal floor, which serves as the precursor of the thyroxine (T4) producing follicular cells. The other part is from the caudal extensions of the fourth pharyngobranchial pouches which results in the parafollicular calcitonin-secreting cells. These two structures are apparent by 16 to 17 days of gestation. Around the 24th day of gestation, the foramen cecum, a thin, flask-like diverticulum of the median anlage develops. At approximately 24 to 32 days of gestation the median anlage develops into a bilobed structure. By 50 days of gestation, the medial and lateral anlage have fused together. At 12 weeks of gestation, the fetal thyroid is capable of storing iodine for the production of TRH, TSH, and free thyroid hormone. At 20 weeks, the fetus is able to implement feedback mechanisms for the production of thyroid hormones. During fetal development, T4 is the major thyroid hormone being produced while triiodothyronine (T3) and its inactive derivative, reverse T3, are not detected until the third trimester.
Parathyroid glands
A lateral and ventral view of an embryo showing the third (inferior) and fourth (superior) parathyroid glands during the 6th week of embryogenesis
Once the embryo reaches four weeks of gestation, the parathyroid glands begin to develop. The human embryo forms five sets of endoderm-lined pharyngeal pouches. The third and fourth pouches are responsible for developing into the inferior and superior parathyroid glands, respectively. The third pharyngeal pouch encounters the developing thyroid gland, and they migrate down to the lower poles of the thyroid lobes. The fourth pharyngeal pouch later encounters the developing thyroid gland and migrates to the upper poles of the thyroid lobes. At 14 weeks of gestation, the parathyroid glands begin to enlarge from 0.1 mm in diameter to approximately 1–2 mm at birth. The developing parathyroid glands are physiologically functional beginning in the second trimester.
Studies in mice have shown that interfering with the HOX15 gene can cause parathyroid gland aplasia, which suggests the gene plays an important role in the development of the parathyroid gland. The genes, TBX1, CRKL, GATA3, GCM2, and SOX3 have also been shown to play a crucial role in the formation of the parathyroid gland. Mutations in TBX1 and CRKL genes are correlated with DiGeorge syndrome, while mutations in GATA3 have also resulted in a DiGeorge-like syndrome. Malformations in the GCM2 gene have resulted in hypoparathyroidism. Studies on SOX3 gene mutations have demonstrated that it plays a role in parathyroid development. These mutations also lead to varying degrees of hypopituitarism.
Pancreas
The human fetal pancreas begins to develop by the fourth week of gestation. Five weeks later, the pancreatic alpha and beta cells have begun to emerge. Reaching eight to ten weeks into development, the pancreas starts producing insulin, glucagon, somatostatin, and pancreatic polypeptide. During the early stages of fetal development, the number of pancreatic alpha cells outnumbers the number of pancreatic beta cells. The alpha cells reach their peak in the middle stage of gestation. From the middle stage until term, the beta cells continue to increase in number until they reach an approximate 1:1 ratio with the alpha cells. The insulin concentration within the fetal pancreas is 3.6 pmol/g at seven to ten weeks, which rises to 30 pmol/g at 16–25 weeks of gestation. Near term, the insulin concentration increases to 93 pmol/g. The endocrine cells have dispersed throughout the body within 10 weeks. At 31 weeks of development, the islets of Langerhans have differentiated.
While the fetal pancreas has functional beta cells by 14 to 24 weeks of gestation, the amount of insulin that is released into the bloodstream is relatively low. In a study of pregnant women carrying fetuses in the mid-gestation and near term stages of development, the fetuses did not have an increase in plasma insulin levels in response to injections of high levels of glucose. In contrast to insulin, the fetal plasma glucagon levels are relatively high and continue to increase during development. At the mid-stage of gestation, the glucagon concentration is 6 μg/g, compared to 2 μg/g in adult humans. Just like insulin, fetal glucagon plasma levels do not change in response to an infusion of glucose. However, a study of an infusion of alanine into pregnant women was shown to increase the cord blood and maternal glucagon concentrations, demonstrating a fetal response to amino acid exposure.
As such, while the fetal pancreatic alpha and beta islet cells have fully developed and are capable of hormone synthesis during the remaining fetal maturation, the islet cells are relatively immature in their capacity to produce glucagon and insulin. This is thought to be a result of the relatively stable levels of fetal serum glucose concentrations achieved via maternal transfer of glucose through the placenta. On the other hand, the stable fetal serum glucose levels could be attributed to the absence of pancreatic signaling initiated by incretins during feeding. In addition, the fetal pancreatic islets cells are unable to sufficiently produce cAMP and rapidly degrade cAMP by phosphodiesterase necessary to secrete glucagon and insulin.
During fetal development, the storage of glycogen is controlled by fetal glucocorticoids and placental lactogen. Fetal insulin is responsible for increasing glucose uptake and lipogenesis during the stages leading up to birth. Fetal cells contain a higher amount of insulin receptors in comparison to adult cells, and fetal insulin receptors are not downregulated in cases of hyperinsulinemia. In comparison, fetal hepatic glucagon receptors are lowered in comparison to adult cells and the glycemic effect of glucagon is blunted. This temporary physiological change aids the increased rate of fetal development during the final trimester. Poorly managed maternal diabetes mellitus is linked to fetal macrosomia, increased risk of miscarriage, and defects in fetal development. Maternal hyperglycemia is also linked to increased insulin levels and beta cell hyperplasia in the post-term infant. Children of diabetic mothers are at an increased risk for conditions such as: polycythemia, renal vein thrombosis, hypocalcemia, respiratory distress syndrome, jaundice, cardiomyopathy, congenital heart disease, and improper organ development.
Gonads
The reproductive system begins development at four to five weeks of gestation with germ cell migration. The bipotential gonad results from the collection of the medioventral region of the urogenital ridge. At the five-week point, the developing gonads break away from the adrenal primordium. Gonadal differentiation begins 42 days following conception.
Male gonadal development
For males, the testes form at six fetal weeks and the Sertoli cells begin developing by the eighth week of gestation. SRY, the sex-determining locus, serves to differentiate the Sertoli cells. The Sertoli cells are the point of origin for anti-Müllerian hormone. Once synthesized, the anti-Müllerian hormone initiates the ipsilateral regression of the Müllerian tract and inhibits the development of female internal features. At 10 weeks of gestation, the Leydig cells begin to produce androgen hormones. The androgen hormone dihydrotestosterone is responsible for the development of the male external genitalia.
The testicles descend during prenatal development in a two-stage process that begins at eight weeks of gestation and continues through the middle of the third trimester. During the transabdominal stage (8 to 15 weeks of gestation), the gubernacular ligament contracts and begins to thicken. The craniosuspensory ligament begins to break down. This stage is regulated by the secretion of insulin-like 3 (INSL3), a relaxin-like factor produced by the testicles, and the INSL3 G-coupled receptor, LGR8. During the transinguinal phase (25 to 35 weeks of gestation), the testicles descend into the scrotum. This stage is regulated by androgens, the genitofemoral nerve, and calcitonin gene-related peptide. During the second and third trimester, testicular development concludes with the diminution of the fetal Leydig cells and the lengthening and coiling of the seminiferous cords.
Female gonadal development
For females, the ovaries become morphologically visible by the 8th week of gestation. The absence of testosterone results in the diminution of the Wolffian structures. The Müllerian structures remain and develop into the fallopian tubes, uterus, and the upper region of the vagina. The urogenital sinus develops into the urethra and lower region of the vagina, the genital tubercle develops into the clitoris, the urogenital folds develop into the labia minora, and the urogenital swellings develop into the labia majora. At 16 weeks of gestation, the ovaries produce FSH and LH/hCG receptors. At 20 weeks of gestation, the theca cell precursors are present and oogonia mitosis is occurring. At 25 weeks of gestation, the ovary is morphologically defined and folliculogenesis can begin.
Studies of gene expression show that a specific complement of genes, such as follistatin and multiple cyclin kinase inhibitors are involved in ovarian development. An assortment of genes and proteins - such as WNT4, RSPO1, FOXL2, and various estrogen receptors - have been shown to prevent the development of testicles or the lineage of male-type cells.
Pituitary gland
The pituitary gland is formed within the rostral neural plate. The Rathke's pouch, a cavity of ectodermal cells of the oropharynx, forms between the fourth and fifth week of gestation and upon full development, it gives rise to the anterior pituitary gland. By seven weeks of gestation, the anterior pituitary vascular system begins to develop. During the first 12 weeks of gestation, the anterior pituitary undergoes cellular differentiation. At 20 weeks of gestation, the hypophyseal portal system has developed. The Rathke's pouch grows towards the third ventricle and fuses with the diverticulum. This eliminates the lumen and the structure becomes Rathke's cleft. The posterior pituitary lobe is formed from the diverticulum. Portions of the pituitary tissue may remain in the nasopharyngeal midline. In rare cases this results in functioning ectopic hormone-secreting tumors in the nasopharynx.
The functional development of the anterior pituitary involves spatiotemporal regulation of transcription factors expressed in pituitary stem cells and dynamic gradients of local soluble factors. The coordination of the dorsal gradient of pituitary morphogenesis is dependent on neuroectodermal signals from the infundibular bone morphogenetic protein 4 (BMP4). This protein is responsible for the development of the initial invagination of the Rathke's pouch. Other essential proteins necessary for pituitary cell proliferation are Fibroblast growth factor 8 (FGF8), Wnt4, and Wnt5. Ventral developmental patterning and the expression of transcription factors is influenced by the gradients of BMP2 and sonic hedgehog protein (SHH). These factors are essential for coordinating early patterns of cell proliferation.
Six weeks into gestation, the corticotroph cells can be identified. By seven weeks of gestation, the anterior pituitary is capable of secreting ACTH. Within eight weeks of gestation, somatotroph cells begin to develop with cytoplasmic expression of human growth hormone. Once a fetus reaches 12 weeks of development, the thyrotrophs begin to express beta subunits for TSH, while gonadotrophs begin to express beta subunits for LH and FSH. Male fetuses predominantly produce LH-expressing gonadotrophs, while female fetuses produce an equal expression of LH- and FSH-expressing gonadotrophs. At 24 weeks of gestation, prolactin-expressing lactotrophs begin to emerge.
Function
Hormones
A hormone is any of a class of signaling molecules produced by cells in glands in multicellular organisms that are transported by the circulatory system to target distant organs to regulate physiology and behaviour. Hormones have diverse chemical structures, mainly of 3 classes: eicosanoids, steroids, and amino acid/protein derivatives (amines, peptides, and proteins). The glands that secrete hormones comprise the endocrine system. The term hormone is sometimes extended to include chemicals produced by cells that affect the same cell (autocrine or intracrine signalling) or nearby cells (paracrine signalling).
Hormones are used to communicate between organs and tissues for physiological regulation and behavioral activities, such as digestion, metabolism, respiration, tissue function, sensory perception, sleep, excretion, lactation, stress, growth and development, movement, reproduction, and mood.
Hormones affect distant cells by binding to specific receptor proteins in the target cell resulting in a change in cell function. This may lead to cell type-specific responses that include rapid changes to the activity of existing proteins, or slower changes in the expression of target genes. Amino acid–based hormones (amines and peptide or protein hormones) are water-soluble and act on the surface of target cells via signal transduction pathways; steroid hormones, being lipid-soluble, move through the plasma membranes of target cells to act within their nuclei.
Cell signalling
The typical mode of cell signalling in the endocrine system is endocrine signaling, that is, using the circulatory system to reach distant target organs. However, there are also other modes, i.e., paracrine, autocrine, and neuroendocrine signaling. Purely neurocrine signaling between neurons, on the other hand, belongs completely to the nervous system.
Autocrine
Autocrine signaling is a form of signaling in which a cell secretes a hormone or chemical messenger (called the autocrine agent) that binds to autocrine receptors on the same cell, leading to changes in the cells.
Paracrine
Some endocrinologists and clinicians include the paracrine system as part of the endocrine system, but there is no consensus. Paracrines are slower acting, targeting cells in the same tissue or organ. An example of this is somatostatin, which is released by some pancreatic cells and targets other pancreatic cells.
Juxtacrine
Juxtacrine signaling is a type of intercellular communication that is transmitted via oligosaccharide, lipid, or protein components of a cell membrane, and may affect either the emitting cell or the immediately adjacent cells.
It occurs between adjacent cells that possess broad patches of closely opposed plasma membrane linked by transmembrane channels known as connexons. The gap between the cells can usually be between only 2 and 4 nm.
Clinical significance
Disease
Diseases of the endocrine system are common, including conditions such as diabetes mellitus, thyroid disease, and obesity.
Endocrine disease is characterized by misregulated hormone release (a productive pituitary adenoma), inappropriate response to signaling (hypothyroidism), lack of a gland (diabetes mellitus type 1, diminished erythropoiesis in chronic kidney failure), or structural enlargement in a critical site such as the thyroid (toxic multinodular goitre). Hypofunction of endocrine glands can occur as a result of loss of reserve, hyposecretion, agenesis, atrophy, or active destruction. Hyperfunction can occur as a result of hypersecretion, loss of suppression, hyperplastic or neoplastic change, or hyperstimulation.
Endocrinopathies are classified as primary, secondary, or tertiary. Primary endocrine disease inhibits the action of downstream glands. Secondary endocrine disease is indicative of a problem with the pituitary gland. Tertiary endocrine disease is associated with dysfunction of the hypothalamus and its releasing hormones.
Hormones, including those of the thyroid, have been implicated in signaling distant tissues to proliferate; for example, the estrogen receptor has been shown to be involved in certain breast cancers. Endocrine, paracrine, and autocrine signaling have all been implicated in proliferation, one of the required steps of oncogenesis.
Other common diseases that result from endocrine dysfunction include Addison's disease, Cushing's disease and Graves' disease. Cushing's disease and Addison's disease are pathologies involving the dysfunction of the adrenal gland. Dysfunction in the adrenal gland could be due to primary or secondary factors and can result in hypercortisolism or hypocortisolism. Cushing's disease is characterized by the hypersecretion of the adrenocorticotropic hormone (ACTH) due to a pituitary adenoma that ultimately causes endogenous hypercortisolism by stimulating the adrenal glands. Some clinical signs of Cushing's disease include obesity, moon face, and hirsutism. Addison's disease is an endocrine disease that results from hypocortisolism caused by adrenal gland insufficiency. Adrenal insufficiency is significant because it is correlated with decreased ability to maintain blood pressure and blood sugar, a defect that can prove to be fatal.
Graves' disease involves the hyperactivity of the thyroid gland, which produces the T3 and T4 hormones. The effects of Graves' disease range from excess sweating, fatigue, heat intolerance and high blood pressure to swelling of the eyes that causes redness, puffiness, and in rare cases reduced or double vision.
DALY rates
A DALY (Disability-Adjusted Life Year) is a measure that reflects the total burden of disease. It combines years of life lost (due to premature death) and years lived with disability (adjusted for the severity of the disability). The lower the DALY rates, the lower the burden of endocrine disorders in a country.
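A hedged arithmetic sketch of how a DALY figure is assembled from its two components is shown below; the case counts, remaining life expectancy, and disability weight are invented purely to show the calculation, not real epidemiological data.

```python
# Sketch of the DALY arithmetic described above:
# DALY = YLL (years of life lost) + YLD (years lived with disability).
# All input numbers below are invented for illustration only.

def dalys(deaths, years_lost_per_death, cases, disability_weight, years_with_condition):
    yll = deaths * years_lost_per_death
    yld = cases * disability_weight * years_with_condition
    return yll + yld

# Example: 100 premature deaths losing 20 years each, plus 5,000 people
# living 10 years with a condition weighted 0.2.
print(dalys(100, 20, 5_000, 0.2, 10))   # 2000 + 10000 = 12000 DALYs
```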
The map shows that large parts of Asia have lower DALY rates (pale yellow), suggesting that endocrine disorders have a relatively low impact on overall health, whereas some countries in South America and Africa (specifically Suriname and Somalia) have higher DALY rates (dark orange to red), indicating a higher disease burden from endocrine disorders.
Other animals
A neuroendocrine system has been observed in all animals with a nervous system and all vertebrates have a hypothalamus–pituitary axis. All vertebrates have a thyroid, which in amphibians is also crucial for transformation of larvae into adult form. All vertebrates have adrenal gland tissue, with mammals unique in having it organized into layers. All vertebrates have some form of a renin–angiotensin axis, and all tetrapods have aldosterone as a primary mineralocorticoid.
Additional images
See also
Endocrine disease
Endocrinology
List of human endocrine organs and actions
Neuroendocrinology
Nervous system
Paracrine signalling
Releasing hormones
Tropic hormone
References
External links
Electromagnetic radiation
https://en.wikipedia.org/wiki/Electromagnetic_radiation
In physics, electromagnetic radiation (EMR) or electromagnetic wave (EMW) is a self-propagating wave of the electromagnetic field that carries momentum and radiant energy through space. It encompasses a broad spectrum, classified by frequency (inversely proportional to wavelength), ranging from radio waves, microwaves, infrared, visible light, and ultraviolet to X-rays and gamma rays. All forms of EMR travel at the speed of light in a vacuum and exhibit wave–particle duality, behaving both as waves and as discrete particles called photons.
Electromagnetic radiation is produced by accelerating charged particles such as from the Sun and other celestial bodies or artificially generated for various applications. Its interaction with matter depends on wavelength, influencing its uses in communication, medicine, industry, and scientific research. Radio waves enable broadcasting and wireless communication, infrared is used in thermal imaging, visible light is essential for vision, and higher-energy radiation, such as X-rays and gamma rays, is applied in medical imaging, cancer treatment, and industrial inspection. Exposure to high-energy radiation can pose health risks, making shielding and regulation necessary in certain applications.
In quantum mechanics, an alternate way of viewing EMR is that it consists of photons, uncharged elementary particles with zero rest mass which are the quanta of the electromagnetic field, responsible for all electromagnetic interactions. Quantum electrodynamics is the theory of how EMR interacts with matter on an atomic level. Quantum effects provide additional sources of EMR, such as the transition of electrons to lower energy levels in an atom and black-body radiation.
Physics
Properties
Electromagnetic radiation is produced by accelerating charged particles and can be naturally emitted, as from the Sun and other celestial bodies, or artificially generated for various applications. The energy in electromagnetic waves is sometimes called radiant energy. The electromagnetic waves' energy does not need a propagating medium to travel through space; they move through a vacuum at the speed of light.
Electric and magnetic fields obey the properties of superposition. Thus, a field due to any particular particle or time-varying electric or magnetic field contributes to the fields present in the same space due to other causes. Further, as they are vector fields, all magnetic and electric field vectors add together according to vector addition; any number of electromagnetic waves can propagate through the same region without affecting one another (Purcell, p. 442). For example, in optics two or more coherent light waves may interact and by constructive or destructive interference yield a resultant irradiance deviating from the sum of the component irradiances of the individual light waves. The electromagnetic fields of light are not affected by traveling through static electric or magnetic fields in a linear medium such as a vacuum. However, in nonlinear media, such as some crystals, interactions can occur between light and static electric and magnetic fields; these interactions include the Faraday effect and the Kerr effect.
In refraction, a wave crossing from one medium to another of different density alters its speed and direction upon entering the new medium. The ratio of the refractive indices of the media determines the degree of refraction, and is summarized by Snell's law. Light of composite wavelengths (natural sunlight) disperses into a visible spectrum passing through a prism, because of the wavelength-dependent refractive index of the prism material (dispersion); that is, each component wave within the composite light is bent a different amount.
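As a concrete illustration of Snell's law as just described, the short sketch below computes a refraction angle; the refractive indices (air about 1.00, water about 1.33) and the 30-degree incidence angle are illustrative values, not taken from the text.

```python
import math

# Snell's law, n1*sin(theta1) = n2*sin(theta2), rearranged for the
# refraction angle. The indices and incidence angle are example values.
def refraction_angle(n1, n2, incidence_deg):
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    if abs(s) > 1:
        return None   # total internal reflection; no refracted ray
    return math.degrees(math.asin(s))

print(f"{refraction_angle(1.00, 1.33, 30.0):.1f} degrees")  # about 22.1
```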
EM radiation exhibits both wave properties and particle properties at the same time (known as wave–particle duality). Both wave and particle characteristics have been confirmed in many experiments. Wave characteristics are more apparent when EM radiation is measured over relatively large timescales and over large distances while particle characteristics are more evident when measuring small timescales and distances. For example, when electromagnetic radiation is absorbed by matter, particle-like properties will be more obvious when the average number of photons in the cube of the relevant wavelength is much smaller than 1. It is not so difficult to experimentally observe non-uniform deposition of energy when light is absorbed, however this alone is not evidence of "particulate" behavior. Rather, it reflects the quantum nature of matter. A quantum theory of the interaction between electromagnetic radiation and matter such as electrons is described by the theory of quantum electrodynamics.
Electromagnetic waves can be polarized, reflected, refracted, or diffracted, and can interfere with each other. Some experiments display both the wave and particle natures of electromagnetic waves, such as the self-interference of a single photon. When low-intensity light is sent through an interferometer, it is detected by a photomultiplier or other sensitive detector only along one arm of the device, consistent with particle properties, and yet the accumulated effect of many such detections will be interference consistent with wave properties.
Wave model
In far-field EM radiation, which is described by the two source-free Maxwell curl equations, a time change in one type of field is proportional to the curl of the other. These derivatives require that the E and B fields in EMR are in phase. An important aspect of light's nature is its frequency. The frequency of a wave is its rate of oscillation and is measured in hertz, the SI unit of frequency, where one hertz is equal to one oscillation per second. Light usually has multiple frequencies that sum to form the resultant wave. Different frequencies undergo different angles of refraction, a phenomenon known as dispersion.
A monochromatic wave (a wave of a single frequency) consists of successive troughs and crests, and the distance between two adjacent crests or troughs is called the wavelength. Waves of the electromagnetic spectrum vary in size, from very long radio waves longer than a continent to very short gamma rays smaller than atomic nuclei. Frequency is inversely proportional to wavelength, according to the equation:
v = f λ
where v is the speed of the wave (c in a vacuum, or less in other media), f is the frequency, and λ is the wavelength. As waves cross boundaries between different media, their speeds change but their frequencies remain constant.
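A minimal sketch of the relation v = f λ (the example frequency and wavelength are illustrative, not from the source):

c = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength(frequency_hz, speed=c):
    """Wavelength in metres from v = f * lambda."""
    return speed / frequency_hz

def frequency(wavelength_m, speed=c):
    """Frequency in hertz from v = f * lambda."""
    return speed / wavelength_m

print(wavelength(100e6))   # 100 MHz radio wave -> ~3.0 m
print(frequency(500e-9))   # 500 nm green light -> ~6.0e14 Hz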
Electromagnetic waves in free space must be solutions of Maxwell's electromagnetic wave equation. Two main classes of solutions are known, namely plane waves and spherical waves. The plane waves may be viewed as the limiting case of spherical waves at a very large (ideally infinite) distance from the source. Both types of waves can have a waveform which is an arbitrary time function (so long as it is sufficiently differentiable to conform to the wave equation). As with any time function, this can be decomposed by means of Fourier analysis into its frequency spectrum, or individual sinusoidal components, each of which contains a single frequency, amplitude, and phase. Such a component wave is said to be monochromatic.
Interference is the superposition of two or more waves resulting in a new wave pattern. If the fields have components in the same direction, they constructively interfere, while opposite directions cause destructive interference. Additionally, multiple polarization signals can be combined (i.e. interfered) to form new states of polarization, which is known as parallel polarization state generation.
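The sketch below (hypothetical amplitudes and phase offsets) superposes two equal-frequency sinusoids sample by sample: in phase the amplitudes add (constructive interference); half a cycle apart they cancel (destructive interference).

import math

def superpose(amplitude1, amplitude2, phase_difference, samples=8):
    """Sample the sum of two equal-frequency sinusoids over one period."""
    return [amplitude1 * math.sin(2 * math.pi * i / samples)
            + amplitude2 * math.sin(2 * math.pi * i / samples + phase_difference)
            for i in range(samples)]

# In phase: the amplitudes add (constructive interference)
print(max(superpose(1.0, 1.0, 0.0)))                       # ~2.0
# Half a cycle out of phase: the fields cancel (destructive interference)
print(max(abs(x) for x in superpose(1.0, 1.0, math.pi)))   # effectively 0 (floating-point noise)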
Maxwell's equations
James Clerk Maxwell derived a wave form of the electric and magnetic equations, thus uncovering the wave-like nature of electric and magnetic fields and their symmetry. Because the speed of EM waves predicted by the wave equation coincided with the measured speed of light, Maxwell concluded that light itself is an EM wave. Maxwell's equations were confirmed by Heinrich Hertz through experiments with radio waves. Out of the four equations, two of the equations that Maxwell refined were Faraday's Law of Induction and Ampère's circuital law, which he extended by adding the displacement current term to the equations himself. Maxwell thought that the displacement current, which he viewed as the motion of bound charges, gave rise to the magnetic field. The other two equations are Gauss's law and Gauss's law for magnetism.
Near and far fields
Maxwell's equations established that some charges and currents (sources) produce local electromagnetic fields near them that do not radiate. Currents directly produce magnetic fields, but such fields are of a magnetic-dipole type that dies out with distance from the current. In a similar manner, moving charges pushed apart in a conductor by a changing electrical potential (such as in an antenna) produce an electric-dipole–type electric field, but this also declines with distance. These fields make up the near field. Neither of these behaviours is responsible for EM radiation. Instead, they only efficiently transfer energy to a receiver very close to the source, such as inside a transformer. The near field has strong effects on its source, with any energy withdrawn by a receiver causing increased load (decreased electrical reactance) on the source. The near field does not propagate freely into space, carrying energy away without a distance limit, but rather oscillates, returning its energy to the transmitter if it is not absorbed by a receiver.
By contrast, the far field is composed of radiation that is free of the transmitter, in the sense that the transmitter requires the same power to send changes in the field out regardless of whether anything absorbs the signal, e.g. a radio station does not need to increase its power when more receivers use the signal. This far part of the electromagnetic field is electromagnetic radiation. The far fields propagate (radiate) without allowing the transmitter to affect them. This causes them to be independent in the sense that their existence and their energy, after they have left the transmitter, is completely independent of both transmitter and receiver. Due to conservation of energy, the amount of power passing through any closed surface drawn around the source is the same. The power density of EM radiation from an isotropic source decreases with the inverse square of the distance from the source; this is called the inverse-square law. Field intensity due to dipole parts of the near field varies according to an inverse-cube law, and thus fades with distance.
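As a numerical illustration of the inverse-square law (the transmitter power and distances are hypothetical), the sketch below computes the far-field power density of an isotropic source and shows that doubling the distance quarters it.

import math

def power_density(total_power_w, distance_m):
    """Power per unit area (W/m^2) from an isotropic radiator: S = P / (4 * pi * r^2)."""
    return total_power_w / (4 * math.pi * distance_m ** 2)

p = 100.0  # watts, a hypothetical isotropic transmitter
print(power_density(p, 10.0))   # ~0.0796 W/m^2 at 10 m
print(power_density(p, 20.0))   # ~0.0199 W/m^2 at 20 m: one quarter of the value at 10 m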
In the Liénard–Wiechert potential formulation of the electric and magnetic fields due to motion of a single particle (according to Maxwell's equations), the terms associated with acceleration of the particle are those that are responsible for the part of the field that is regarded as electromagnetic radiation. By contrast, the term associated with the changing static electric field of the particle and the magnetic term that results from the particle's uniform velocity are both associated with the near field, and do not comprise electromagnetic radiation.
Particle model and quantum theory
An anomaly arose in the late 19th century involving a contradiction between the wave theory of light and measurements of the electromagnetic spectra that were being emitted by thermal radiators known as black bodies. Physicists struggled with this problem unsuccessfully for many years, and it later became known as the ultraviolet catastrophe. In 1900, Max Planck developed a new theory of black-body radiation that explained the observed spectrum. Planck's theory was based on the idea that black bodies emit light (and other electromagnetic radiation) only as discrete bundles or packets of energy. These packets were called quanta. In 1905, Albert Einstein proposed that light quanta be regarded as real particles. Later the particle of light was given the name photon, to correspond with other particles being described around this time, such as the electron and proton. A photon has an energy, E, proportional to its frequency, f, by
E = h f = h c / λ
where h is the Planck constant, λ is the wavelength and c is the speed of light. This is sometimes known as the Planck–Einstein equation. In quantum theory (see first quantization) the energy of the photons is thus directly proportional to the frequency of the EMR wave. Likewise, the momentum p of a photon is also proportional to its frequency and inversely proportional to its wavelength:
p = E / c = h f / c = h / λ
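A short numerical sketch of the Planck–Einstein relation and the photon momentum formula above (standard constants, an illustrative 500 nm wavelength):

h = 6.62607015e-34   # Planck constant, J*s
c = 299_792_458.0    # speed of light, m/s

def photon_energy(wavelength_m):
    """Photon energy in joules, E = h*c / lambda."""
    return h * c / wavelength_m

def photon_momentum(wavelength_m):
    """Photon momentum in kg*m/s, p = h / lambda."""
    return h / wavelength_m

lam = 500e-9  # green light, 500 nm
print(photon_energy(lam))    # ~3.97e-19 J
print(photon_momentum(lam))  # ~1.33e-27 kg*m/s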
The source of Einstein's proposal that light was composed of particles (or could act as particles in some circumstances) was an experimental anomaly not explained by the wave theory: the photoelectric effect, in which light striking a metal surface ejected electrons from the surface, causing an electric current to flow across an applied voltage. Experimental measurements demonstrated that the energy of individual ejected electrons was proportional to the frequency, rather than the intensity, of the light. Furthermore, below a certain minimum frequency, which depended on the particular metal, no current would flow regardless of the intensity. These observations appeared to contradict the wave theory, and for years physicists tried to find an explanation. In 1905, Einstein explained this phenomenon by resurrecting the particle theory of light. Because of the preponderance of evidence in favor of the wave theory, however, Einstein's ideas were met initially with great skepticism among established physicists. Eventually Einstein's explanation was accepted as new particle-like behavior of light was observed, such as the Compton effect.
As a photon is absorbed by an atom, it excites the atom, elevating an electron to a higher energy level (one that is on average farther from the nucleus). When an electron in an excited molecule or atom descends to a lower energy level, it emits a photon of light at a frequency corresponding to the energy difference. Since the energy levels of electrons in atoms are discrete, each element and each molecule emits and absorbs its own characteristic frequencies. Immediate photon emission is called fluorescence, a type of photoluminescence. An example is visible light emitted from fluorescent paints, in response to ultraviolet (blacklight). Many other fluorescent emissions are known in spectral bands other than visible light. Delayed emission is called phosphorescence.
Quantum mechanics also governs emission, which is seen when an emitting gas glows due to excitation of the atoms from any mechanism, including heat. As electrons descend to lower energy levels, a spectrum is emitted that represents the jumps between the energy levels of the electrons, but lines are seen because again emission happens only at particular energies after excitation. (Browne, p. 376: "Radiation is emitted or absorbed only when the electron jumps from one orbit to the other, and the frequency of radiation depends only upon the energies of the electron in the initial and final orbits.") An example is the emission spectrum of nebulae. Rapidly moving electrons are most sharply accelerated when they encounter a region of force, so they are responsible for producing much of the highest frequency electromagnetic radiation observed in nature. These phenomena can be used to detect the composition of gases lit from behind (absorption spectra) and for glowing gases (emission spectra). Spectroscopy (for example) determines what chemical elements comprise a particular star. Shifts in the frequency of the spectral lines for an element, called a redshift, can be used to determine the star's cosmological distance.
Wave–particle duality
The modern theory that explains the nature of light includes the notion of wave–particle duality. The theory rests on the concept that every quantum entity can show wave-like or particle-like behavior, depending on how it is observed. According to the Copenhagen interpretation, observation actually collapses the entity's wave function; in the many-worlds interpretation, all possible outcomes of the collapse occur in parallel branches; in the pilot wave theory, the particle behaviour is simply guided by waves. The dual nature of a real photon has been observed in the double-slit experiment.
Together, wave and particle effects fully explain the emission and absorption spectra of EM radiation. The matter-composition of the medium through which the light travels determines the nature of the absorption and emission spectrum. These bands correspond to the allowed energy levels in the atoms. Dark bands in the absorption spectrum are due to the atoms in an intervening medium between source and observer. The atoms absorb certain frequencies of the light between emitter and detector/eye, then emit them in all directions. A dark band appears to the detector, due to the radiation scattered out of the light beam. For instance, dark bands in the light emitted by a distant star are due to the atoms in the star's atmosphere.
Propagation speed
In empty space (vacuum), electromagnetic radiation travels at the speed of light, c, 299,792,458 meters per second (approximately 186,000 miles per second). In a medium other than vacuum it travels at a lower velocity v, given by a dimensionless parameter between 0 and 1 characteristic of the medium, called the velocity factor, or its reciprocal, the refractive index n:
v = c / n
The reason for this is that in matter the electric and magnetic fields of the wave are slowed because they polarize the charged particles in the medium they pass through. The oscillating electric field causes nearby positive and negative charges in atoms to move slightly apart and together, inducing an oscillating polarization, creating an electric polarization field. The oscillating magnetic field moves nearby magnetic dipoles, inducing an oscillating magnetization, creating an induced oscillating magnetic field. These induced fields, superposed on the original wave fields, slow the wave (Ewald–Oseen extinction theorem). The amount of slowing depends on the electromagnetic properties of the medium, the electric permittivity and magnetic permeability. In the SI system of units, empty space has a vacuum permittivity ε0 of 8.854×10⁻¹² F/m (farads per meter) and a vacuum permeability μ0 of 1.257×10⁻⁶ H/m (henries per meter). These universal constants determine the speed of light in a vacuum:
c = 1 / √(ε0 μ0)
In a medium that is isotropic and linear, the electric polarization is proportional to the electric field and the magnetization is proportional to the magnetic field. The speed of the waves, v, and the refractive index, n, are then determined by only two parameters: the electric permittivity ε of the medium in farads per meter and the magnetic permeability μ of the medium in henries per meter:
v = 1 / √(ε μ),  n = c / v = √(ε μ / (ε0 μ0))
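A short sketch of the relations above, v = 1/√(εμ) and n = c/v (the relative permittivity used for glass is an assumed, illustrative optical-frequency value):

import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
mu0  = 1.25663706212e-6   # vacuum permeability, H/m

def wave_speed(eps, mu):
    """Phase velocity of an EM wave in a linear, isotropic medium: v = 1/sqrt(eps*mu)."""
    return 1.0 / math.sqrt(eps * mu)

c = wave_speed(eps0, mu0)
print(c)  # ~2.998e8 m/s, the vacuum speed of light

# Assumed relative permittivity of glass at optical frequencies: eps_r ~ 2.25, mu_r ~ 1
v_glass = wave_speed(2.25 * eps0, 1.0 * mu0)
print(v_glass, c / v_glass)  # ~2.0e8 m/s, refractive index ~1.5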
If the permittivity and permeability of the medium are constant for EM waves of different frequencies, the medium is called non-dispersive. In that case all EM wave frequencies travel at the same velocity, and the waveshape stays constant as the wave travels. However, in real matter ε and μ typically vary with frequency; such a medium is called dispersive. In dispersive media different spectral bands have different propagation characteristics, and an arbitrary wave changes shape as it travels through the medium.
History of discovery
Electromagnetic radiation of wavelengths other than those of visible light were discovered in the early 19th century. The discovery of infrared radiation is ascribed to astronomer William Herschel, who published his results in 1800 before the Royal Society of London. Herschel used a glass prism to refract light from the Sun and detected invisible rays that caused heating beyond the red part of the spectrum, through an increase in the temperature recorded with a thermometer. These "calorific rays" were later termed infrared.
In 1801 German physicist Johann Wilhelm Ritter discovered ultraviolet in an experiment similar to Herschel's, using sunlight and a glass prism. Ritter noted that invisible rays near the violet edge of a solar spectrum dispersed by a triangular prism darkened silver chloride preparations more quickly than did the nearby violet light. Ritter's experiments were an early precursor to what would become photography. Ritter noted that the ultraviolet rays (which at first were called "chemical rays") were capable of causing chemical reactions.
In 1862–64 James Clerk Maxwell developed equations for the electromagnetic field which suggested that waves in the field would travel with a speed that was very close to the known speed of light. Maxwell therefore suggested that visible light (as well as invisible infrared and ultraviolet rays by inference) all consisted of propagating disturbances (or radiation) in the electromagnetic field. Radio waves were first produced deliberately by Heinrich Hertz in 1887, using electrical circuits calculated to produce oscillations at a much lower frequency than that of visible light, following recipes for producing oscillating charges and currents suggested by Maxwell's equations. Hertz also developed ways to detect these waves, and produced and characterized what were later termed radio waves and microwaves. (Jeans, James (1947) The Growth of Physical Science. Cambridge University Press.)
Wilhelm Röntgen discovered and named X-rays. After experimenting with high voltages applied to an evacuated tube on 8 November 1895, he noticed a fluorescence on a nearby plate of coated glass. Within a month, he had discovered the main properties of X-rays.
The last portion of the EM spectrum to be discovered was associated with radioactivity. Henri Becquerel found that uranium salts caused fogging of an unexposed photographic plate through a covering paper in a manner similar to X-rays, and Marie Curie discovered that only certain elements gave off these rays of energy, soon discovering the intense radiation of radium. The radiation from pitchblende was differentiated into alpha rays (alpha particles) and beta rays (beta particles) by Ernest Rutherford through simple experimentation in 1899, but these proved to be charged particulate types of radiation. However, in 1900 the French scientist Paul Villard discovered a third neutrally charged and especially penetrating type of radiation from radium, and after he described it, Rutherford realized it must be yet a third type of radiation, which in 1903 Rutherford named gamma rays.
In 1910 British physicist William Henry Bragg demonstrated that gamma rays are electromagnetic radiation, not particles, and in 1914 Rutherford and Edward Andrade measured their wavelengths, finding that they were similar to X-rays but with shorter wavelengths and higher frequency, although a 'cross-over' between X and gamma rays makes it possible to have X-rays with a higher energy (and hence shorter wavelength) than gamma rays and vice versa. The origin of the ray differentiates them: gamma rays tend to be natural phenomena originating from the unstable nucleus of an atom, while X-rays are electrically generated (and hence man-made), unless they result from bremsstrahlung X-radiation caused by fast-moving particles (such as beta particles) colliding with certain materials, usually of higher atomic numbers.
Electromagnetic spectrum
Legend of the electromagnetic spectrum, from highest to lowest frequency:
γ = Gamma rays
HX = Hard X-rays
SX = Soft X-rays
EUV = Extreme-ultraviolet
NUV = Near-ultraviolet
Visible light (colored bands)
NIR = Near-infrared
MIR = Mid-infrared
FIR = Far-infrared
EHF = Extremely high frequency (microwaves)
SHF = Super-high frequency (microwaves)
UHF = Ultra-high frequency (radio waves)
VHF = Very high frequency (radio)
HF = High frequency (radio)
MF = Medium frequency (radio)
LF = Low frequency (radio)
VLF = Very low frequency (radio)
VF = Voice frequency
ULF = Ultra-low frequency (radio)
SLF = Super-low frequency (radio)
ELF = Extremely low frequency (radio)
EM radiation (the designation 'radiation' excludes static electric and magnetic and near fields) is classified by wavelength into radio, microwave, infrared, visible, ultraviolet, X-rays, and gamma rays. Arbitrary electromagnetic waves can be expressed by Fourier analysis in terms of sinusoidal waves (monochromatic radiation), which in turn can each be classified into these regions of the EMR spectrum.
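The band boundaries used below are rough, conventional figures only (the regions shade into one another and the exact limits vary between sources); the sketch classifies a given wavelength into the broad regions named above.

# Approximate conventional boundaries, in metres, from longest to shortest wavelength.
BANDS = [
    ("radio",       1.0,   float("inf")),
    ("microwave",   1e-3,  1.0),
    ("infrared",    7e-7,  1e-3),
    ("visible",     4e-7,  7e-7),
    ("ultraviolet", 1e-8,  4e-7),
    ("X-ray",       1e-11, 1e-8),
    ("gamma ray",   0.0,   1e-11),
]

def classify(wavelength_m):
    """Return the broad spectral region for a wavelength, using the rough limits above."""
    for name, low, high in BANDS:
        if low <= wavelength_m < high:
            return name
    return "unknown"

print(classify(3.0))      # radio
print(classify(0.01))     # microwave
print(classify(550e-9))   # visible
print(classify(1e-10))    # X-ray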
For certain classes of EM waves, the waveform is most usefully treated as random, and then spectral analysis must be done by slightly different mathematical techniques appropriate to random or stochastic processes. In such cases, the individual frequency components are represented in terms of their power content, and the phase information is not preserved. Such a representation is called the power spectral density of the random process. Random electromagnetic radiation requiring this kind of analysis is, for example, encountered in the interior of stars, and in certain other very wideband forms of radiation such as the zero-point wave field of the electromagnetic vacuum.
The behavior of EM radiation and its interaction with matter depends on its frequency, and changes qualitatively as the frequency changes. Lower frequencies have longer wavelengths, and higher frequencies have shorter wavelengths, and are associated with photons of higher energy. There is no fundamental limit known to these wavelengths or energies, at either end of the spectrum, although photons with energies near the Planck energy or exceeding it (far too high to have ever been observed) will require new physical theories to describe.
Radio and microwave
Electromagnetic radiation phenomena with wavelengths ranging from one meter to one millimeter are called microwaves; with frequencies between 300 MHz (0.3 GHz) and 300 GHz. When radio waves impinge upon a conductor, they couple to the conductor, travel along it, and induce an electric current on the conductor surface by moving the electrons of the conducting material in correlated bunches of charge. At radio and microwave frequencies, EMR interacts with matter largely as a bulk collection of charges which are spread out over large numbers of affected atoms. In electrical conductors, such induced bulk movement of charges (electric currents) results in absorption of the EMR, or else separations of charges that cause generation of new EMR (effective reflection of the EMR). An example is absorption or emission of radio waves by antennas, or absorption of microwaves by water or other molecules with an electric dipole moment, as for example inside a microwave oven. These interactions produce either electric currents or heat, or both.
Infrared
Like radio and microwave, infrared (IR) is reflected by metals (and also most EMR, well into the ultraviolet range). However, unlike lower-frequency radio and microwave radiation, infrared EMR commonly interacts with dipoles present in single molecules, which change as atoms vibrate at the ends of a single chemical bond. It is consequently absorbed by a wide range of substances, causing them to increase in temperature as the vibrations dissipate as heat. The same process, run in reverse, causes bulk substances to radiate in the infrared spontaneously (see thermal radiation section below).
Infrared radiation is divided into spectral subregions. While different subdivision schemes exist, the spectrum is commonly divided as near-infrared (0.75–1.4 μm), short-wavelength infrared (1.4–3 μm), mid-wavelength infrared (3–8 μm), long-wavelength infrared (8–15 μm) and far infrared (15–1000 μm).
Some animals, such as snakes, have thermo-sensitive membranes (pit organs) that can detect temperature differences, allowing them to sense infrared radiation.
Visible light
Natural sources produce EM radiation across the spectrum. EM radiation with a wavelength between approximately 400 nm and 700 nm is directly detected by the human eye and perceived as visible light. Other wavelengths, especially nearby infrared (longer than 700 nm) and ultraviolet (shorter than 400 nm) are also sometimes referred to as light.
As frequency increases into the visible range, photons have enough energy to change the bond structure of some individual molecules. It is not a coincidence that this happens in the visible range, as the mechanism of vision involves the change in bonding of a single molecule, retinal, which absorbs a single photon. The change in retinal causes a change in the shape of the rhodopsin protein it is contained in, which starts the biochemical process that causes the retina of the human eye to sense the light.
Visible light is able to affect only a tiny percentage of all molecules, and usually not in a permanent or damaging way; rather, the photon excites an electron, which then emits another photon when returning to its original position. This is the source of color produced by most dyes. Retinal is an exception. When a photon is absorbed, the retinal permanently changes structure from cis to trans, and requires a protein to convert it back, i.e. to reset it so it can function as a light detector again.
Photosynthesis becomes possible in this range as well, for the same reason. A single molecule of chlorophyll is excited by a single photon. In plant tissues that conduct photosynthesis, carotenoids act to quench electronically excited chlorophyll produced by visible light in a process called non-photochemical quenching, to prevent reactions that would otherwise interfere with photosynthesis at high light levels.
Limited evidence indicates that some reactive oxygen species are created by visible light in skin, and that these may have some role in photoaging, in the same manner as ultraviolet A.
Infrared, microwaves, and radio waves are known to damage molecules and biological tissue only by bulk heating, not excitation from single photons of the radiation.
Ultraviolet
As frequency increases into the ultraviolet, photons now carry enough energy (about three electron volts or more) to excite certain doubly bonded molecules into permanent chemical rearrangement. In DNA, this causes lasting damage. DNA is also indirectly damaged by reactive oxygen species produced by ultraviolet A (UVA), which has energy too low to damage DNA directly. This is why ultraviolet at all wavelengths can damage DNA, and is capable of causing cancer, and (for UVB) skin burns (sunburn) that are far worse than would be produced by simple heating (temperature increase) effects.
At the higher end of the ultraviolet range, the energy of photons becomes large enough to impart enough energy to electrons to cause them to be liberated from the atom, in a process called photoionisation. The energy required for this is always larger than about 10 electron volt (eV) corresponding with wavelengths smaller than 124 nm (some sources suggest a more realistic cutoff of 33 eV, which is the energy required to ionize water). This high end of the ultraviolet spectrum with energies in the approximate ionization range, is sometimes called "extreme UV". Ionizing UV is strongly filtered by the Earth's atmosphere.
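To illustrate the approximately 10 eV / 124 nm threshold quoted above, this sketch (standard constants, illustrative wavelengths) converts a wavelength to photon energy in electron volts and compares it with that cutoff.

h = 6.62607015e-34    # Planck constant, J*s
c = 299_792_458.0     # speed of light, m/s
eV = 1.602176634e-19  # joules per electron volt

def photon_energy_ev(wavelength_m):
    """Photon energy in electron volts, E = h*c / (lambda * eV)."""
    return h * c / wavelength_m / eV

# Threshold of ~10 eV corresponds to wavelengths shorter than ~124 nm
for lam_nm in (200.0, 100.0, 30.0):
    e = photon_energy_ev(lam_nm * 1e-9)
    print(lam_nm, "nm ->", round(e, 1), "eV,", "ionizing" if e >= 10.0 else "non-ionizing")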
X-rays and gamma rays
Electromagnetic radiation composed of photons that carry minimum-ionization energy, or more (which includes the entire spectrum with shorter wavelengths), is therefore termed ionizing radiation. (Many other kinds of ionizing radiation are made of non-EM particles.) Electromagnetic-type ionizing radiation extends from the extreme ultraviolet to all higher frequencies and shorter wavelengths, which means that all X-rays and gamma rays qualify. These are capable of the most severe types of molecular damage, which can happen in biology to any type of biomolecule, including mutation and cancer, and often at great depths below the skin, since the higher end of the X-ray spectrum, and all of the gamma ray spectrum, penetrate matter.
Atmosphere and magnetosphere
Most UV and X-rays are blocked by absorption first from molecular nitrogen, and then (for wavelengths in the upper UV) from the electronic excitation of dioxygen and finally ozone at the mid-range of UV. Only 30% of the Sun's ultraviolet light reaches the ground, and almost all of this is well transmitted.
Visible light is well transmitted in air, a property known as an atmospheric window, as it is not energetic enough to excite nitrogen, oxygen, or ozone, but too energetic to excite molecular vibrational frequencies of water vapor and carbon dioxide. Absorption bands in the infrared are due to modes of vibrational excitation in water vapor. However, at energies too low to excite water vapor, the atmosphere becomes transparent again, allowing free transmission of most microwave and radio waves.
Finally, at radio wavelengths longer than 10 m or so (about 30 MHz), the air in the lower atmosphere remains transparent to radio, but plasma in certain layers of the ionosphere begins to interact with radio waves (see skywave). This property allows some longer wavelengths (100 m or 3 MHz) to be reflected and results in shortwave radio beyond line-of-sight. However, certain ionospheric effects begin to block incoming radiowaves from space, when their frequency is less than about 10 MHz (wavelength longer than about 30 m).
Thermal and electromagnetic radiation as a form of heat
The basic structure of matter involves charged particles bound together. When electromagnetic radiation impinges on matter, it causes the charged particles to oscillate and gain energy. The ultimate fate of this energy depends on the context. It could be immediately re-radiated and appear as scattered, reflected, or transmitted radiation. It may get dissipated into other microscopic motions within the matter, coming to thermal equilibrium and manifesting itself as thermal energy, or even kinetic energy, in the material. With a few exceptions related to high-energy photons (such as fluorescence, harmonic generation, photochemical reactions, the photovoltaic effect for ionizing radiations at far ultraviolet, X-ray, and gamma radiation), absorbed electromagnetic radiation simply deposits its energy by heating the material. This happens for infrared, microwave, and radio wave radiation.
Intense radio waves can thermally burn living tissue and can cook food. In addition to infrared lasers, sufficiently intense visible and ultraviolet lasers can easily set paper afire. Ionizing radiation creates high-speed electrons in a material and breaks chemical bonds, but after these electrons collide many times with other atoms eventually most of the energy becomes thermal energy all in a tiny fraction of a second. This caveat also applies to UV, even though almost all of it is not ionizing, because UV can damage molecules due to electronic excitation, which is far greater per unit energy than heating effects.
Infrared radiation in the spectral distribution of a black body is usually considered a form of heat, since it has an equivalent temperature and is associated with an entropy change per unit of thermal energy. However, "heat" is a technical term in physics and thermodynamics and is often confused with thermal energy. Any type of electromagnetic energy can be transformed into thermal energy in interaction with matter. Thus, any electromagnetic radiation can "heat" (in the sense of increasing the thermal energy, and hence the temperature, of) a material, when it is absorbed. The inverse or time-reversed process of absorption is thermal radiation. Much of the thermal energy in matter consists of random motion of charged particles, and this energy can be radiated away from the matter. The resulting radiation may subsequently be absorbed by another piece of matter, with the deposited energy heating the material.
The electromagnetic radiation in an opaque cavity at thermal equilibrium is effectively a form of thermal energy, having maximum radiation entropy.
Biological effects
Bioelectromagnetics is the study of the interactions and effects of EM radiation on living organisms. The effects of electromagnetic radiation upon living cells, including those in humans, depend upon the radiation's power and frequency. For low-frequency radiation (radio waves to near ultraviolet) the best-understood effects are those due to radiation power alone, acting through heating when radiation is absorbed. For these thermal effects, frequency is important as it affects the intensity of the radiation and penetration into the organism (for example, microwaves penetrate better than infrared). It is widely accepted that low-frequency fields too weak to cause significant heating have no established biological effect. Some research suggests that weaker non-thermal electromagnetic fields (including weak ELF magnetic fields, although the latter do not strictly qualify as EM radiation) and modulated RF and microwave fields can have biological effects, though the significance of this is unclear.
The World Health Organization has classified radio frequency electromagnetic radiation as Group 2B—possibly carcinogenic (IARC classifies Radiofrequency Electromagnetic Fields as possibly carcinogenic to humans, World Health Organization, 31 May 2011). This group contains possible carcinogens such as lead, DDT, and styrene. At higher frequencies (some of visible and beyond), the effects of individual photons begin to become important, as these now have enough energy individually to directly or indirectly damage biological molecules; there is evidence of quantum damage from visible light via reactive oxygen species generated in skin, and this happens also with UVA. With UVB, the damage to DNA becomes direct, with photochemical formation of pyrimidine dimers. All UV frequencies have been classed as Group 1 carcinogens by the World Health Organization. Ultraviolet radiation from sun exposure is the primary cause of skin cancer.
Thus, at UV frequencies and higher, electromagnetic radiation does more damage to biological systems than simple heating predicts. This is most obvious in the "far" (or "extreme") ultraviolet. UV, with X-ray and gamma radiation, are referred to as ionizing radiation due to the ability of photons of this radiation to produce ions and free radicals in materials (including living tissue). Since such radiation can severely damage life at energy levels that produce little heating, it is considered far more dangerous (in terms of damage-produced per unit of energy, or power) than the rest of the electromagnetic spectrum.
Use as a weapon
The heat ray is an application of EMR that makes use of microwave frequencies to create an unpleasant heating effect in the upper layer of the skin. A publicly known heat ray weapon called the Active Denial System was developed by the US military as an experimental weapon to deny the enemy access to an area. A death ray is a theoretical weapon that delivers heat-ray-like electromagnetic energy at levels capable of injuring human tissue. An inventor of a death ray, Harry Grindell Matthews, claimed to have lost sight in his left eye while working on his death ray weapon, based on a microwave magnetron, in the 1920s (a normal microwave oven creates a tissue-damaging cooking effect inside the oven at around 2 kV/m).
Derivation from electromagnetic theory
Electromagnetic waves are predicted by the classical laws of electricity and magnetism, known as Maxwell's equations. There are nontrivial solutions of the homogeneous Maxwell's equations (without charges or currents), describing waves of changing electric and magnetic fields. Beginning with Maxwell's equations in free space:
∇ · E = 0   (1)
∇ × E = −∂B/∂t   (2)
∇ · B = 0   (3)
∇ × B = μ0 ε0 ∂E/∂t   (4)
where
E and B are the electric field (measured in V/m or N/C) and the magnetic field (measured in T or Wb/m²), respectively;
∇ · X and ∇ × X yield the divergence and the curl of a vector field X;
∂B/∂t and ∂E/∂t are partial derivatives (rate of change in time, with location fixed) of the magnetic and electric field;
μ0 is the permeability of a vacuum (4π × 10⁻⁷ H/m), and ε0 is the permittivity of a vacuum (8.85 × 10⁻¹² F/m).
Besides the trivial solution E = B = 0, useful solutions can be derived with the following vector identity, valid for all vectors A in some vector field:
∇ × (∇ × A) = ∇(∇ · A) − ∇²A
Taking the curl of the second Maxwell's equation (2) yields:
∇ × (∇ × E) = ∇ × (−∂B/∂t)   (5)
Evaluating the left hand side of (5) with the above identity and simplifying using (1) yields:
∇ × (∇ × E) = ∇(∇ · E) − ∇²E = −∇²E   (6)
Evaluating the right hand side of (5) by exchanging the sequence of derivatives and inserting the fourth Maxwell's equation (4) yields:
∇ × (−∂B/∂t) = −∂(∇ × B)/∂t = −μ0 ε0 ∂²E/∂t²   (7)
Combining (6) and (7) again gives a vector-valued differential equation for the electric field, solving the homogeneous Maxwell's equations:
∇²E = μ0 ε0 ∂²E/∂t²
Taking the curl of the fourth Maxwell's equation (4) results in a similar differential equation for a magnetic field solving the homogeneous Maxwell's equations:
∇²B = μ0 ε0 ∂²B/∂t²
Both differential equations have the form of the general wave equation for waves propagating with speed c0, where u is a function of time and location which gives the amplitude of the wave at some time at a certain location:
∇²u = (1/c0²) ∂²u/∂t²
This is also written as:
□ u = 0
where □ denotes the so-called d'Alembert operator, which in Cartesian coordinates is given as:
□ = ∇² − (1/c0²) ∂²/∂t² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z² − (1/c0²) ∂²/∂t²
Comparing the terms for the speed of propagation yields, in the case of the electric and magnetic fields:
c0 = 1/√(μ0 ε0) ≈ 2.998 × 10⁸ m/s
This is the speed of light in vacuum. Thus Maxwell's equations connect the vacuum permittivity ε0, the vacuum permeability μ0, and the speed of light, c0, via the above equation. This relationship had been discovered by Wilhelm Eduard Weber and Rudolf Kohlrausch prior to the development of Maxwell's electrodynamics; however, Maxwell was the first to produce a field theory consistent with waves traveling at the speed of light.
These are only two equations versus the original four, so more information pertains to these waves hidden within Maxwell's equations. A generic vector wave for the electric field has the form
E(r, t) = E0 g(k̂ · r − c0 t)
Here, E0 is a constant vector, g is any second-differentiable function, k̂ is a unit vector in the direction of propagation, and r is a position vector. g(k̂ · r − c0 t) is a generic solution to the wave equation. In other words,
∇²g(k̂ · r − c0 t) = (1/c0²) ∂²g(k̂ · r − c0 t)/∂t²
for a generic wave traveling in the k̂ direction.
From the first of Maxwell's equations, we get
∇ · E = (k̂ · E0) g′(k̂ · r − c0 t) = 0
Thus,
E · k̂ = 0
which implies that the electric field is orthogonal to the direction the wave propagates. The second of Maxwell's equations yields the magnetic field, namely,
∇ × E = (k̂ × E0) g′(k̂ · r − c0 t) = −∂B/∂t
Thus,
B = (1/c0) k̂ × E
The remaining equations will be satisfied by this choice of E and B.
The electric and magnetic field waves in the far-field travel at the speed of light. They have a special restricted orientation and proportional magnitudes, |E| = c0 |B|, which can be seen immediately from the Poynting vector. The electric field, magnetic field, and direction of wave propagation are all orthogonal, and the wave propagates in the same direction as E × B. Also, E and B far-fields in free space, which as wave solutions depend primarily on these two Maxwell's equations, remain in phase with each other. This is guaranteed since the generic wave solution is first order in both space and time, and the curl operator on one side of these equations results in first-order spatial derivatives of the wave solution, while the time-derivative on the other side of the equations, which gives the other field, is first-order in time, resulting in the same phase shift for both fields in each mathematical operation.
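A small numerical check (hypothetical field values, not from the source) of the far-field relations just stated: with B = (1/c0) k̂ × E, the fields E and B and the propagation direction are mutually orthogonal and |E| = c0 |B|.

import math

c0 = 299_792_458.0  # speed of light, m/s

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

k_hat = (0.0, 0.0, 1.0)                       # propagation along z
E = (100.0, 0.0, 0.0)                         # hypothetical E amplitude, V/m, along x
B = tuple(x / c0 for x in cross(k_hat, E))    # B = (k_hat x E)/c0, points along y

print(dot(E, k_hat), dot(B, k_hat), dot(E, B))      # all 0: mutually orthogonal
print(math.sqrt(dot(E, E)) / math.sqrt(dot(B, B)))  # ~2.998e8 = c0, i.e. |E| = c0|B|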
From the viewpoint of an electromagnetic wave traveling forward, the electric field might be oscillating up and down, while the magnetic field oscillates right and left. This picture can be rotated with the electric field oscillating right and left and the magnetic field oscillating down and up. This is a different solution that is traveling in the same direction. This arbitrariness in the orientation with respect to propagation direction is known as polarization. On a quantum level, it is described as photon polarization. The direction of the polarization is defined as the direction of the electric field.
More general forms of the second-order wave equations given above are available, allowing for both non-vacuum propagation media and sources. Many competing derivations exist, all with varying levels of approximation and intended applications. One very general example is a form of the electric field equation, which was factorized into a pair of explicitly directional wave equations, and then efficiently reduced into a single uni-directional wave equation by means of a simple slow-evolution approximation.
See also
Antenna measurement
Bioelectromagnetics
Bolometer
CONELRAD
Electromagnetic pulse
Electromagnetic radiation and health
Evanescent wave coupling
Finite-difference time-domain method
Gravitational wave
Helicon
Impedance of free space
Radiation reaction
Health effects of sunlight exposure
Sinusoidal plane-wave solutions of the electromagnetic wave equation
References
Further reading
External links
The Feynman Lectures on Physics Vol. I Ch. 28: Electromagnetic Radiation
Electromagnetic Waves from Maxwell's Equations on Project PHYSNET.
Category:Heinrich Hertz
Category:Radiation
physics | 7,111

9891 | Entropy | https://en.wikipedia.org/wiki/Entropy
Entropy is a scientific concept, most commonly associated with states of disorder, randomness, or uncertainty. The term and the concept are used in diverse fields, from classical thermodynamics, where it was first recognized, to the microscopic description of nature in statistical physics, and to the principles of information theory. It has found far-ranging applications in chemistry and physics, in biological systems and their relation to life, in cosmology, economics, and information systems including the transmission of information in telecommunication.
Entropy is central to the second law of thermodynamics, which states that the entropy of an isolated system left to spontaneous evolution cannot decrease with time. As a result, isolated systems evolve toward thermodynamic equilibrium, where the entropy is highest. "High" entropy means that energy is more disordered or dispersed, while "low" entropy means that energy is more ordered or concentrated. A consequence of the second law of thermodynamics is that certain processes are irreversible.
The thermodynamic concept was referred to by Scottish scientist and engineer William Rankine in 1850 with the names thermodynamic function and heat-potential. In 1865, German physicist Rudolf Clausius, one of the leading founders of the field of thermodynamics, defined it as the quotient of an infinitesimal amount of heat to the instantaneous temperature. He initially described it as transformation-content, in German Verwandlungsinhalt, and later coined the term entropy from a Greek word for transformation.Brush, S.G. (1976). The Kind of Motion We Call Heat: a History of the Kinetic Theory of Gases in the 19th Century, Book 2, Statistical Physics and Irreversible Processes, Elsevier, Amsterdam, , pp. 576–577.
Austrian physicist Ludwig Boltzmann explained entropy as the measure of the number of possible microscopic arrangements or states of individual atoms and molecules of a system that comply with the macroscopic condition of the system. He thereby introduced the concept of statistical disorder and probability distributions into a new field of thermodynamics, called statistical mechanics, and found the link between the microscopic interactions, which fluctuate about an average configuration, to the macroscopically observable behaviour, in form of a simple logarithmic law, with a proportionality constant, the Boltzmann constant, which has become one of the defining universal constants for the modern International System of Units.
History
In his 1803 paper Fundamental Principles of Equilibrium and Movement, the French mathematician Lazare Carnot proposed that in any machine, the accelerations and shocks of the moving parts represent losses of moment of activity; in any natural process there exists an inherent tendency towards the dissipation of useful energy. In 1824, building on that work, Lazare's son, Sadi Carnot, published Reflections on the Motive Power of Fire, which posited that in all heat-engines, whenever "caloric" (what is now known as heat) falls through a temperature difference, work or motive power can be produced from the actions of its fall from a hot to cold body. He used an analogy with how water falls in a water wheel. That was an early insight into the second law of thermodynamics. Carnot based his views of heat partially on the early 18th-century "Newtonian hypothesis" that both heat and light were types of indestructible forms of matter, which are attracted and repelled by other matter, and partially on the contemporary views of Count Rumford, who showed in 1789 that heat could be created by friction, as when cannon bores are machined. Carnot reasoned that if the body of the working substance, such as a body of steam, is returned to its original state at the end of a complete engine cycle, "no change occurs in the condition of the working body".
The first law of thermodynamics, deduced from the heat-friction experiments of James Joule in 1843, expresses the concept of energy and its conservation in all processes; the first law, however, is unsuitable to separately quantify the effects of friction and dissipation.
In the 1850s and 1860s, German physicist Rudolf Clausius objected to the supposition that no change occurs in the working body, and gave that change a mathematical interpretation, by questioning the nature of the inherent loss of usable heat when work is done, e.g., heat produced by friction ("On the Motive Power of Heat, and on the Laws which can be deduced from it for the Theory of Heat", Poggendorff's Annalen der Physik und Chemie). He described his observations as a dissipative use of energy, resulting in a transformation-content (Verwandlungsinhalt in German) of a thermodynamic system or working body of chemical species during a change of state. That was in contrast to earlier views, based on the theories of Isaac Newton, that heat was an indestructible particle that had mass. Clausius discovered that the non-usable energy increases as steam proceeds from inlet to exhaust in a steam engine. From the prefix en-, as in 'energy', and from the Greek word τροπή [tropē], which is translated in an established lexicon as turning or change (Liddell, H. G., Scott, R. (1843/1978). A Greek–English Lexicon, revised and augmented edition, Oxford University Press, Oxford UK, pp. 1826–1827) and that he rendered in German as Verwandlung, a word often translated into English as transformation, in 1865 Clausius coined the name of that property as entropy. The word was adopted into the English language in 1868.
Later, scientists such as Ludwig Boltzmann, Josiah Willard Gibbs, and James Clerk Maxwell gave entropy a statistical basis. In 1877, Boltzmann visualized a probabilistic way to measure the entropy of an ensemble of ideal gas particles, in which he defined entropy as proportional to the natural logarithm of the number of microstates such a gas could occupy. The proportionality constant in this definition, called the Boltzmann constant, has become one of the defining universal constants for the modern International System of Units (SI). Henceforth, the essential problem in statistical thermodynamics has been to determine the distribution of a given amount of energy E over N identical systems. Constantin Carathéodory, a Greek mathematician, linked entropy with a mathematical definition of irreversibility, in terms of trajectories and integrability.
Etymology
In 1865, Clausius named the concept of "the differential of a quantity which depends on the configuration of the system" entropy (Entropie) after the Greek word for 'transformation'. He gave "transformational content" (Verwandlungsinhalt) as a synonym, paralleling his "thermal and ergonal content" (Wärme- und Werkinhalt) as the name of U, but preferring the term entropy as a close parallel of the word energy, as he found the concepts nearly "analogous in their physical significance". This term was formed by replacing the root of ἔργον ('ergon', 'work') by that of τροπή ('tropy', 'transformation').
In more detail, Clausius explained his choice of "entropy" as a name as follows:
I prefer going to the ancient languages for the names of important scientific quantities, so that they may mean the same thing in all living tongues. I propose, therefore, to call S the entropy of a body, after the Greek word "transformation". I have designedly coined the word entropy to be similar to energy, for these two quantities are so analogous in their physical significance, that an analogy of denominations seems to me helpful.
Leon Cooper added that in this way "he succeeded in coining a word that meant the same thing to everybody: nothing".
Definitions and descriptions
The concept of entropy is described by two principal approaches, the macroscopic perspective of classical thermodynamics, and the microscopic description central to statistical mechanics. The classical approach defines entropy in terms of macroscopically measurable physical properties, such as bulk mass, volume, pressure, and temperature. The statistical definition of entropy defines it in terms of the statistics of the motions of the microscopic constituents of a system — modelled at first classically, e.g. Newtonian particles constituting a gas, and later quantum-mechanically (photons, phonons, spins, etc.). The two approaches form a consistent, unified view of the same phenomenon as expressed in the second law of thermodynamics, which has found universal applicability to physical processes.
State variables and functions of state
Many thermodynamic properties are defined by physical variables that define a state of thermodynamic equilibrium, which essentially are state variables. State variables depend only on the equilibrium condition, not on the path evolution to that state. State variables can be functions of state, also called state functions, in a sense that one state variable is a mathematical function of other state variables. Often, if some properties of a system are determined, they are sufficient to determine the state of the system and thus other properties' values. For example, temperature and pressure of a given quantity of gas determine its state, and thus also its volume via the ideal gas law. A system composed of a pure substance of a single phase at a particular uniform temperature and pressure is determined, and is thus a particular state, and has a particular volume. The fact that entropy is a function of state makes it useful. In the Carnot cycle, the working fluid returns to the same state that it had at the start of the cycle, hence the change or line integral of any state function, such as entropy, over this reversible cycle is zero.
Reversible process
The entropy change of a system can be well-defined as a small portion of heat transferred from the surroundings to the system during a reversible process, divided by the temperature of the system during this heat transfer:
dS = δQ_rev / T
The reversible process is quasistatic (i.e., it occurs without any dissipation, deviating only infinitesimally from the thermodynamic equilibrium), and it may conserve total entropy. For example, in the Carnot cycle, while the heat flow from a hot reservoir to a cold reservoir represents the increase in the entropy in a cold reservoir, the work output, if reversibly and perfectly stored, represents the decrease in the entropy which could be used to operate the heat engine in reverse, returning to the initial state; thus the total entropy change may still be zero at all times if the entire process is reversible.
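A minimal numerical sketch (hypothetical values) of the relation above: for heat exchanged reversibly at an effectively fixed temperature, each body's entropy changes by Q/T, and for a fully reversible transfer the changes cancel.

def entropy_change_reversible(heat_j, temperature_k):
    """Entropy change in J/K for heat transferred reversibly at a fixed temperature."""
    return heat_j / temperature_k

# Example: a gas absorbs 1500 J reversibly from a reservoir, both held at 300 K
dS_system    = entropy_change_reversible(+1500.0, 300.0)   # +5.0 J/K gained by the system
dS_reservoir = entropy_change_reversible(-1500.0, 300.0)   # -5.0 J/K lost by the reservoir
print(dS_system, dS_reservoir, dS_system + dS_reservoir)   # total change 0 for the reversible transfer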
In contrast, an irreversible process increases the total entropy of the system and surroundings. Any process that happens quickly enough to deviate from the thermal equilibrium cannot be reversible; the total entropy increases, and the potential for maximum work to be done during the process is lost.
Carnot cycle
The concept of entropy arose from Rudolf Clausius's study of the Carnot cycle, which is a thermodynamic cycle performed by a Carnot heat engine as a reversible heat engine. In a Carnot cycle, heat Q_H is transferred from a hot reservoir to a working gas at the constant temperature T_H during the isothermal expansion stage, and heat Q_C is transferred from the working gas to a cold reservoir at the constant temperature T_C during the isothermal compression stage. According to Carnot's theorem, a heat engine with two thermal reservoirs can produce work if and only if there is a temperature difference between the reservoirs. Originally, Carnot did not distinguish between the heats Q_H and Q_C, as he assumed caloric theory to be valid and hence that the total heat in the system was conserved. But in fact, the magnitude of heat Q_H is greater than the magnitude of heat Q_C. Through the efforts of Clausius and Kelvin, the work W done by a reversible heat engine was found to be the product of the Carnot efficiency (i.e., the efficiency of all reversible heat engines with the same pair of thermal reservoirs) and the heat Q_H absorbed by the working body of the engine during isothermal expansion:
W = (1 − T_C/T_H) Q_H
To derive the Carnot efficiency, Kelvin had to evaluate the ratio of the work output to the heat absorbed during the isothermal expansion with the help of the Carnot–Clapeyron equation, which contained an unknown function called the Carnot function. The possibility that the Carnot function could be the temperature as measured from a zero point of temperature was suggested by Joule in a letter to Kelvin. This allowed Kelvin to establish his absolute temperature scale.
It is known that the work produced by an engine over a cycle equals the net heat absorbed over the cycle. Thus, with the sign convention for heat transferred in a thermodynamic process (Q > 0 for absorption and Q < 0 for dissipation), we get:
W = Q = Q_H + Q_C
Since this equality holds over an entire Carnot cycle, it gave Clausius the hint that at each stage of the cycle the difference between the work and the net heat would be conserved, rather than the net heat itself. This means there exists a state function U with a change of dU = δQ − δW. It is called the internal energy and forms a central concept for the first law of thermodynamics.
Finally, comparison of both representations of the work output in a Carnot cycle gives us:
Q_H/T_H + Q_C/T_C = 0
Similarly to the derivation of internal energy, this equality implies the existence of a state function S with a change of dS = δQ/T which is conserved over an entire cycle. Clausius called this state function entropy.
In addition, the total change of entropy in both thermal reservoirs over a Carnot cycle is zero too, since the inversion of a heat transfer direction means a sign inversion for the heat transferred during the isothermal stages:
ΔS_r,H + ΔS_r,C = −Q_H/T_H − Q_C/T_C = 0
Here we denote the entropy change for a thermal reservoir by ΔS_r,i = −Q_i/T_i, where i is either H for the hot reservoir or C for the cold one.
If we consider a heat engine which is less effective than the Carnot cycle (i.e., the work W produced by this engine is less than the maximum predicted by Carnot's theorem), its work output is capped by the Carnot efficiency as:
W < (1 − T_C/T_H) Q_H
Substitution of the work as the net heat into the inequality above gives us:
Q_H/T_H + Q_C/T_C < 0
or, in terms of the reservoir entropy changes:
ΔS_r,H + ΔS_r,C > 0
A Carnot cycle and entropy as shown above prove to be useful in the study of any classical thermodynamic heat engine: other cycles, such as an Otto, Diesel or Brayton cycle, can be analysed from the same standpoint. Notably, any machine or cyclic process converting heat into work (i.e., a heat engine) that is claimed to produce an efficiency greater than that of Carnot is not viable, as it would violate the second law of thermodynamics.
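A numerical sketch (reservoir temperatures and heat input are assumed values) of the bookkeeping above: the Carnot efficiency 1 − T_C/T_H, zero total reservoir entropy change in the reversible limit, and a positive total for a less efficient engine.

def carnot_efficiency(t_hot_k, t_cold_k):
    """Maximum efficiency of any heat engine operating between two reservoirs."""
    return 1.0 - t_cold_k / t_hot_k

def reservoir_entropy_change(q_hot_in, work_out, t_hot_k, t_cold_k):
    """Total entropy change of the two reservoirs for an engine that absorbs q_hot_in,
    produces work_out, and rejects the remainder to the cold reservoir."""
    q_cold_out = q_hot_in - work_out
    return -q_hot_in / t_hot_k + q_cold_out / t_cold_k

t_h, t_c, q_h = 500.0, 300.0, 1000.0   # kelvin, kelvin, joules (assumed values)
eta_max = carnot_efficiency(t_h, t_c)
print(eta_max)                                                   # 0.4
print(reservoir_entropy_change(q_h, eta_max * q_h, t_h, t_c))    # ~0.0 J/K: reversible limit
print(reservoir_entropy_change(q_h, 0.3 * q_h, t_h, t_c))        # >0 J/K: a less efficient, irreversible engine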
For further analysis of sufficiently discrete systems, such as an assembly of particles, statistical thermodynamics must be used. Additionally, descriptions of devices operating near the limit of de Broglie waves, e.g. photovoltaic cells, have to be consistent with quantum statistics.
Classical thermodynamics
The thermodynamic definition of entropy was developed in the early 1850s by Rudolf Clausius and essentially describes how to measure the entropy of an isolated system in thermodynamic equilibrium with its parts. Clausius created the term entropy as an extensive thermodynamic variable that was shown to be useful in characterizing the Carnot cycle. Heat transfer in the isotherm steps (isothermal expansion and isothermal compression) of the Carnot cycle was found to be proportional to the temperature of a system (known as its absolute temperature). This relationship was expressed in an increment of entropy that is equal to incremental heat transfer divided by temperature. Entropy was found to vary in the thermodynamic cycle but eventually returned to the same value at the end of every cycle. Thus it was found to be a function of state, specifically a thermodynamic state of the system.
While Clausius based his definition on a reversible process, there are also irreversible processes that change entropy. Following the second law of thermodynamics, entropy of an isolated system always increases for irreversible processes. The difference between an isolated system and a closed system is that energy may not flow to and from an isolated system, but energy flow to and from a closed system is possible. Nevertheless, for both closed and isolated systems, and indeed also in open systems, irreversible thermodynamic processes may occur.
According to the Clausius equality, for a reversible cyclic thermodynamic process:
∮ δQ_rev / T = 0
which means the line integral ∫ δQ_rev / T is path-independent. Thus we can define a state function S, called entropy, which satisfies:
dS = δQ_rev / T
Therefore, thermodynamic entropy has the dimension of energy divided by temperature, and the unit joule per kelvin (J/K) in the International System of Units (SI).
To find the entropy difference between any two states of the system, the integral must be evaluated for some reversible path between the initial and final states. Since entropy is a state function, the entropy change of the system for an irreversible path is the same as for a reversible path between the same two states. However, the heat transferred to or from the surroundings is different, as is the entropy change of the surroundings.
We can calculate the change of entropy only by integrating the above formula. To obtain the absolute value of the entropy, we invoke the third law of thermodynamics: a perfect crystal at absolute zero has an entropy $S = 0$.
From a macroscopic perspective, in classical thermodynamics the entropy is interpreted as a state function of a thermodynamic system: that is, a property depending only on the current state of the system, independent of how that state came to be achieved. In any process where the system gives up an amount of energy $\Delta E$ to surroundings at temperature $T_R$, and its entropy falls by $\Delta S$, at least $T_R\,\Delta S$ of that energy must be given up to the system's surroundings as heat. Otherwise, the process cannot go forward. In classical thermodynamics, the entropy of a system is defined if and only if it is in a thermodynamic equilibrium (though a chemical equilibrium is not required: for example, the entropy of a mixture of two moles of hydrogen and one mole of oxygen at standard conditions is well-defined).
Statistical mechanics
The statistical definition was developed by Ludwig Boltzmann in the 1870s by analysing the statistical behaviour of the microscopic components of the system. Boltzmann showed that this definition of entropy was equivalent to the thermodynamic entropy to within a constant factor—known as the Boltzmann constant. In short, the thermodynamic definition of entropy provides the experimental verification of entropy, while the statistical definition of entropy extends the concept, providing an explanation and a deeper understanding of its nature.
The interpretation of entropy in statistical mechanics is the measure of uncertainty, disorder, or mixedupness in the phrase of Gibbs, which remains about a system after its observable macroscopic properties, such as temperature, pressure and volume, have been taken into account. For a given set of macroscopic variables, the entropy measures the degree to which the probability of the system is spread out over different possible microstates. In contrast to the macrostate, which characterizes plainly observable average quantities, a microstate specifies all molecular details about the system including the position and momentum of every molecule. The more such states are available to the system with appreciable probability, the greater the entropy. In statistical mechanics, entropy is a measure of the number of ways a system can be arranged, often taken to be a measure of "disorder" (the higher the entropy, the higher the disorder). This definition describes the entropy as being proportional to the natural logarithm of the number of possible microscopic configurations of the individual atoms and molecules of the system (microstates) that could cause the observed macroscopic state (macrostate) of the system. The constant of proportionality is the Boltzmann constant.
The Boltzmann constant, and therefore entropy, have dimensions of energy divided by temperature, which has a unit of joules per kelvin (J⋅K⁻¹) in the International System of Units (or kg⋅m²⋅s⁻²⋅K⁻¹ in terms of base units). The entropy of a substance is usually given as an intensive property: either entropy per unit mass (SI unit: J⋅K⁻¹⋅kg⁻¹) or entropy per unit amount of substance (SI unit: J⋅K⁻¹⋅mol⁻¹).
Specifically, entropy is a logarithmic measure for a system with a number of states, each state $i$ having a probability $p_i$ of being occupied (usually given by the Boltzmann distribution):
$$S = -k_{\text{B}} \sum_i p_i \ln p_i$$
where $k_{\text{B}}$ is the Boltzmann constant and the summation is performed over all possible microstates of the system. (Frigg, R. and Werndl, C. "Entropy – A Guide for the Perplexed". In Probabilities in Physics; Beisbart C. and Hartmann, S. (eds.); Oxford University Press, Oxford, 2010.)
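As a numerical illustration, the sum above can be evaluated directly. The short sketch below computes the Gibbs entropy of a hypothetical two-level system whose occupation probabilities follow the Boltzmann distribution; the energy gap and temperature used here are arbitrary illustrative values, not taken from the text above.

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_probabilities(energies, temperature):
    """Occupation probabilities p_i = exp(-E_i / (k_B T)) / Z for the given energy levels."""
    weights = [math.exp(-E / (k_B * temperature)) for E in energies]
    Z = sum(weights)  # partition function
    return [w / Z for w in weights]

def gibbs_entropy(probabilities):
    """Entropy S = -k_B * sum_i p_i ln p_i over a discrete set of microstate probabilities."""
    return -k_B * sum(p * math.log(p) for p in probabilities if p > 0.0)

# Illustrative two-level system: energy gap of 1e-21 J at 300 K (assumed values).
p = boltzmann_probabilities([0.0, 1.0e-21], 300.0)
print(f"probabilities: {p}")
print(f"entropy: {gibbs_entropy(p):.3e} J/K")
```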
In case states are defined in a continuous manner, the summation is replaced by an integral over all possible states, or equivalently we can consider the expected value of the logarithm of the probability that a microstate is occupied:
$$S = -k_{\text{B}} \langle \ln p \rangle$$
This definition assumes the basis states to be picked in a way that there is no information on their relative phases. In the general case the expression is:
$$S = -k_{\text{B}} \operatorname{Tr}(\hat{\rho} \ln \hat{\rho})$$
where $\hat{\rho}$ is a density matrix, $\operatorname{Tr}$ is the trace operator and $\ln$ is a matrix logarithm. The density matrix formalism is not required if the system is in thermal equilibrium so long as the basis states are chosen to be eigenstates of the Hamiltonian. For most practical purposes it can be taken as the fundamental definition of entropy since all other formulae for $S$ can be derived from it, but not vice versa.
In what has been called the fundamental postulate in statistical mechanics, among system microstates of the same energy (i.e., degenerate microstates) each microstate is assumed to be populated with equal probability $p_i = 1/\Omega$, where $\Omega$ is the number of microstates whose energy equals that of the system. Usually, this assumption is justified for an isolated system in thermodynamic equilibrium. Then in the case of an isolated system the previous formula reduces to:
$$S = k_{\text{B}} \ln \Omega$$
In thermodynamics, such a system is one with a fixed volume, number of molecules, and internal energy, called a microcanonical ensemble.
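As a simple worked example of the microcanonical formula, consider an idealised collection of $N$ independent two-state units (for instance, non-interacting spins in zero field) in which every configuration has the same energy; the setup is chosen purely for illustration. Counting configurations gives
$$\Omega = 2^N, \qquad S = k_{\text{B}} \ln \Omega = N k_{\text{B}} \ln 2,$$
so the entropy grows linearly with the size of the system, consistent with entropy being an extensive quantity.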
The most general interpretation of entropy is as a measure of the extent of uncertainty about a system. The equilibrium state of a system maximizes the entropy because it does not reflect all information about the initial conditions, except for the conserved variables. This uncertainty is not of the everyday subjective kind, but rather the uncertainty inherent to the experimental method and interpretative model.
The interpretative model has a central role in determining entropy. The qualifier "for a given set of macroscopic variables" above has deep implications when two observers use different sets of macroscopic variables. For example, consider observer A using the variables $U$, $V$ and $W$, and observer B using the variables $U$, $V$, $W$, $X$. If observer B changes variable $X$, then observer A will see a violation of the second law of thermodynamics, since he does not possess information about variable $X$ and its influence on the system. In other words, one must choose a complete set of macroscopic variables to describe the system, i.e. every independent parameter that may change during the experiment.
Entropy can also be defined for any Markov process with reversible dynamics and the detailed balance property.
In Boltzmann's 1896 Lectures on Gas Theory, he showed that this expression gives a measure of entropy for systems of atoms and molecules in the gas phase, thus providing a measure for the entropy of classical thermodynamics.
Entropy of a system
In a thermodynamic system, pressure and temperature tend to become uniform over time because the equilibrium state has higher probability (more possible combinations of microstates) than any other state. As an example, for a glass of ice water in air at room temperature, the difference in temperature between the warm room (the surroundings) and the cold glass of ice and water (the system and not part of the room) decreases as portions of the thermal energy from the warm surroundings spread to the cooler system of ice and water. Over time the temperature of the glass and its contents and the temperature of the room become equal. In other words, the entropy of the room has decreased as some of its energy has been dispersed to the ice and water, of which the entropy has increased.
However, as calculated in the example, the entropy of the system of ice and water has increased more than the entropy of the surrounding room has decreased. In an isolated system such as the room and ice water taken together, the dispersal of energy from warmer to cooler always results in a net increase in entropy. Thus, when the "universe" of the room and ice water system has reached a temperature equilibrium, the entropy change from the initial state is at a maximum. The entropy of the thermodynamic system is a measure of how far the equalisation has progressed.
Thermodynamic entropy is a non-conserved state function that is of great importance in the sciences of physics and chemistry. Historically, the concept of entropy evolved to explain why some processes (permitted by conservation laws) occur spontaneously while their time reversals (also permitted by conservation laws) do not; systems tend to progress in the direction of increasing entropy. For isolated systems, entropy never decreases. This fact has several important consequences in science: first, it prohibits "perpetual motion" machines; and second, it implies the arrow of entropy has the same direction as the arrow of time. Increases in the total entropy of system and surroundings correspond to irreversible changes, because some energy is expended as waste heat, limiting the amount of work a system can do.
Unlike many other functions of state, entropy cannot be directly observed but must be calculated. Absolute standard molar entropy of a substance can be calculated from the measured temperature dependence of its heat capacity. The molar entropy of ions is obtained as a difference in entropy from a reference state defined as zero entropy. The second law of thermodynamics states that the entropy of an isolated system must increase or remain constant. Therefore, entropy is not a conserved quantity: for example, in an isolated system with non-uniform temperature, heat might irreversibly flow and the temperature become more uniform such that entropy increases. Chemical reactions cause changes in entropy, and system entropy, in conjunction with enthalpy, plays an important role in determining in which direction a chemical reaction spontaneously proceeds.
Rice University's definition of entropy is that it is "a measurement of a system's disorder and its inability to do work in a system". For instance, a substance at uniform temperature is at maximum entropy and cannot drive a heat engine. A substance at non-uniform temperature is at a lower entropy (than if the heat distribution is allowed to even out) and some of the thermal energy can drive a heat engine.
A special case of entropy increase, the entropy of mixing, occurs when two or more different substances are mixed. If the substances are at the same temperature and pressure, there is no net exchange of heat or work – the entropy change is entirely due to the mixing of the different substances. At a statistical mechanical level, this results due to the change in available volume per particle with mixing.
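For ideal gases mixed at the same temperature and pressure, this statistical picture leads to the ideal entropy of mixing, $\Delta S_{\text{mix}} = -n_{\text{total}} R \sum_i x_i \ln x_i$, where $x_i$ are the mole fractions. A minimal numerical sketch, with arbitrary illustrative amounts, is:

```python
import math

R = 8.314462618  # ideal gas constant, J/(mol*K)

def entropy_of_mixing(moles):
    """Ideal entropy of mixing, dS = -n_total * R * sum_i x_i ln x_i, for different
    ideal gases combined at the same temperature and pressure."""
    n_total = sum(moles)
    return -n_total * R * sum((n / n_total) * math.log(n / n_total) for n in moles if n > 0)

# Example: mixing one mole each of two different ideal gases.
print(f"dS_mix = {entropy_of_mixing([1.0, 1.0]):.2f} J/K")  # about +11.53 J/K (= 2R ln 2)
```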
Equivalence of definitions
Proofs of equivalence between the entropy in statistical mechanics (the Gibbs entropy formula)
$$S = -k_{\text{B}} \sum_i p_i \ln p_i$$
and the entropy in classical thermodynamics
$$dS = \frac{\delta Q_{\text{rev}}}{T}$$
together with the fundamental thermodynamic relation are known for the microcanonical ensemble, the canonical ensemble, the grand canonical ensemble, and the isothermal–isobaric ensemble. These proofs are based on the probability density of microstates of the generalised Boltzmann distribution and the identification of the thermodynamic internal energy as the ensemble average $U = \langle E \rangle$. Thermodynamic relations are then employed to derive the well-known Gibbs entropy formula. However, the equivalence between the Gibbs entropy formula and the thermodynamic definition of entropy is not a fundamental thermodynamic relation but rather a consequence of the form of the generalized Boltzmann distribution.
Furthermore, it has been shown that the definition of entropy in statistical mechanics is the only entropy that is equivalent to the classical thermodynamic entropy under a natural set of postulates.
Second law of thermodynamics
The second law of thermodynamics requires that, in general, the total entropy of any system does not decrease other than by increasing the entropy of some other system. Hence, in a system isolated from its environment, the entropy of that system tends not to decrease. It follows that heat cannot flow from a colder body to a hotter body without the application of work to the colder body. Secondly, it is impossible for any device operating on a cycle to produce net work from a single temperature reservoir; the production of net work requires flow of heat from a hotter reservoir to a colder reservoir, or a single expanding reservoir undergoing adiabatic cooling, which performs adiabatic work. As a result, there is no possibility of a perpetual motion machine. It follows that a reduction in the increase of entropy in a specified process, such as a chemical reaction, means that it is energetically more efficient.
It follows from the second law of thermodynamics that the entropy of a system that is not isolated may decrease. An air conditioner, for example, may cool the air in a room, thus reducing the entropy of the air of that system. The heat expelled from the room (the system), which the air conditioner transports and discharges to the outside air, always makes a bigger contribution to the entropy of the environment than the decrease of the entropy of the air of that system. Thus, the total of entropy of the room plus the entropy of the environment increases, in agreement with the second law of thermodynamics.
In mechanics, the second law in conjunction with the fundamental thermodynamic relation places limits on a system's ability to do useful work. The entropy change of a system at temperature $T$ absorbing an infinitesimal amount of heat $\delta q$ in a reversible way is given by $\delta q / T$. More explicitly, an energy $T_R S$ is not available to do useful work, where $T_R$ is the temperature of the coldest accessible reservoir or heat sink external to the system. For further discussion, see Exergy.
Statistical mechanics demonstrates that entropy is governed by probability, thus allowing for a decrease in disorder even in an isolated system. Although this is possible, such an event has a small probability of occurring, making it unlikely.
The applicability of the second law of thermodynamics is limited to systems in or sufficiently near an equilibrium state, so that they have a defined entropy. Some inhomogeneous systems out of thermodynamic equilibrium still satisfy the hypothesis of local thermodynamic equilibrium, so that the entropy density is locally defined as an intensive quantity. For such systems, a principle of maximum time rate of entropy production may apply. It states that such a system may evolve to a steady state that maximises its time rate of entropy production. This does not mean that such a system is necessarily always in a condition of maximum time rate of entropy production; it means that it may evolve to such a steady state.
Applications
The fundamental thermodynamic relation
The entropy of a system depends on its internal energy and its external parameters, such as its volume. In the thermodynamic limit, this fact leads to an equation relating the change in the internal energy $U$ to changes in the entropy and the external parameters. This relation is known as the fundamental thermodynamic relation. If external pressure $p$ bears on the volume $V$ as the only external parameter, this relation is:
$$dU = T\,dS - p\,dV$$
Since both internal energy and entropy are monotonic functions of temperature $T$, implying that the internal energy is fixed when one specifies the entropy and the volume, this relation is valid even if the change from one state of thermal equilibrium to another with infinitesimally larger entropy and volume happens in a non-quasistatic way (so during this change the system may be very far out of thermal equilibrium and then the whole-system entropy, pressure, and temperature may not exist).
The fundamental thermodynamic relation implies many thermodynamic identities that are valid in general, independent of the microscopic details of the system. Important examples are the Maxwell relations and the relations between heat capacities.
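For example, applying the equality of mixed second derivatives of the internal energy to the relation above, with $T = (\partial U/\partial S)_V$ and $p = -(\partial U/\partial V)_S$, yields one of the Maxwell relations:
$$\left(\frac{\partial T}{\partial V}\right)_{S} = -\left(\frac{\partial p}{\partial S}\right)_{V}$$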
Entropy in chemical thermodynamics
Thermodynamic entropy is central in chemical thermodynamics, enabling changes to be quantified and the outcome of reactions predicted. The second law of thermodynamics states that entropy in an isolated system (the combination of a subsystem under study and its surroundings) increases during all spontaneous chemical and physical processes. The Clausius equation, $\delta q_{\text{rev}}/T = \Delta S$, introduces the measurement of entropy change, which describes the direction and quantifies the magnitude of simple changes such as heat transfer between systems, always spontaneously from the hotter body to the cooler one.
Thermodynamic entropy is an extensive property, meaning that it scales with the size or extent of a system. In many processes it is useful to specify the entropy as an intensive property independent of the size, as a specific entropy characteristic of the type of system studied. Specific entropy may be expressed relative to a unit of mass, typically the kilogram (unit: J⋅kg⁻¹⋅K⁻¹). Alternatively, in chemistry, it is also referred to one mole of substance, in which case it is called the molar entropy with a unit of J⋅mol⁻¹⋅K⁻¹.
Thus, when one mole of substance at about 0 K is warmed by its surroundings to 298 K, the sum of the incremental values of $q_{\text{rev}}/T$ constitutes each element's or compound's standard molar entropy, an indicator of the amount of energy stored by a substance at 298 K. Entropy change also measures the mixing of substances as a summation of their relative quantities in the final mixture.
Entropy is equally essential in predicting the extent and direction of complex chemical reactions. For such applications, $\Delta S$ must be incorporated in an expression that includes both the system and its surroundings:
$$\Delta S_{\text{universe}} = \Delta S_{\text{surroundings}} + \Delta S_{\text{system}}$$
Via additional steps this expression becomes the equation for the Gibbs free energy change for reactants and products in the system at constant pressure and temperature $T$:
$$\Delta G = \Delta H - T\,\Delta S$$
where $\Delta H$ is the enthalpy change and $\Delta S$ is the entropy change.
ΔH | ΔS | Spontaneity              | Example
+  | +  | Spontaneous at high T    | Ice melting
–  | –  | Spontaneous at low T     | Water freezing
–  | +  | Spontaneous at all T     | Propane combustion
+  | –  | Non-spontaneous at all T | Ozone formation
The spontaneity of a chemical or physical process is governed by the Gibbs free energy change (ΔG), as defined by the equation ΔG = ΔH − TΔS, where ΔH represents the enthalpy change, ΔS the entropy change, and T the temperature in Kelvin. A negative ΔG indicates a thermodynamically favorable (spontaneous) process, while a positive ΔG denotes a non-spontaneous one. When both ΔH and ΔS are positive (endothermic, entropy-increasing), the reaction becomes spontaneous at sufficiently high temperatures, as the TΔS term dominates. Conversely, if both ΔH and ΔS are negative (exothermic, entropy-decreasing), spontaneity occurs only at low temperatures, where the enthalpy term prevails. Reactions with ΔH < 0 and ΔS > 0 (exothermic and entropy-increasing) are spontaneous at all temperatures, while those with ΔH > 0 and ΔS < 0 (endothermic and entropy-decreasing) are non-spontaneous regardless of temperature. These principles underscore the interplay between energy exchange, disorder, and temperature in determining the direction of natural processes, from phase transitions to biochemical reactions.
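A short numerical sketch makes the temperature dependence concrete for the melting of ice, using approximate literature values of $\Delta H \approx +6.01$ kJ/mol and $\Delta S \approx +22.0$ J/(mol·K); the figures are illustrative rather than taken from the table above.

```python
def gibbs_free_energy_change(delta_H, delta_S, T):
    """Gibbs free energy change dG = dH - T*dS (J/mol); a negative value means spontaneous."""
    return delta_H - T * delta_S

# Approximate values for the melting of ice (assumed from standard tables).
dH = 6010.0  # J/mol, endothermic
dS = 22.0    # J/(mol*K), entropy-increasing

for T in (250.0, 273.0, 300.0):  # temperatures in kelvin
    dG = gibbs_free_energy_change(dH, dS, T)
    verdict = "spontaneous" if dG < 0 else "non-spontaneous"
    print(f"T = {T:5.1f} K  ->  dG = {dG:+8.1f} J/mol  ({verdict})")
```

The sign of ΔG flips near 273 K, which is the familiar melting point emerging from the competition between the ΔH and TΔS terms.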
World's technological capacity to store and communicate entropic information
A 2011 study in Science estimated the world's technological capacity to store and communicate optimally compressed information, normalised on the most effective compression algorithms available in 2007, therefore estimating the entropy of the technologically available sources. The authors estimate that humankind's technological capacity to store information grew from 2.6 (entropically compressed) exabytes in 1986 to 295 (entropically compressed) exabytes in 2007. The world's technological capacity to receive information through one-way broadcast networks grew from 432 exabytes of (entropically compressed) information in 1986 to 1.9 zettabytes in 2007. The world's effective capacity to exchange information through two-way telecommunication networks grew from 281 petabytes of (entropically compressed) information in 1986 to 65 (entropically compressed) exabytes in 2007.
Entropy balance equation for open systems
In chemical engineering, the principles of thermodynamics are commonly applied to "open systems", i.e. those in which heat, work, and mass flow across the system boundary. In general, a flow of heat $\dot{Q}$, a flow of shaft work $\dot{W}_{\text{S}}$ and pressure–volume work across the system boundaries cause changes in the entropy of the system. Heat transfer entails entropy transfer $\dot{Q}/T$, where $T$ is the absolute thermodynamic temperature of the system at the point of the heat flow. If there are mass flows across the system boundaries, they also influence the total entropy of the system. This account, in terms of heat and work, is valid only for cases in which the work and heat transfers are by paths physically distinct from the paths of entry and exit of matter from the system.
To derive a generalised entropy balance equation, we start with the general balance equation for the change in any extensive quantity $\theta$ in a thermodynamic system, a quantity that may be either conserved, such as energy, or non-conserved, such as entropy. The basic generic balance expression states that $d\theta/dt$, i.e. the rate of change of $\theta$ in the system, equals the rate at which $\theta$ enters the system at the boundaries, minus the rate at which $\theta$ leaves the system across the system boundaries, plus the rate at which $\theta$ is generated within the system. For an open thermodynamic system in which heat and work are transferred by paths separate from the paths for transfer of matter, using this generic balance equation, with respect to the rate of change with time of the extensive quantity entropy $S$, the entropy balance equation is:
$$\frac{dS}{dt} = \sum_{k=1}^{K} \dot{M}_k \hat{S}_k + \frac{\dot{Q}}{T} + \dot{S}_{\text{gen}}$$
where the overdots represent derivatives of the quantities with respect to time, $\sum_{k=1}^{K} \dot{M}_k \hat{S}_k$ is the net rate of entropy flow due to the flows of mass into and out of the system (with $\hat{S}_k$ the entropy per unit mass), $\dot{Q}/T$ is the rate of entropy flow due to the flow of heat across the system boundary, and $\dot{S}_{\text{gen}}$ is the rate of entropy generation within the system, e.g. by chemical reactions, phase transitions, internal heat transfer or frictional effects such as viscosity.
In case of multiple heat flows, the term $\dot{Q}/T$ is replaced by $\sum_j \dot{Q}_j/T_j$, where $\dot{Q}_j$ is the heat flow through the $j$-th port into the system and $T_j$ is the temperature at the $j$-th port.
The nomenclature "entropy balance" is misleading and often deemed inappropriate because entropy is not a conserved quantity. In other words, the term is never a known quantity but always a derived one based on the expression above. Therefore, the open system version of the second law is more appropriately described as the "entropy generation equation" since it specifies that:with zero for reversible process and positive values for irreversible one.
Entropy change formulas for simple processes
For certain simple transformations in systems of constant composition, the entropy changes are given by simple formulas.
Isothermal expansion or compression of an ideal gas
For the expansion (or compression) of an ideal gas from an initial volume $V_0$ and pressure $P_0$ to a final volume $V$ and pressure $P$ at any constant temperature, the change in entropy is given by:
$$\Delta S = nR\ln\frac{V}{V_0} = -nR\ln\frac{P}{P_0}$$
Here $n$ is the amount of gas (in moles) and $R$ is the ideal gas constant. These equations also apply for expansion into a finite vacuum or a throttling process, where the temperature, internal energy and enthalpy for an ideal gas remain constant.
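As a worked example, doubling the volume of one mole of an ideal gas at constant temperature gives $\Delta S = R \ln 2 \approx 5.76$ J/K, as the short sketch below confirms.

```python
import math

R = 8.314462618  # ideal gas constant, J/(mol*K)

def isothermal_entropy_change(n, V_initial, V_final):
    """Entropy change dS = n R ln(V_final / V_initial) for an ideal gas at constant temperature."""
    return n * R * math.log(V_final / V_initial)

# One mole of an ideal gas doubling its volume isothermally.
print(f"dS = {isothermal_entropy_change(1.0, 1.0, 2.0):.2f} J/K")  # about +5.76 J/K (= R ln 2)
```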
Cooling and heating
For pure heating or cooling of any system (gas, liquid or solid) at constant pressure from an initial temperature $T_0$ to a final temperature $T$, the entropy change is:
$$\Delta S = n C_P \ln\frac{T}{T_0}$$
provided that the constant-pressure molar heat capacity (or specific heat) $C_P$ is constant and that no phase transition occurs in this temperature interval.
Similarly at constant volume, the entropy change is:
$$\Delta S = n C_V \ln\frac{T}{T_0}$$
where the constant-volume molar heat capacity $C_V$ is constant and there is no phase change.
At low temperatures near absolute zero, heat capacities of solids quickly drop off to near zero, so the assumption of constant heat capacity does not apply.
Since entropy is a state function, the entropy change of any process in which temperature and volume both vary is the same as for a path divided into two steps – heating at constant volume and expansion at constant temperature. For an ideal gas, the total entropy change is:
$$\Delta S = n C_V \ln\frac{T}{T_0} + nR\ln\frac{V}{V_0}$$
Similarly, if the temperature and pressure of an ideal gas both vary:
$$\Delta S = n C_P \ln\frac{T}{T_0} - nR\ln\frac{P}{P_0}$$
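A brief sketch of the two-step calculation, for a monatomic ideal gas (for which $C_V = \tfrac{3}{2}R$) heated from 300 K to 600 K while its volume doubles; the amounts and temperatures are arbitrary illustrative choices.

```python
import math

R = 8.314462618  # ideal gas constant, J/(mol*K)

def entropy_change_T_and_V(n, C_V, T0, T, V0, V):
    """dS = n C_V ln(T/T0) + n R ln(V/V0) for an ideal gas with constant C_V."""
    return n * C_V * math.log(T / T0) + n * R * math.log(V / V0)

# One mole of a monatomic ideal gas (C_V = 3R/2), heated from 300 K to 600 K
# while its volume doubles; both steps raise the entropy.
dS = entropy_change_T_and_V(1.0, 1.5 * R, 300.0, 600.0, 1.0, 2.0)
print(f"dS = {dS:.2f} J/K")  # about +14.41 J/K
```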
Phase transitions
Reversible phase transitions occur at constant temperature and pressure. The reversible heat is the enthalpy change for the transition, and the entropy change is the enthalpy change divided by the thermodynamic temperature. For fusion (i.e., melting) of a solid to a liquid at the melting point $T_{\text{m}}$, the entropy of fusion is:
$$\Delta S_{\text{fus}} = \frac{\Delta H_{\text{fus}}}{T_{\text{m}}}$$
Similarly, for vaporisation of a liquid to a gas at the boiling point $T_{\text{b}}$, the entropy of vaporisation is:
$$\Delta S_{\text{vap}} = \frac{\Delta H_{\text{vap}}}{T_{\text{b}}}$$
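As a worked example, using approximate literature values for water ($\Delta H_{\text{fus}} \approx 6.01$ kJ/mol at $T_{\text{m}} = 273.15$ K and $\Delta H_{\text{vap}} \approx 40.7$ kJ/mol at $T_{\text{b}} = 373.15$ K):
$$\Delta S_{\text{fus}} \approx \frac{6010\ \text{J/mol}}{273.15\ \text{K}} \approx 22\ \text{J/(mol·K)}, \qquad \Delta S_{\text{vap}} \approx \frac{40700\ \text{J/mol}}{373.15\ \text{K}} \approx 109\ \text{J/(mol·K)}$$
The much larger entropy of vaporisation reflects the far greater dispersal of matter and energy in the gas phase.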
Approaches to understanding entropy
As a fundamental aspect of thermodynamics and physics, several different approaches to entropy beyond that of Clausius and Boltzmann are valid.
Standard textbook definitions
The following is a list of additional definitions of entropy from a collection of textbooks:
a measure of energy dispersal at a specific temperature.
a measure of disorder in the universe or of the availability of the energy in a system to do work.
a measure of a system's thermal energy per unit temperature that is unavailable for doing useful work.
In Boltzmann's analysis in terms of constituent particles, entropy is a measure of the number of possible microscopic states (or microstates) of a system in thermodynamic equilibrium.
Order and disorder
Entropy is often loosely associated with the amount of order or disorder, or of chaos, in a thermodynamic system. The traditional qualitative description of entropy is that it refers to changes in the state of the system and is a measure of "molecular disorder" and the amount of wasted energy in a dynamical energy transformation from one state or form to another. In this direction, several recent authors have derived exact entropy formulas to account for and measure disorder and order in atomic and molecular assemblies. One of the simpler entropy order/disorder formulas is that derived in 1984 by thermodynamic physicist Peter Landsberg, based on a combination of thermodynamics and information theory arguments. He argues that when constraints operate on a system, such that it is prevented from entering one or more of its possible or permitted states, as contrasted with its forbidden states, the measures of the total amount of "disorder" and of "order" in the system are given by:
$$\text{Disorder} = \frac{C_{\text{D}}}{C_{\text{I}}}, \qquad \text{Order} = 1 - \frac{C_{\text{O}}}{C_{\text{I}}}$$
Here, is the "disorder" capacity of the system, which is the entropy of the parts contained in the permitted ensemble, is the "information" capacity of the system, an expression similar to Shannon's channel capacity, and is the "order" capacity of the system.
Energy dispersal
The concept of entropy can be described qualitatively as a measure of energy dispersal at a specific temperature. Similar terms have been in use from early in the history of classical thermodynamics, and with the development of statistical thermodynamics and quantum theory, entropy changes have been described in terms of the mixing or "spreading" of the total energy of each constituent of a system over its particular quantised energy levels.
Ambiguities in the terms disorder and chaos, which usually have meanings directly opposed to equilibrium, contribute to widespread confusion and hamper comprehension of entropy for most students. As the second law of thermodynamics shows, in an isolated system internal portions at different temperatures tend to adjust to a single uniform temperature and thus produce equilibrium. A recently developed educational approach avoids ambiguous terms and describes such spreading out of energy as dispersal, which leads to loss of the differentials required for work even though the total energy remains constant in accordance with the first law of thermodynamics (compare discussion in next section). Physical chemist Peter Atkins, in his textbook Physical Chemistry, introduces entropy with the statement that "spontaneous changes are always accompanied by a dispersal of energy or matter and often both".
Relating entropy to energy usefulness
It is possible (in a thermal context) to regard lower entropy as a measure of the effectiveness or usefulness of a particular quantity of energy. Energy supplied at a higher temperature (i.e. with low entropy) tends to be more useful than the same amount of energy available at a lower temperature. Mixing a hot parcel of a fluid with a cold one produces a parcel of intermediate temperature, in which the overall increase in entropy represents a "loss" that can never be replaced.
As the entropy of the universe is steadily increasing, its total energy is becoming less useful. Eventually, this is theorised to lead to the heat death of the universe.
Entropy and adiabatic accessibility
A definition of entropy based entirely on the relation of adiabatic accessibility between equilibrium states was given by E. H. Lieb and J. Yngvason in 1999. This approach has several predecessors, including the pioneering work of Constantin Carathéodory from 1909 and the monograph by R. Giles. An equivalent approach that extends the operational definition of entropy to the entire nonequilibrium domain was derived from a rigorous formulation of the general axiomatic foundations of thermodynamics by J. H. Keenan, G. N. Hatsopoulos, E. P. Gyftopoulos, G. P. Beretta, and E. Zanchini between 1965 and 2014. In the setting of Lieb and Yngvason, one starts by picking, for a unit amount of the substance under consideration, two reference states and such that the latter is adiabatically accessible from the former but not conversely. Defining the entropies of the reference states to be 0 and 1 respectively, the entropy of a state is defined as the largest number such that is adiabatically accessible from a composite state consisting of an amount in the state and a complementary amount, , in the state . A simple but important result within this setting is that entropy is uniquely determined, apart from a choice of unit and an additive constant for each chemical element, by the following properties: it is monotonic with respect to the relation of adiabatic accessibility, additive on composite systems, and extensive under scaling.
Entropy in quantum mechanics
In quantum statistical mechanics, the concept of entropy was developed by John von Neumann and is generally referred to as "von Neumann entropy":
$$S = -k_{\text{B}}\operatorname{Tr}(\hat{\rho}\ln\hat{\rho})$$
where $\hat{\rho}$ is the density matrix, $\operatorname{Tr}$ is the trace operator and $k_{\text{B}}$ is the Boltzmann constant.
This upholds the correspondence principle, because in the classical limit, when the phases between the basis states are purely random, this expression is equivalent to the familiar classical definition of entropy for states with classical probabilities $p_i$:
$$S = -k_{\text{B}}\sum_i p_i \ln p_i$$
i.e. in such a basis the density matrix is diagonal.
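Numerically, the von Neumann entropy is most easily evaluated from the eigenvalues of the density matrix. The following sketch (with the Boltzmann constant set to one, so that entropy is in natural units) illustrates the two limiting qubit cases.

```python
import numpy as np

def von_neumann_entropy(rho, k_B=1.0):
    """Von Neumann entropy S = -k_B Tr(rho ln rho), computed from the eigenvalues of the
    (Hermitian) density matrix; k_B defaults to 1 so the result is in natural units."""
    eigenvalues = np.linalg.eigvalsh(rho)
    eigenvalues = eigenvalues[eigenvalues > 1e-12]  # drop numerically zero populations
    return float(-k_B * np.sum(eigenvalues * np.log(eigenvalues)))

pure = np.array([[1.0, 0.0], [0.0, 0.0]])   # a pure state
mixed = np.array([[0.5, 0.0], [0.0, 0.5]])  # the maximally mixed qubit state
print(von_neumann_entropy(pure))   # ~0
print(von_neumann_entropy(mixed))  # about 0.693 = ln 2
```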
Von Neumann established a rigorous mathematical framework for quantum mechanics with his work Mathematische Grundlagen der Quantenmechanik (Mathematical Foundations of Quantum Mechanics). He provided in this work a theory of measurement, where the usual notion of wave function collapse is described as an irreversible process (the so-called von Neumann or projective measurement). Using this concept, in conjunction with the density matrix, he extended the classical concept of entropy into the quantum domain.
Information theory
When viewed in terms of information theory, the entropy state function is the amount of information in the system that is needed to fully specify the microstate of the system. Entropy is the measure of the amount of missing information before reception. Often called Shannon entropy, it was originally devised by Claude Shannon in 1948 to study the size of information of a transmitted message. The definition of information entropy is expressed in terms of a discrete set of probabilities $p_i$ so that:
$$H(X) = -\sum_{i} p_i \log p_i$$
where the base of the logarithm determines the units (for example, the binary logarithm corresponds to bits).
In the case of transmitted messages, these probabilities were the probabilities that a particular message was actually transmitted, and the entropy of the message system was a measure of the average size of information of a message. For the case of equal probabilities (i.e. each message is equally probable), the Shannon entropy (in bits) is just the number of binary questions needed to determine the content of the message.
Most researchers consider information entropy and thermodynamic entropy directly linked to the same concept, while others argue that they are distinct. Both expressions are mathematically similar. If $W$ is the number of microstates that can yield a given macrostate, and each microstate has the same a priori probability, then that probability is $p = 1/W$. The Shannon entropy (in nats) is:
$$H = -\sum_{i=1}^{W} p \ln p = \ln W$$
and if entropy is measured in units of $k_{\text{B}}$ per nat, then the entropy is given by:
$$H = k_{\text{B}} \ln W$$
which is the Boltzmann entropy formula, where $k_{\text{B}}$ is the Boltzmann constant, which may be interpreted as the thermodynamic entropy per nat. Some authors argue for dropping the word entropy for the $H$ function of information theory and using Shannon's other term, "uncertainty", instead. (Schneider, Tom, DELILA system (Deoxyribonucleic acid Library Language), Information Theory Analysis of binding sites, Laboratory of Mathematical Biology, National Cancer Institute, Frederick, MD.)
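A minimal sketch of the Shannon entropy of a discrete message source, in bits, illustrates the "binary questions" interpretation mentioned above.

```python
import math

def shannon_entropy_bits(probabilities):
    """Shannon entropy H = -sum_i p_i log2 p_i of a discrete probability distribution, in bits."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0.0)

# Four equally likely messages need exactly two yes/no questions (2 bits) on average.
print(shannon_entropy_bits([0.25, 0.25, 0.25, 0.25]))  # 2.0
# A biased source carries less information per message.
print(shannon_entropy_bits([0.7, 0.1, 0.1, 0.1]))      # about 1.36
```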
Measurement
The entropy of a substance can be measured, although in an indirect way. The measurement, known as entropymetry, is done on a closed system with a constant number of particles $N$ and a constant volume $V$, and it uses the definition of temperature in terms of entropy, while limiting energy exchange to heat:
$$T := \left(\frac{\partial U}{\partial S}\right)_{V,N} \quad\Rightarrow\quad dS = \frac{dQ}{T}$$
The resulting relation describes how the entropy changes ($dS$) when a small amount of energy ($dQ$) is introduced into the system at a certain temperature $T$.
The process of measurement goes as follows. First, a sample of the substance is cooled as close to absolute zero as possible. At such temperatures, the entropy approaches zero, due to the definition of temperature. Then, small amounts of heat are introduced into the sample and the change in temperature is recorded, until the temperature reaches a desired value (usually 25 °C). The obtained data allow the user to integrate the equation above, yielding the absolute value of entropy of the substance at the final temperature. This value of entropy is called calorimetric entropy.
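A sketch of this integration with a made-up heat capacity curve is shown below; an actual measurement would use tabulated experimental $C_p$ data and add the entropies of any phase transitions separately.

```python
import numpy as np

def calorimetric_entropy(T_grid, C_p):
    """Absolute entropy S(T_final) as the integral of C_p(T)/T dT from near absolute zero,
    evaluated with the trapezoidal rule on (T, C_p) data."""
    integrand = C_p / T_grid
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(T_grid)))

# Hypothetical solid: Debye-like C_p ~ T^3 at low T, levelling off near 3R at high T
# (an illustrative model only, not data for any real substance).
T = np.linspace(1.0, 298.15, 2000)           # K, starting just above absolute zero
C_p = 3 * 8.314 * T**3 / (T**3 + 150.0**3)   # J/(mol*K)
print(f"S(298 K) ~ {calorimetric_entropy(T, C_p):.1f} J/(mol*K)")
```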
Interdisciplinary applications
Although the concept of entropy was originally a thermodynamic concept, it has been adapted in other fields of study, including information theory, psychodynamics, thermoeconomics/ecological economics, and evolution.
Philosophy and theoretical physics
Entropy is the only quantity in the physical sciences that seems to imply a particular direction of progress, sometimes called an arrow of time. As time progresses, the second law of thermodynamics states that the entropy of an isolated system never decreases in large systems over significant periods of time. Hence, from this perspective, entropy measurement is thought of as a clock in these conditions. Since the 19th century, a number of philosophers have drawn upon the concept of entropy to develop novel metaphysical and ethical systems. Examples of this work can be found in the thought of Friedrich Nietzsche and Philipp Mainländer, Claude Lévi-Strauss, Isabelle Stengers, Shannon Mussett, and Drew M. Dalton.
Biology
Chiavazzo et al. proposed that where cave spiders choose to lay their eggs can be explained through entropy minimisation.
Entropy has been proven useful in the analysis of base pair sequences in DNA. Many entropy-based measures have been shown to distinguish between different structural regions of the genome, differentiate between coding and non-coding regions of DNA, and can also be applied for the recreation of evolutionary trees by determining the evolutionary distance between different species.
Cosmology
Assuming that a finite universe is an isolated system, the second law of thermodynamics states that its total entropy is continually increasing. It has been speculated, since the 19th century, that the universe is fated to a heat death in which all the energy ends up as a homogeneous distribution of thermal energy so that no more work can be extracted from any source.
If the universe can be considered to have generally increasing entropy, then – as Roger Penrose has pointed out – gravity plays an important role in the increase because gravity causes dispersed matter to accumulate into stars, which collapse eventually into black holes. The entropy of a black hole is proportional to the surface area of the black hole's event horizon. Jacob Bekenstein and Stephen Hawking have shown that black holes have the maximum possible entropy of any object of equal size. This makes them likely end points of all entropy-increasing processes, if they are totally effective matter and energy traps. However, the escape of energy from black holes might be possible due to quantum activity (see Hawking radiation).
The role of entropy in cosmology remains a controversial subject since the time of Ludwig Boltzmann. Recent work has cast some doubt on the heat death hypothesis and the applicability of any simple thermodynamic model to the universe in general. Although entropy does increase in the model of an expanding universe, the maximum possible entropy rises much more rapidly, moving the universe further from the heat death with time, not closer. This results in an "entropy gap" pushing the system further away from the posited heat death equilibrium. Other complicating factors, such as the energy density of the vacuum and macroscopic quantum effects, are difficult to reconcile with thermodynamical models, making any predictions of large-scale thermodynamics extremely difficult.
Current theories suggest the entropy gap to have been originally opened up by the early rapid exponential expansion of the universe.
Economics
Romanian American economist Nicholas Georgescu-Roegen, a progenitor in economics and a paradigm founder of ecological economics, made extensive use of the entropy concept in his magnum opus on The Entropy Law and the Economic Process. Due to Georgescu-Roegen's work, the laws of thermodynamics form an integral part of the ecological economics school. Although his work was blemished somewhat by mistakes, a full chapter on the economics of Georgescu-Roegen has approvingly been included in one elementary physics textbook on the historical development of thermodynamics.
In economics, Georgescu-Roegen's work has generated the term 'entropy pessimism'. Since the 1990s, leading ecological economist and steady-state theorist Herman Daly – a student of Georgescu-Roegen – has been the economics profession's most influential proponent of the entropy pessimism position.
See also
Boltzmann entropy
Brownian ratchet
Configuration entropy
Conformational entropy
Entropic explosion
Entropic force
Entropic value at risk
Entropy and life
Entropy unit
Free entropy
Harmonic entropy
Info-metrics
Negentropy (negative entropy)
Phase space
Principle of maximum entropy
Residual entropy
Standard molar entropy
Thermodynamic potential
Notes
References
Further reading
Lambert, Frank L.;
Sharp, Kim (2019). Entropy and the Tao of Counting: A Brief Introduction to Statistical Mechanics and the Second Law of Thermodynamics (SpringerBriefs in Physics). Springer Nature. .
Spirax-Sarco Limited, Entropy – A Basic Understanding. A primer on entropy tables for steam engineering
External links
"Entropy" at Scholarpedia
Entropy and the Clausius inequality MIT OCW lecture, part of 5.60 Thermodynamics & Kinetics, Spring 2008
Entropy and the Second Law of Thermodynamics – an A-level physics lecture with 'derivation' of entropy based on Carnot cycle
2.43 Advanced Thermodynamics – an MIT OCW course with emphasis on rigorous and general definitions, taught by G. P. Beretta, MIT, Spring 2024
Khan Academy: entropy lectures, part of Chemistry playlist
Entropy Intuition
More on Entropy
Proof: S (or Entropy) is a valid state variable
Reconciling Thermodynamic and State Definitions of Entropy
Thermodynamic Entropy Definition Clarification
The Discovery of Entropy by Adam Shulman. Hour-long video, January 2013.
The Second Law of Thermodynamics and Entropy – Yale OYC lecture, part of Fundamentals of Physics I (PHYS 200)
Category:Physical quantities
Category:Philosophy of thermal and statistical physics
Category:State functions
Category:Asymmetry
Category:Extensive quantities
Enrico Fermi
https://en.wikipedia.org/wiki/Enrico_Fermi
Enrico Fermi (; 29 September 1901 – 28 November 1954) was an Italian and naturalized American physicist, renowned for being the creator of the world's first artificial nuclear reactor, the Chicago Pile-1, and a member of the Manhattan Project. He has been called the "architect of the nuclear age" and the "architect of the atomic bomb". He was one of very few physicists to excel in both theoretical and experimental physics. Fermi was awarded the 1938 Nobel Prize in Physics for his work on induced radioactivity by neutron bombardment and for the discovery of transuranium elements. With his colleagues, Fermi filed several patents related to the use of nuclear power, all of which were taken over by the US government. He made significant contributions to the development of statistical mechanics, quantum theory, and nuclear and particle physics.
Fermi's first major contribution involved the field of statistical mechanics. After Wolfgang Pauli formulated his exclusion principle in 1925, Fermi followed with a paper in which he applied the principle to an ideal gas, employing a statistical formulation now known as Fermi–Dirac statistics. Today, particles that obey the exclusion principle are called "fermions". Pauli later postulated the existence of an uncharged invisible particle emitted along with an electron during beta decay, to satisfy the law of conservation of energy. Fermi took up this idea, developing a model that incorporated the postulated particle, which he named the "neutrino". His theory, later referred to as Fermi's interaction and now called weak interaction, described one of the four fundamental interactions in nature. Through experiments inducing radioactivity with the recently discovered neutron, Fermi discovered that slow neutrons were more easily captured by atomic nuclei than fast ones, and he developed the Fermi age equation to describe this. After bombarding thorium and uranium with slow neutrons, he concluded that he had created new elements. Although he was awarded the Nobel Prize for this discovery, the new elements were later revealed to be nuclear fission products.
Fermi left Italy in 1938 to escape new Italian racial laws that affected his Jewish wife, Laura Capon. He emigrated to the United States, where he worked on the Manhattan Project during World War II. Fermi led the team at the University of Chicago that designed and built Chicago Pile-1, which went critical on 2 December 1942, demonstrating the first human-created, self-sustaining nuclear chain reaction. He was on hand when the X-10 Graphite Reactor at Oak Ridge, Tennessee, went critical in 1943, and when the B Reactor at the Hanford Site did so the next year. At Los Alamos, he headed F Division, part of which worked on Edward Teller's thermonuclear "Super" bomb. He was present at the Trinity test on 16 July 1945, the first test of a full nuclear bomb explosion, where he used his Fermi method to estimate the bomb's yield.
After the war, he helped establish the Institute for Nuclear Studies in Chicago, and served on the General Advisory Committee, chaired by J. Robert Oppenheimer, which advised the Atomic Energy Commission on nuclear matters. After the detonation of the first Soviet fission bomb in August 1949, he strongly opposed the development of a hydrogen bomb on both moral and technical grounds. He was among the scientists who testified on Oppenheimer's behalf at the 1954 hearing that resulted in the denial of Oppenheimer's security clearance.
Fermi did important work in particle physics, especially related to pions and muons, and he speculated that cosmic rays arose when the material was accelerated by magnetic fields in interstellar space. Many awards, concepts, and institutions are named after Fermi, including the Fermi 1 (breeder reactor), the Enrico Fermi Nuclear Generating Station, the Enrico Fermi Award, the Enrico Fermi Institute, the Fermi National Accelerator Laboratory (Fermilab), the Fermi Gamma-ray Space Telescope, the Fermi paradox, and the synthetic element fermium, making him one of 16 scientists who have elements named after them.
Early life
Enrico Fermi was born in Rome, Italy, on 29 September 1901. He was the third child of Alberto Fermi, a division head in the Ministry of Railways, and Ida de Gattis, an elementary school teacher. His sister, Maria, was two years older, his brother Giulio a year older. After the two boys were sent to a rural community to be wet nursed, Enrico rejoined his family in Rome when he was two and a half. Although he was baptized a Catholic in accordance with his grandparents' wishes, his family was not particularly religious; Enrico was an agnostic throughout his adult life. As a young boy, he shared the same interests as his brother Giulio, building electric motors and playing with electrical and mechanical toys. Giulio died during an operation on a throat abscess in 1915 and Maria died in an airplane crash near Milan in 1959.
At a local market in Campo de' Fiori, Fermi found a physics book, the 900-page Elementorum physicae mathematicae. Written in Latin by the Jesuit Father Andrea Caraffa, a professor at the Collegio Romano, it presented mathematics, classical mechanics, astronomy, optics, and acoustics as they were understood at the time of its 1840 publication. With a scientifically inclined friend, Enrico Persico, Fermi pursued projects such as building gyroscopes and measuring the acceleration of Earth's gravity.
Enrico would often meet his father Alberto in front of his office after work, and in 1914 he met his father’s colleague Adolfo Amidei, who was accustomed to walking part of the way home with Alberto.
Enrico had learned that Adolfo was interested in mathematics and physics and took the opportunity to ask Adolfo a question about geometry. Adolfo understood that the young Fermi was referring to projective geometry and then proceeded to give him a book on the subject written by Theodor Reye. Two months later, Fermi returned the book, having solved all the problems proposed at the end of the book, some of which Adolfo considered difficult. Upon verifying this, Adolfo felt that Fermi was "a prodigy, at least with respect to geometry", and further mentored the boy, providing him with more books on physics and mathematics. Adolfo noted that Fermi had a very good memory and thus could return the books after having read them because he could remember their content very well.
Scuola Normale Superiore in Pisa
Fermi graduated from high school in July 1918, having skipped the third year entirely. At Amidei's urging, Fermi learned German to be able to read the many scientific papers that were published in that language at the time, and he applied to the Scuola Normale Superiore in Pisa. Amidei felt that the Scuola would provide better conditions for Fermi's development than the Sapienza University of Rome could at the time. Having lost one son, Fermi's parents only reluctantly allowed him to live in the school's lodgings away from Rome for four years. Fermi took first place in the difficult entrance exam, which included an essay on the theme of "Specific characteristics of Sounds"; the 17-year-old Fermi chose to use Fourier analysis to derive and solve the partial differential equation for a vibrating rod, and after interviewing Fermi the examiner declared he would become an outstanding physicist.
At the Scuola Normale Superiore, Fermi played pranks with fellow student Franco Rasetti; the two became close friends and collaborators. Fermi was advised by Luigi Puccianti, director of the physics laboratory, who said there was little he could teach Fermi and often asked Fermi to teach him something instead. Fermi's knowledge of quantum physics was such that Puccianti asked him to organize seminars on the topic. During this time Fermi learned tensor calculus, a technique key to general relativity. Fermi initially chose mathematics as his major but soon switched to physics. He remained largely self-taught, studying general relativity, quantum mechanics, and atomic physics.
In September 1920, Fermi was admitted to the physics department. Since there were only three students in the department—Fermi, Rasetti, and Nello Carrara—Puccianti let them freely use the laboratory for whatever purposes they chose. Fermi decided that they should research X-ray crystallography, and the three worked to produce a Laue photograph—an X-ray photograph of a crystal. During 1921, his third year at the university, Fermi published his first scientific works in the Italian journal Nuovo Cimento. The first was entitled "On the dynamics of a rigid system of electrical charges in translational motion". A sign of things to come was that the mass was expressed as a tensor—a mathematical construct commonly used to describe something moving and changing in three-dimensional space. In classical mechanics, mass is a scalar quantity, but in relativity, it changes with velocity. The second paper was "On the electrostatics of a uniform gravitational field of electromagnetic charges and on the weight of electromagnetic charges". Using general relativity, Fermi showed that a charge has a mass equal to U/c², where U is the electrostatic energy of the system, and c is the speed of light.
The first paper seemed to point out a contradiction between the electrodynamic theory and the relativistic one concerning the calculation of the electromagnetic masses, as the former predicted a value of 4/3 U/c². Fermi addressed this the next year in a paper "Concerning a contradiction between electrodynamic and the relativistic theory of electromagnetic mass", in which he showed that the apparent contradiction was a consequence of relativity. This paper was sufficiently well regarded that it was translated into German and published in the German scientific journal Physikalische Zeitschrift in 1922. That year, Fermi submitted his article "On the phenomena occurring near a world line" to an Italian journal. In this article, he examined the principle of equivalence and introduced the so-called "Fermi coordinates". He proved that on a world line close to the timeline, space behaves as if it were a Euclidean space.
Fermi submitted his thesis, "A theorem on probability and some of its applications", to the Scuola Normale Superiore in July 1922, and received his laurea at the unusually young age of 20. The thesis was on X-ray diffraction images. Theoretical physics was not yet considered a discipline in Italy, and the only thesis that would have been accepted was one in experimental physics. For this reason, Italian physicists were slow to embrace new ideas like relativity coming from Germany. Since Fermi was quite at home in the lab doing experimental work, this did not pose insurmountable problems for him.
While writing the appendix for the Italian edition of the book Fundamentals of Einstein Relativity by August Kopff in 1923, Fermi was the first to point out that hidden inside the Einstein equation (E = mc²) was an enormous amount of nuclear potential energy to be exploited. "It does not seem possible, at least in the near future", he wrote, "to find a way to release these dreadful amounts of energy—which is all to the good because the first effect of an explosion of such a dreadful amount of energy would be to smash into smithereens the physicist who had the misfortune to find a way to do it."
In 1923–1924, Fermi spent a semester studying under Max Born at the University of Göttingen, where he met Werner Heisenberg and Pascual Jordan. Fermi then studied in Leiden with Paul Ehrenfest from September to December 1924 on a fellowship from the Rockefeller Foundation obtained through the intercession of the mathematician Vito Volterra. Here Fermi met Hendrik Lorentz and Albert Einstein, and became friends with Samuel Goudsmit and Jan Tinbergen. From January 1925 to late 1926, Fermi taught mathematical physics and theoretical mechanics at the University of Florence, where he teamed up with Rasetti to conduct a series of experiments on the effects of magnetic fields on mercury vapour. He also participated in seminars at the Sapienza University of Rome, giving lectures on quantum mechanics and solid state physics. While giving lectures on the new quantum mechanics based on the remarkable accuracy of predictions of the Schrödinger equation, Fermi would often say, "It has no business to fit so well!"
After Wolfgang Pauli announced his exclusion principle in 1925, Fermi responded with a paper "On the quantization of the perfect monoatomic gas", in which he applied the exclusion principle to an ideal gas. The paper was especially notable for Fermi's statistical formulation, which describes the distribution of particles in systems of many identical particles that obey the exclusion principle. This was independently developed soon after by the British physicist Paul Dirac, who also showed how it was related to the Bose–Einstein statistics. Accordingly, it is now known as Fermi–Dirac statistics. After Dirac, particles that obey the exclusion principle are today called "fermions", while those that do not are called "bosons".
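For reference, the distribution that results from this formulation, the Fermi–Dirac distribution, gives the mean occupation number of a single-particle state of energy $\varepsilon_i$ at temperature $T$ and chemical potential $\mu$:
$$\bar{n}_i = \frac{1}{e^{(\varepsilon_i - \mu)/k_{\text{B}}T} + 1}$$
The $+1$ in the denominator (replaced by $-1$ in Bose–Einstein statistics) enforces the exclusion principle, since $\bar{n}_i$ can never exceed one.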
Professor in Rome
Professorships in Italy were granted by competition (concorso) for a vacant chair, the applicants being rated on their publications by a committee of professors. Fermi applied for a chair of mathematical physics at the University of Cagliari on Sardinia but was narrowly passed over in favour of Giovanni Giorgi. In 1926, at the age of 24, he applied for a professorship at the Sapienza University of Rome. This was a new chair, one of the first three in theoretical physics in Italy, that had been created by the Minister of Education at the urging of professor Orso Mario Corbino, who was the university's professor of experimental physics, the director of the Institute of Physics, and a member of Benito Mussolini's cabinet. Corbino, who also chaired the selection committee, hoped that the new chair would raise the standard and reputation of physics in Italy. The committee chose Fermi ahead of Enrico Persico and Aldo Pontremoli, and Corbino helped Fermi recruit his team, which was soon joined by notable students such as Edoardo Amaldi, Bruno Pontecorvo, Ettore Majorana and Emilio Segrè, and by Franco Rasetti, whom Fermi had appointed as his assistant. They soon were nicknamed the "Via Panisperna boys" after the street where the Institute of Physics was located.
Fermi married Laura Capon, a science student at the university, on 19 July 1928. They had two children: Nella, born in January 1931, and Giulio, born in February 1936. On 18 March 1929, Fermi was appointed a member of the Royal Academy of Italy by Mussolini, and on 27 April he joined the Fascist Party. He later opposed Fascism when the 1938 racial laws were promulgated by Mussolini in order to bring Italian Fascism ideologically closer to German Nazism. These laws threatened Laura, who was Jewish, and put many of Fermi's research assistants out of work.
During their time in Rome, Fermi and his group made important contributions to many practical and theoretical aspects of physics. In 1928, he published his Introduction to Atomic Physics, which provided Italian university students with an up-to-date and accessible text. Fermi also conducted public lectures and wrote popular articles for scientists and teachers in order to spread knowledge of the new physics as widely as possible. Part of his teaching method was to gather his colleagues and graduate students together at the end of the day and go over a problem, often from his own research. A sign of success was that foreign students now began to come to Italy. The most notable of these was the German physicist Hans Bethe, who came to Rome as a Rockefeller Foundation fellow and collaborated with Fermi on a 1932 paper "On the Interaction between Two Electrons".
At this time, physicists were puzzled by beta decay, in which an electron was emitted from the atomic nucleus. To satisfy the law of conservation of energy, Pauli postulated the existence of an invisible particle with no charge and little or no mass that was also emitted at the same time. Fermi took up this idea, which he developed in a tentative paper in 1933, and then a longer paper the next year that incorporated the postulated particle, which Fermi called a "neutrino". His theory, later referred to as Fermi's interaction, and still later as the theory of the weak interaction, described one of the four fundamental forces of nature. The neutrino was detected after his death, and his interaction theory showed why it was so difficult to detect. When he submitted his paper to the British journal Nature, that journal's editor turned it down because it contained speculations which were "too remote from physical reality to be of interest to readers". According to Fermi's biographer David N. Schwartz, it is at least strange that Fermi seriously sought publication in that journal, since at the time Nature published only short notes on articles of this kind and was not suited to presenting a new physical theory in full; the Proceedings of the Royal Society of London would, if anything, have been more appropriate. Schwartz also agrees with the hypothesis of some scholars that the rejection by the British journal convinced Fermi's young colleagues (some of them Jews and leftists) to give up their boycott of German scientific journals after Hitler came to power in January 1933. Thus Fermi saw the theory published in Italian and German before it was published in English.
In his introduction to a 1968 English translation of the beta decay paper, physicist Fred L. Wilson noted the significance of Fermi's theory.
In January 1934, Irène Joliot-Curie and Frédéric Joliot announced that they had bombarded elements with alpha particles and induced radioactivity in them. By March, Fermi's assistant Gian-Carlo Wick had provided a theoretical explanation using Fermi's theory of beta decay. Fermi decided to switch to experimental physics, using the neutron, which James Chadwick had discovered in 1932. In March 1934, Fermi wanted to see if he could induce radioactivity with Rasetti's polonium-beryllium neutron source. Neutrons had no electric charge, and so would not be deflected by the positively charged nucleus. This meant that they needed much less energy to penetrate the nucleus than charged particles, and so would not require a particle accelerator, which the Via Panisperna boys did not have.
Fermi decided to replace the polonium-beryllium neutron source with a radon-beryllium one, which he created by filling a glass bulb with beryllium powder, evacuating the air, and then adding 50 mCi of radon gas. This created a much stronger neutron source, the effectiveness of which declined with the 3.8-day half-life of radon. He knew that this source would also emit gamma rays, but, on the basis of his theory, he believed that this would not affect the results of the experiment. He started by bombarding platinum, an element with a high atomic number that was readily available, without success. He turned to aluminium, which emitted an alpha particle and produced sodium, which then decayed into magnesium by beta particle emission. He tried lead, without success, and then fluorine in the form of calcium fluoride, which emitted an alpha particle and produced nitrogen, decaying into oxygen by beta particle emission. In all, he induced radioactivity in 22 different elements. Fermi rapidly reported the discovery of neutron-induced radioactivity in the Italian journal La Ricerca Scientifica on 25 March 1934.
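As a small illustrative calculation (not from the source), the strength of such a source falls off exponentially with the decay of the radon:

\[
A(t) = A_0 \, 2^{-t/T_{1/2}}, \qquad T_{1/2} = 3.8\ \text{days},
\]

so after one week the source retains only \(2^{-7/3.8} \approx 0.28\), roughly a quarter, of its initial activity.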
The natural radioactivity of thorium and uranium made it hard to determine what was happening when these elements were bombarded with neutrons but, after correctly eliminating the presence of elements lighter than uranium but heavier than lead, Fermi concluded that they had created new elements, which he called ausenium and hesperium. The chemist Ida Noddack suggested that some of the experiments could have produced lighter elements than lead rather than new, heavier elements. Her suggestion was not taken seriously at the time because her team had not carried out any experiments with uranium or built the theoretical basis for this possibility. At that time, fission was thought to be improbable if not impossible on theoretical grounds. While physicists expected elements with higher atomic numbers to form from neutron bombardment of lighter elements, nobody expected neutrons to have enough energy to split a heavier atom into two light element fragments in the manner that Noddack suggested.
The Via Panisperna boys also noticed some unexplained effects. The experiment seemed to work better on a wooden table than on a marble tabletop. Fermi remembered that Joliot-Curie and Chadwick had noted that paraffin wax was effective at slowing neutrons, so he decided to try that. When neutrons were passed through paraffin wax, they induced a hundred times as much radioactivity in silver as when it was bombarded without the paraffin. Fermi guessed that this was due to the hydrogen atoms in the paraffin; hydrogen in the wood likewise explained the difference between the wooden and the marble tabletops. This was confirmed by repeating the experiment with water. He concluded that collisions with hydrogen atoms slowed the neutrons: the lower the atomic number of the nucleus a neutron collides with, the more energy it loses per collision, and therefore the fewer collisions are required to slow it down by a given amount. Fermi realised that this induced more radioactivity because slow neutrons were more easily captured than fast ones. He developed a diffusion equation to describe this, which became known as the Fermi age equation.
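A brief quantitative sketch, added for illustration and not drawn from the source: in an elastic collision with a nucleus of mass number \(A\), the average logarithmic energy loss of a neutron is

\[
\xi = 1 + \frac{(A-1)^2}{2A}\ln\frac{A-1}{A+1} \;\approx\; \frac{2}{A + 2/3},
\]

so hydrogen (\(A = 1\), \(\xi = 1\)) is far more effective per collision than, say, carbon (\(A = 12\), \(\xi \approx 0.16\)), and the number of collisions needed to slow a neutron from energy \(E_0\) to \(E\) is roughly \(n \approx \ln(E_0/E)/\xi\). In its standard modern form, the Fermi age equation for the slowing-down density \(q\) reads \(\partial q/\partial \tau = \nabla^2 q\), where \(\tau\) is the "Fermi age".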
In 1938, Fermi received the Nobel Prize in Physics at the age of 37 for his "demonstrations of the existence of new radioactive elements produced by neutron irradiation, and for his related discovery of nuclear reactions brought about by slow neutrons". After Fermi received the prize in Stockholm, he did not return home to Italy but rather continued to New York City with his family in December 1938, where they applied for permanent residency. The decision to move to America and become US citizens was due primarily to the racial laws in Italy.
Manhattan Project
Fermi arrived in New York City on 2 January 1939. He was immediately offered positions at five universities, and accepted one at Columbia University, where he had already given summer lectures in 1936. He received the news that, in December 1938, the German chemists Otto Hahn and Fritz Strassmann had detected the element barium after bombarding uranium with neutrons, which Lise Meitner and her nephew Otto Frisch correctly interpreted as the result of nuclear fission. Frisch confirmed this experimentally on 13 January 1939. The news of Meitner and Frisch's interpretation of Hahn and Strassmann's discovery crossed the Atlantic with Niels Bohr, who was to lecture at Princeton University. Isidor Isaac Rabi and Willis Lamb, two Columbia University physicists working at Princeton, found out about it and carried it back to Columbia. Rabi said he told Enrico Fermi, but Fermi later gave the credit to Lamb.
Noddack was proven right after all. Fermi had dismissed the possibility of fission on the basis of his calculations, but he had not taken into account the binding energy that would appear when a nuclide with an odd number of neutrons absorbed an extra neutron. For Fermi, the news came as a profound embarrassment, as the transuranic elements that he had partly been awarded the Nobel Prize for discovering had not been transuranic elements at all, but fission products. He added a footnote to this effect to his Nobel Prize acceptance speech.
The scientists at Columbia decided that they should try to detect the energy released in the nuclear fission of uranium when bombarded by neutrons. On 25 January 1939, in the basement of Pupin Hall at Columbia, an experimental team including Fermi conducted the first nuclear fission experiment in the United States. The other members of the team were Herbert L. Anderson, Eugene T. Booth, John R. Dunning, G. Norris Glasoe, and Francis G. Slack. The next day, the fifth Washington Conference on Theoretical Physics began in Washington, D.C. under the joint auspices of George Washington University and the Carnegie Institution of Washington. There, the news on nuclear fission was spread even further, fostering many more experimental demonstrations.
French scientists Hans von Halban, Lew Kowarski, and Frédéric Joliot-Curie had demonstrated that uranium bombarded by neutrons emitted more neutrons than it absorbed, suggesting the possibility of a chain reaction; Fermi and Anderson demonstrated the same a few weeks later. Leó Szilárd obtained a large quantity of uranium oxide from the Canadian radium producer Eldorado Gold Mines Limited, allowing Fermi and Anderson to conduct fission experiments on a much larger scale. Fermi and Szilárd collaborated on the design of a device to achieve a self-sustaining nuclear reaction: a nuclear reactor. Owing to the rate at which the hydrogen in water absorbs neutrons, it was unlikely that a self-sustaining reaction could be achieved with natural uranium and water as a neutron moderator. Fermi suggested, based on his work with neutrons, that the reaction could instead be achieved with uranium oxide blocks and graphite as a moderator. This would reduce the neutron capture rate and in theory make a self-sustaining chain reaction possible. Szilárd came up with a workable design: a pile of uranium oxide blocks interspersed with graphite bricks. Szilárd, Anderson, and Fermi published a paper on "Neutron Production in Uranium", but their work habits and personalities were different, and Fermi had trouble working with Szilárd.
Fermi was among the first to warn military leaders about the potential impact of nuclear energy, giving a lecture on the subject at the Navy Department on 18 March 1939. The response fell short of what he had hoped for, although the Navy agreed to provide $1,500 towards further research at Columbia. Later that year, Szilárd, Eugene Wigner, and Edward Teller sent a letter signed by Albert Einstein to US president Franklin D. Roosevelt, warning that Nazi Germany was likely to build an atomic bomb. In response, Roosevelt formed the Advisory Committee on Uranium to investigate the matter.
The Advisory Committee on Uranium provided money for Fermi to buy graphite, and he built a pile of graphite bricks on the seventh floor of the Pupin Hall laboratory. By August 1941, he had six tons of uranium oxide and thirty tons of graphite, which he used to build a still larger pile in Schermerhorn Hall at Columbia.
The S-1 Section of the Office of Scientific Research and Development, as the Advisory Committee on Uranium was now known, met on 18 December 1941, with the US now engaged in World War II, making its work urgent. Most of the effort sponsored by the committee had been directed at producing enriched uranium, but Committee member Arthur Compton determined that a feasible alternative was plutonium, which could be mass-produced in nuclear reactors by the end of 1944. He decided to concentrate the plutonium work at the University of Chicago. Fermi reluctantly moved, and his team became part of the new Metallurgical Laboratory there.
The possible results of a self-sustaining nuclear reaction were unknown, so it seemed inadvisable to build the first nuclear reactor on the University of Chicago campus in the middle of the city. Compton found a location in the Argonne Woods Forest Preserve, outside Chicago. Stone & Webster was contracted to develop the site, but the work was halted by an industrial dispute. Fermi then persuaded Compton that he could build the reactor in the squash court under the stands of the University of Chicago's Stagg Field. Construction of the pile began on 6 November 1942, and Chicago Pile-1 went critical on 2 December. The shape of the pile was intended to be roughly spherical, but as work proceeded Fermi calculated that criticality could be achieved without finishing the entire pile as planned.
This experiment was a landmark in the quest for energy, and it was typical of Fermi's approach. Every step was carefully planned, and every calculation was meticulously done. When the first self-sustained nuclear chain reaction was achieved, Compton made a coded phone call to James B. Conant, the chairman of the National Defense Research Committee.
To continue the research where it would not pose a public health hazard, the reactor was disassembled and moved to the Argonne Woods site. There Fermi directed experiments on nuclear reactions, reveling in the opportunities provided by the reactor's abundant production of free neutrons. The laboratory soon branched out from physics and engineering into using the reactor for biological and medical research. Initially, Argonne was run by Fermi as part of the University of Chicago, but it became a separate entity with Fermi as its director in May 1944.
When the air-cooled X-10 Graphite Reactor at Oak Ridge went critical on 4 November 1943, Fermi was on hand just in case something went wrong. The technicians woke him early so that he could see it happen. Getting X-10 operational was another milestone in the plutonium project. It provided data on reactor design, training for DuPont staff in reactor operation, and produced the first small quantities of reactor-bred plutonium. Fermi became an American citizen in July 1944, the earliest date the law allowed.
In September 1944, Fermi inserted the first uranium fuel slug into the B Reactor at the Hanford Site, the production reactor designed to breed plutonium in large quantities. Like X-10, it had been designed by Fermi's team at the Metallurgical Laboratory and built by DuPont, but it was much larger and was water-cooled. Over the next few days, 838 tubes were loaded, and the reactor went critical. Shortly after midnight on 27 September, the operators began to withdraw the control rods to initiate production. At first, all appeared to be well, but around 03:00, the power level started to drop and by 06:30 the reactor had shut down completely. The Army and DuPont turned to Fermi's team for answers. The cooling water was investigated to see if there was a leak or contamination. The next day the reactor suddenly started up again, only to shut down once more a few hours later. The problem was traced to neutron poisoning from xenon-135 (Xe-135), a fission product with a half-life of 9.1 to 9.4 hours. Fermi and John Wheeler both deduced that Xe-135 was absorbing neutrons in the reactor, thereby stifling the fission process. On the recommendation of his colleague Emilio Segrè, Fermi consulted Chien-Shiung Wu, who had prepared a draft paper on the topic for the Physical Review. Upon reading the draft, Fermi and the other scientists confirmed their suspicion: Xe-135 did indeed absorb neutrons, and with an enormous neutron-capture cross-section. DuPont had deviated from the Metallurgical Laboratory's original design in which the reactor had 1,500 tubes arranged in a circle, and had added 504 tubes to fill in the corners. The scientists had originally considered this over-engineering a waste of time and money, but Fermi realized that if all 2,004 tubes were loaded, the reactor could reach the required power level and efficiently produce plutonium.
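A schematic sketch of the underlying physics, added for illustration rather than taken from the source: Xe-135 is produced both directly in fission and by the decay of iodine-135, and is destroyed by its own decay and by neutron capture, so the concentrations obey

\[
\frac{dN_I}{dt} = \gamma_I \Sigma_f \phi - \lambda_I N_I, \qquad
\frac{dN_X}{dt} = \gamma_X \Sigma_f \phi + \lambda_I N_I - \lambda_X N_X - \sigma_X \phi N_X,
\]

where \(\phi\) is the neutron flux, \(\Sigma_f\) the fission cross-section, \(\gamma\) the fission yields, \(\lambda\) the decay constants and \(\sigma_X\) the very large capture cross-section of Xe-135. When the flux drops, the capture term vanishes while iodine already in the core keeps decaying into xenon, so the poison builds up and shuts the reactor down; once the xenon itself decays away, the reactor can restart, which matches the behaviour the Hanford operators observed.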
In April 1943, Fermi raised with Robert Oppenheimer the possibility of using radioactive byproducts from enrichment to contaminate the German food supply. The proposal was motivated by fear that the German atomic bomb project was already at an advanced stage; Fermi was also sceptical at the time that an atomic bomb could be developed quickly enough. Oppenheimer discussed the "promising" proposal with Edward Teller, who suggested the use of strontium-90. James B. Conant and Leslie Groves were also briefed, but Oppenheimer wanted to proceed with the plan only if enough food could be contaminated to kill half a million people.
In mid-1944, Oppenheimer persuaded Fermi to join his Project Y at Los Alamos, New Mexico. Arriving in September, Fermi was appointed an associate director of the laboratory, with broad responsibility for nuclear and theoretical physics, and was placed in charge of F Division, which was named after him. F Division had four branches: F-1 Super and General Theory under Teller, which investigated the "Super" (thermonuclear) bomb; F-2 Water Boiler under L. D. P. King, which looked after the "water boiler" aqueous homogeneous research reactor; F-3 Super Experimentation under Egon Bretscher; and F-4 Fission Studies under Anderson. Fermi observed the Trinity test on 16 July 1945 and conducted an experiment to estimate the bomb's yield by dropping strips of paper into the blast wave. He paced off the distance they were blown by the explosion, and calculated the yield as ten kilotons of TNT; the actual yield was about 18.6 kilotons.
Along with Oppenheimer, Compton, and Ernest Lawrence, Fermi was part of the scientific panel that advised the Interim Committee on target selection. The panel agreed with the committee that atomic bombs would be used without warning against an industrial target. Like others at the Los Alamos Laboratory, Fermi found out about the atomic bombings of Hiroshima and Nagasaki from the public address system in the technical area. Fermi did not believe that atomic bombs would deter nations from starting wars, nor did he think that the time was ripe for world government. He therefore did not join the Association of Los Alamos Scientists.
Postwar work
Fermi became the Charles H. Swift Distinguished Professor of Physics at the University of Chicago on 1 July 1945, although he did not depart the Los Alamos Laboratory with his family until 31 December 1945. He was elected a member of the US National Academy of Sciences in 1945. The Metallurgical Laboratory became the Argonne National Laboratory on 1 July 1946, the first of the national laboratories established by the Manhattan Project. The short distance between Chicago and Argonne allowed Fermi to work at both places. At Argonne he continued experimental physics, investigating neutron scattering with Leona Marshall. He also discussed theoretical physics with Maria Mayer, helping her develop insights into spin–orbit coupling that would lead to her receiving the Nobel Prize.
The Manhattan Project was replaced by the Atomic Energy Commission (AEC) on 1 January 1947. Fermi served on the AEC General Advisory Committee, an influential scientific committee chaired by Robert Oppenheimer. He also liked to spend a few weeks each year at the Los Alamos National Laboratory, where he collaborated with Nicholas Metropolis, and with John von Neumann on Rayleigh–Taylor instability, the science of what occurs at the border between two fluids of different densities.
After the detonation of the first Soviet fission bomb in August 1949, Fermi, along with Isidor Rabi, wrote a strongly worded report for the committee, opposing the development of a hydrogen bomb on moral and technical grounds. Nonetheless, Fermi continued to participate in work on the hydrogen bomb at Los Alamos as a consultant. Along with Stanislaw Ulam, he calculated that not only would the amount of tritium needed for Teller's model of a thermonuclear weapon be prohibitive, but a fusion reaction could still not be assured to propagate even with this large quantity of tritium. Fermi was among the scientists who testified on Oppenheimer's behalf at the Oppenheimer security hearing in 1954 that resulted in the denial of Oppenheimer's security clearance.
In his later years, Fermi continued teaching at the University of Chicago, where he was a founder of what later became the Enrico Fermi Institute. His PhD students in the postwar period included Owen Chamberlain, Geoffrey Chew, Jerome Friedman, Marvin Goldberger, Tsung-Dao Lee, Arthur Rosenfeld and Sam Treiman. Jack Steinberger was a graduate student, and Mildred Dresselhaus was highly influenced by Fermi during the year she overlapped with him as a PhD student. Fermi conducted important research in particle physics, especially related to pions and muons. He made the first predictions of pion-nucleon resonance, relying on statistical methods, since he reasoned that exact answers were not required when the theory was wrong anyway. In a paper coauthored with Chen Ning Yang, he speculated that pions might actually be composite particles. The idea was elaborated by Shoichi Sakata and has since been supplanted by the quark model, in which the pion is made up of quarks, completing Fermi's picture and vindicating his approach.
Fermi wrote a paper "On the Origin of Cosmic Radiation" in which he proposed that cosmic rays arose through material being accelerated by magnetic fields in interstellar space, which led to a difference of opinion with Teller. Fermi examined the issues surrounding magnetic fields in the arms of a spiral galaxy. He mused about what is now referred to as the "Fermi paradox": the contradiction between the presumed probability of the existence of extraterrestrial life and the fact that contact has not been made.
Toward the end of his life, Fermi questioned his faith in society at large to make wise choices about nuclear technology.
Death
Fermi underwent what was called an "exploratory" operation in Billings Memorial Hospital in October 1954, after which he returned home. Fifty days later he died of inoperable stomach cancer in his home in Chicago. He was 53. Fermi suspected working near the nuclear pile involved great risk but he pressed on because he felt the benefits outweighed the risks to his personal safety. Two of his graduate student assistants working near the pile also died of cancer.
A memorial service was held at the University of Chicago chapel, where colleagues Samuel K. Allison, Emilio Segrè, and Herbert L. Anderson spoke to mourn the loss of one of the world's "most brilliant and productive physicists". His body was interred at Oak Woods Cemetery, where a private graveside service for the immediate family was presided over by a Lutheran chaplain.
Impact and legacy
Legacy
Fermi received numerous awards in recognition of his achievements, including the Matteucci Medal in 1926, the Nobel Prize for Physics in 1938, the Hughes Medal in 1942, the Franklin Medal in 1947, and the Rumford Prize in 1953. He was awarded the Medal for Merit in 1946 for his contribution to the Manhattan Project. Fermi was elected a member of the American Philosophical Society in 1939 and a Foreign Member of the Royal Society (FRS) in 1950. The Basilica of Santa Croce, Florence, known as the Temple of Italian Glories for its many graves of artists, scientists and prominent figures in Italian history, has a plaque commemorating Fermi. In 1999, Time named Fermi on its list of the top 100 persons of the twentieth century. Fermi was widely regarded as a rare example of a 20th-century physicist who excelled both theoretically and experimentally. Radiochemist and nuclear physicist Emilio Segrè called Fermi "the last universal physicist in the tradition of great men of the 19th century" and stated that he "was the last person who knew all of physics of his day". Chemist and novelist C. P. Snow wrote, "if Fermi had been born a few years earlier, one could well imagine him discovering Rutherford's atomic nucleus, and then developing Bohr's theory of the hydrogen atom. If this sounds like hyperbole, anything about Fermi is likely to sound like hyperbole".
Fermi was known as an inspiring teacher and was noted for his attention to detail, simplicity, and careful preparation of his lectures. Later, his lecture notes were transcribed into books. His papers and notebooks are today at the University of Chicago. Victor Weisskopf noted how Fermi "always managed to find the simplest and most direct approach, with the minimum of complication and sophistication." He disliked complicated theories, and while he had great mathematical ability, he would never use it when the job could be done much more simply. He was famous for getting quick and accurate answers to problems that would stump other people. Later on, his method of getting approximate and quick answers through back-of-the-envelope calculations became informally known as the "Fermi method", and is widely taught.
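Since the paragraph above describes the back-of-the-envelope style only in general terms, here is a minimal illustrative sketch of a "Fermi estimate" (the classic "piano tuners in a large city" teaching example; every number below is an assumed round figure, not data from this article):

```python
# A toy "Fermi estimate": how many piano tuners might work in a large city?
# Every figure below is a deliberately rough, assumed order-of-magnitude guess.

population = 3_000_000            # assumed city population
people_per_household = 2          # assumed average household size
piano_ownership = 1 / 20          # assume 1 in 20 households owns a piano
tunings_per_piano_per_year = 1    # assume each piano is tuned about once a year

tunings_needed = (population / people_per_household) * piano_ownership * tunings_per_piano_per_year

tunings_per_tuner_per_day = 4     # assume a tuner can service ~4 pianos a day
working_days_per_year = 250       # assume ~250 working days a year
tunings_per_tuner_per_year = tunings_per_tuner_per_day * working_days_per_year

estimated_tuners = tunings_needed / tunings_per_tuner_per_year
print(f"Rough estimate: about {estimated_tuners:.0f} piano tuners")   # prints ~75
```

The point is not the exact answer but that multiplying a handful of defensible order-of-magnitude guesses typically lands within a factor of a few of the true value.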
Fermi was fond of pointing out that when Alessandro Volta was working in his laboratory, Volta had no idea where the study of electricity would lead. Fermi is generally remembered for his work on nuclear power and nuclear weapons, especially the creation of the first nuclear reactor, and the development of the first atomic and hydrogen bombs. His scientific work has stood the test of time. This includes his theory of beta decay, his work with non-linear systems, his discovery of the effects of slow neutrons, his study of pion-nucleon collisions, and his Fermi–Dirac statistics. His speculation that a pion was not a fundamental particle pointed the way towards the study of quarks and leptons.
Things named after Fermi
Many things bear Fermi's name. These include the Fermilab particle accelerator and physics lab in Batavia, Illinois, which was renamed in his honour in 1974, and the Fermi Gamma-ray Space Telescope, which was named after him in 2008, in recognition of his work on cosmic rays. Three nuclear reactor installations have been named after him: the Fermi 1 and Fermi 2 nuclear power plants in Newport, Michigan, the Enrico Fermi Nuclear Power Plant at Trino Vercellese in Italy, and the RA-1 Enrico Fermi research reactor in Argentina. A synthetic element isolated from the debris of the 1952 Ivy Mike nuclear test was named Fermium, in honor of Fermi's contributions to the scientific community. This makes him one of 16 scientists who have elements named after them.
Since 1956, the United States Atomic Energy Commission, and from 1977 the U.S. Department of Energy, has named its highest honor, the Fermi Award, after him. Recipients of the award have included Otto Hahn, Robert Oppenheimer, Edward Teller and Hans Bethe.
Publications
For a full list of his papers, see pages 75–78 in ref.
Patents
References
Sources
Further reading
Bernstein, Barton J. "Four Physicists and the Bomb: The Early Years, 1945-1950" Historical Studies in the Physical and Biological Sciences (1988) 18#2; covers Oppenheimer, Fermi, Lawrence and Compton. online
Galison, Peter, and Barton Bernstein. "In any light: Scientists and the decision to build the Superbomb, 1952–1954." Historical Studies in the Physical and Biological Sciences 19.2 (1989): 267–347. online
External links
"To Fermi – with Love – Part 1". Voices of the Manhattan Project 1971 Radio Segment
"The First Reactor: 40th Anniversary Commemorative Edition", United States Department of Energy, (December 1982).
Nobel prize page for the 1938 physics prize
The Story of the First Pile
Enrico Fermi's Case File at The Franklin Institute with information about his contributions to theoretical and experimental physics.
"Remembering Enrico Fermi". Session J1. APS April Meeting 2010, American Physical Society.
Time 100: Enrico Fermi by Richard Rhodes 29 March 1999
Fermi's stay with Ehrenfest in Leiden.
Category:1901 births
Category:1954 deaths
Category:American nuclear physicists
Category:Italian nuclear physicists
Category:Experimental physicists
Category:Theoretical physicists
Category:Quantum physicists
Category:American relativity theorists
Category:Thermodynamicists
Category:20th-century American physicists
Category:Manhattan Project people
Category:20th-century Italian inventors
Category:Nobel laureates in Physics
Category:Italian Nobel laureates
Category:Medal for Merit recipients
Category:Members of the United States National Academy of Sciences
Category:Foreign members of the Royal Society
Category:Corresponding Members of the USSR Academy of Sciences
Category:Members of the Royal Academy of Italy
Category:Members of the Lincean Academy
Category:Fellows of the American Physical Society
Category:Italian emigrants to the United States
Category:Monte Carlo methodologists
Category:University of Chicago faculty
Category:Columbia University faculty
Category:Academic staff of the University of Göttingen
Category:Academic staff of the Sapienza University of Rome
Category:University of Pisa alumni
Category:American agnostics
Category:Italian agnostics
Category:Italian Freemasons
Category:People from Leonia, New Jersey
Category:Scientists from Rome
Category:Deaths from stomach cancer in Illinois
Category:Italian exiles
Category:Naturalized citizens of the United States
Category:Recipients of the Matteucci Medal
Category:Winners of the Max Planck Medal
Category:Presidents of the American Physical Society
Category:Members of the American Philosophical Society
Category:People of Apulian descent
Category:People of Emilian descent
Category:Recipients of Franklin Medal
Category:Scuola Normale Superiore di Pisa alumni
Easter Rising
https://en.wikipedia.org/wiki/Easter_Rising
The Easter Rising (), also known as the Easter Rebellion, was an armed insurrection in Ireland during Easter Week in April 1916. The Rising was launched by Irish republicans against British rule in Ireland with the aim of establishing an independent Irish Republic while the United Kingdom was fighting the First World War. It was the most significant uprising in Ireland since the rebellion of 1798 and the first armed conflict of the Irish revolutionary period. Sixteen of the Rising's leaders were executed starting in May 1916. The nature of the executions, and subsequent political developments, ultimately contributed to an increase in popular support for Irish independence.
Organised by a seven-man Military Council of the Irish Republican Brotherhood, the Rising began on Easter Monday, 24 April 1916 and lasted for six days. Members of the Irish Volunteers, led by schoolmaster and Irish language activist Patrick Pearse, joined by the smaller Irish Citizen Army of James Connolly and 200 women of Cumann na mBan, seized strategically important buildings in Dublin and proclaimed the Irish Republic. The British Army brought in thousands of reinforcements as well as artillery and a gunboat. There was street fighting on the routes into the city centre, where the rebels slowed the British advance and inflicted many casualties. Elsewhere in Dublin, the fighting mainly consisted of sniping and long-range gun battles. The main rebel positions were gradually surrounded and bombarded with artillery. There were isolated actions in other parts of Ireland; Volunteer leader Eoin MacNeill had issued a countermand in a bid to halt the Rising, which greatly reduced the extent of the rebel actions.
With much greater numbers and heavier weapons, the British Army suppressed the Rising. Pearse agreed to an unconditional surrender on Saturday 29 April, although sporadic fighting continued briefly. After the surrender, the country remained under martial law. About 3,500 people were taken prisoner by the British and 1,800 of them were sent to internment camps or prisons in Britain. Most of the leaders of the Rising were executed following courts martial. The Rising brought physical force republicanism back to the forefront of Irish politics, which for nearly fifty years had been dominated by constitutional nationalism. Opposition to the British reaction to the Rising contributed to changes in public opinion and the move toward independence, as shown in the 1918 general election, in which Sinn Féin won 73 of the 105 Irish seats. Sinn Féin convened the First Dáil and declared independence.
Of the 485 people killed, 260 were civilians, 143 were British military and police personnel, and 82 were Irish rebels, including 16 rebels executed for their roles in the Rising. More than 2,600 people were wounded. Many of the civilians were killed or wounded by British artillery fire or were mistaken for rebels. Others were caught in the crossfire during firefights between the British and the rebels. The shelling and resulting fires left parts of central Dublin in ruins.
Background
The Acts of Union 1800 united the Kingdom of Great Britain and the Kingdom of Ireland as the United Kingdom of Great Britain and Ireland, abolishing the Irish Parliament and giving Ireland representation in the British Parliament. From early on, many Irish nationalists opposed the union and the continued lack of adequate political representation, along with the British government's handling of Ireland and Irish people, particularly the Great Famine. The union was closely preceded by and formed partly in response to an Irish uprising – whose centenary would prove an influence on the Easter Rising. Three more rebellions ensued: one in 1803, another in 1848 and one in 1867. All were failures.
Opposition took other forms: constitutional (the Repeal Association; the Home Rule League) and social (disestablishment of the Church of Ireland; the Land League).Mansergh, Nicholas, The Irish Question 1840–1921, George Allen & Unwin, 1978, p. 244 The Irish Home Rule movement sought to achieve self-government for Ireland, within the United Kingdom. In 1886, the Irish Parliamentary Party under Charles Stewart Parnell succeeded in having the First Home Rule Bill introduced in the British parliament, but it was defeated. The Second Home Rule Bill of 1893 was passed by the House of Commons but rejected by the House of Lords.
After the death of Parnell, younger and more radical nationalists became disillusioned with parliamentary politics and turned toward more extreme forms of separatism. The Gaelic Athletic Association, the Gaelic League, and the cultural revival under W. B. Yeats and Augusta, Lady Gregory, together with the new political thinking of Arthur Griffith expressed in his newspaper Sinn Féin and organisations such as the National Council and the Sinn Féin League, led many Irish people to identify with the idea of an independent Gaelic Ireland.
The Third Home Rule Bill was introduced by British Liberal Prime Minister H. H. Asquith in 1912. Irish Unionists, who were overwhelmingly Protestants, opposed it, as they did not want to be ruled by a Catholic-dominated Irish government. Led by Sir Edward Carson and James Craig, they formed the Ulster Volunteers (UVF) in January 1913. The UVF's opposition included arming themselves, in the event that they had to resist by force.
Seeking to defend Home Rule, the Irish Volunteers was formed in November 1913. Although membership was broadly open and the organisation did not avow support for separatism, the executive of the Irish Volunteers – excluding its top leadership – was dominated by the Irish Republican Brotherhood (IRB), which had restarted recruitment in 1909 and rose to prominence through the organisation.Macardle (1951), pp. 90–92Foy and Barton, pp. 7–8 These members feared that Home Rule's enactment would result in a broad and seemingly perpetual contentment with the British Empire. Another militant group, the Irish Citizen Army, was formed by trade unionists as a result of the Dublin Lock-out of that year. To some, the Home Rule crisis appeared to be the basis of an "imminent civil war".
On the outbreak of the First World War, the Third Home Rule Bill was enacted, but its implementation was postponed for the war's duration. It was widely believed at the time that the war would not last more than a few months. The Irish Volunteers split: the great majority of its 160,000 members – thereafter known as the National Volunteers – followed John Redmond and supported the British war effort, some 35,000 to 40,000 of them enlisting in the British Army, while the smaller faction of 2,000 to 3,000 – who retained the name – opposed any involvement in the war. The official policy thus became "the abolition of the system of governing Ireland through Dublin Castle and the British military power and the establishment of a National Government in its place"; the Volunteers believed that "England's difficulty" was "Ireland's opportunity".
Planning the Rising
The Supreme Council of the IRB met on 5 September 1914, just over a month after the British government had declared war on Germany. At this meeting, they elected to stage an uprising before the war ended and to secure help from Germany. Responsibility for the planning of the rising was given to Tom Clarke and Seán Mac Diarmada.Foy and Barton, p. 16 Patrick Pearse, Michael Joseph O'Rahilly, Joseph Plunkett and Bulmer Hobson would assume general control of the Volunteers by March 1915.
In May 1915, Clarke and Mac Diarmada established a Military Council within the IRB, consisting of Pearse, Plunkett and Éamonn Ceannt – and soon themselves – to devise plans for a rising.Foy and Barton, pp. 16, 19 The Military Council functioned independently and in opposition to those who considered a possible uprising inopportune. Volunteer Chief-of-Staff Eoin MacNeill supported a rising only if the British government attempted to suppress the Volunteers or introduce conscription in Ireland, and if such a rising had some chance of success. Hobson and IRB President Denis McCullough held similar views as did much of the executive branches of both organisations.
The Military Council kept its plans secret, both to prevent the British authorities from learning of them and to thwart those within the organisation who might try to stop the rising. Such was the secrecy that the Military Council largely superseded the IRB's Supreme Council, with even McCullough being unaware of some of the plans, while men such as MacNeill were informed only as the Rising rapidly approached. Although most Volunteers were oblivious to any plans, their training intensified over the preceding year. The public nature of this training heightened tensions with the authorities, which by the following year manifested in rumours of a rising. The Volunteers were similarly public in their espousal of anti-recruitment. Their numbers also grew: between December 1914 and February 1916 the rank and file rose from 9,700 to 12,215. Although civil servants and the like were discouraged from joining the Volunteers, the organisation remained legal.
Shortly after the outbreak of World War I, Roger Casement and John Devoy went to Germany and began negotiations with the German government and military. Casement – later accompanied by Plunkett – persuaded the Germans to announce their support for Irish independence in November 1914.Foy and Barton, p. 25 Casement envisioned the recruitment of Irish prisoners of war, to be known as the Irish Brigade, aided by a German expeditionary force who would secure the line of the River Shannon, before advancing on the capital.Foy and Barton, p. 105McNally and Dennis, p. 30 Neither intention came to fruition, but the German military did agree to ship arms and ammunition to the Volunteers,Foy and Barton, pp. 25–28 gunrunning having become difficult and dangerous on account of the war.
In late 1915 and early 1916 Devoy had trusted couriers deliver approximately $100,000 from the American-based Irish Republican organization Clan na Gael to the IRB. In January 1916 the Supreme Council of the IRB decided that the rising would begin on Easter Sunday, 23 April 1916. On 5 February 1916 Devoy received a coded message from the Supreme Council of the IRB informing him of their decision to start a rebellion at Easter 1916: "We have decided to begin action on Easter Sunday. We must have your arms and munitions in Limerick between Good Friday and Easter Sunday. We expect German help immediately after beginning action. We might have to begin earlier."
Head of the Irish Citizen Army, James Connolly, was unaware of the IRB's plans, and threatened to start a rebellion on his own if other parties failed to act. The IRB leaders met with Connolly in Dolphin's Barn in January 1916 and convinced him to join forces with them. They agreed that they would launch a rising together at Easter and made Connolly the sixth member of the Military Council. Thomas MacDonagh would later become the seventh and final member.
The death of the old Fenian leader Jeremiah O'Donovan Rossa in New York City in August 1915 was an opportunity to mount a spectacular demonstration. His body was sent to Ireland for burial in Glasnevin Cemetery, with the Volunteers in charge of arrangements. Huge crowds lined the route and gathered at the graveside. Pearse (wearing the uniform of the Irish Volunteers) made a dramatic funeral oration, a rallying call to republicans, which ended with the words "Ireland unfree shall never be at peace".
Build-up to Easter Week
In early April, Pearse issued orders to the Irish Volunteers for three days of "parades and manoeuvres" beginning on Easter Sunday. He had the authority to do this, as the Volunteers' Director of Organisation. The idea was that IRB members within the organisation would know these were orders to begin the rising, while men such as MacNeill and the British authorities would take it at face value.
On 9 April, the German Navy dispatched the SS Libau for County Kerry, disguised as the Norwegian ship Aud. It was loaded with 20,000 rifles, one million rounds of ammunition, and explosives. Casement also left for Ireland aboard the German submarine U-19. He was disappointed with the level of support offered by the Germans and he intended to stop or at least postpone the rising.Foy and Barton, p.56 During this time, the Volunteers amassed ammunition from various sources, including the adolescent Michael McCabe.
On Wednesday 19 April, Alderman Tom Kelly, a Sinn Féin member of Dublin Corporation, read out at a meeting of the corporation a document purportedly leaked from Dublin Castle, detailing plans by the British authorities to shortly arrest leaders of the Irish Volunteers, Sinn Féin and the Gaelic League, and occupy their premises. Although the British authorities said the "Castle Document" was fake, MacNeill ordered the Volunteers to prepare to resist.Foy and Barton, p. 47 Unbeknownst to MacNeill, the document had been forged by the Military Council to persuade moderates of the need for their planned uprising. It was an edited version of a real document outlining British plans in the event of conscription. That same day, the Military Council informed senior Volunteer officers that the rising would begin on Easter Sunday. However, it chose not to inform the rank-and-file, or moderates such as MacNeill, until the last minute.Foy and Barton, p. 48
The following day, MacNeill got wind that a rising was about to be launched and threatened to do everything he could to prevent it, short of informing the British.Foy and Barton, p. 52 He and Hobson confronted Pearse, but refrained from decisive action so as to avoid instigating a rebellion of any kind; Hobson was detained by Volunteers until the Rising occurred.
The SS Libau (disguised as the Aud) and the U-19 reached the coast of Kerry on Good Friday, 21 April. This was earlier than the Volunteers expected and so none were there to meet the vessels. The Royal Navy had known about the arms shipment and intercepted the SS Libau, prompting the captain to scuttle the ship. Furthermore, Casement was captured shortly after he landed at Banna Strand.Foy and Barton, pp. 57–58
When MacNeill learned that the arms shipment had been lost, he reverted to his original position. With the support of other leaders of like mind, notably Bulmer Hobson and The O'Rahilly, he issued a countermand to all Volunteers, cancelling all actions for Sunday. This countermanding order was relayed to Volunteer officers and printed in the Sunday morning newspapers. The order resulted in a delay to the rising by a day, and some confusion over strategy for those who took part.
British Naval Intelligence had been aware of the arms shipment, Casement's return, and the Easter date for the rising through radio messages between Germany and its embassy in the United States that were intercepted by the Royal Navy and deciphered in Room 40 of the Admiralty.Ó Broin, p. 138 It is unclear how extensive Room 40's decryptions preceding the Rising were. On the eve of the Rising, John Dillon wrote to Redmond of Dublin being "full of most extraordinary rumours. And I have no doubt in my mind that the Clan men – are planning some devilish business – what it is I cannot make out. It may not come off – But you must not be surprised if something very unpleasant and mischievous happens this week".
The information was passed to the Under-Secretary for Ireland, Sir Matthew Nathan, on 17 April, but without revealing its source; Nathan was doubtful about its accuracy.Ó Broin, p. 79 When news reached Dublin of the capture of the SS Libau and the arrest of Casement, Nathan conferred with the Lord Lieutenant, Lord Wimborne. Nathan proposed to raid Liberty Hall, headquarters of the Citizen Army, and Volunteer properties at Father Matthew Park and at Kimmage, but Wimborne insisted on wholesale arrests of the leaders. It was decided to postpone action until after Easter Monday, and in the meantime, Nathan telegraphed the Chief Secretary, Augustine Birrell, in London seeking his approval.Ó Broin, pp. 81–87 By the time Birrell cabled his reply authorising the action, at noon on Monday 24 April 1916, the Rising had already begun.Ó Broin, p. 88
On the morning of Easter Sunday, 23 April, the Military Council met at Liberty Hall to discuss what to do in light of MacNeill's countermanding order. They decided that the Rising would go ahead the following day, Easter Monday, and that the Irish Volunteers and Irish Citizen Army would go into action as the 'Army of the Irish Republic'. They elected Pearse as president of the Irish Republic, and also as Commander-in-Chief of the army; Connolly became Commandant of the Dublin Brigade.Foy and Barton, p. 66 That weekend was largely spent preparing rations and manufacturing ammunition and bombs. Messengers were then sent to all units informing them of the new orders.
The Rising in Dublin
Easter Monday
On the morning of Monday 24 April, about 1,200 members of the Irish Volunteers and Irish Citizen Army mustered at several locations in central Dublin. Among them were members of the all-female Cumann na mBan. Some wore Irish Volunteer and Citizen Army uniforms, while others wore civilian clothes with a yellow Irish Volunteer armband, military hats, and bandoliers.Ward, Alan. The Easter Rising: Revolution and Irish Nationalism. Wiley, 2003. p. 5Cottrel, Peter. The War for Ireland: 1913–1923. Osprey, 2009. p. 41 They were armed mostly with rifles (especially 1871 Mausers), but also with shotguns, revolvers, a few Mauser C96 semi-automatic pistols, and grenades. The number of Volunteers who mobilised was much smaller than expected. This was due to MacNeill's countermanding order, and the fact that the new orders had been sent so soon beforehand. However, several hundred Volunteers joined the Rising after it began.
Shortly before midday, the rebels began to seize important sites in central Dublin. The rebels' plan was to hold Dublin city centre. This was a large, oval-shaped area bounded by two canals: the Grand to the south and the Royal to the north, with the River Liffey running through the middle. On the southern and western edges of this district were five British Army barracks. Most of the rebels' positions had been chosen to defend against counter-attacks from these barracks. The rebels took the positions with ease. Civilians were evacuated and policemen were ejected or taken prisoner. Windows and doors were barricaded, food and supplies were secured, and first aid posts were set up. Barricades were erected on the streets to hinder British Army movement.
A joint force of about 400 Volunteers and the Citizen Army gathered at Liberty Hall under the command of Commandant James Connolly. This was the headquarters battalion, and it also included Commander-in-Chief Patrick Pearse, as well as Tom Clarke, Seán Mac Diarmada and Joseph Plunkett.McNally and Dennis, p. 41 They marched to the General Post Office (GPO) on O'Connell Street, Dublin's main thoroughfare, occupied the building and hoisted two republican flags. Pearse stood outside and read the Proclamation of the Irish Republic.Foy and Barton, pp. 192, 195 Copies of the Proclamation were also pasted on walls and handed out to bystanders by Volunteers and newsboys. The GPO would be the rebels' headquarters for most of the Rising. Volunteers from the GPO also occupied other buildings on the street, including buildings overlooking O'Connell Bridge. They took over a wireless telegraph station and sent out a radio broadcast in Morse code, announcing that an Irish Republic had been declared. This was the first radio broadcast in Ireland.
Elsewhere, some of the headquarters battalion under Michael Mallin occupied St Stephen's Green, where they dug trenches and barricaded the surrounding roads. The 1st battalion, under Edward 'Ned' Daly, occupied the Four Courts and surrounding buildings, while a company under Seán Heuston occupied the Mendicity Institution, across the River Liffey from the Four Courts. The 2nd battalion, under Thomas MacDonagh, occupied Jacob's biscuit factory. The 3rd battalion, under Éamon de Valera, occupied Boland's Mill and surrounding buildings (uniquely, without the presence of Cumann na mBan women whom de Valera expressly excluded). The 4th battalion, under Éamonn Ceannt, occupied the South Dublin Union and the distillery on Marrowbone Lane. From each of these garrisons, small units of rebels established outposts in the surrounding area.McNally and Dennis, pp. 39–40
The rebels also attempted to cut transport and communication links. As well as erecting roadblocks, they took control of various bridges and cut telephone and telegraph wires. Westland Row and Harcourt Street railway stations were occupied, though the latter only briefly. The railway line was cut at Fairview and the line was damaged by bombs at Amiens Street, Broadstone, Kingsbridge and Lansdowne Road.McKenna, Joseph. Guerrilla Warfare in the Irish War of Independence. McFarland, 2011. p. 19
Around midday, a small team of Volunteers and Fianna Éireann members swiftly captured the Magazine Fort in the Phoenix Park and disarmed the guards. The goal was to seize weapons and blow up the ammunition store to signal that the Rising had begun. They seized weapons and planted explosives, but the blast was not loud enough to be heard across the city. The 23-year-old son of the fort's commander was fatally shot when he ran to raise the alarm."Children of the Revolution" . History Ireland. Volume 1, issue 23 (May/June 2013).
A contingent under Seán Connolly occupied Dublin City Hall and adjacent buildings.Foy and Barton, pp. 87–90 They attempted to seize neighbouring Dublin Castle, the heart of British rule in Ireland. As they approached the gate a lone and unarmed police sentry, James O'Brien, attempted to stop them and was shot dead by Connolly. According to some accounts, he was the first casualty of the Rising. The rebels overpowered the soldiers in the guardroom but failed to press further. The British Army's chief intelligence officer, Major Ivon Price, fired on the rebels while the Under-Secretary for Ireland, Sir Matthew Nathan, helped shut the castle gates. Unbeknownst to the rebels, the Castle was lightly guarded and could have been taken with ease.Foy and Barton, pp. 84–85 The rebels instead laid siege to the Castle from City Hall. Fierce fighting erupted there after British reinforcements arrived. The rebels on the roof exchanged fire with soldiers on the street. Seán Connolly was shot dead by a sniper, becoming the first rebel casualty. By the following morning, British forces had re-captured City Hall and taken the rebels prisoner.
The rebels did not attempt to take some other key locations, notably Trinity College, in the heart of the city centre and defended by only a handful of armed unionist students. Failure to capture the telephone exchange in Crown Alley left communications in the hands of the Government, with GPO staff quickly repairing telephone wires that had been cut by the rebels. The failure to occupy strategic locations was attributed to lack of manpower. In at least two incidents, at Jacob's and Stephen's Green, the Volunteers and Citizen Army shot dead civilians trying to attack them or dismantle their barricades. Elsewhere, they hit civilians with their rifle butts to drive them off.
The British military were caught totally unprepared by the Rising, and their response on the first day was generally uncoordinated. Two squadrons of British cavalry were sent to investigate what was happening.Townsend, Easter 1916, p. 170 They took fire and casualties from rebel forces at the GPO and at the Four Courts.Coffey, Thomas M. Agony at Easter: The 1916 Irish Uprising, pp. 38, 44, 155 As one troop passed Nelson's Pillar, the rebels opened fire from the GPO, killing three cavalrymen and two horses and fatally wounding a fourth man. The cavalrymen retreated and were withdrawn to barracks. On Mount Street, a group of Volunteer Training Corps men stumbled upon the rebel position and four were killed before they reached Beggars Bush Barracks. Although ransacked, the barracks were never seized.
The only substantial combat of the first day of the Rising took place at the South Dublin Union where a piquet from the Royal Irish Regiment encountered an outpost of Éamonn Ceannt's force at the northwestern corner of the South Dublin Union. The British troops, after taking some casualties, managed to regroup and launch several assaults on the position before they forced their way inside and the small rebel force in the tin huts at the eastern end of the Union surrendered. However, the Union complex as a whole remained in rebel hands. A nurse in uniform, Margaret Keogh, was shot dead by British soldiers at the Union. She is believed to have been the first civilian killed in the Rising.
Three unarmed Dublin Metropolitan Police were shot dead on the first day of the Rising and their Commissioner pulled them off the streets. Partly as a result of the police withdrawal, a wave of looting broke out in the city centre, especially in the area of O'Connell Street (still officially called "Sackville Street" at the time).
Tuesday and Wednesday
Lord Wimborne, the Lord Lieutenant, declared martial law on Tuesday evening and handed over civil power to Brigadier-General William Lowe. British forces initially put their efforts into securing the approaches to Dublin Castle and isolating the rebel headquarters, which they believed was in Liberty Hall. The British commander, Lowe, worked slowly, unsure of the size of the force he was up against, and with only 1,269 troops in the city when he arrived from the Curragh Camp in the early hours of Tuesday 25 April. City Hall was taken from the rebel unit that had attacked Dublin Castle on Tuesday morning.Coogan 2001, p. 107
In the early hours of Tuesday, 120 British soldiers, with machine guns, occupied two buildings overlooking St Stephen's Green: the Shelbourne Hotel and United Services Club. At dawn they opened fire on the Citizen Army occupying the green. The rebels returned fire but were forced to retreat to the Royal College of Surgeons building. They remained there for the rest of the week, exchanging fire with British forces.
Fighting erupted along the northern edge of the city centre on Tuesday afternoon. In the northeast, British troops left Amiens Street railway station in an armoured train, to secure and repair a section of damaged tracks. They were attacked by rebels who had taken up position at Annesley Bridge. After a two-hour battle, the British were forced to retreat and several soldiers were captured. At Phibsborough, in the northwest, rebels had occupied buildings and erected barricades at junctions on the North Circular Road. The British summoned 18-pounder field artillery from Athlone and shelled the rebel positions, destroying the barricades. After a fierce firefight, the rebels withdrew.
That afternoon Pearse walked out into O'Connell Street with a small escort and stood in front of Nelson's Pillar. As a large crowd gathered, he read out a 'manifesto to the citizens of Dublin,' calling on them to support the Rising.Foy and Barton, p. 180
The rebels had failed to take either of Dublin's two main railway stations or either of its ports, at Dublin Port and Kingstown. As a result, during the following week, the British were able to bring in thousands of reinforcements from Britain and from their garrisons at the Curragh and Belfast. By the end of the week, British strength stood at over 16,000 men. Their firepower was provided by field artillery which they positioned on the Northside of the city at Phibsborough and at Trinity College, and by the patrol vessel Helga, which sailed up the Liffey, having been summoned from the port at Kingstown. On Wednesday, 26 April, the guns at Trinity College and Helga shelled Liberty Hall, and the Trinity College guns then began firing at rebel positions, first at Boland's Mill and then in O'Connell Street. Some rebel commanders, particularly James Connolly, did not believe that the British would shell the 'second city' of the British Empire.Foy and Barton, p. 181
The principal rebel positions at the GPO, the Four Courts, Jacob's Factory and Boland's Mill saw little action. The British surrounded and bombarded them rather than assault them directly. One Volunteer in the GPO recalled, "we did practically no shooting as there was no target". In Jacob's factory, another Volunteer noted, the garrison passed the time in entertainment, "everybody merry & cheerful" bar the "occasional sniping". However, where the rebels dominated the routes by which the British tried to funnel reinforcements into the city, there was fierce fighting.
At 5:25 PM a dozen Volunteers, including Eamon Martin, Garry Holohan, Robert Beggs, Sean Cody, Dinny O'Callaghan, Charles Shelley, and Peadar Breslin, attempted to occupy Broadstone railway station on Church Street. The attack was unsuccessful and Martin was injured.Witness Statement by Eamon Martin to Bureau of Military History, 1951Witness Statement of Sean Cody to Bureau of Military History, 1954Witness Statement of Nicholas Kaftan to Bureau of Military HistoryWitness Statement of Charles Shelley to Bureau of Military History, 1953
On Wednesday morning, hundreds of British troops encircled the Mendicity Institution, which was occupied by 26 Volunteers under Seán Heuston. British troops advanced on the building, supported by snipers and machine-gun fire, but the Volunteers put up stiff resistance. Eventually, the troops got close enough to hurl grenades into the building, some of which the rebels threw back. Exhausted and almost out of ammunition, Heuston's men became the first rebel position to surrender. Heuston had been ordered to hold his position for a few hours, to delay the British, but had held on for three days.O'Brien, Paul. Heuston's Fort – The Battle for the Mendicity Institute, 1916 . The Irish Story. 15 August 2012.
Reinforcements were sent to Dublin from Britain and disembarked at Kingstown on the morning of Wednesday 26 April. Heavy fighting occurred at the rebel-held positions around the Grand Canal as these troops advanced towards Dublin. More than 1,000 Sherwood Foresters were repeatedly caught in a crossfire trying to cross the canal at Mount Street Bridge. Seventeen Volunteers were able to severely disrupt the British advance, killing or wounding 240 men.Coogan, p. 122 Despite there being alternative routes across the canal nearby, General Lowe ordered repeated frontal assaults on the Mount Street position. The British eventually took the position, which had not been reinforced by the nearby rebel garrison at Boland's Mills, on Thursday,O'Brien, p. 69 but the fighting there inflicted up to two-thirds of their casualties for the entire week, at a cost of just four dead Volunteers. It had taken the British nearly nine hours to advance.
On Wednesday Linenhall Barracks on Constitution Hill was burnt down under the orders of Commandant Edward Daly to prevent its reoccupation by the British.
Thursday to Saturday
The rebel position at the South Dublin Union (site of the present-day St. James's Hospital) and Marrowbone Lane, further west along the canal, also inflicted heavy losses on British troops. The South Dublin Union was a large complex of buildings and there was vicious fighting around and inside the buildings. Cathal Brugha, a rebel officer, distinguished himself in this action and was badly wounded. By the end of the week, the British had taken some of the buildings in the Union, but others remained in rebel hands. British troops also took casualties in unsuccessful frontal assaults on the Marrowbone Lane Distillery.
The third major scene of fighting during the week was in the area of North King Street, north of the Four Courts. The rebels had established strong outposts in the area, occupying numerous small buildings and barricading the streets. From Thursday to Saturday, the British made repeated attempts to capture the area, in what was some of the fiercest fighting of the Rising. As the troops moved in, the rebels continually opened fire from windows and from behind chimneys and barricades. At one point, a platoon led by Major Sheppard made a bayonet charge on one of the barricades but was cut down by rebel fire. The British employed machine guns and attempted to avoid direct fire by using makeshift armoured trucks, and by mouse-holing through the inside walls of terraced houses to get near the rebel positions.Dorney, John. "The North King Street Massacre, Dublin 1916" . The Irish Story. 13 April 2012. By the time of the rebel headquarters' surrender on Saturday, the South Staffordshire Regiment under Colonel Taylor had advanced only a short distance down the street at a cost of 11 dead and 28 wounded.Coogan pp. 152–155 The enraged troops broke into the houses along the street and shot or bayoneted fifteen unarmed male civilians whom they accused of being rebel fighters.Coogan, p. 155
Elsewhere, at Portobello Barracks, an officer named Bowen Colthurst summarily executed six civilians, including the pacifist nationalist activist, Francis Sheehy-Skeffington. These instances of British troops killing Irish civilians would later be highly controversial in Ireland.
Surrender
The headquarters garrison at the GPO was forced to evacuate after days of shelling when a fire caused by the shells spread to the GPO. Connolly had been incapacitated by a bullet wound to the ankle and had passed command on to Pearse. The O'Rahilly was killed in a sortie from the GPO. They tunnelled through the walls of the neighbouring buildings in order to evacuate the Post Office without coming under fire and took up a new position in 16 Moore Street. The young Seán McLoughlin was given military command and planned a breakout, but Pearse realised this plan would lead to further loss of civilian life.
On the eve of the surrender, there had been about 35 Cumann na mBan women remaining in the GPO. In the final group that left with Pearse and Connolly, there were three: Connolly's aide de camp, Winifred Carney, who had entered with the original ICA contingent, and the dispatchers and nurses Elizabeth O'Farrell, and Julia Grenan.Eight Women of the Easter Rising The New York Times, 16 March 2016
On Saturday 29 April, from this new headquarters, Pearse issued an order for all companies to surrender, and surrendered unconditionally to Brigadier-General Lowe.
The other posts surrendered only after Pearse's surrender order, carried by O'Farrell, reached them. Sporadic fighting therefore continued until Sunday, when word of the surrender reached the other rebel garrisons. Command of British forces had passed from Lowe to General John Maxwell, who arrived in Dublin just in time to take the surrender. Maxwell was made temporary military governor of Ireland.
The Rising outside Dublin
The Rising was planned to occur across the nation, but MacNeill's countermanding order, coupled with the failure to secure German arms, significantly hindered this objective. Charles Townshend contended that serious plans for a national Rising were meagre, diminished by the leadership's focus on Dublin, although this view has become increasingly contested.
In the south, around 1,200 Volunteers commanded by Tomás Mac Curtain mustered on the Sunday in Cork, but they dispersed on Wednesday after receiving nine contradictory orders by dispatch from the Volunteer leadership in Dublin. At their Sheares Street headquarters, some of the Volunteers engaged in a standoff with British forces. Much to the anger of many Volunteers, MacCurtain, under pressure from Catholic clergy, agreed to surrender his men's arms to the British. The only violence in County Cork occurred when the RIC attempted to raid the home of the Kent family. The Kent brothers, who were Volunteers, engaged in a three-hour firefight with the RIC. An RIC officer and one of the brothers were killed, while another brother was later executed. Virtually all rebel family homes were raided, either during or after the Rising.
In the north, Volunteer companies were mobilised in County Tyrone at Coalisland (including 132 men from Belfast led by IRB President Denis McCullough) and Carrickmore, under the leadership of Patrick McCartan. They also mobilised at Creeslough, County Donegal under Daniel Kelly and James McNulty. However, in part because of the confusion caused by the countermanding order, the Volunteers in these locations dispersed without fighting. McCartan claimed that the decision by the leadership of the rebellion not to share plans led to poor communication and uncertainty. McCartan wrote that Tyrone Volunteers could have
Ashbourne
In north County Dublin, about 60 Volunteers mobilised near Swords. They belonged to the 5th Battalion of the Dublin Brigade (also known as the Fingal Battalion), and were led by Thomas Ashe and his second in command, Richard Mulcahy. Unlike the rebels elsewhere, the Fingal Battalion successfully employed guerrilla tactics. They set up camp and Ashe split the battalion into four sections: three would undertake operations while the fourth was kept in reserve, guarding camp and foraging for food.Maguire, Paul. The Fingal Battalion: A Blueprint for the Future? . The Irish Sword. Military History Society of Ireland, 2011. pp. 9–13 The Volunteers moved against the RIC barracks in Swords, Donabate and Garristown, forcing the RIC to surrender and seizing all the weapons. They also damaged railway lines and cut telegraph wires. The railway line at Blanchardstown was bombed to prevent a troop train from reaching Dublin. This derailed a cattle train, which had been sent ahead of the troop train.The 1916 Rebellion Handbook p. 27
The only large-scale engagement of the Rising, outside Dublin city, was at Ashbourne, County Meath. On Friday, about 35 Fingal Volunteers surrounded the Ashbourne RIC barracks and called on it to surrender, but the RIC responded with a volley of gunfire. A firefight followed, and the RIC surrendered after the Volunteers attacked the building with a homemade grenade. Before the surrender could be taken, up to sixty RIC men arrived in a convoy, sparking a five-hour gun battle, in which eight RIC men were killed and 18 wounded. Two Volunteers were also killed and five wounded, and a civilian was fatally shot. The RIC surrendered and were disarmed. Ashe let them go after warning them not to fight against the Irish Republic again. Ashe's men camped at Kilsalaghan near Dublin until they received orders to surrender on Saturday. The Fingal Battalion's tactics during the Rising foreshadowed those of the IRA during the War of Independence that followed.
Volunteer contingents also mobilised nearby in counties Meath and Louth but proved unable to link up with the North Dublin unit until after it had surrendered. In County Louth, Volunteers shot dead an RIC man near the village of Castlebellingham on 24 April, in an incident in which 15 RIC men were also taken prisoner.
Enniscorthy
In County Wexford, 100–200 Volunteers—led by Robert Brennan, Séamus Doyle and Seán Etchingham—took over the town of Enniscorthy on Thursday 27 April until Sunday.Boyle, John F. The Irish Rebellion of 1916: a brief history of the revolt and its suppression (Chapter IV: Outbreaks in the Country). BiblioBazaar, 2009. pp. 127–152 Volunteer officer Paul Galligan had cycled 200 km from rebel headquarters in Dublin with orders to mobilise.Dorney, John. The Easter Rising in County Wexford . The Irish Story. 10 April 2012. They blocked all roads into the town and made a brief attack on the RIC barracks, but chose to blockade it rather than attempt to capture it. They flew the tricolour over the Athenaeum building, which they had made their headquarters, and paraded uniformed in the streets. They also occupied Vinegar Hill, where the United Irishmen had made a last stand in the 1798 rebellion. The public largely supported the rebels and many local men offered to join them.
By Saturday, up to 1,000 rebels had been mobilised, and a detachment was sent to occupy the nearby village of Ferns. In Wexford, the British assembled a column of 1,000 soldiers (including the Connaught Rangers), two field guns and a 4.7 inch naval gun on a makeshift armoured train. On Sunday, the British sent messengers to Enniscorthy, informing the rebels of Pearse's surrender order. However, the Volunteer officers were sceptical. Two of them were escorted by the British to Arbour Hill Prison, where Pearse confirmed the surrender order.
Galway
In County Galway, 600–700 Volunteers mobilised on Tuesday under Liam Mellows. His plan was to "bottle up the British garrison and divert the British from concentrating on Dublin".Dorney, John. The Easter Rising in Galway, 1916 . The Irish Story. 4 March 2016. However, his men were poorly armed, with only 25 rifles, 60 revolvers, 300 shotguns and some homemade grenades – many of them only had pikes.Mark McCarthy & Shirley Wrynn. County Galway's 1916 Rising: A Short History . Galway County Council. Most of the action took place in a rural area to the east of Galway city. They made unsuccessful attacks on the RIC barracks at Clarinbridge and Oranmore, captured several officers, and bombed a bridge and railway line, before taking up position near Athenry. There was also a skirmish between rebels and an RIC mobile patrol at Carnmore crossroads. A constable, Patrick Whelan, was shot dead after he had called to the rebels: "Surrender, boys, I know ye all".
On Wednesday, a British warship arrived in Galway Bay and shelled the countryside on the northeastern edge of Galway. The rebels retreated southeast to Moyode, an abandoned country house and estate. From here they set up lookout posts and sent out scouting parties. On Friday, a second vessel landed 200 Royal Marines and began shelling the countryside near the rebel position. The rebels retreated further south to Limepark, another abandoned country house. Deeming the situation to be hopeless, they dispersed on Saturday morning. Many went home and were arrested following the Rising, while others, including Mellows, went "on the run". By the time British reinforcements arrived in the west, the Rising there had already disintegrated.
Limerick and Clare
In County Limerick, 300 Irish Volunteers assembled at Glenquin Castle near Killeedy, but they did not take any military action.
In County Clare, Micheal Brennan marched with 100 Volunteers (from Meelick, Oatfield, and Cratloe) to the River Shannon on Easter Monday to await orders from the Rising leaders in Dublin, and weapons from the expected Casement shipment. However, neither arrived and no actions were taken.
Casualties
The Easter Rising resulted in at least 485 deaths, according to the Glasnevin Trust.
Of those killed:
260 (about 54%) were civilians
126 (about 26%) were U.K. forces (120 U.K. military personnel, 5 Volunteer Training Corps members, and one Canadian soldier)
35 – Irish Regiments:-
11 – Royal Dublin Fusiliers
10 – Royal Irish Rifles
9 – Royal Irish Regiment
2 – Royal Inniskilling Fusiliers
2 – Royal Irish Fusiliers
1 – Leinster Regiment
74 – British Regiments:-
29 – Sherwood Foresters
15 – South Staffordshire
2 – North Staffordshire
1 – Royal Field Artillery
4 – Royal Engineers
5 – Army Service Corps
10 – Lancers
7 – 8th Hussars
2 – 2nd King Edwards Horse
3 – Yeomanry
1 – Royal Navy
82 (about 16%) were Irish rebel forces (64 Irish Volunteers, 15 Irish Citizen Army and 3 Fianna Éireann)
17 (about 4%) were police
14 – Royal Irish Constabulary
3 – Dublin Metropolitan Police
More than 2,600 were wounded, including at least 2,200 civilians and rebels, at least 370 British soldiers and 29 policemen.Foy and Barton, p. 325 All 16 police fatalities and 22 of the British soldiers killed were Irishmen.1916 Rebellion Handbook, pp. 50–55 About 40 of those killed were children (under 17 years old), four of whom were members of the rebel forces.
The number of casualties each day steadily rose, with 55 killed on Monday and 78 killed on Saturday. The British Army suffered their biggest losses in the Battle of Mount Street Bridge on Wednesday when at least 30 soldiers were killed. The rebels also suffered their biggest losses on that day. The RIC suffered most of their casualties in the Battle of Ashbourne on Friday.
The majority of the casualties, both killed and wounded, were civilians. Most of the civilian casualties and most of the casualties overall were caused by the British Army. This was due to the British using artillery, incendiary shells and heavy machine guns in built-up areas, as well as their "inability to discern rebels from civilians". One Royal Irish Regiment officer recalled, "they regarded, not unreasonably, every one they saw as an enemy, and fired at anything that moved". Many other civilians were killed when caught in the crossfire. Both sides, British and rebel, also shot civilians deliberately on occasion; for not obeying orders (such as to stop at checkpoints), for assaulting or attempting to hinder them, and for looting. There were also instances of British troops killing unarmed civilians out of revenge or frustration: notably in the North King Street Massacre, where fifteen were killed, and at Portobello Barracks, where six were shot. Furthermore, there were incidents of friendly fire. On 29 April, the Royal Dublin Fusiliers under Company Quartermaster Sergeant Robert Flood shot dead two British officers and two Irish civilian employees of the Guinness Brewery after he decided they were rebels. Flood was court-martialled for murder but acquitted.
According to the historian Fearghal McGarry, the rebels attempted to avoid needless bloodshed. Desmond Ryan stated that Volunteers were told "no firing was to take place except under orders or to repel attack". Aside from the engagement at Ashbourne, policemen and unarmed soldiers were not systematically targeted, and a large group of policemen was allowed to stand at Nelson's Pillar throughout Monday. McGarry writes that the Irish Citizen Army "were more ruthless than Volunteers when it came to shooting policemen" and attributes this to the "acrimonious legacy" of the Dublin Lock-out.
The vast majority of the Irish casualties were buried in Glasnevin Cemetery in the aftermath of the fighting. British families came to Dublin Castle in May 1916 to reclaim the bodies of British soldiers, and funerals were arranged. Soldiers whose bodies were not claimed were given military funerals in Grangegorman Military Cemetery.
Aftermath
View of O'Connell Bridge, 1916, on a German postcard. The caption reads: Rising of the Sinn Feiners in Ireland. O'Connell bridge with Dublin city, where the fiercest clashes took place.
Arrests and executions
In the immediate aftermath, the Rising was commonly described as the "Sinn Féin Rebellion", reflecting a popular belief that Sinn Féin, a separatist organisation that was neither militant nor republican, was behind it. Thus General Maxwell signalled his intention "to arrest all dangerous Sinn Feiners", including "those who have taken an active part in the movement although not in the present rebellion".
A total of 3,430 men and 79 women were arrested, including 425 people arrested for looting; roughly 1,500 of those arrested had taken part in the Rising.Foy and Barton, pp. 294–295 Detainees were overwhelmingly young, Catholic and religious. 1,424 men and 73 women were released after a few weeks of imprisonment; those interned without trial in England and Wales (see below) were released on Christmas Eve, 1916; most of those convicted were held until June 1917.
A series of courts martial began on 2 May, in which 187 people were tried. Controversially, Maxwell decided that the courts martial would be held in secret and without a defence, which Crown law officers later ruled to have been illegal. Some of those who conducted the trials had commanded British troops involved in suppressing the Rising, a conflict of interest that the Military Manual prohibited. Only one of those tried by courts martial was a woman, Constance Markievicz, who was also the only woman to be kept in solitary confinement. Ninety were sentenced to death. Fifteen of those (including all seven signatories of the Proclamation) had their sentences confirmed by Maxwell and fourteen were executed by firing squad at Kilmainham Gaol between 3 and 12 May.
Maxwell stated that only the "ringleaders" and those proven to have committed "cold-blooded murder" would be executed. However, some of those executed were not leaders and did not kill anyone, such as Willie Pearse and John MacBride; Thomas Kent had not taken part in the Rising at all—he was executed for the killing of a police officer during the raid on his house the week after the Rising. The most prominent leader to escape execution was Éamon de Valera, Commandant of the 3rd Battalion, who did so partly because of his American birth. Hobson went into hiding, re-emerging after the June amnesty to be met largely with scorn.
Most of the executions took place over a ten-day period:
3 May: Patrick Pearse, Thomas MacDonagh and Thomas Clarke
4 May: Joseph Plunkett, William Pearse, Edward Daly and Michael O'Hanrahan
5 May: John MacBride
8 May: Éamonn Ceannt, Michael Mallin, Seán Heuston and Con Colbert
12 May: James Connolly and Seán Mac Diarmada
The arrests greatly affected hundreds of families and communities, and anti-English sentiment grew among the public as separatists portrayed the arrests as evidence of a draconian approach. The public at large feared that the response was "an assault on the entirety of the Irish national cause". This radical transformation was recognised at the time and became a point of concern among British authorities; after Connolly's execution, the remaining death sentences were commuted to penal servitude.Foy and Barton, p. 325 Growing support for republicanism was evident as early as June 1916; imprisonment largely failed to deter militants – interned rebels went on to fight at higher rates than those who were not interned – and they quickly reorganised the movement.
Frongoch prison camp
Under Regulation 14B of the Defence of the Realm Act 1914, 1,836 men were interned at internment camps and prisons in England and Wales. As urban areas were becoming the nexus of republicanism, internees were drawn largely from such areas. Many internees had not taken part in the Rising; many thereafter became sympathetic to the nationalist cause.
Internees occupied themselves with lectures, craftwork, music and sports. These activities – which included games of Gaelic football, the crafting of Gaelic symbols, and lessons in Irish – regularly had a nationalist character, and the cause itself developed a sense of cohesion within the camps. Military studies in the camps included discussion of the Rising. Internment lasted until December of that year, with releases having begun in July. Martial law had been lifted by the end of November.
Casement was tried in London for high treason and hanged at Pentonville Prison on 3 August.
British atrocities
Along with the killings at Portobello Barracks, the most notorious incident was the "North King Street Massacre". On the night of 28–29 April, British soldiers of the South Staffordshire Regiment, under Colonel Henry Taylor, had burst into houses on North King Street and killed fifteen male civilians whom they accused of being rebels. The soldiers shot or bayoneted the victims, and then secretly buried some of them in cellars or backyards after robbing them. The area saw some of the fiercest fighting of the Rising and the British had taken heavy casualties for little gain. Maxwell attempted to excuse the killings and argued that the rebels were ultimately responsible. He claimed that "the rebels wore no uniform" and that the people of North King Street were rebel sympathisers. Maxwell concluded that such incidents "are absolutely unavoidable in such a business as this" and that "under the circumstance the troops [...] behaved with the greatest restraint". A private brief, prepared for the Prime Minister, said the soldiers "had orders not to take any prisoners", which they took to mean they were to shoot any suspected rebel. The City Coroner's inquest found that soldiers had killed "unarmed and unoffending" residents. The military court of inquiry ruled that no specific soldiers could be held responsible, and no action was taken.Coogan, pp. 152–155; Dorney, John. "The North King Street Massacre, Dublin 1916" . The Irish Story. 13 April 2012.
Inquiry
A Royal Commission was set up to enquire into the causes of the Rising. It began hearings on 18 May under the chairmanship of Lord Hardinge of Penshurst. The Commission heard evidence from Sir Matthew Nathan, Augustine Birrell, Lord Wimborne, Sir Neville Chamberlain (Inspector-General of the Royal Irish Constabulary), General Lovick Friend, Major Ivor Price of Military Intelligence and others.Ó Broin, Leon, Dublin Castle & the 1916 Rising pp. 153–159 The report, published on 26 June, was critical of the Dublin administration, saying that "Ireland for several years had been administered on the principle that it was safer and more expedient to leave the law in abeyance if collision with any faction of the Irish people could thereby be avoided." Birrell and Nathan had resigned immediately after the Rising. Wimborne resisted the pressure to resign, but was recalled to London by Asquith. He was re-appointed in July 1916. Chamberlain also resigned.
Reaction of the Dublin public
At first, many Dubliners were bewildered by the outbreak of the Rising. James Stephens, who was in Dublin during the week, thought, "None of these people were prepared for Insurrection. The thing had been sprung on them so suddenly they were unable to take sides." Eyewitnesses compared the ruin of Dublin with the destruction of towns in Europe in the war: the physical damage, which included over ninety fires, was largely confined to Sackville Street. In the immediate aftermath, the Irish government was in disarray.
There was great hostility towards the Volunteers in some parts of the city, which escalated to physical violence in some instances. Historian Keith Jeffery noted that most of the opposition came from the dependents of British Army personnel. The death and destruction, which brought disrupted trade, considerable looting and unemployment, contributed to antagonism towards the Volunteers, who were denounced as "murderers" and "starvers of the people". The material damage caused by the Rising was estimated at £2,500,000, a sum the British Government later paid in compensation. International aid was supplied to residents, and nationalists aided the dependents of Volunteers.
Support for the rebels did exist among Dubliners, expressed both through crowds cheering prisoners and through reverent silence. With martial law seeing such expression prosecuted, many would-be supporters elected to remain silent, although "a strong undercurrent of disloyalty" was still felt. Drawing upon this support, and amidst a deluge of nationalist ephemera, the widely read Catholic Bulletin eulogised Volunteers killed in action and implored readers to donate; entertainments were organised to the same end, targeting local audiences with great success. The Bulletin's Catholic character allowed it to evade the widespread censorship of the press and seizure of republican propaganda; it therefore exposed many unaware readers to such propaganda.
Rise of Sinn Féin
A meeting called by George Noble Plunkett on 19 April 1917 led to the formation of a broad political movement under the banner of Sinn Féin which was formalised at the Sinn Féin Ard Fheis of 25 October 1917. The Conscription Crisis of 1918 further intensified public support for Sinn Féin before the general election to the British Parliament on 14 December 1918, which resulted in a landslide victory for Sinn Féin, winning 73 seats out of 105, whose Members of Parliament (MPs) gathered in Dublin on 21 January 1919 to form Dáil Éireann and adopt the Declaration of Independence.
During that election, Sinn Féin drew directly upon the Rising, and much of their popularity was attributable to that association, one that continued to confer political prestige until the end of the century. Many participants in the Rising soon assumed electoral positions. Sinn Féin served as an alternative to the Irish Parliamentary Party, whose support for British institutions had alienated voters.
Sinn Féin would become closely aligned with the Irish Republican Army, who sought to continue the IRB's ideals and waged armed conflict against British forces.
Legacy
1916 – containing both the Rising and the Battle of the Somme, events paramount to the collective memory of Irish Republicans and Ulster Unionists, respectively – had a profound effect on Ireland and is remembered accordingly. The Rising was among the events that ended colonial rule in Ireland, succeeded by the Irish War of Independence. The legacy of the Rising possesses many dimensions, although the declaration of the Republic and the ensuing executions remain focal points.
Annual parades in celebration of the Rising were held for many years, but ceased after the Troubles in Northern Ireland began, as they came to be seen as supportive of republican paramilitary violence – the Rising remains a common feature of republican murals in Northern Ireland. These commemorations celebrated the Rising as the origin of the Irish state, a stance reiterated through extensive analysis. Unionists contend that the Rising was an illegal attack on the British state that should not be celebrated. Revival of the parades has prompted significant public debate, although the centenary of the Rising, which featured ceremonies and memorials, was largely successful and praised for its sensitivity.
The leaders of the Rising were "instantly apotheosized", and remembrance was situated within a larger republican tradition of claimed martyrdom – a narrative that became the foundational myth of the Irish Free State, with the Catholic Church assuming a place within the remembrance as the association between republicanism and Catholicism grew. The "Pearsean combination of Catholicism, Gaelicism, and spiritual nationalism" would become dominant within republicanism, its ideas gaining a quasi-religiosity while helping to unify later strands of the movement. Within the Free State, the Rising was sanctified by officials and positioned as a "highly disciplined military operation". Historians largely agree that the Rising succeeded by offering a symbolic display of sacrifice, while as a military action it was a considerable failure. As Monk Gibbon remarked, the "shots from khaki-uniformed firing parties did more to create the Republic of Ireland than any shot fired by a Volunteer in the course of Easter week".
Literature surrounding the Rising was significant: MacDonagh, Plunkett, and Pearse were themselves poets, whose ideals were granted a spiritual dimension in their work; Arnold Bax, Francis Ledwidge, George William Russell and W. B. Yeats responded through verse that ranged from endorsement to elegy. Although James Joyce was ambivalent towards the insurgence, metaphors of and imagery consistent with the Rising appear in his later work. Hugh Leonard, Denis Johnston, Tom Murphy, Roddy Doyle and Sorley MacLean are among the writers who would later invoke the Rising. Now extensively dramatised, its theatricality was identified at the time and has been stressed in its remembrance. Literary and political evocation position the Rising as a "watershed moment" central to Irish history.
Black, Basque, Breton, Catalan and Indian nationalists have drawn upon the Rising and its consequences. For Indian nationalists, Jawaharlal Nehru noted, the appeal lay in the symbolic display of a transcendent, "invincible spirit of a nation"; the Rising held a similarly broad appeal in America, among a diasporic and occasionally socialist nationalism. Vladimir Lenin was effusive, ascribing to its anti-imperialism a singular significance within geopolitics – his only misgiving was that it stood apart from the broader wave of revolution he anticipated.
During the Troubles, significant revisionism of the Rising occurred. Revisionists contended that it was not a "heroic drama" as thought but rather informed the violence transpiring, by having legitimised a "cult of 'blood sacrifice'". With the advent of a Provisional IRA ceasefire and the beginning of what became known as the Peace Process during the 1990s, the government's view of the Rising grew more positive and in 1996 an 80th anniversary commemoration at the Garden of Remembrance in Dublin was attended by the Taoiseach and leader of Fine Gael, John Bruton.Reconstructing the Easter Rising , Colin Murphy, The Village, 16 February 2006
In popular culture
"Easter, 1916", a poem by the poet and playwright W. B. Yeats, published in 1921.
"The Foggy Dew" is a song by Canon Charles O'Neill, composed during the Irish War of Independence, that eulogises the rebels of the Easter Rising.
The Plough and the Stars is a 1926 play by Seán O'Casey that takes place during the Easter Rising.
Insurrection is a 1950 novel by Liam O'Flaherty that takes place during the Rising.
The Red and the Green is a 1965 novel by Iris Murdoch that covers the events leading up to and during the Easter Rising.
Insurrection is an eight-part 1966 docudrama made by Telefís Éireann for the 50th anniversary of the Rising. It was rebroadcast during the centenary celebrations in 2016.
"Grace" is a 1985 song about the marriage of Joseph Plunkett to Grace Gifford in Kilmainham Gaol before his execution.
1916, A Novel of the Irish Rebellion is a 1998 historical novel by Morgan Llywelyn.
A Star Called Henry is a 1999 novel by Roddy Doyle that partly recounts the Easter Rising through the involvement of the novel's protagonist Henry Smart.
At Swim, Two Boys is a 2001 novel by Irish writer Jamie O'Neill, set in Dublin before and during the 1916 Easter Rising.
Rebel Heart is a 2001 BBC miniseries on the life of a (fictional) nationalist from the Rising through the Irish Civil War.
Blood Upon the Rose is a 2009 graphic novel by Gerry Hunt depicting the events of the Easter Rising.Edward Madigan, "Review of Gerry Hunt's 'Blood Upon the Rose', part one" , Pue's Occurrences, 2 November 2009
1916 Seachtar na Cásca is a 2010 Irish TV documentary series based on the Easter Rising, telling about seven signatories of the rebellion.
The Dream of the Celt is a 2012 novel by the Nobel Prize winner in Literature Mario Vargas Llosa, based on the life and death of Roger Casement including his involvement with the Rising.
Rebellion is a 2016 mini-series about the Easter Rising.
1916 is a 2016 three-part documentary miniseries about the Easter Rising narrated by Liam Neeson.
Penance is a 2018 Irish film set primarily in Donegal in 1916 and in Derry in 1969, in which the Rising is also featured.
See also
List of Irish uprisings
Property Losses (Ireland) Committee
Notes
References
Sources
Augusteijn, Joost (ed.), The Memoirs of John M. Regan, a Catholic Officer in the RIC and RUC, 1909–48, Witnessed Rising.
Coogan, Tim Pat, 1916: The Easter Rising (2001)
Coogan, Tim Pat, The IRA (2nd ed. 2000),
De Rosa, Peter. Rebels: The Irish Rising of 1916. Fawcett Columbine, New York. 1990.
Eberspächer, Cord/Wiechmann, Gerhard: "Erfolg Revolution kann Krieg entscheiden". Der Einsatz von S.M.H. LIBAU im irischen Osteraufstand 1916 ("Successful revolution may decide war". The use of S.M.H. LIBAU in the Irish Easter rising 1916), in: Schiff & Zeit, Nr. 67, Frühjahr 2008, S. 2–16.
Foster, R. F. Vivid Faces: The Revolutionary Generation in Ireland, 1890–1923 (2015) excerpt
Foy, Michael and Barton, Brian, The Easter Rising
Greaves, C. Desmond, The Life and Times of James Connolly
Kostick, Conor & Collins, Lorcan, The Easter Rising, A Guide to Dublin in 1916
Lyons, F.S.L., Ireland Since the Famine
Macardle, Dorothy, The Irish Republic (Dublin 1951)
"Patrick Pearse and Patriotic Soteriology," in Yonah Alexander and Alan O'Day, eds, The Irish Terrorism Experience, (Aldershot: Dartmouth) 1991
Ó Broin, Leon, Dublin Castle & the 1916 Rising, Sidgwick & Jackson, 1970
Further reading
Bunbury, Turtle. Easter Dawn – The 1916 Rising (Mercier Press, 2015)
McCarthy, Mark. Ireland's 1916 Rising: Explorations of History-Making, Commemoration & Heritage in Modern Times (2013), historiography excerpt
McKeown, Eitne, 'A Family in the Rising' Dublin Electricity Supply Board Journal 1966.
Murphy, John A., Ireland in the Twentieth Century
Purdon, Edward, The 1916 Rising
Shaw, Francis, S.J., "The Canon of Irish History: A Challenge", in Studies: An Irish Quarterly Review, LXI, 242, 1972, pp. 113–52
External links
Easter 1916 – Digital Heritage Website
The 1916 Rising – an Online Exhibition. National Library of Ireland
The Letters of 1916 – Crowdsourcing Project Trinity College Dublin
Lillian Stokes (1878–1955): account of the 1916 Easter Rising
Primary and secondary sources relating to the Easter Rising (Sources database, National Library of Ireland)
Easter Rising site and walking tour of 1916 Dublin
News articles and letters to the editor in The Age, 27 April 1916
The Easter Rising – BBC History
The Irish Story archive on the Rising
Easter Rising website
The Discussion On Self-Determination Summed Up Lenin's discussion of the importance of the rebellion appears in Section 10: The Irish Rebellion of 1916
Bureau of Military History – Witness Statements Online (PDF files)
Genetics
Genetics is the study of genes, genetic variation, and heredity in organisms.Hartl D, Jones E (2005) It is an important branch in biology because heredity is vital to organisms' evolution. Gregor Mendel, a Moravian Augustinian friar working in the 19th century in Brno, was the first to study genetics scientifically. Mendel studied "trait inheritance", patterns in the way traits are handed down from parents to offspring over time. He observed that organisms (pea plants) inherit traits by way of discrete "units of inheritance". This term, still used today, is a somewhat ambiguous definition of what is referred to as a gene.
Trait inheritance and molecular inheritance mechanisms of genes are still primary principles of genetics in the 21st century, but modern genetics has expanded to study the function and behavior of genes. Gene structure and function, variation, and distribution are studied within the context of the cell, the organism (e.g. dominance), and within the context of a population. Genetics has given rise to a number of subfields, including molecular genetics, epigenetics, population genetics, and paleogenetics. Organisms studied within the broad field span the domains of life (archaea, bacteria, and eukarya).
Genetic processes work in combination with an organism's environment and experiences to influence development and behavior, often referred to as nature versus nurture. The intracellular or extracellular environment of a living cell or organism may increase or decrease gene transcription. A classic example is two seeds of genetically identical corn, one placed in a temperate climate and one in an arid climate (lacking sufficient rainfall). While the average height the two corn stalks could grow to is genetically determined, the one in the arid climate only grows to half the height of the one in the temperate climate due to lack of water and nutrients in its environment.
Etymology
The word genetics stems from the ancient Greek genetikos, meaning "genitive"/"generative", which in turn derives from genesis, meaning "origin".
History
The observation that living things inherit traits from their parents has been used since prehistoric times to improve crop plants and animals through selective breeding. The modern science of genetics, seeking to understand this process, began with the work of the Augustinian friar Gregor Mendel in the mid-19th century.
Prior to Mendel, Imre Festetics, a Hungarian noble who lived in Kőszeg, was the first to use the word "genetic" in a hereditarian context, and is considered the first geneticist. He described several rules of biological inheritance in his work The genetic laws of nature (Die genetischen Gesetze der Natur, 1819). His second law is the same as that which Mendel published. In his third law, he developed the basic principles of mutation (he can be considered a forerunner of Hugo de Vries). Festetics argued that changes observed in the generation of farm animals, plants, and humans are the result of scientific laws. Festetics empirically deduced that organisms inherit their characteristics, not acquire them. He recognized recessive traits and inherent variation by postulating that traits of past generations could reappear later, and that organisms could produce progeny with different attributes. These observations represent an important prelude to Mendel's theory of particulate inheritance insofar as they mark a transition of heredity from the status of myth to that of a scientific discipline, providing a fundamental theoretical basis for genetics in the twentieth century. Text was copied from this source, which is available under a Creative Commons Attribution 4.0 International License.
Other theories of inheritance preceded Mendel's work. A popular theory during the 19th century, and implied by Charles Darwin's 1859 On the Origin of Species, was blending inheritance: the idea that individuals inherit a smooth blend of traits from their parents. Mendel's work provided examples where traits were definitely not blended after hybridization, showing that traits are produced by combinations of distinct genes rather than a continuous blend. Blending of traits in the progeny is now explained by the action of multiple genes with quantitative effects. Another theory that had some support at that time was the inheritance of acquired characteristics: the belief that individuals inherit traits strengthened by their parents. This theory (commonly associated with Jean-Baptiste Lamarck) is now known to be wrong—the experiences of individuals do not affect the genes they pass to their children.Lamarck, J-B (2008). In Encyclopædia Britannica. Retrieved from Encyclopædia Britannica Online on 16 March 2008. Other theories included Darwin's pangenesis (which had both acquired and inherited aspects) and Francis Galton's reformulation of pangenesis as both particulate and inherited.Peter J. Bowler, The Mendelian Revolution: The Emergency of Hereditarian Concepts in Modern Science and Society (Baltimore: Johns Hopkins University Press, 1989): chapters 2 & 3.
Mendelian genetics
Modern genetics started with Mendel's studies of the nature of inheritance in plants. In his paper "Versuche über Pflanzenhybriden" ("Experiments on Plant Hybridization"), presented in 1865 to the Naturforschender Verein (Society for Research in Nature) in Brno, Mendel traced the inheritance patterns of certain traits in pea plants and described them mathematically. Although this pattern of inheritance could only be observed for a few traits, Mendel's work suggested that heredity was particulate, not acquired, and that the inheritance patterns of many traits could be explained through simple rules and ratios.
The importance of Mendel's work did not gain wide understanding until 1900, after his death, when Hugo de Vries and other scientists rediscovered his research. William Bateson, a proponent of Mendel's work, coined the word genetics in 1905, in a letter to the zoologist Adam Sedgwick, "Reader in Animal Morphology" at Trinity College, Cambridge.genetics, n., Oxford English Dictionary, 3rd ed. The adjective genetic, derived from the Greek word genesis—γένεσις, "origin", predates the noun and was first used in a biological sense in 1860.genetic, adj., Oxford English Dictionary, 3rd ed. Bateson both acted as a mentor and was aided significantly by the work of other scientists from Newnham College at Cambridge, specifically the work of Becky Saunders, Nora Darwin Barlow, and Muriel Wheldale Onslow. Bateson popularized the usage of the word genetics to describe the study of inheritance in his inaugural address to the Third International Conference on Plant Hybridization in London in 1906, initially titled the "International Conference on Hybridisation and Plant Breeding"; the title was changed as a result of Bateson's speech.
After the rediscovery of Mendel's work, scientists tried to determine which molecules in the cell were responsible for inheritance. In 1900, Nettie Stevens began studying the mealworm. Over the next 11 years, she discovered that females only had the X chromosome and males had both X and Y chromosomes. She was able to conclude that sex is a chromosomal factor and is determined by the male. In 1911, Thomas Hunt Morgan argued that genes are on chromosomes, based on observations of a sex-linked white eye mutation in fruit flies. In 1913, his student Alfred Sturtevant used the phenomenon of genetic linkage to show that genes are arranged linearly on the chromosome.
Molecular genetics
Although genes were known to exist on chromosomes, chromosomes are composed of both protein and DNA, and scientists did not know which of the two is responsible for inheritance. In 1928, Frederick Griffith discovered the phenomenon of transformation: dead bacteria could transfer genetic material to "transform" other still-living bacteria. Sixteen years later, in 1944, the Avery–MacLeod–McCarty experiment identified DNA as the molecule responsible for transformation. Reprint: The role of the nucleus as the repository of genetic information in eukaryotes had been established by Hämmerling in 1943 in his work on the single celled alga Acetabularia. The Hershey–Chase experiment in 1952 confirmed that DNA (rather than protein) is the genetic material of the viruses that infect bacteria, providing further evidence that DNA is the molecule responsible for inheritance.
James Watson and Francis Crick determined the structure of DNA in 1953, using the X-ray crystallography work of Rosalind Franklin and Maurice Wilkins that indicated DNA has a helical structure (i.e., shaped like a corkscrew). Their double-helix model had two strands of DNA with the nucleotides pointing inward, each matching a complementary nucleotide on the other strand to form what look like rungs on a twisted ladder. This structure showed that genetic information exists in the sequence of nucleotides on each strand of DNA. The structure also suggested a simple method for replication: if the strands are separated, new partner strands can be reconstructed for each based on the sequence of the old strand. This property is what gives DNA its semi-conservative nature where one strand of new DNA is from an original parent strand.
Although the structure of DNA showed how inheritance works, it was still not known how DNA influences the behavior of cells. In the following years, scientists tried to understand how DNA controls the process of protein production. It was discovered that the cell uses DNA as a template to create matching messenger RNA, molecules with nucleotides very similar to DNA. The nucleotide sequence of a messenger RNA is used to create an amino acid sequence in protein; this translation between nucleotide sequences and amino acid sequences is known as the genetic code.
With the newfound molecular understanding of inheritance came an explosion of research. A notable theory arose from Tomoko Ohta in 1973 with her amendment to the neutral theory of molecular evolution through publishing the nearly neutral theory of molecular evolution. In this theory, Ohta stressed the importance of natural selection and the environment to the rate at which genetic evolution occurs. One important development was chain-termination DNA sequencing in 1977 by Frederick Sanger. This technology allows scientists to read the nucleotide sequence of a DNA molecule. In 1983, Kary Banks Mullis developed the polymerase chain reaction, providing a quick way to isolate and amplify a specific section of DNA from a mixture. The efforts of the Human Genome Project, Department of Energy, NIH, and parallel private efforts by Celera Genomics led to the sequencing of the human genome in 2003.
Features of inheritance
Discrete inheritance and Mendel's laws
At its most fundamental level, inheritance in organisms occurs by passing discrete heritable units, called genes, from parents to offspring. This property was first observed by Gregor Mendel, who studied the segregation of heritable traits in pea plants, showing for example that flowers on a single plant were either purple or white—but never an intermediate between the two colors. The discrete versions of the same gene controlling the inherited appearance (phenotypes) are called alleles.
In the case of the pea, which is a diploid species, each individual plant has two copies of each gene, one copy inherited from each parent. Many species, including humans, have this pattern of inheritance. Diploid organisms with two copies of the same allele of a given gene are called homozygous at that gene locus, while organisms with two different alleles of a given gene are called heterozygous. The set of alleles for a given organism is called its genotype, while the observable traits of the organism are called its phenotype. When organisms are heterozygous at a gene, often one allele is called dominant as its qualities dominate the phenotype of the organism, while the other allele is called recessive as its qualities recede and are not observed. Some alleles do not have complete dominance and instead have incomplete dominance by expressing an intermediate phenotype, or codominance by expressing both alleles at once.
When a pair of organisms reproduce sexually, their offspring randomly inherit one of the two alleles from each parent. These observations of discrete inheritance and the segregation of alleles are collectively known as Mendel's first law or the Law of Segregation. However, the probability of inheriting a given phenotype depends on whether the parents are homozygous or heterozygous and on which alleles are dominant or recessive. For example, Mendel found that crossing two heterozygous organisms gives a 3:1 ratio of dominant to recessive phenotypes in the offspring. Geneticists study and calculate such probabilities using theoretical probabilities, empirical probabilities, the product rule, the sum rule, and more.
Notation and diagrams
Geneticists use diagrams and symbols to describe inheritance. A gene is represented by one or a few letters. Often a "+" symbol is used to mark the usual, non-mutant allele for a gene.
In fertilization and breeding experiments (and especially when discussing Mendel's laws) the parents are referred to as the "P" generation and the offspring as the "F1" (first filial) generation. When the F1 offspring mate with each other, the offspring are called the "F2" (second filial) generation. One of the common diagrams used to predict the result of cross-breeding is the Punnett square.
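As a rough illustration of how such a cross can be worked out, the following Python sketch (not from the source; the allele symbols and the single-gene cross are illustrative assumptions) enumerates a Punnett square for two heterozygous parents and recovers the classic 3:1 phenotype ratio under complete dominance:

```python
from collections import Counter
from itertools import product

def punnett_square(parent1: str, parent2: str) -> Counter:
    """Enumerate offspring genotypes of a single-gene cross.

    Each parent is a two-character genotype string such as 'Aa'.
    Alleles are sorted so that 'Aa' and 'aA' count as the same genotype.
    """
    return Counter("".join(sorted(a + b)) for a, b in product(parent1, parent2))

# F1 x F1 cross of two heterozygotes, as in Mendel's pea experiments
offspring = punnett_square("Aa", "Aa")
print(offspring)  # Counter({'Aa': 2, 'AA': 1, 'aa': 1})

# With complete dominance of 'A', phenotypes follow the classic 3:1 ratio
dominant = sum(count for genotype, count in offspring.items() if "A" in genotype)
recessive = offspring["aa"]
print(f"dominant : recessive = {dominant} : {recessive}")  # 3 : 1
```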
When studying human genetic diseases, geneticists often use pedigree charts to represent the inheritance of traits. These charts map the inheritance of a trait in a family tree.
Multiple gene interactions
Organisms have thousands of genes, and in sexually reproducing organisms these genes generally assort independently of each other. This means that the inheritance of an allele for yellow or green pea color is unrelated to the inheritance of alleles for white or purple flowers. This phenomenon, known as Mendel's second law or the "law of independent assortment", means that the alleles of different genes get shuffled between parents to form offspring with many different combinations. Different genes often interact to influence the same trait. In the Blue-eyed Mary (Omphalodes verna), for example, there exists a gene with alleles that determine the color of flowers: blue or magenta. Another gene, however, controls whether the flowers have color at all or are white. When a plant has two copies of this white allele, its flowers are white—regardless of whether the first gene has blue or magenta alleles. This interaction between genes is called epistasis, with the second gene epistatic to the first.
Many traits are not discrete features (e.g. purple or white flowers) but are instead continuous features (e.g. human height and skin color). These complex traits are products of many genes. The influence of these genes is mediated, to varying degrees, by the environment an organism has experienced. The degree to which an organism's genes contribute to a complex trait is called heritability. Measurement of the heritability of a trait is relative—in a more variable environment, the environment has a bigger influence on the total variation of the trait. For example, human height is a trait with complex causes. It has a heritability of 89% in the United States. In Nigeria, however, where people experience a more variable access to good nutrition and health care, height has a heritability of only 62%.
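This relationship is often summarised as H² = V_G / (V_G + V_E), the share of total phenotypic variance attributable to genetic variance. The Python sketch below is a minimal illustration (not from the source; the variance figures are made-up values chosen only to reproduce the 89% and 62% figures quoted above):

```python
def broad_sense_heritability(genetic_variance: float, environmental_variance: float) -> float:
    """H^2 = V_G / (V_G + V_E): the fraction of phenotypic variance in a
    particular population and environment that is attributable to genetic
    differences. It describes the population, not the trait itself."""
    return genetic_variance / (genetic_variance + environmental_variance)

# Hypothetical variance components (in cm^2) for a height-like trait in two
# environments; the more variable environment lowers heritability.
print(broad_sense_heritability(genetic_variance=40.0, environmental_variance=5.0))   # ~0.89
print(broad_sense_heritability(genetic_variance=40.0, environmental_variance=25.0))  # ~0.62
```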
Molecular basis for inheritance
DNA and chromosomes
DNA normally exists as a double-stranded molecule, coiled into the shape of a double helix. Each nucleotide in DNA preferentially pairs with its partner nucleotide on the opposite strand: A pairs with T, and C pairs with G. Thus, in its two-stranded form, each strand effectively contains all necessary information, redundant with its partner strand. This structure of DNA is the physical basis for inheritance: DNA replication duplicates the genetic information by splitting the strands and using each strand as a template for synthesis of a new partner strand.
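This complementarity is straightforward to express in code. The sketch below (a minimal illustration, not from the source; the example sequence is invented) reconstructs the partner strand from a given strand using the A–T and C–G pairing rules:

```python
# Watson-Crick pairing rules: A<->T and C<->G
BASE_PAIR = str.maketrans("ATCG", "TAGC")

def complement(strand: str) -> str:
    """Return the base that pairs opposite each position of the strand."""
    return strand.translate(BASE_PAIR)

def reverse_complement(strand: str) -> str:
    """Return the partner strand read 5'->3'; the two strands of the
    double helix run antiparallel, so the complement is also reversed."""
    return complement(strand)[::-1]

strand = "ATGGCA"
print(complement(strand))          # TACCGT
print(reverse_complement(strand))  # TGCCAT
```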
Genes are arranged linearly along long chains of DNA base-pair sequences. In bacteria, each cell usually contains a single circular genophore, while eukaryotic organisms (such as plants and animals) have their DNA arranged in multiple linear chromosomes. These DNA strands are often extremely long; the largest human chromosome, for example, is about 247 million base pairs in length. The DNA of a chromosome is associated with structural proteins that organize, compact, and control access to the DNA, forming a material called chromatin; in eukaryotes, chromatin is usually composed of nucleosomes, segments of DNA wound around cores of histone proteins.Alberts et al. (2002), II.4. DNA and chromosomes: Chromosomal DNA and Its Packaging in the Chromatin Fiber The full set of hereditary material in an organism (usually the combined DNA sequences of all chromosomes) is called the genome.
DNA is most often found in the nucleus of cells, but Ruth Sager helped in the discovery of nonchromosomal genes found outside of the nucleus. In plants, these are often found in the chloroplasts and in other organisms, in the mitochondria. These nonchromosomal genes can still be passed on by either partner in sexual reproduction and they control a variety of hereditary characteristics that replicate and remain active throughout generations.
While haploid organisms have only one copy of each chromosome, most animals and many plants are diploid, containing two of each chromosome and thus two copies of every gene. The two alleles for a gene are located on identical loci of the two homologous chromosomes, each allele inherited from a different parent.
Many species have so-called sex chromosomes that determine the sex of each organism. In humans and many other animals, the Y chromosome contains the gene that triggers the development of the specifically male characteristics. In evolution, this chromosome has lost most of its content and also most of its genes, while the X chromosome is similar to the other chromosomes and contains many genes. Mary Frances Lyon discovered that one of the two X chromosomes is inactivated in female cells, so that X-linked genes are not expressed at twice the dose. Lyon's discovery led to the discovery of X-linked diseases.
Reproduction
When cells divide, their full genome is copied and each daughter cell inherits one copy. This process, called mitosis, is the simplest form of reproduction and is the basis for asexual reproduction. Asexual reproduction can also occur in multicellular organisms, producing offspring that inherit their genome from a single parent. Offspring that are genetically identical to their parents are called clones.
Eukaryotic organisms often use sexual reproduction to generate offspring that contain a mixture of genetic material inherited from two different parents. The process of sexual reproduction alternates between forms that contain single copies of the genome (haploid) and double copies (diploid). Haploid cells fuse and combine genetic material to create a diploid cell with paired chromosomes. Diploid organisms form haploids by dividing, without replicating their DNA, to create daughter cells that randomly inherit one of each pair of chromosomes. Most animals and many plants are diploid for most of their lifespan, with the haploid form reduced to single cell gametes such as sperm or eggs.
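A highly simplified sketch of this alternation is shown below (not from the source; crossover and real chromosome content are ignored, and the chromosome labels are invented): each gamete randomly inherits one chromosome from each homologous pair, and fertilisation restores the diploid pairing.

```python
import random

def form_gamete(diploid_genome):
    """Meiosis, greatly simplified: the gamete receives one randomly
    chosen chromosome from each homologous pair (crossover is ignored)."""
    return [random.choice(pair) for pair in diploid_genome]

def fertilise(gamete1, gamete2):
    """Fusion of two haploid gametes restores a diploid genome."""
    return list(zip(gamete1, gamete2))

# Toy diploid genomes: two chromosome pairs per parent, labelled by origin.
parent_a = [("1m", "1p"), ("2m", "2p")]
parent_b = [("1M", "1P"), ("2M", "2P")]

offspring = fertilise(form_gamete(parent_a), form_gamete(parent_b))
print(offspring)  # e.g. [('1p', '1M'), ('2m', '2P')] -- one copy from each parent
```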
Although they do not use the haploid/diploid method of sexual reproduction, bacteria have many methods of acquiring new genetic information. Some bacteria can undergo conjugation, transferring a small circular piece of DNA to another bacterium. Bacteria can also take up raw DNA fragments found in the environment and integrate them into their genomes, a phenomenon known as transformation. These processes result in horizontal gene transfer, transmitting fragments of genetic information between organisms that would be otherwise unrelated. Natural bacterial transformation occurs in many bacterial species, and can be regarded as a sexual process for transferring DNA from one cell to another cell (usually of the same species). Transformation requires the action of numerous bacterial gene products, and its primary adaptive function appears to be repair of DNA damages in the recipient cell.
Recombination and genetic linkage
The diploid nature of chromosomes allows for genes on different chromosomes to assort independently or be separated from their homologous pair during sexual reproduction wherein haploid gametes are formed. In this way new combinations of genes can occur in the offspring of a mating pair. Genes on the same chromosome would theoretically never recombine. However, they do, via the cellular process of chromosomal crossover. During crossover, chromosomes exchange stretches of DNA, effectively shuffling the gene alleles between the chromosomes. This process of chromosomal crossover generally occurs during meiosis, a series of cell divisions that creates haploid cells. Meiotic recombination, particularly in microbial eukaryotes, appears to serve the adaptive function of repair of DNA damages.
The first cytological demonstration of crossing over was performed by Harriet Creighton and Barbara McClintock in 1931. Their research and experiments on corn provided cytological evidence for the genetic theory that linked genes on paired chromosomes do in fact exchange places from one homolog to the other.
The probability of chromosomal crossover occurring between two given points on the chromosome is related to the distance between the points. For an arbitrarily long distance, the probability of crossover is high enough that the inheritance of the genes is effectively uncorrelated. For genes that are closer together, however, the lower probability of crossover means that the genes demonstrate genetic linkage; alleles for the two genes tend to be inherited together. The amounts of linkage between a series of genes can be combined to form a linear linkage map that roughly describes the arrangement of the genes along the chromosome.
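To make the linkage idea concrete, the sketch below (Python, with invented offspring counts) estimates the recombination frequency between two linked genes from a test cross and converts it to map units (centimorgans), the quantity used to build a linear linkage map.

```python
# Minimal sketch: estimating genetic linkage from test-cross offspring counts.
# The counts below are hypothetical; real experiments use observed data.

def recombination_frequency(parental: int, recombinant: int) -> float:
    """Fraction of offspring whose allele combination differs from both parents."""
    return recombinant / (parental + recombinant)

# Hypothetical test cross: 450 + 430 parental-type offspring, 70 + 50 recombinants.
parental_offspring = 450 + 430
recombinant_offspring = 70 + 50

rf = recombination_frequency(parental_offspring, recombinant_offspring)
map_distance_cM = rf * 100  # 1 map unit (centimorgan) corresponds to 1% recombination

print(f"Recombination frequency: {rf:.3f}")
print(f"Approximate map distance: {map_distance_cM:.1f} cM")
# Repeating this for many gene pairs and ordering the distances yields a rough linkage map.
```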
Gene expression
Genetic code
The genetic code: using a triplet code, DNA, through a messenger RNA intermediary, specifies a protein.
Genes express their functional effect through the production of proteins, which are molecules responsible for most functions in the cell. Proteins are made up of one or more polypeptide chains, each composed of a sequence of amino acids. The DNA sequence of a gene is used to produce a specific amino acid sequence. This process begins with the production of an RNA molecule with a sequence matching the gene's DNA sequence, a process called transcription.
This messenger RNA molecule then serves to produce a corresponding amino acid sequence through a process called translation. Each group of three nucleotides in the sequence, called a codon, corresponds either to one of the twenty possible amino acids in a protein or an instruction to end the amino acid sequence; this correspondence is called the genetic code. The flow of information is unidirectional: information is transferred from nucleotide sequences into the amino acid sequence of proteins, but it never transfers from protein back into the sequence of DNA—a phenomenon Francis Crick called the central dogma of molecular biology.
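A small Python sketch of this idea: translation reads the messenger RNA three nucleotides at a time and looks each codon up in the genetic code until a stop codon is reached. The codon table below is deliberately truncated to a few entries for brevity; the real code covers all 64 codons.

```python
# Minimal sketch of translation: mRNA codons -> amino acids.
# Only a handful of the 64 codons are included here for illustration.
CODON_TABLE = {
    "AUG": "Met",  # start codon
    "UUU": "Phe", "GGC": "Gly", "GAA": "Glu", "UGG": "Trp",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna: str) -> list:
    """Read the mRNA in triplets and translate until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "???")  # unknown codons marked
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGUUUGGCGAAUGGUAA"))  # ['Met', 'Phe', 'Gly', 'Glu', 'Trp']
```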
The specific sequence of amino acids results in a unique three-dimensional structure for that protein, and the three-dimensional structures of proteins are related to their functions.Alberts et al. (2002), I.3. Proteins: The Shape and Structure of Proteins Alberts et al. (2002), I.3. Proteins: Protein Function Some are simple structural molecules, like the fibers formed by the protein collagen. Proteins can bind to other proteins and simple molecules, sometimes acting as enzymes by facilitating chemical reactions within the bound molecules (without changing the structure of the protein itself). Protein structure is dynamic; the protein hemoglobin bends into slightly different forms as it facilitates the capture, transport, and release of oxygen molecules within mammalian blood.
A single nucleotide difference within DNA can cause a change in the amino acid sequence of a protein. Because protein structures are the result of their amino acid sequences, some changes can dramatically change the properties of a protein by destabilizing the structure or changing the surface of the protein in a way that changes its interaction with other proteins and molecules. For example, sickle-cell anemia is a human genetic disease that results from a single base difference within the coding region for the β-globin section of hemoglobin, causing a single amino acid change that changes hemoglobin's physical properties.
Sickle-cell versions of hemoglobin stick to themselves, stacking to form fibers that distort the shape of red blood cells carrying the protein. These sickle-shaped cells no longer flow smoothly through blood vessels, having a tendency to clog or degrade, causing the medical problems associated with this disease.
Some DNA sequences are transcribed into RNA but are not translated into protein products—such RNA molecules are called non-coding RNA. In some cases, these products fold into structures which are involved in critical cell functions (e.g. ribosomal RNA and transfer RNA). RNA can also have regulatory effects through hybridization interactions with other RNA molecules (such as microRNA).
Nature and nurture
Although genes contain all the information an organism uses to function, the environment plays an important role in determining the ultimate phenotypes an organism displays. The phrase "nature and nurture" refers to this complementary relationship. The phenotype of an organism depends on the interaction of genes and the environment. An interesting example is the coat coloration of the Siamese cat. In this case, the body temperature of the cat plays the role of the environment. The cat's genes code for dark hair, thus the hair-producing cells in the cat make cellular proteins resulting in dark hair. But these dark hair-producing proteins are sensitive to temperature (i.e. have a mutation causing temperature-sensitivity) and denature in higher-temperature environments, failing to produce dark-hair pigment in areas where the cat has a higher body temperature. In a low-temperature environment, however, the protein's structure is stable and produces dark-hair pigment normally. The protein remains functional in areas of skin that are colder, such as its legs, ears, tail, and face, so the cat has dark hair at its extremities.
Environment plays a major role in effects of the human genetic disease phenylketonuria. The mutation that causes phenylketonuria disrupts the ability of the body to break down the amino acid phenylalanine, causing a toxic build-up of an intermediate molecule that, in turn, causes severe symptoms of progressive intellectual disability and seizures. However, if someone with the phenylketonuria mutation follows a strict diet that avoids this amino acid, they remain normal and healthy.
A common method for determining how genes and environment ("nature and nurture") contribute to a phenotype involves studying identical and fraternal twins, or other siblings of multiple births. For example, identical siblings are genetically the same since they come from the same zygote, while fraternal twins are as genetically different from one another as normal siblings. By comparing how often a certain disorder occurs in a pair of identical twins to how often it occurs in a pair of fraternal twins, scientists can determine whether that disorder is caused by genetic or postnatal environmental factors. One famous example involved the study of the Genain quadruplets, who were identical quadruplets all diagnosed with schizophrenia.
Gene regulation
The genome of a given organism contains thousands of genes, but not all these genes need to be active at any given moment. A gene is expressed when it is being transcribed into mRNA and there exist many cellular methods of controlling the expression of genes such that proteins are produced only when needed by the cell. Transcription factors are regulatory proteins that bind to DNA, either promoting or inhibiting the transcription of a gene. Within the genome of Escherichia coli bacteria, for example, there exists a series of genes necessary for the synthesis of the amino acid tryptophan. However, when tryptophan is already available to the cell, these genes for tryptophan synthesis are no longer needed. The presence of tryptophan directly affects the activity of the genes—tryptophan molecules bind to the tryptophan repressor (a transcription factor), changing the repressor's structure such that the repressor binds to the genes. The tryptophan repressor blocks the transcription and expression of the genes, thereby creating negative feedback regulation of the tryptophan synthesis process.Alberts et al. (2002), II.3. Control of Gene Expression – The Tryptophan Repressor is a Simple Switch That Turns Genes On and Off in Bacteria
Differences in gene expression are especially clear within multicellular organisms, where cells all contain the same genome but have very different structures and behaviors due to the expression of different sets of genes. All the cells in a multicellular organism derive from a single cell, differentiating into variant cell types in response to external and intercellular signals and gradually establishing different patterns of gene expression to create different behaviors.
Within eukaryotes, there exist structural features of chromatin that influence the transcription of genes, often in the form of modifications to DNA and chromatin that are stably inherited by daughter cells. These features are called "epigenetic" because they exist "on top" of the DNA sequence and retain inheritance from one cell generation to the next. Because of epigenetic features, different cell types grown within the same medium can retain very different properties. Although epigenetic features are generally dynamic over the course of development, some, like the phenomenon of paramutation, have multigenerational inheritance and exist as rare exceptions to the general rule of DNA as the basis for inheritance.
Genetic change
Mutations
During the process of DNA replication, errors occasionally occur in the polymerization of the second strand. These errors, called mutations, can affect the phenotype of an organism, especially if they occur within the protein coding sequence of a gene. Error rates are usually very low—1 error in every 10–100 million bases—due to the "proofreading" ability of DNA polymerases. Processes that increase the rate of changes in DNA are called mutagenic: mutagenic chemicals promote errors in DNA replication, often by interfering with the structure of base-pairing, while UV radiation induces mutations by causing damage to the DNA structure. Chemical damage to DNA occurs naturally as well and cells use DNA repair mechanisms to repair mismatches and breaks. The repair does not, however, always restore the original sequence. A particularly important source of DNA damages appears to be reactive oxygen species produced by cellular aerobic respiration, and these can lead to mutations.
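As a rough back-of-the-envelope illustration of the quoted error rates (all numbers below are assumptions chosen for the example, not measurements):

```python
# Expected number of new replication errors per genome copy, for an assumed
# error rate and genome size. Values are illustrative only.
error_rate_per_base = 1e-8          # roughly "1 error in every 100 million bases"
genome_size_bases = 3.2e9           # approximate size of a human genome

expected_errors = error_rate_per_base * genome_size_bases
print(f"Expected new errors per genome replication: about {expected_errors:.0f}")
# With these assumptions, each replication introduces on the order of tens of changes
# before repair; most fall outside protein-coding sequence.
```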
In organisms that use chromosomal crossover to exchange DNA and recombine genes, errors in alignment during meiosis can also cause mutations. Errors in crossover are especially likely when similar sequences cause partner chromosomes to adopt a mistaken alignment; this makes some regions in genomes more prone to mutating in this way. These errors create large structural changes in DNA sequence—duplications, inversions, deletions of entire regions—or the accidental exchange of whole parts of sequences between different chromosomes, chromosomal translocation.
Diagram of mutations in an RNA sequence: (1) a normal RNA sequence of four codons; (2) a missense, single-point, non-silent mutation; (3) a frameshift mutation caused by deletion of the second base pair in the second codon; (4) a frameshift mutation caused by an insertion in the third base pair of the second codon; (5) a repeat expansion, in which an entire codon is duplicated.
Natural selection and evolution
Mutations alter an organism's genotype and occasionally this causes different phenotypes to appear. Most mutations have little effect on an organism's phenotype, health, or reproductive fitness. Mutations that do have an effect are usually detrimental, but occasionally some can be beneficial. Studies in the fly Drosophila melanogaster suggest that if a mutation changes a protein produced by a gene, about 70 percent of these mutations are harmful with the remainder being either neutral or weakly beneficial.
Population genetics studies the distribution of genetic differences within populations and how these distributions change over time. Changes in the frequency of an allele in a population are mainly influenced by natural selection, where a given allele provides a selective or reproductive advantage to the organism, as well as other factors such as mutation, genetic drift, genetic hitchhiking, artificial selection and migration.
Over many generations, the genomes of organisms can change significantly, resulting in evolution. In the process called adaptation, selection for beneficial mutations can cause a species to evolve into forms better able to survive in their environment. New species are formed through the process of speciation, often caused by geographical separations that prevent populations from exchanging genes with each other.
By comparing the homology between different species' genomes, it is possible to calculate the evolutionary distance between them and when they may have diverged. Genetic comparisons are generally considered a more accurate method of characterizing the relatedness between species than the comparison of phenotypic characteristics. The evolutionary distances between species can be used to form evolutionary trees; these trees represent the common descent and divergence of species over time, although they do not show the transfer of genetic material between unrelated species (known as horizontal gene transfer and most common in bacteria).
Research and technology
Model organisms
Although geneticists originally studied inheritance in a wide variety of organisms, the range of species studied has narrowed. One reason is that when significant research already exists for a given organism, new researchers are more likely to choose it for further study, and so eventually a few model organisms became the basis for most genetics research. Common research topics in model organism genetics include the study of gene regulation and the involvement of genes in development and cancer. Organisms were chosen, in part, for convenience—short generation times and easy genetic manipulation made some organisms popular genetics research tools. Widely used model organisms include the gut bacterium Escherichia coli, the plant Arabidopsis thaliana, baker's yeast (Saccharomyces cerevisiae), the nematode Caenorhabditis elegans, the common fruit fly (Drosophila melanogaster), the zebrafish (Danio rerio), and the common house mouse (Mus musculus).
Medicine
Schematic relationship between biochemistry, genetics, and molecular biology.
Medical genetics seeks to understand how genetic variation relates to human health and disease. When searching for an unknown gene that may be involved in a disease, researchers commonly use genetic linkage and genetic pedigree charts to find the location on the genome associated with the disease. At the population level, researchers take advantage of genome wide association studies (GWAS) to look for locations in the genome that are associated with diseases, a method especially useful for multigenic traits not clearly defined by a single gene. Once a candidate gene is found, further research is often done on the corresponding (or homologous) genes of model organisms. In addition to studying genetic diseases, the increased availability of genotyping methods has led to the field of pharmacogenetics: the study of how genotype can affect drug responses.
Individuals differ in their inherited tendency to develop cancer, and cancer is a genetic disease. The process of cancer development in the body is a combination of events. Mutations occasionally occur within cells in the body as they divide. Although these mutations will not be inherited by any offspring, they can affect the behavior of cells, sometimes causing them to grow and divide more frequently. There are biological mechanisms that attempt to stop this process; signals are given to inappropriately dividing cells that should trigger cell death, but sometimes additional mutations occur that cause cells to ignore these messages. An internal process of natural selection occurs within the body and eventually mutations accumulate within cells to promote their own growth, creating a cancerous tumor that grows and invades various tissues of the body. Normally, a cell divides only in response to signals called growth factors and stops growing once in contact with surrounding cells and in response to growth-inhibitory signals. It usually then divides a limited number of times and dies, staying within the epithelium where it is unable to migrate to other organs. To become a cancer cell, a cell has to accumulate mutations in a number of genes (three to seven). A cancer cell can divide without growth factor and ignores inhibitory signals. Also, it is immortal and can grow indefinitely, even after it makes contact with neighboring cells. It may escape from the epithelium and ultimately from the primary tumor. Then, the escaped cell can cross the endothelium of a blood vessel and get transported by the bloodstream to colonize a new organ, forming deadly metastasis. Although there are some genetic predispositions in a small fraction of cancers, the major fraction is due to a set of new genetic mutations that originally appear and accumulate in one or a small number of cells that will divide to form the tumor and are not transmitted to the progeny (somatic mutations). The most frequent mutations are a loss of function of p53 protein, a tumor suppressor, or in the p53 pathway, and gain of function mutations in the Ras proteins, or in other oncogenes. Chapter 18: Cancer Genetics
Research methods
DNA can be manipulated in the laboratory. Restriction enzymes are commonly used enzymes that cut DNA at specific sequences, producing predictable fragments of DNA.Lodish et al. (2000), Chapter 7: 7.1. DNA Cloning with Plasmid Vectors DNA fragments can be visualized through use of gel electrophoresis, which separates fragments according to their length.
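A small sketch of the restriction-digest idea, using the EcoRI recognition sequence GAATTC as an example: finding each cut site in a DNA string predicts the fragment lengths that would later be separated by gel electrophoresis. The DNA sequence below is invented for illustration, and cut sites are treated as points for simplicity.

```python
# Minimal sketch: predicting restriction fragments from recognition-site positions.
# EcoRI recognizes GAATTC; the example DNA below is made up for illustration.
RECOGNITION_SITE = "GAATTC"
dna = "TTAGGCGAATTCAGGCTAGCTAGGAATTCCGATCGGAATTCTTAA"

def cut_positions(sequence: str, site: str) -> list:
    """Return the start index of every occurrence of the recognition site."""
    positions, start = [], 0
    while (idx := sequence.find(site, start)) != -1:
        positions.append(idx)
        start = idx + 1
    return positions

cuts = cut_positions(dna, RECOGNITION_SITE)
# Fragment lengths between successive cut positions.
boundaries = [0] + cuts + [len(dna)]
fragments = [boundaries[i + 1] - boundaries[i] for i in range(len(boundaries) - 1)]
print("Cut positions:", cuts)
print("Predicted fragment lengths:", fragments)
```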
The use of ligation enzymes allows DNA fragments to be connected. By binding ("ligating") fragments of DNA together from different sources, researchers can create recombinant DNA, the DNA often associated with genetically modified organisms. Recombinant DNA is commonly used in the context of plasmids: short circular DNA molecules with a few genes on them. In the process known as molecular cloning, researchers can amplify the DNA fragments by inserting plasmids into bacteria and then culturing them on plates of agar (to isolate clones of bacteria cells). "Cloning" can also refer to the various means of creating cloned ("clonal") organisms.
DNA can also be amplified using a procedure called the polymerase chain reaction (PCR).Lodish et al. (2000), Chapter 7: 7.7. Polymerase Chain Reaction: An Alternative to Cloning By using specific short sequences of DNA, PCR can isolate and exponentially amplify a targeted region of DNA. Because it can amplify from extremely small amounts of DNA, PCR is also often used to detect the presence of specific DNA sequences.
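Because each PCR cycle roughly doubles the targeted region, the number of copies grows exponentially, which is why trace amounts of DNA can be detected. A toy calculation follows, assuming an idealized 100% efficiency that real reactions do not reach:

```python
# Idealized PCR amplification: copies double every cycle.
initial_copies = 10      # assumed trace amount of the target sequence
cycles = 30              # a typical-order number of thermal cycles

for cycle in range(0, cycles + 1, 10):
    copies = initial_copies * 2 ** cycle
    print(f"after {cycle:2d} cycles: ~{copies:.3e} copies")
# In practice, amplification efficiency is below 100% and eventually plateaus
# as reagents are exhausted, so these numbers are an upper bound.
```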
DNA sequencing and genomics
DNA sequencing, one of the most fundamental technologies developed to study genetics, allows researchers to determine the sequence of nucleotides in DNA fragments. The technique of chain-termination sequencing, developed in 1977 by a team led by Frederick Sanger, is still routinely used to sequence DNA fragments. Using this technology, researchers have been able to study the molecular sequences associated with many human diseases.
As sequencing has become less expensive, researchers have sequenced the genomes of many organisms using a process called genome assembly, which uses computational tools to stitch together sequences from many different fragments.Brown (2002), Section 2, Chapter 6: 6.2. Assembly of a Contiguous DNA Sequence These technologies were used to sequence the human genome in the Human Genome Project completed in 2003. New high-throughput sequencing technologies are dramatically lowering the cost of DNA sequencing, with many researchers hoping to bring the cost of resequencing a human genome down to a thousand dollars.
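The central computational step in assembly is detecting overlaps between reads and merging them into longer contiguous sequences. The sketch below shows only the overlap-and-merge idea for two short, made-up reads; real assemblers handle millions of error-prone reads with far more sophisticated data structures.

```python
# Minimal sketch of the overlap step used in genome assembly.
def merge_reads(left: str, right: str, min_overlap: int = 3) -> str:
    """Merge two reads using the longest suffix of `left` that prefixes `right`."""
    for length in range(min(len(left), len(right)), min_overlap - 1, -1):
        if left.endswith(right[:length]):
            return left + right[length:]
    return left + right  # no usable overlap; a real assembler would not join these reads

read_a = "ATTAGACCTG"
read_b = "CCTGCCGGAA"
print(merge_reads(read_a, read_b))  # ATTAGACCTGCCGGAA (overlap of CCTG)
```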
Next-generation sequencing (or high-throughput sequencing) came about due to the ever-increasing demand for low-cost sequencing. These sequencing technologies allow the production of potentially millions of sequences concurrently. The large amount of sequence data available has created the subfield of genomics, research that uses computational tools to search for and analyze patterns in the full genomes of organisms. Genomics can also be considered a subfield of bioinformatics, which uses computational approaches to analyze large sets of biological data.
Society and culture
On 19 March 2015, a group of leading biologists urged a worldwide ban on clinical use of methods, particularly the use of CRISPR and zinc finger, to edit the human genome in a way that can be inherited. In April 2015, Chinese researchers reported results of basic research to edit the DNA of non-viable human embryos using CRISPR.
See also
Bacterial genome size
Cryoconservation of animal genetic resources
Eugenics
Embryology
Genetic disorder
Genetic diversity
Genetic engineering
Genetic enhancement
Glossary of genetics (M−Z)
Index of genetics articles
Medical genetics
Molecular tools for gene study
Neuroepigenetics
Outline of genetics
Timeline of the history of genetics
Plant genetic resources
References
Further reading
External links
Graph theory
In mathematics and computer science, graph theory is the study of graphs, which are mathematical structures used to model pairwise relations between objects. A graph in this context is made up of vertices (also called nodes or points) which are connected by edges (also called arcs, links or lines). A distinction is made between undirected graphs, where edges link two vertices symmetrically, and directed graphs, where edges link two vertices asymmetrically. Graphs are one of the principal objects of study in discrete mathematics.
Definitions
Definitions in graph theory vary. The following are some of the more basic ways of defining graphs and related mathematical structures.
Graph
An undirected graph with three vertices and three edges.
In one restricted but very common sense of the term (see, for instance, Iyanaga and Kawada, 69 J, p. 234, or Biggs, p. 4), a graph is an ordered pair G = (V, E) comprising:
V, a set of vertices (also called nodes or points);
E ⊆ {{x, y} | x, y ∈ V and x ≠ y}, a set of edges (also called links or lines), which are unordered pairs of vertices (that is, an edge is associated with two distinct vertices).
To avoid ambiguity, this type of object may be called an undirected simple graph.
In the edge {x, y}, the vertices x and y are called the endpoints of the edge. The edge is said to join x and y and to be incident on x and on y. A vertex may exist in a graph and not belong to an edge. Under this definition, multiple edges, in which two or more edges connect the same vertices, are not allowed.
Example of an undirected multigraph with 3 vertices, 3 edges and 4 loops.
In one more general sense of the term allowing multiple edges (see, for instance, Graham et al., p. 5), a graph is an ordered triple G = (V, E, φ) comprising:
V, a set of vertices (also called nodes or points);
E, a set of edges (also called links or lines);
φ : E → {{x, y} | x, y ∈ V and x ≠ y}, an incidence function mapping every edge to an unordered pair of vertices (that is, an edge is associated with two distinct vertices).
To avoid ambiguity, this type of object may be called an undirected multigraph.
A loop is an edge that joins a vertex to itself. Graphs as defined in the two definitions above cannot have loops, because a loop joining a vertex x to itself is the edge {x, x} = {x} (for an undirected simple graph) or is incident on {x, x} = {x} (for an undirected multigraph), which is not in {{x, y} | x, y ∈ V and x ≠ y}. To allow loops, the definitions must be expanded. For undirected simple graphs, the definition of E should be modified to E ⊆ {{x, y} | x, y ∈ V}. For undirected multigraphs, the definition of φ should be modified to φ : E → {{x, y} | x, y ∈ V}. To avoid ambiguity, these types of objects may be called undirected simple graph permitting loops and undirected multigraph permitting loops (sometimes also undirected pseudograph), respectively.
V and E are usually taken to be finite, and many of the well-known results are not true (or are rather different) for infinite graphs because many of the arguments fail in the infinite case. Moreover, V is often assumed to be non-empty, but E is allowed to be the empty set. The order of a graph is |V|, its number of vertices. The size of a graph is |E|, its number of edges. The degree or valency of a vertex is the number of edges that are incident to it, where a loop is counted twice. The degree of a graph is the maximum of the degrees of its vertices.
In an undirected simple graph of order n, the maximum degree of each vertex is n − 1 and the maximum size of the graph is n(n − 1)/2.
The edges of an undirected simple graph permitting loops G induce a symmetric homogeneous relation ~ on the vertices of G that is called the adjacency relation of G. Specifically, for each edge {x, y}, its endpoints x and y are said to be adjacent to one another, which is denoted x ~ y.
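The following Python sketch encodes an undirected simple graph as a set of vertices and a set of unordered edges and computes the order, size, and vertex degrees discussed above. It is an illustrative data structure, not a standard library API.

```python
# Minimal sketch of an undirected simple graph G = (V, E) with unordered edges.
V = {1, 2, 3, 4}
E = {frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3}), frozenset({3, 4})}

order = len(V)   # |V|, the number of vertices
size = len(E)    # |E|, the number of edges

def degree(v):
    """Number of edges incident to vertex v (no loops in a simple graph)."""
    return sum(1 for e in E if v in e)

degrees = {v: degree(v) for v in V}
max_degree = max(degrees.values())

print(order, size, degrees, max_degree)
# For a simple graph of order n, each degree is at most n - 1 and
# the size is at most n * (n - 1) // 2, as noted above.
```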
Directed graph
A directed graph or digraph is a graph in which edges have orientations.
In one restricted but very common sense of the term, a directed graph is an ordered pair G = (V, E) comprising:
V, a set of vertices (also called nodes or points);
E ⊆ {(x, y) | (x, y) ∈ V² and x ≠ y}, a set of edges (also called directed edges, directed links, directed lines, arrows or arcs), which are ordered pairs of vertices (that is, an edge is associated with two distinct vertices).
To avoid ambiguity, this type of object may be called a directed simple graph. In set theory and graph theory, Vⁿ denotes the set of n-tuples of elements of V, that is, ordered sequences of elements that are not necessarily distinct.
In the edge (x, y) directed from x to y, the vertices x and y are called the endpoints of the edge, x the tail of the edge and y the head of the edge. The edge is said to join x and y and to be incident on x and on y. A vertex may exist in a graph and not belong to an edge. The edge (y, x) is called the inverted edge of (x, y). Multiple edges, not allowed under the definition above, are two or more edges with both the same tail and the same head.
In one more general sense of the term allowing multiple edges, a directed graph is an ordered triple G = (V, E, φ) comprising:
V, a set of vertices (also called nodes or points);
E, a set of edges (also called directed edges, directed links, directed lines, arrows or arcs);
φ : E → {(x, y) | (x, y) ∈ V² and x ≠ y}, an incidence function mapping every edge to an ordered pair of vertices (that is, an edge is associated with two distinct vertices).
To avoid ambiguity, this type of object may be called a directed multigraph.
A loop is an edge that joins a vertex to itself. Directed graphs as defined in the two definitions above cannot have loops, because a loop joining a vertex x to itself is the edge (x, x) (for a directed simple graph) or is incident on (x, x) (for a directed multigraph), which is not in {(x, y) | (x, y) ∈ V² and x ≠ y}. So to allow loops the definitions must be expanded. For directed simple graphs, the definition of E should be modified to E ⊆ {(x, y) | (x, y) ∈ V²}. For directed multigraphs, the definition of φ should be modified to φ : E → {(x, y) | (x, y) ∈ V²}. To avoid ambiguity, these types of objects may be called precisely a directed simple graph permitting loops and a directed multigraph permitting loops (or a quiver) respectively.
The edges of a directed simple graph permitting loops G form a homogeneous relation ~ on the vertices of G that is called the adjacency relation of G. Specifically, for each edge (x, y), its endpoints x and y are said to be adjacent to one another, which is denoted x ~ y.
Applications
Graphs can be used to model many types of relations and processes in physical, biological, social and information systems. Many practical problems can be represented by graphs. Emphasizing their application to real-world systems, the term network is sometimes defined to mean a graph in which attributes (e.g. names) are associated with the vertices and edges, and the subject that expresses and understands real-world systems as a network is called network science.
Computer science
Within computer science, 'causal' and 'non-causal' linked structures are graphs that are used to represent networks of communication, data organization, computational devices, the flow of computation, etc. For instance, the link structure of a website can be represented by a directed graph, in which the vertices (nodes) represent web pages and directed edges represent links from one page to another. A similar approach can be taken to problems in social media, travel, biology, computer chip design, mapping the progression of neuro-degenerative diseases, and many other fields. The development of algorithms to handle graphs is therefore of major interest in computer science. The transformation of graphs is often formalized and represented by graph rewrite systems. Complementary to graph transformation systems focusing on rule-based in-memory manipulation of graphs are graph databases geared towards transaction-safe, persistent storing and querying of graph-structured data.
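As a small illustration of the link-structure example, the sketch below stores a directed graph as an adjacency mapping and uses breadth-first search to find which pages are reachable from a starting page; the page names are invented.

```python
from collections import deque

# Hypothetical website link structure: page -> pages it links to (directed edges).
links = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["post1", "post2"],
    "post1": ["blog"],
    "post2": [],
    "orphan": ["home"],   # no page links to "orphan"
}

def reachable(graph: dict, start: str) -> set:
    """Breadth-first search over directed edges."""
    seen, queue = {start}, deque([start])
    while queue:
        page = queue.popleft()
        for target in graph.get(page, []):
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return seen

print(reachable(links, "home"))  # home, about, blog, post1, post2 (set order may vary)
```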
Linguistics
Graph-theoretic methods, in various forms, have proven particularly useful in linguistics, since natural language often lends itself well to discrete structure. Traditionally, syntax and compositional semantics follow tree-based structures, whose expressive power lies in the principle of compositionality, modeled in a hierarchical graph. More contemporary approaches such as head-driven phrase structure grammar model the syntax of natural language using typed feature structures, which are directed acyclic graphs.
Within lexical semantics, especially as applied to computers, modeling word meaning is easier when a given word is understood in terms of related words; semantic networks are therefore important in computational linguistics. Still, other methods in phonology (e.g. optimality theory, which uses lattice graphs) and morphology (e.g. finite-state morphology, using finite-state transducers) are common in the analysis of language as a graph. Indeed, the usefulness of this area of mathematics to linguistics has borne organizations such as TextGraphs, as well as various 'Net' projects, such as WordNet, VerbNet, and others.
Physics and chemistry
Graph theory is also used to study molecules in chemistry and physics. In condensed matter physics, the three-dimensional structure of complicated simulated atomic structures can be studied quantitatively by gathering statistics on graph-theoretic properties related to the topology of the atoms. Also, "the Feynman graphs and rules of calculation summarize quantum field theory in a form in close contact with the experimental numbers one wants to understand." In chemistry a graph makes a natural model for a molecule, where vertices represent atoms and edges bonds. This approach is especially used in computer processing of molecular structures, ranging from chemical editors to database searching. In statistical physics, graphs can represent local connections between interacting parts of a system, as well as the dynamics of a physical process on such systems. Similarly, in computational neuroscience graphs can be used to represent functional connections between brain areas that interact to give rise to various cognitive processes, where the vertices represent different areas of the brain and the edges represent the connections between those areas. Graph theory plays an important role in electrical modeling of electrical networks; here, weights are associated with resistance of the wire segments to obtain electrical properties of network structures. Graphs are also used to represent the micro-scale channels of porous media, in which the vertices represent the pores and the edges represent the smaller channels connecting the pores. Chemical graph theory uses the molecular graph as a means to model molecules.
Graphs and networks are excellent models to study and understand phase transitions and critical phenomena.
Removal of nodes or edges leads to a critical transition where the network breaks into small clusters which is studied as a phase transition. This breakdown is studied via percolation theory.
Social sciences
Graph theory is also widely used in sociology as a way, for example, to measure actors' prestige or to explore rumor spreading, notably through the use of social network analysis software. Under the umbrella of social networks are many different types of graphs. Acquaintanceship and friendship graphs describe whether people know each other. Influence graphs model whether certain people can influence the behavior of others. Finally, collaboration graphs model whether two people work together in a particular way, such as acting in a movie together.
Biology
Likewise, graph theory is useful in biology and conservation efforts where a vertex can represent regions where certain species exist (or inhabit) and the edges represent migration paths or movement between the regions. This information is important when looking at breeding patterns or tracking the spread of disease, parasites or how changes to the movement can affect other species.
Graphs are also commonly used in molecular biology and genomics to model and analyse datasets with complex relationships. For example, graph-based methods are often used to 'cluster' cells together into cell-types in single-cell transcriptome analysis. Another use is to model genes or proteins in a pathway and study the relationships between them, such as metabolic pathways and gene regulatory networks. Evolutionary trees, ecological networks, and hierarchical clustering of gene expression patterns are also represented as graph structures.
Graph theory is also used in connectomics; nervous systems can be seen as a graph, where the nodes are neurons and the edges are the connections between them.
Mathematics
In mathematics, graphs are useful in geometry and certain parts of topology such as knot theory. Algebraic graph theory has close links with group theory. Algebraic graph theory has been applied to many areas including dynamic systems and complexity.
Other topics
A graph structure can be extended by assigning a weight to each edge of the graph. Graphs with weights, or weighted graphs, are used to represent structures in which pairwise connections have some numerical values. For example, if a graph represents a road network, the weights could represent the length of each road. There may be several weights associated with each edge, including distance (as in the previous example), travel time, or monetary cost. Such weighted graphs are commonly used to program GPS's, and travel-planning search engines that compare flight times and costs.
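A standard use of weighted graphs is shortest-path search, as in route planning. Below is a compact sketch of Dijkstra's algorithm on a small, made-up road network, with edge weights standing in for distances.

```python
import heapq

# Hypothetical road network: city -> {neighbor: distance}.
roads = {
    "A": {"B": 4, "C": 2},
    "B": {"A": 4, "C": 1, "D": 5},
    "C": {"A": 2, "B": 1, "D": 8},
    "D": {"B": 5, "C": 8},
}

def dijkstra(graph: dict, source: str) -> dict:
    """Shortest distance from `source` to every reachable vertex (non-negative weights)."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

print(dijkstra(roads, "A"))  # {'A': 0, 'B': 3, 'C': 2, 'D': 8}
```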
History
The paper written by Leonhard Euler on the Seven Bridges of Königsberg and published in 1736 is regarded as the first paper in the history of graph theory. This paper, as well as the one written by Vandermonde on the knight problem, carried on with the analysis situs initiated by Leibniz. Euler's formula relating the number of edges, vertices, and faces of a convex polyhedron was studied and generalized by Cauchy and L'Huilier, and represents the beginning of the branch of mathematics known as topology.
More than one century after Euler's paper on the bridges of Königsberg and while Listing was introducing the concept of topology, Cayley was led by an interest in particular analytical forms arising from differential calculus to study a particular class of graphs, the trees. This study had many implications for theoretical chemistry. The techniques he used mainly concern the enumeration of graphs with particular properties. Enumerative graph theory then arose from the results of Cayley and the fundamental results published by Pólya between 1935 and 1937. These were generalized by De Bruijn in 1959. Cayley linked his results on trees with contemporary studies of chemical composition. The fusion of ideas from mathematics with those from chemistry began what has become part of the standard terminology of graph theory.
In particular, the term "graph" was introduced by Sylvester in a paper published in 1878 in Nature, where he draws an analogy between "quantic invariants" and "co-variants" of algebra and molecular diagrams:
"[…] Every invariant and co-variant thus becomes expressible by a graph precisely identical with a Kekuléan diagram or chemicograph. […] I give a rule for the geometrical multiplication of graphs, i.e. for constructing a graph to the product of in- or co-variants whose separate graphs are given. […]" (italics as in the original).
The first textbook on graph theory was written by Dénes Kőnig, and published in 1936. Another book by Frank Harary, published in 1969, was "considered the world over to be the definitive textbook on the subject", and enabled mathematicians, chemists, electrical engineers and social scientists to talk to each other. Harary donated all of the royalties to fund the Pólya Prize.
One of the most famous and stimulating problems in graph theory is the four color problem: "Is it true that any map drawn in the plane may have its regions colored with four colors, in such a way that any two regions having a common border have different colors?" This problem was first posed by Francis Guthrie in 1852 and its first written record is in a letter of De Morgan addressed to Hamilton the same year. Many incorrect proofs have been proposed, including those by Cayley, Kempe, and others. The study and the generalization of this problem by Tait, Heawood, Ramsey and Hadwiger led to the study of the colorings of the graphs embedded on surfaces with arbitrary genus. Tait's reformulation generated a new class of problems, the factorization problems, particularly studied by Petersen and Kőnig. The works of Ramsey on colorations, and more especially the results obtained by Turán in 1941, were at the origin of another branch of graph theory, extremal graph theory.
The four color problem remained unsolved for more than a century. In 1969 Heinrich Heesch published a method for solving the problem using computers.Heinrich Heesch: Untersuchungen zum Vierfarbenproblem. Mannheim: Bibliographisches Institut 1969. A computer-aided proof produced in 1976 by Kenneth Appel and Wolfgang Haken makes fundamental use of the notion of "discharging" developed by Heesch. The proof involved checking the properties of 1,936 configurations by computer, and was not fully accepted at the time due to its complexity. A simpler proof considering only 633 configurations was given twenty years later by Robertson, Seymour, Sanders and Thomas.
The autonomous development of topology between 1860 and 1930 fertilized graph theory back through the works of Jordan, Kuratowski and Whitney. Another important factor of common development of graph theory and topology came from the use of the techniques of modern algebra. The first example of such a use comes from the work of the physicist Gustav Kirchhoff, who published in 1845 his Kirchhoff's circuit laws for calculating the voltage and current in electric circuits.
The introduction of probabilistic methods in graph theory, especially in the study of Erdős and Rényi of the asymptotic probability of graph connectivity, gave rise to yet another branch, known as random graph theory, which has been a fruitful source of graph-theoretic results.
Representation
A graph is an abstraction of relationships that emerge in nature; hence, it cannot be coupled to a certain representation. The way it is represented depends on the degree of convenience such representation provides for a certain application. The most common representations are the visual, in which, usually, vertices are drawn and connected by edges, and the tabular, in which rows of a table provide information about the relationships between the vertices within the graph.
Visual: Graph drawing
Graphs are usually represented visually by drawing a point or circle for every vertex, and drawing a line between two vertices if they are connected by an edge. If the graph is directed, the direction is indicated by drawing an arrow. If the graph is weighted, the weight is added on the arrow.
A graph drawing should not be confused with the graph itself (the abstract, non-visual structure) as there are several ways to structure the graph drawing. All that matters is which vertices are connected to which others by how many edges and not the exact layout. In practice, it is often difficult to decide if two drawings represent the same graph. Depending on the problem domain some layouts may be better suited and easier to understand than others.
The pioneering work of W. T. Tutte was very influential on the subject of graph drawing. Among other achievements, he introduced the use of linear algebraic methods to obtain graph drawings.
Graph drawing also can be said to encompass problems that deal with the crossing number and its various generalizations. The crossing number of a graph is the minimum number of intersections between edges that a drawing of the graph in the plane must contain. For a planar graph, the crossing number is zero by definition. Drawings on surfaces other than the plane are also studied.
There are other techniques to visualize a graph away from vertices and edges, including circle packings, intersection graph, and other visualizations of the adjacency matrix.
Tabular: Graph data structures
The tabular representation lends itself well to computational applications. There are different ways to store graphs in a computer system. The data structure used depends on both the graph structure and the algorithm used for manipulating the graph. Theoretically one can distinguish between list and matrix structures but in concrete applications the best structure is often a combination of both. List structures are often preferred for sparse graphs as they have smaller memory requirements. Matrix structures on the other hand provide faster access for some applications but can consume huge amounts of memory. Implementations of sparse matrix structures that are efficient on modern parallel computer architectures are an object of current investigation.
List structures include the edge list, an array of pairs of vertices, and the adjacency list, which separately lists the neighbors of each vertex: Much like the edge list, each vertex has a list of which vertices it is adjacent to.
Matrix structures include the incidence matrix, a matrix of 0's and 1's whose rows represent vertices and whose columns represent edges, and the adjacency matrix, in which both the rows and columns are indexed by vertices. In both cases a 1 indicates two adjacent objects and a 0 indicates two non-adjacent objects. The degree matrix indicates the degree of vertices. The Laplacian matrix is a modified form of the adjacency matrix that incorporates information about the degrees of the vertices, and is useful in some calculations such as Kirchhoff's theorem on the number of spanning trees of a graph.
The distance matrix, like the adjacency matrix, has both its rows and columns indexed by vertices, but rather than containing a 0 or a 1 in each cell it contains the length of a shortest path between two vertices.
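The sketch below builds the adjacency list, adjacency matrix, degree matrix (as a degree vector), and Laplacian for a small undirected graph given as an edge list, mirroring the tabular representations described above; it uses plain Python lists rather than an external matrix library.

```python
# Edge list of a small undirected graph on vertices 0..3.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4

# Adjacency list: for each vertex, the list of its neighbors.
adjacency_list = {v: [] for v in range(n)}
for u, v in edges:
    adjacency_list[u].append(v)
    adjacency_list[v].append(u)

# Adjacency matrix A: A[i][j] = 1 if {i, j} is an edge, else 0.
A = [[0] * n for _ in range(n)]
for u, v in edges:
    A[u][v] = A[v][u] = 1

# Degrees and Laplacian L = D - A, where D is the diagonal degree matrix.
degrees = [len(adjacency_list[v]) for v in range(n)]
L = [[degrees[i] if i == j else -A[i][j] for j in range(n)] for i in range(n)]

print(adjacency_list)
print(A)
print(L)
```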
Problems
Enumeration
There is a large literature on graphical enumeration: the problem of counting graphs meeting specified conditions. Some of this work is found in Harary and Palmer (1973).
Subgraphs, induced subgraphs, and minors
A common problem, called the subgraph isomorphism problem, is finding a fixed graph as a subgraph in a given graph. One reason to be interested in such a question is that many graph properties are hereditary for subgraphs, which means that a graph has the property if and only if all subgraphs have it too.
Finding maximal subgraphs of a certain kind is often an NP-complete problem. For example:
Finding the largest complete subgraph is called the clique problem (NP-complete).
One special case of subgraph isomorphism is the graph isomorphism problem. It asks whether two graphs are isomorphic. It is not known whether this problem is NP-complete, nor whether it can be solved in polynomial time.
A similar problem is finding induced subgraphs in a given graph. Again, some important graph properties are hereditary with respect to induced subgraphs, which means that a graph has a property if and only if all induced subgraphs also have it. Finding maximal induced subgraphs of a certain kind is also often NP-complete. For example:
Finding the largest edgeless induced subgraph or independent set is called the independent set problem (NP-complete).
Still another such problem, the minor containment problem, is to find a fixed graph as a minor of a given graph. A minor or subcontraction of a graph is any graph obtained by taking a subgraph and contracting some (or no) edges. Many graph properties are hereditary for minors, which means that a graph has a property if and only if all minors have it too. For example, Wagner's Theorem states:
A graph is planar if and only if it contains as a minor neither the complete bipartite graph K3,3 (see the Three-cottage problem) nor the complete graph K5.
A similar problem, the subdivision containment problem, is to find a fixed graph as a subdivision of a given graph. A subdivision or homeomorphism of a graph is any graph obtained by subdividing some (or no) edges. Subdivision containment is related to graph properties such as planarity. For example, Kuratowski's Theorem states:
A graph is planar if and only if it contains as a subdivision neither the complete bipartite graph K3,3 nor the complete graph K5.
Another problem in subdivision containment is the Kelmans–Seymour conjecture:
Every 5-vertex-connected graph that is not planar contains a subdivision of the 5-vertex complete graph K5.
Another class of problems has to do with the extent to which various species and generalizations of graphs are determined by their point-deleted subgraphs. For example:
The reconstruction conjecture
Graph coloring
Many problems and theorems in graph theory have to do with various ways of coloring graphs. Typically, one is interested in coloring a graph so that no two adjacent vertices have the same color, or with other similar restrictions. One may also consider coloring edges (possibly so that no two coincident edges are the same color), or other variations. Among the famous results and conjectures concerning graph coloring are the following:
Four-color theorem
Strong perfect graph theorem
Erdős–Faber–Lovász conjecture
Total coloring conjecture, also called Behzad's conjecture (unsolved)
List coloring conjecture (unsolved)
Hadwiger conjecture (graph theory) (unsolved)
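As a concrete illustration of the vertex coloring described above, the greedy heuristic below assigns to each vertex the smallest color not used by its already-colored neighbors; it is simple and fast but does not generally achieve the minimum number of colors.

```python
# Greedy vertex coloring: neighbors never share a color, but the number of
# colors used depends on the vertex order and need not be optimal.
graph = {           # small example graph as an adjacency mapping
    "a": ["b", "c"],
    "b": ["a", "c", "d"],
    "c": ["a", "b"],
    "d": ["b"],
}

def greedy_coloring(adjacency: dict) -> dict:
    colors = {}
    for vertex in adjacency:                      # visiting order affects the result
        used = {colors[n] for n in adjacency[vertex] if n in colors}
        color = 0
        while color in used:
            color += 1
        colors[vertex] = color
    return colors

print(greedy_coloring(graph))  # {'a': 0, 'b': 1, 'c': 2, 'd': 0}
```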
Subsumption and unification
Constraint modeling theories concern families of directed graphs related by a partial order. In these applications, graphs are ordered by specificity, meaning that more constrained graphs—which are more specific and thus contain a greater amount of information—are subsumed by those that are more general. Operations between graphs include evaluating the direction of a subsumption relationship between two graphs, if any, and computing graph unification. The unification of two argument graphs is defined as the most general graph (or the computation thereof) that is consistent with (i.e. contains all of the information in) the inputs, if such a graph exists; efficient unification algorithms are known.
For constraint frameworks which are strictly compositional, graph unification is the sufficient satisfiability and combination function. Well-known applications include automatic theorem proving and modeling the elaboration of linguistic structure.
Route problems
Hamiltonian path problem
Minimum spanning tree
Route inspection problem (also called the "Chinese postman problem")
Seven bridges of Königsberg
Shortest path problem
Steiner tree
Three-cottage problem
Traveling salesman problem (NP-hard)
Network flow
There are numerous problems arising especially from applications that have to do with various notions of flows in networks, for example:
Max flow min cut theorem
Visibility problems
Museum guard problem
Covering problems
Covering problems in graphs may refer to various set cover problems on subsets of vertices/subgraphs.
Dominating set problem is the special case of set cover problem where sets are the closed neighborhoods.
Vertex cover problem is the special case of the set cover problem where the sets to cover are the edges of the graph.
The original set cover problem, also called hitting set, can be described as a vertex cover in a hypergraph.
Decomposition problems
Decomposition, defined as partitioning the edge set of a graph (with as many vertices as necessary accompanying the edges of each part of the partition), has a wide variety of questions. Often, the problem is to decompose a graph into subgraphs isomorphic to a fixed graph; for instance, decomposing a complete graph into Hamiltonian cycles. Other problems specify a family of graphs into which a given graph should be decomposed, for instance, a family of cycles, or decomposing a complete graph Kn into n − 1 specified trees having, respectively, 1, 2, 3, ..., n − 1 edges.
Some specific decomposition problems and similar problems that have been studied include:
Arboricity, a decomposition into as few forests as possible
Cycle double cover, a collection of cycles covering each edge exactly twice
Edge coloring, a decomposition into as few matchings as possible
Graph factorization, a decomposition of a regular graph into regular subgraphs of given degrees
Graph classes
Many problems involve characterizing the members of various classes of graphs. Some examples of such questions are below:
Enumerating the members of a class
Characterizing a class in terms of forbidden substructures
Ascertaining relationships among classes (e.g. does one property of graphs imply another)
Finding efficient algorithms to decide membership in a class
Finding representations for members of a class
See also
Gallery of named graphs
Glossary of graph theory
List of graph theory topics
List of unsolved problems in graph theory
Publications in graph theory
Graph algorithm
Graph theorists
Subareas
Algebraic graph theory
Geometric graph theory
Extremal graph theory
Probabilistic graph theory
Spectral graph theory
Topological graph theory
Graph drawing
Notes
References
Lowell W. Beineke; Bjarne Toft; and Robin J. Wilson: Milestones in Graph Theory: A Century of Progress, AMS/MAA, (SPECTRUM, v.108), ISBN 978-1-4704-6431-8 (2025).
External links
Graph theory tutorial
A searchable database of small connected graphs
House of Graphs — searchable database of graphs with a drawing-based search feature.
rocs — a graph theory IDE
The Social Life of Routers — non-technical paper discussing graphs of people and computers
Graph Theory Software — tools to teach and learn graph theory
A list of graph algorithms with references and links to graph library implementations
Online textbooks
Phase Transitions in Combinatorial Optimization Problems, Section 3: Introduction to Graphs (2006) by Hartmann and Weigt
Digraphs: Theory Algorithms and Applications 2007 by Jorgen Bang-Jensen and Gregory Gutin
Graph Theory, by Reinhard Diestel
Grace Hopper
Grace Brewster Hopper (; December 9, 1906 – January 1, 1992) was an American computer scientist, mathematician, and United States Navy rear admiral. She was a pioneer of computer programming. Hopper was the first to devise the theory of machine-independent programming languages, and used this theory to develop the FLOW-MATIC programming language and COBOL, an early high-level programming language still in use today. She was also one of the first programmers on the Harvard Mark I computer. She is credited with writing the first computer manual, "A Manual of Operation for the Automatic Sequence Controlled Calculator."
Before joining the Navy, Hopper earned a Ph.D. in both mathematics and mathematical physics from Yale University and was a professor of mathematics at Vassar College. She left her position at Vassar to join the United States Navy Reserve during World War II. Hopper began her computing career in 1944 as a member of the Harvard Mark I team, led by Howard H. Aiken. In 1949, she joined the Eckert–Mauchly Computer Corporation and was part of the team that developed the UNIVAC I computer. At Eckert–Mauchly she managed the development of one of the first COBOL compilers.
She believed that programming should be simplified with an English-based computer programming language. Her compiler converted English terms into machine code understood by computers. By 1952, Hopper had finished her program linker (originally called a compiler), which was written for the A-0 System. In 1954, Eckert–Mauchly chose Hopper to lead their department for automatic programming, and she led the release of some of the first compiled languages like FLOW-MATIC. In 1959, she participated in the CODASYL consortium, helping to create a machine-independent programming language called COBOL, which was based on English words. Hopper promoted the use of the language throughout the 60s.
The U.S. Navy guided-missile destroyer was named for her, as was the Cray XE6 "Hopper" supercomputer at NERSC, and the Nvidia GPU architecture "Hopper". During her lifetime, Hopper was awarded 40 honorary degrees from universities across the world. A college at Yale University was renamed in her honor. In 1991, she received the National Medal of Technology. On November 22, 2016, she was posthumously awarded the Presidential Medal of Freedom by President Barack Obama. In 2024, the Institute of Electrical and Electronics Engineers (IEEE) dedicated a marker in honor of Grace Hopper at the University of Pennsylvania for her role in inventing the A-0 compiler during her time as a Lecturer in the School of Engineering, citing her inspirational impact on young engineers.
Early life and education
Grace Brewster Murray was born in New York City. She was the eldest of three children. Her parents, Walter Fletcher Murray and Mary Campbell Van Horne, were of Scottish and Dutch descent, and attended West End Collegiate Church. Her great-grandfather, Alexander Wilson Russell, an admiral in the US Navy, fought in the Battle of Mobile Bay during the Civil War.
Grace was very curious as a child; this was a lifelong trait. At the age of seven, she decided to determine how an alarm clock worked and dismantled seven alarm clocks before her mother realized what she was doing (she was then limited to one clock). Later in life, she was known for keeping a clock that ran backward; she explained, "Humans are allergic to change. They love to say, 'We've always done it this way.' I try to fight that. That's why I have a clock on my wall that runs counterclockwise." For her preparatory school education, she attended the Hartridge School in Plainfield, New Jersey. Grace was initially rejected for early admission to Vassar College at age 16 (because her test scores in Latin were too low), but she was admitted the next year. She graduated Phi Beta Kappa from Vassar in 1928 with a bachelor's degree in mathematics and physics and earned her master's degree at Yale University in 1930.
In 1930, Grace Murray married New York University professor Vincent Foster Hopper (1906–1976); they divorced in 1945 (biography on pp. 281–289 of the Supplementary Material at AMS). She did not marry again and retained his surname.
In 1934, Hopper earned a Ph.D. in mathematics from Yale under the direction of Øystein Ore. (Though some books, including Kurt Beyer's Grace Hopper and the Invention of the Information Age, reported that Hopper was the first woman to earn a Yale Ph.D. in mathematics, the first of ten women to do so before 1934 was Charlotte Cynthia Barnum (1860–1934).) Her dissertation, "New Types of Irreducibility Criteria", was published that year (G. M. Hopper and O. Ore, "New types of irreducibility criteria", Bull. Amer. Math. Soc. 40 (1934) 216). She began teaching mathematics at Vassar in 1931, and was promoted to associate professor in 1941.
Career
World War II
Hopper tried to be commissioned in the Navy early in World War II, but she was turned down. At age 34, she was too old to enlist, and her weight-to-height ratio was too low. She was also denied on the basis that her job as a mathematician and mathematics professor at Vassar College was valuable to the war effort. In 1943, Hopper obtained a leave of absence from Vassar and was sworn into the United States Navy Reserve; she was one of many women who volunteered to serve in the WAVES.
She had to get an exemption to be commissioned, as she was below the Navy minimum weight of 120 pounds. She reported in December and trained at the Naval Reserve Midshipmen's School at Smith College in Northampton, Massachusetts. Hopper graduated first in her class in 1944 and was assigned to the Bureau of Ships Computation Project at Harvard University as a lieutenant, junior grade. She served on the Mark I computer programming staff headed by Howard H. Aiken.
Hopper and Aiken co-authored three papers on the Mark I, also known as the Automatic Sequence Controlled Calculator. Hopper's request to transfer to the regular Navy, out of WAVES, at the end of the war was denied due to being two years older than the cutoff age of 38. She continued to serve in the Navy Reserve. Hopper remained at the Harvard Computation Lab until 1949, turning down a full professorship at Vassar in favor of working as a research fellow under a Navy contract at Harvard.
UNIVAC
In 1949, Hopper became an employee of the Eckert–Mauchly Computer Corporation as a senior mathematician and joined the team developing the UNIVAC I. Hopper also served as UNIVAC director of Automatic Programming Development for Remington Rand. Released in 1951, the UNIVAC I was the first known large-scale electronic computer on the commercial market.
When Hopper recommended the development of a new programming language that would use entirely English words, she "was told very quickly that [she] couldn't do this because computers didn't understand English." Still, she persisted. "It's much easier for most people to write an English statement than it is to use symbols", she explained. "So I decided data processors ought to be able to write their programs in English, and the computers would translate them into machine code."
Her idea was not accepted for three years. In the meantime, she published her first paper on the subject, compilers, in 1952. In the early 1950s, the company was taken over by the Remington Rand corporation, and it was while she was working for them that her original compiler work was done. The program was known as the A compiler and its first version was A-0.
In 1952, she had an operational link-loader, which at the time was referred to as a compiler. She later said that "Nobody believed that", and that she "had a running compiler and nobody would touch it. They told me computers could only do arithmetic."
In 1954 Hopper was named the company's first director of automatic programming. Beginning in 1954, Hopper's work was influenced by the Laning and Zierler system, which was the first compiler to accept algebraic notation as input. Her department released some of the first compiler-based programming languages, including MATH-MATIC and FLOW-MATIC.
Hopper said that her compiler A-0, "translated mathematical notation into machine code. Manipulating symbols was fine for mathematicians but it was no good for data processors who were not symbol manipulators. Very few people are really symbol manipulators. If they are, they become professional mathematicians, not data processors. It's much easier for most people to write an English statement than it is to use symbols. So I decided data processors ought to be able to write their programs in English, and the computers would translate them into machine code. That was the beginning of COBOL, a computer language for data processors. I could say 'Subtract income tax from pay' instead of trying to write that in octal code or using all kinds of symbols. COBOL is the major language used today in data processing."
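To make the idea concrete, here is a deliberately tiny Python sketch of the kind of translation she describes: an English-like data-processing sentence turned into lower-level operations. It is purely illustrative; the statement form, the data names (INCOME-TAX, PAY) and the pseudo-instructions are hypothetical and bear no relation to the actual A-0 or FLOW-MATIC implementations.

```python
# Illustrative toy only: translate one English-like data-processing sentence
# into hypothetical lower-level operations, in the spirit of Hopper's remark
# that "data processors ought to be able to write their programs in English".

def translate(statement):
    """Turn a tiny FLOW-MATIC-style sentence into pseudo machine operations."""
    words = statement.upper().rstrip(".").split()
    if words and words[0] == "SUBTRACT" and "FROM" in words:
        amount = words[1]                          # e.g. INCOME-TAX
        target = words[words.index("FROM") + 1]    # e.g. PAY
        return [f"LOAD {target}", f"SUB {amount}", f"STORE {target}"]
    raise ValueError(f"unrecognised statement: {statement!r}")

print(translate("SUBTRACT INCOME-TAX FROM PAY"))
# ['LOAD PAY', 'SUB INCOME-TAX', 'STORE PAY']
```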
COBOL
In the spring of 1959, computer experts from industry and government were brought together in a two-day conference known as the Conference on Data Systems Languages (CODASYL). Hopper served as a technical consultant to the committee, and many of her former employees served on the short-term committee that defined the new language COBOL (an acronym for COmmon Business-Oriented Language). The new language extended Hopper's FLOW-MATIC language with some ideas from the IBM equivalent, COMTRAN. Hopper's belief that programs should be written in a language that was close to English (rather than in machine code or in languages close to machine code, such as assembly languages) was captured in the new business language, and COBOL went on to be the most ubiquitous business language to date. Among the members of the committee that worked on COBOL was Mount Holyoke College alumna Jean E. Sammet.
From 1967 to 1977, Hopper served as the director of the Navy Programming Languages Group in the Navy's Office of Information Systems Planning and was promoted to the rank of captain in 1973. She developed validation software for COBOL and its compiler as part of a COBOL standardization program for the entire Navy.
Standards
In the 1970s, Hopper advocated for the Defense Department to replace large, centralized systems with networks of small, distributed computers. Any user on any computer node could access common databases on the network. She developed the implementation of standards for testing computer systems and components, most significantly for early programming languages such as FORTRAN and COBOL. The Navy tests for conformance to these standards led to significant convergence among the programming language dialects of the major computer vendors. In the 1980s, these tests (and their official administration) were assumed by the National Bureau of Standards (NBS), known today as the National Institute of Standards and Technology (NIST).
Retirement
In accordance with Navy attrition regulations, Hopper retired from the Naval Reserve with the rank of commander at age 60 at the end of 1966. She was recalled to active duty in August 1967 for a six-month period that turned into an indefinite assignment. She again retired in 1971 but was again asked to return to active duty in 1972. She was promoted to captain in 1973 by Admiral Elmo R. Zumwalt Jr.
After Republican Representative Philip Crane saw her on a March 1983 segment of 60 Minutes, he championed a joint resolution to promote Hopper to commodore on the retired list; the resolution was referred to, but not reported out of, the Senate Armed Services Committee. Hopper was instead promoted to commodore on December 15, 1983, via the Appointments Clause by President Ronald Reagan. She remained on active duty for several years beyond mandatory retirement by special approval of Congress. Effective November 8, 1985, the rank of commodore was renamed rear admiral (lower half) and Hopper became one of the Navy's few female admirals.
After a career that spanned more than 42 years, Hopper retired from the Navy on August 14, 1986. At the time, she was the oldest serving member of the Navy. At a celebration held in Boston aboard USS Constitution to commemorate her retirement, Hopper was awarded the Defense Distinguished Service Medal, the highest non-combat decoration awarded by the Department of Defense.
At the time of her retirement, she was the oldest active-duty commissioned officer in the United States Navy (79 years, eight months and five days), and had her retirement ceremony aboard the oldest commissioned ship in the United States Navy (188 years, 9 months, 23 days).
Post-retirement
After her retirement from the Navy, Hopper was hired as a senior consultant to Digital Equipment Corporation (DEC). Hopper was initially offered a position by Rita Yavinsky, but she insisted on going through the typical formal interview process. She then proposed in jest that she would be willing to accept a position which made her available on alternating Thursdays, exhibited at their museum of computing as a pioneer, in exchange for a generous salary and unlimited expense account. Instead, she was hired as a full-time Principal Corporate Consulting Engineer, a tech-track SVP-equivalent. In this position, Hopper represented the company at industry forums, serving on various industry committees, along with other obligations. She retained that position until her death at age 85 in 1992.
At DEC, Hopper served primarily as a goodwill ambassador. She lectured widely about the early days of computing, her career, and the steps computer vendors could take to make life easier for their users. She visited most of Digital's engineering facilities, where she generally received a standing ovation at the conclusion of her remarks. Although no longer a serving officer, she always wore her Navy full dress uniform to these lectures, contrary to U.S. Department of Defense policy. In 2016, Hopper posthumously received the Presidential Medal of Freedom, the nation's highest civilian honor, in recognition of her contributions to the field of computer science.
"The most important thing I've accomplished, other than building the compiler," she said, "is training young people. They come to me, you know, and say, 'Do you think we can do this?' I say, 'Try it.' And I back 'em up. They need that. I keep track of them as they get older and I stir 'em up at intervals so they don't forget to take chances."
Anecdotes
Log book page showing the moth ("bug") found in a Mark II relay
Throughout much of her later career, Hopper was much in demand as a speaker at various computer-related events. She was well known for her lively and irreverent speaking style, as well as a rich treasury of early war stories. She also received the nickname "Grandma COBOL".
While Hopper was working on a Mark II Computer at Harvard University in 1947, her associates discovered a moth that was stuck in a relay and impeding the operation of the computer. Upon extraction, the insect was affixed to a log sheet for that day with the notation, "First actual case of bug being found". While neither she nor her crew members mentioned the exact phrase, "debugging", in their log entries, the case is held as a historical instance of "debugging" a computer and Hopper is credited with popularizing the term in computing. For many decades, the term "bug" for a malfunction had been in use in several fields before being applied to computers.Edison to Puskas, November 13, 1878, Edison papers, Edison National Laboratory, U.S. National Park Service, West Orange, N.J., cited in Thomas P. Hughes, American Genesis: A History of the American Genius for Invention, Penguin Books, 1989, , on page 75. The remains of the moth can be found taped into the group's log book at the Smithsonian Institution's National Museum of American History in Washington, D.C.
Hopper became known for her "nanoseconds" visual aid. People (such as generals and admirals) used to ask her why satellite communication took so long. She started handing out pieces of wire that were just under one foot long, the distance that light travels in one nanosecond. She gave these pieces of wire the metonym "nanoseconds". She was careful to tell her audience that the length of her nanoseconds was actually the maximum distance the signals would travel in a vacuum in one nanosecond, and that signals would travel more slowly through the actual wires that were her teaching aids. Later she used the same pieces of wire to illustrate why computers had to be small to be fast. At many of her talks and visits, she handed out "nanoseconds" to everyone in the audience, contrasting them with a coil of wire nearly a thousand feet long, representing a microsecond. Later, while giving these lectures for DEC, she passed out packets of pepper, calling the individual grains of ground pepper picoseconds.
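The arithmetic behind these props is easy to check. The short Python sketch below (an illustration added here, not anything Hopper handed out) computes how far light travels in a vacuum in a nanosecond, a microsecond and a picosecond.

```python
# Illustrative calculation of the lengths behind Hopper's teaching props.
c = 299_792_458  # speed of light in a vacuum, m/s

for name, seconds in [("nanosecond", 1e-9), ("microsecond", 1e-6), ("picosecond", 1e-12)]:
    metres = c * seconds
    feet = metres / 0.3048
    print(f"1 {name}: {metres:.4g} m (~{feet:.3g} ft)")

# 1 nanosecond : ~0.2998 m   (~0.984 ft)  -> the "just under one foot" of wire
# 1 microsecond: ~299.8 m    (~984 ft)    -> the coil of wire
# 1 picosecond : ~0.0003 m   (~0.3 mm)    -> roughly a grain of ground pepper
```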
Jay Elliot described Hopper as "appearing to be 'all Navy', but when you reach inside, you find a 'Pirate' dying to be released".
Death
On New Year's Day 1992, Hopper died in her sleep of natural causes at her home in Arlington County, Virginia; she was 85 years of age. She was interred with full military honors in Arlington National Cemetery.
Dates of rank
Rank and date of rank:
Midshipman (MIDN): May 4, 1944
Lieutenant junior grade (O-2): June 27, 1944
Lieutenant (O-3): June 1, 1946
Lieutenant commander (O-4): April 1, 1952
Commander (O-5): July 1, 1957
Captain (O-6): August 2, 1973
Commodore (O-7): December 15, 1983; redesignated rear admiral (lower half) November 8, 1985
Awards and honors
Military awards
Defense Distinguished Service Medal (1986)
Legion of Merit (1967)
Meritorious Service Medal (1980)
Presidential Medal of Freedom (2016, posthumous)
American Campaign Medal (1944)
World War II Victory Medal (1945)
National Defense Service Medal with bronze service star (1953, 1966)
Armed Forces Reserve Medal with two bronze hourglass devices (1963, 1973, 1983)
Naval Reserve Medal (1953)
Other awards
1964: Hopper was awarded the Society of Women Engineers Achievement Award, the Society's highest honor, "In recognition of her significant contributions to the burgeoning computer industry as an engineering manager and originator of automatic programming systems." In May 1950, Hopper was one of the founding members of the Society of Women Engineers.
1969: Hopper was awarded the inaugural Data Processing Management Association Man of the Year award (now called the Distinguished Information Sciences Award).
1971: The annual Grace Murray Hopper Award for Outstanding Young Computer Professionals was established in 1971 by the Association for Computing Machinery.
1973: Elected to the U.S. National Academy of Engineering.
1973: First American and first woman of any nationality to be made a Distinguished Fellow of the British Computer Society.
1981: Received an Honorary PhD from Clarkson University.
1982: American Association of University Women Achievement Award and an Honorary Doctor of Science from Marquette University.
1983: Golden Plate Award of the American Academy of Achievement.
1985: Honorary Doctor of Science from Wright State University
1985: Honorary Doctor of Letters from Western New England College (now Western New England University).
1986: Received the Defense Distinguished Service Medal at her retirement.
1986: Received an Honorary Doctor of Science from Syracuse University.
1987: She became the first Computer History Museum Fellow Award Recipient "for contributions to the development of programming languages, for standardization efforts, and for lifelong naval service."
1988: Received the Golden Gavel Award, Toastmasters International.
1991: National Medal of Technology "For her pioneering accomplishments in the development of computer programming languages that simplified computer technology and opened the door to a significantly larger universe of users."
1991: Elected a Fellow of the American Academy of Arts and Sciences.
1992: The Society of Women Engineers established three annual, renewable "Admiral Grace Murray Hopper Scholarships".
1994: Inducted into the National Women's Hall of Fame.
1996: USS Hopper was launched. Nicknamed Amazing Grace, it is on a very short list of U.S. military vessels named after women.
2001: Eavan Boland wrote a poem dedicated to Grace Hopper titled "Code" in her 2001 release Against Love Poetry.
2001: The Gracies, the Government Technology Leadership Awards, were named in her honor.
2009: The Department of Energy's National Energy Research Scientific Computing Center named its flagship system "Hopper".
2009: Office of Naval Intelligence creates the Grace Hopper Information Services Center.
2013: Google made the Google Doodle for Hopper's 107th birthday an animation of her sitting at a computer, using COBOL to print out her age. At the end of the animation, a moth flies out of the computer.
2016: On November 22, 2016, Hopper was posthumously awarded a Presidential Medal of Freedom for her accomplishments in the field of computer science.
2017: Hopper College at Yale University was named in her honor.
2021: The Admiral Grace Hopper Award was established by the chancellor of the College of Information and Cyberspace (CIC) of the National Defense University to recognize leaders in the fields of information and cybersecurity throughout the National Security community.
Legacy
Grace Hopper was awarded 40 honorary degrees from universities worldwide during her lifetime.
Nvidia named its Grace CPU architecture and Hopper GPU architecture after Grace Hopper.
The Navy's Hopper Information Services Center is named for her.
The Navy named the guided-missile destroyer USS Hopper after her.
On 30 June 2021, a satellite named after her (ÑuSat 20 or "Grace", COSPAR 2021-059AU) was launched into space.
On 26 August 2024, the NSA released, in two parts, a 90-minute talk that Hopper gave in 1982.
Places
Grace Hopper Avenue in Monterey, California, is the location of the Navy's Fleet Numerical Meteorology and Oceanography Center as well as the National Weather Service's San Francisco Bay Area forecast office.
Grace M. Hopper Navy Regional Data Automation Center at Naval Air Station, North Island, California.
Grace Murray Hopper Park, on South Joyce Street in Arlington County, Virginia, is a small memorial park in front of her former residence (River House Apartments) and is now owned by Arlington County.
Brewster Academy, a school in Wolfeboro, New Hampshire, United States, dedicated their computer lab to her in 1985, calling it the Grace Murray Hopper Center for Computer Learning. The academy bestows a Grace Murray Hopper Prize to a graduate who excelled in the field of computer systems. Hopper had spent her childhood summers at a family home in Wolfeboro.
Grace Hopper College, one of the residential colleges of Yale University.
An administration building on Naval Support Activity Annapolis (previously known as Naval Station Annapolis) in Annapolis, Maryland is named the Grace Hopper Building in her honor.
Hopper Hall is the Naval Academy's newest academic building and houses its cyber science department, among others. It is the first building at any service academy named after a woman.
The US Naval Academy also owns a Cray XC-30 supercomputer named "Grace", hosted at the University of Maryland-College Park.
Building 1482 aboard Naval Air Station North Island, housing the Naval Computer and Telecommunication Station San Diego, is named the Grace Hopper Building, and also contains the History of Naval Communications Museum.
Building 6007, C2/CNT West in Aberdeen Proving Ground, Maryland, is named after her.
The street outside of the Nathan Deal Georgia Cyber Innovation and Training Center in Augusta, Georgia, is named Grace Hopper Lane.
Grace Hopper Academy is a for-profit immersive programming school in New York City named in Grace Hopper's honor. It opened in January 2016 with the goal of increasing the proportion of women in software engineering careers.
A bridge over Goose Creek, to join the north and south sides of the Naval Support Activity Charleston side of Joint Base Charleston, South Carolina, is named the Grace Hopper Memorial Bridge in her honor.
Minor planet 5773 Hopper discovered by Eleanor Helin is named in her honor. The official naming citation was published by the Minor Planet Center on 8 November 2019 ().
Grace Hopper Hall, a community meeting hall in Orlando, Florida, on the site of the former Orlando Naval Training Center, is named for her.
The United States Naval Academy dedicated Hopper Hall, their cyber, computer science, and computer engineering building, to RDML Hopper in 2020, and it opened to midshipmen in the spring of 2021.
Programs
Women at Microsoft Corporation formed an employee group called Hoppers and established a scholarship in her honor.
Beginning in 2015, one of the nine competition fields at the FIRST Robotics Competition world championship is named for Hopper.
A named professorship in the Department of Computer Sciences was established at Yale University in her honor. Joan Feigenbaum was named to this chair in 2008.Yale News, July 18, 2008
In 2020, Google named a new undersea network cable 'Grace Hopper'. The cable, which connects the US, UK, and Spain and stretches about 3,900 miles, was completed in 2021, ahead of the original 2022 estimate.
In popular culture
In his comic book series, Secret Coders by Gene Luen Yang, the main character is named Hopper Gracie-Hu.
Since 2013, Hopper's official portrait has been included in the Matplotlib Python library as sample data, replacing the controversial Lenna image.
Grace Hopper Celebration of Women in Computing
Her legacy was an inspiring factor in the creation of the Grace Hopper Celebration of Women in Computing. Held yearly, this conference is designed to bring the research and career interests of women in computing to the forefront.
See also
Bug (engineering)#History
Code: Debugging the Gender Gap
List of pioneers in computer science
Futures techniques
Systems engineering
Women in computing
Hopper (microarchitecture)
Women in the United States Navy
List of female United States military generals and flag officers
Timeline of women in science
Notes
References
Obituary notices
Betts, Mitch (Computerworld 26: 14, 1992)
Bromberg, Howard (IEEE Software 9: 103–104, 1992)
Danca, Richard A. (Federal Computer Week 6: 26–27, 1992)
Hancock, Bill (Digital Review 9: 40, 1992)
Power, Kevin (Government Computer News 11: 70, 1992)
Sammet, J. E. (Communications of the ACM 35 (4): 128–131, 1992)
Weiss, Eric A. (IEEE Annals of the History of Computing 14: 56–58, 1992)
Further reading
Williams' book focuses on the lives and contributions of four notable women scientists: Mary Sears (1905–1997); Florence van Straten (1913–1992); Grace Murray Hopper (1906–1992); Mina Spiegel Rees (1902–1997).
External links
Oral History of Captain Grace Hopper – Interviewed by: Angeline Pantages 1980, Naval Data Automation Command, Maryland.
from Chips, the United States Navy information technology magazine.
Grace Hopper: Navy to the Core, a Pirate at Heart (2014), on Hopper's story and Navy legacy, at navy.mil.
The Queen of Code (2015), a documentary film about Grace Hopper produced by FiveThirtyEight.
Norwood, Arlisha. "Grace Hopper". National Women's History Museum. 2017.
Category:1906 births
Category:1992 deaths
Category:20th-century American engineers
Category:20th-century American inventors
Category:20th-century American mathematicians
Category:20th-century American naval officers
Category:20th-century American women mathematicians
Category:20th-century American women scientists
Category:Achievement Award Recipients of the Society of Women Engineers
Category:American computer programmers
Category:American computer science educators
Category:American computer scientists
Category:American people of Dutch descent
Category:American people of Scottish descent
Category:American software engineers
Category:American women computer scientists
Category:American women inventors
Category:Burials at Arlington National Cemetery
Category:COBOL
Category:Fellows of the American Academy of Arts and Sciences
Category:Fellows of the British Computer Society
Category:Female admirals of the United States Navy
Category:Female United States Navy officers
Category:Graduate Women in Science members
Category:Harvard University staff
Category:Mathematicians from New York (state)
Category:Members of the Society of Women Engineers
Category:Military personnel from New York City
Category:Military personnel from New York (state)
Category:National Medal of Technology recipients
Category:Presidential Medal of Freedom recipients
Category:Programming language designers
Category:Recipients of the Defense Distinguished Service Medal
Category:Recipients of the Legion of Merit
Category:Recipients of the Meritorious Service Medal (United States)
Category:United States Navy personnel of the Korean War
Category:United States Navy personnel of the Vietnam War
Category:United States Navy rear admirals (lower half)
Category:Vassar College alumni
Category:Vassar College faculty
Category:Wardlaw-Hartridge School alumni
Category:WAVES personnel
Category:Yale University alumni
biographies | 4,546

12644 | Glycolysis | https://en.wikipedia.org/wiki/Glycolysis
The wide occurrence of glycolysis in other species indicates that it is an ancient metabolic pathway. Indeed, the reactions that make up glycolysis and its parallel pathway, the pentose phosphate pathway, can occur under the oxygen-free conditions of the Archean oceans, even in the absence of enzymes, catalyzed instead by metal ions, which makes glycolysis a plausible prebiotic pathway relevant to abiogenesis.
The most common type of glycolysis is the Embden–Meyerhof–Parnas (EMP) pathway, which was discovered by Gustav Embden, Otto Meyerhof, and Jakub Karol Parnas. Glycolysis also refers to other pathways, such as the Entner–Doudoroff pathway and various heterofermentative and homofermentative pathways. However, the discussion here will be limited to the Embden–Meyerhof–Parnas pathway.Kim BH, Gadd GM. (2011) Bacterial Physiology and Metabolism, 3rd edition.
The glycolysis pathway can be separated into two phases:
Investment phase – wherein ATP is consumed
Yield phase – wherein more ATP is produced than originally consumed
Overview
The overall reaction of glycolysis is:

Glucose + 2 NAD+ + 2 ADP + 2 Pi → 2 Pyruvate + 2 NADH + 2 H+ + 2 ATP + 2 H2O
The use of symbols in this equation makes it appear unbalanced with respect to oxygen atoms, hydrogen atoms, and charges. Atom balance is maintained by the two phosphate (Pi) groups:
Each exists in the form of a hydrogen phosphate anion (HPO42−), dissociating to contribute 2 H+ overall
Each liberates an oxygen atom when it binds to an adenosine diphosphate (ADP) molecule, contributing 2O overall
Charges are balanced by the difference between ADP and ATP. In the cellular environment, all three hydroxyl groups of ADP dissociate into −O− and H+, giving ADP3−, and this ion tends to exist in an ionic bond with Mg2+, giving ADPMg−. ATP behaves identically except that it has four hydroxyl groups, giving ATPMg2−. When these differences along with the true charges on the two phosphate groups are considered together, the net charges of −4 on each side are balanced.
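As a quick check of the −4 figure just quoted, counting only the charged species on each side of the overall equation (with ADP and ATP written as their Mg2+ complexes, as above) gives:

\[
\underbrace{2(+1)}_{2\,\mathrm{NAD^+}} + \underbrace{2(-1)}_{2\,\mathrm{ADPMg^-}} + \underbrace{2(-2)}_{2\,\mathrm{HPO_4^{2-}}} = -4
\qquad\text{and}\qquad
\underbrace{2(-1)}_{2\,\mathrm{pyruvate^-}} + \underbrace{2(+1)}_{2\,\mathrm{H^+}} + \underbrace{2(-2)}_{2\,\mathrm{ATPMg^{2-}}} = -4 .
\]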
In high-oxygen (aerobic) conditions, eukaryotic cells can continue from glycolysis to metabolise the pyruvate through the citric acid cycle or the electron transport chain to produce significantly more ATP.
Importantly, under low-oxygen (anaerobic) conditions, glycolysis is the only biochemical pathway in eukaryotes that can generate ATP, and for many anaerobically respiring organisms it is the most important source of ATP. Therefore, many organisms have evolved fermentation pathways that recycle NAD+ so glycolysis can continue to produce ATP for survival. These pathways include ethanol fermentation and lactic acid fermentation.
Metabolism of common monosaccharides, including glycolysis, gluconeogenesis, glycogenesis and glycogenolysis
History
The modern understanding of the glycolytic pathway took almost 100 years to develop; the combined results of many smaller experiments were required to piece together the pathway as a whole.
The first steps in understanding glycolysis began in the 19th century. For economic reasons, the French wine industry sought to investigate why wine sometimes turned distasteful, instead of fermenting into alcohol. The French scientist Louis Pasteur researched this issue during the 1850s. His experiments showed that alcohol fermentation occurs by the action of living microorganisms, yeasts, and that glucose consumption decreased under aerobic conditions (the Pasteur effect).
The component steps of glycolysis were first analysed by the non-cellular fermentation experiments of Eduard Buchner during the 1890s. Buchner demonstrated that the conversion of glucose to ethanol was possible using a non-living extract of yeast, due to the action of enzymes in the extract. This experiment not only revolutionized biochemistry, but also allowed later scientists to analyze this pathway in a more controlled laboratory setting. In a series of experiments (1905–1911), scientists Arthur Harden and William Young discovered more pieces of glycolysis. They discovered the regulatory effects of ATP on glucose consumption during alcohol fermentation. They also shed light on the role of one compound as a glycolysis intermediate: fructose 1,6-bisphosphate.
The elucidation of fructose 1,6-bisphosphate was accomplished by measuring CO2 levels when yeast juice was incubated with glucose. CO2 production increased rapidly, then slowed down. Harden and Young noted that this process would restart if an inorganic phosphate (Pi) was added to the mixture. Harden and Young deduced that this process produced organic phosphate esters, and further experiments allowed them to extract fructose diphosphate (F-1,6-DP).
Arthur Harden and William Young, along with Nick Sheppard, determined in a second experiment that a heat-sensitive high-molecular-weight subcellular fraction (the enzymes) and a heat-insensitive low-molecular-weight cytoplasm fraction (ADP, ATP, NAD+ and other cofactors) are required together for fermentation to proceed. This experiment began with the observation that dialyzed (purified) yeast juice could not ferment or even create a sugar phosphate. This mixture was rescued by the addition of undialyzed yeast extract that had been boiled. Boiling the yeast extract renders all proteins inactive (as it denatures them). The ability of boiled extract plus dialyzed juice to complete fermentation suggests that the cofactors were non-protein in character.
In the 1920s Otto Meyerhof was able to link together some of the many individual pieces of glycolysis discovered by Buchner, Harden, and Young. Meyerhof and his team were able to extract different glycolytic enzymes from muscle tissue, and combine them to artificially create the pathway from glycogen to lactic acid.
In one paper, Meyerhof and scientist Renate Junowicz-Kockolaty investigated the reaction that splits fructose 1,6-diphosphate into the two triose phosphates. Previous work had proposed that the split occurred via 1,3-diphosphoglyceraldehyde plus an oxidizing enzyme and cozymase. Meyerhof and Junowicz found that the equilibrium constants of the isomerase and aldolase reactions were not affected by inorganic phosphates or any other cozymase or oxidizing enzymes. They thereby ruled out diphosphoglyceraldehyde as a possible intermediate in glycolysis.
With all of these pieces available by the 1930s, Gustav Embden proposed a detailed, step-by-step outline of the pathway we now know as glycolysis. The biggest difficulties in determining the intricacies of the pathway were the very short lifetimes and low steady-state concentrations of the intermediates of the fast glycolytic reactions. By the 1940s, Meyerhof, Embden and many other biochemists had finally completed the puzzle of glycolysis. The understanding of the isolated pathway has been expanded in the subsequent decades, to include further details of its regulation and integration with other metabolic pathways.
Sequence of reactions
Summary of reactions
Preparatory phase
The first five steps of glycolysis are regarded as the preparatory (or investment) phase, since they consume energy to convert the glucose into two three-carbon sugar phosphates (G3P).
Once glucose enters the cell, the first step is phosphorylation of glucose by a family of enzymes called hexokinases to form glucose 6-phosphate (G6P). This reaction consumes ATP, but it acts to keep the glucose concentration inside the cell low, promoting continuous transport of blood glucose into the cell through the plasma membrane transporters. In addition, phosphorylation blocks the glucose from leaking out – the cell lacks transporters for G6P, and free diffusion out of the cell is prevented due to the charged nature of G6P. Glucose may alternatively be formed from the phosphorolysis or hydrolysis of intracellular starch or glycogen.
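For reference, this first step corresponds to the following reaction, written as in the free energy table later in this article, with charges omitted:

Glucose + ATP → Glucose-6-phosphate + ADP + H+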
In animals, an isozyme of hexokinase called glucokinase is also used in the liver, which has a much lower affinity for glucose (Km in the vicinity of normal glycemia), and differs in regulatory properties. The different substrate affinity and alternate regulation of this enzyme are a reflection of the role of the liver in maintaining blood sugar levels.
Cofactors: Mg2+
G6P is then rearranged into fructose 6-phosphate (F6P) by glucose phosphate isomerase. Fructose can also enter the glycolytic pathway by phosphorylation at this point.
The change in structure is an isomerization, in which the G6P has been converted to F6P. The reaction requires an enzyme, phosphoglucose isomerase, to proceed. This reaction is freely reversible under normal cell conditions. However, it is often driven forward because of a low concentration of F6P, which is constantly consumed during the next step of glycolysis. Under conditions of high F6P concentration, this reaction readily runs in reverse. This phenomenon can be explained through Le Chatelier's Principle. Isomerization to a keto sugar is necessary for carbanion stabilization in the fourth reaction step (below).
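Written in the same shorthand, the isomerization (step 2 in the free energy table below) is:

Glucose-6-phosphate → Fructose-6-phosphate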
In the next step, phosphofructokinase 1 (PFK-1) phosphorylates F6P to fructose 1,6-bisphosphate at the expense of another ATP. This energy expenditure is justified in two ways: the glycolytic process (up to this step) becomes irreversible, and the energy supplied destabilizes the molecule. Because the reaction catalyzed by PFK-1 is coupled to the hydrolysis of ATP (an energetically favorable step), it is, in essence, irreversible, and a different pathway must be used to do the reverse conversion during gluconeogenesis. This makes the reaction a key regulatory point (see below).
Furthermore, the second phosphorylation event is necessary to allow the formation of two charged groups (rather than only one) in the subsequent step of glycolysis, ensuring the prevention of free diffusion of substrates out of the cell.
The same reaction can also be catalyzed by pyrophosphate-dependent phosphofructokinase (PFP or PPi-PFK), which is found in most plants, some bacteria, archaea, and protists, but not in animals. This enzyme uses pyrophosphate (PPi) as a phosphate donor instead of ATP. It is a reversible reaction, increasing the flexibility of glycolytic metabolism. A rarer ADP-dependent PFK enzyme variant has been identified in archaeal species.
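For reference, the ATP-dependent (PFK-1) form of this step, matching step 3 in the free energy table below (charges omitted), is:

Fructose-6-phosphate + ATP → Fructose-1,6-bisphosphate + ADP + H+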
Cofactors: Mg2+
Destabilizing the molecule in the previous reaction allows the hexose ring to be split by aldolase into two triose sugars: dihydroxyacetone phosphate (a ketose), and glyceraldehyde 3-phosphate (an aldose). There are two classes of aldolases: class I aldolases, present in animals and plants, and class II aldolases, present in fungi and bacteria; the two classes use different mechanisms in cleaving the ketose ring.
Electrons delocalized in the carbon-carbon bond cleavage associate with the alcohol group. The resulting carbanion is stabilized by the structure of the carbanion itself via resonance charge distribution and by the presence of a charged ion prosthetic group.
Triosephosphate isomerase rapidly interconverts dihydroxyacetone phosphate with glyceraldehyde 3-phosphate (GADP) that proceeds further into glycolysis. This is advantageous, as it directs dihydroxyacetone phosphate down the same pathway as glyceraldehyde 3-phosphate, simplifying regulation.
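Written out in the same shorthand (steps 4 and 5 of the free energy table below), the cleavage and the subsequent isomerization are:

Fructose-1,6-bisphosphate → Dihydroxyacetone phosphate + Glyceraldehyde-3-phosphate
Dihydroxyacetone phosphate → Glyceraldehyde-3-phosphate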
Pay-off phase
The second half of glycolysis is known as the pay-off phase, characterised by a net gain of the energy-rich molecules ATP and NADH. Since glucose leads to two triose sugars in the preparatory phase, each reaction in the pay-off phase occurs twice per glucose molecule. This yields 2 NADH molecules and 4 ATP molecules, leading to a net gain of 2 NADH molecules and 2 ATP molecules from the glycolytic pathway per glucose.
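The arithmetic behind this net yield, per molecule of glucose, is simply (using the step numbering of the free energy table below):

\[
\text{ATP}_{\text{net}} \;=\; \underbrace{2\times(1+1)}_{\text{pay-off steps 7 and 10}} \;-\; \underbrace{(1+1)}_{\text{steps 1 and 3}} \;=\; 2,
\qquad
\text{NADH} \;=\; 2\times 1_{\text{step 6}} \;=\; 2 .
\]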
The aldehyde groups of the triose sugars are oxidised, and inorganic phosphate is added to them, forming 1,3-bisphosphoglycerate.
The hydrogen is used to reduce two molecules of NAD+, a hydrogen carrier, to give NADH + H+ for each triose.
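Written out (with the inorganic phosphate shown simply as Pi, and charges omitted), this reaction, step 6 in the free energy table below, is:

Glyceraldehyde-3-phosphate + Pi + NAD+ → 1,3-Bisphosphoglycerate + NADH + H+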
Hydrogen atom balance and charge balance are both maintained because the phosphate (Pi) group actually exists in the form of a hydrogen phosphate anion (HPO42−), which dissociates to contribute the extra H+ ion and gives a net charge of −3 on both sides.
Here, arsenate (AsO43−), an anion akin to inorganic phosphate, may replace phosphate as a substrate to form 1-arseno-3-phosphoglycerate. This, however, is unstable and readily hydrolyzes to form 3-phosphoglycerate, the intermediate in the next step of the pathway. As a consequence of bypassing this step, the molecule of ATP generated from 1,3-bisphosphoglycerate in the next reaction will not be made, even though the reaction proceeds. As a result, arsenate is an uncoupler of glycolysis.
This step is the enzymatic transfer of a phosphate group from 1,3-bisphosphoglycerate to ADP by phosphoglycerate kinase, forming ATP and 3-phosphoglycerate. At this step, glycolysis has reached the break-even point: 2 molecules of ATP were consumed, and 2 new molecules have now been synthesized. This step, one of the two substrate-level phosphorylation steps, requires ADP; thus, when the cell has plenty of ATP (and little ADP), this reaction does not occur. Because ATP decays relatively quickly when it is not metabolized, this is an important regulatory point in the glycolytic pathway.
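In shorthand (step 7 of the free energy table below, charges omitted):

1,3-Bisphosphoglycerate + ADP → 3-Phosphoglycerate + ATP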
ADP actually exists as ADPMg−, and ATP as ATPMg2−, balancing the charges at −5 both sides.
Cofactors: Mg2+
Phosphoglycerate mutase isomerises 3-phosphoglycerate into 2-phosphoglycerate.
Enolase next converts 2-phosphoglycerate to phosphoenolpyruvate. This reaction is an elimination reaction involving an E1cB mechanism.
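Written out, these two steps (steps 8 and 9 of the free energy table below, charges omitted) are:

3-Phosphoglycerate → 2-Phosphoglycerate
2-Phosphoglycerate → Phosphoenolpyruvate + H2O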
Cofactors: 2 Mg2+, one "conformational" ion to coordinate with the carboxylate group of the substrate, and one "catalytic" ion that participates in the dehydration.
A final substrate-level phosphorylation now forms a molecule of pyruvate and a molecule of ATP by means of the enzyme pyruvate kinase. This serves as an additional regulatory step, similar to the phosphoglycerate kinase step.
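In shorthand (step 10 of the free energy table below, charges omitted):

Phosphoenolpyruvate + ADP + H+ → Pyruvate + ATP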
Cofactors: Mg2+
Biochemical logic
The existence of more than one point of regulation indicates that intermediates between those points enter and leave the glycolysis pathway by other processes. For example, in the first regulated step, hexokinase converts glucose into glucose-6-phosphate. Instead of continuing through the glycolysis pathway, this intermediate can be converted into glucose storage molecules, such as glycogen or starch. The reverse reaction, breaking down, e.g., glycogen, produces mainly glucose-6-phosphate; very little free glucose is formed in the reaction. The glucose-6-phosphate so produced can enter glycolysis after the first control point.
In the second regulated step (the third step of glycolysis), phosphofructokinase converts fructose-6-phosphate into fructose-1,6-bisphosphate, which then is converted into glyceraldehyde-3-phosphate and dihydroxyacetone phosphate. The dihydroxyacetone phosphate can be removed from glycolysis by conversion into glycerol-3-phosphate, which can be used to form triglycerides. Conversely, triglycerides can be broken down into fatty acids and glycerol; the latter, in turn, can be converted into dihydroxyacetone phosphate, which can enter glycolysis after the second control point.
Free energy changes
Concentrations of metabolites in erythrocytes (mM):
Glucose: 5.0
Glucose-6-phosphate: 0.083
Fructose-6-phosphate: 0.014
Fructose-1,6-bisphosphate: 0.031
Dihydroxyacetone phosphate: 0.14
Glyceraldehyde-3-phosphate: 0.019
1,3-Bisphosphoglycerate: 0.001
2,3-Bisphosphoglycerate: 4.0
3-Phosphoglycerate: 0.12
2-Phosphoglycerate: 0.03
Phosphoenolpyruvate: 0.023
Pyruvate: 0.051
ATP: 1.85
ADP: 0.14
Pi: 1.0
The change in free energy, ΔG, for each step in the glycolysis pathway can be calculated using ΔG = ΔG°′ + RTln Q, where Q is the reaction quotient. This requires knowing the concentrations of the metabolites. All of these values are available for erythrocytes, with the exception of the concentrations of NAD+ and NADH. The ratio of NAD+ to NADH in the cytoplasm is approximately 1000, which makes the oxidation of glyceraldehyde-3-phosphate (step 6) more favourable.
Using the measured concentrations of each step, and the standard free energy changes, the actual free energy change can be calculated. (Neglecting this is very common—the delta G of ATP hydrolysis in cells is not the standard free energy change of ATP hydrolysis quoted in textbooks).
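As an illustration, the minimal Python sketch below estimates ΔG for the hexokinase step (step 1) from the erythrocyte concentrations in the table above. The standard free energy change of about −16.7 kJ/mol for this step is a commonly quoted textbook figure and is an assumption here, not a value given in this article.

```python
import math

# Worked example: deltaG = deltaG0' + R*T*ln(Q) for the hexokinase step
# (glucose + ATP -> glucose-6-phosphate + ADP) in erythrocytes.
R = 8.314e-3          # gas constant, kJ/(mol*K)
T = 310.0             # approximate body temperature, K
dG0 = -16.7           # assumed textbook deltaG0' for this step, kJ/mol

# Erythrocyte concentrations from the table above (mM converted to M)
glucose, g6p = 5.0e-3, 0.083e-3
atp, adp = 1.85e-3, 0.14e-3

Q = (g6p * adp) / (glucose * atp)   # reaction quotient
dG = dG0 + R * T * math.log(Q)
print(f"Q = {Q:.3e}, deltaG = {dG:.1f} kJ/mol")   # roughly -34 kJ/mol
```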
Reactions for each step of glycolysis:
Step 1: Glucose + ATP4− → Glucose-6-phosphate2− + ADP3− + H+
Step 2: Glucose-6-phosphate2− → Fructose-6-phosphate2−
Step 3: Fructose-6-phosphate2− + ATP4− → Fructose-1,6-bisphosphate4− + ADP3− + H+
Step 4: Fructose-1,6-bisphosphate4− → Dihydroxyacetone phosphate2− + Glyceraldehyde-3-phosphate2−
Step 5: Dihydroxyacetone phosphate2− → Glyceraldehyde-3-phosphate2−
Step 6: Glyceraldehyde-3-phosphate2− + Pi2− + NAD+ → 1,3-Bisphosphoglycerate4− + NADH + H+
Step 7: 1,3-Bisphosphoglycerate4− + ADP3− → 3-Phosphoglycerate3− + ATP4−
Step 8: 3-Phosphoglycerate3− → 2-Phosphoglycerate3−
Step 9: 2-Phosphoglycerate3− → Phosphoenolpyruvate3− + H2O
Step 10: Phosphoenolpyruvate3− + ADP3− + H+ → Pyruvate− + ATP4−
From measuring the physiological concentrations of metabolites in an erythrocyte it seems that about seven of the steps in glycolysis are in equilibrium for that cell type. Three of the steps—the ones with large negative free energy changes—are not in equilibrium and are referred to as irreversible; such steps are often subject to regulation.
Step 5 in the figure is shown behind the other steps, because that step is a side-reaction that can decrease or increase the concentration of the intermediate glyceraldehyde-3-phosphate. That compound is converted to dihydroxyacetone phosphate by the enzyme triose phosphate isomerase, which is a catalytically perfect enzyme; its rate is so fast that the reaction can be assumed to be in equilibrium. The fact that ΔG is not zero indicates that the actual concentrations in the erythrocyte are not accurately known.
Regulation
The enzymes that catalyse glycolysis are regulated via a range of biological mechanisms in order to control overall flux through the pathway. This is vital both for homeostasis in a static environment and for metabolic adaptation to a changing environment or need. The details of regulation for some enzymes are highly conserved between species, whereas others vary widely.
Gene Expression: Firstly, the cellular concentrations of glycolytic enzymes are modulated via regulation of gene expression via transcription factors, with several glycolysis enzymes themselves acting as regulatory protein kinases in the nucleus.
Allosteric inhibition and activation by metabolites: In particular end-product inhibition of regulated enzymes by metabolites such as ATP serves as negative feedback regulation of the pathway.
Allosteric inhibition and activation by Protein-protein interactions (PPI). Indeed, some proteins interact with and regulate multiple glycolytic enzymes.
Post-translational modification (PTM). In particular, phosphorylation and dephosphorylation is a key mechanism of regulation of pyruvate kinase in the liver.
Localization
Regulation by insulin in animals
In animals, regulation of blood glucose levels by the pancreas in conjunction with the liver is a vital part of homeostasis. The beta cells in the pancreatic islets are sensitive to the blood glucose concentration. A rise in the blood glucose concentration causes them to release insulin into the blood, which has an effect particularly on the liver, but also on fat and muscle cells, causing these tissues to remove glucose from the blood. When the blood sugar falls the pancreatic beta cells cease insulin production, but, instead, stimulate the neighboring pancreatic alpha cells to release glucagon into the blood. This, in turn, causes the liver to release glucose into the blood by breaking down stored glycogen, and by means of gluconeogenesis. If the fall in the blood glucose level is particularly rapid or severe, other glucose sensors cause the release of epinephrine from the adrenal glands into the blood. This has the same action as glucagon on glucose metabolism, but its effect is more pronounced. In the liver glucagon and epinephrine cause the phosphorylation of the key, regulated enzymes of glycolysis, fatty acid synthesis, cholesterol synthesis, gluconeogenesis, and glycogenolysis. Insulin has the opposite effect on these enzymes. The phosphorylation and dephosphorylation of these enzymes (ultimately in response to the glucose level in the blood) is the dominant manner by which these pathways are controlled in the liver, fat, and muscle cells. Thus the phosphorylation of phosphofructokinase inhibits glycolysis, whereas its dephosphorylation through the action of insulin stimulates glycolysis.
Regulated Enzymes in Glycolysis
The three regulatory enzymes are hexokinase (or glucokinase in the liver), phosphofructokinase, and pyruvate kinase. The flux through the glycolytic pathway is adjusted in response to conditions both inside and outside the cell. The internal factors that regulate glycolysis do so primarily to provide ATP in adequate quantities for the cell's needs. The external factors act primarily on the liver, fat tissue, and muscles, which can remove large quantities of glucose from the blood after meals (thus preventing hyperglycemia by storing the excess glucose as fat or glycogen, depending on the tissue type). The liver is also capable of releasing glucose into the blood between meals, during fasting, and during exercise, thus preventing hypoglycemia by means of glycogenolysis and gluconeogenesis. These latter reactions coincide with the halting of glycolysis in the liver.
In addition hexokinase and glucokinase act independently of the hormonal effects as controls at the entry points of glucose into the cells of different tissues. Hexokinase responds to the glucose-6-phosphate (G6P) level in the cell, or, in the case of glucokinase, to the blood sugar level in the blood to impart entirely intracellular controls of the glycolytic pathway in different tissues (see below).
When glucose has been converted into G6P by hexokinase or glucokinase, it can either be converted to glucose-1-phosphate (G1P) for conversion to glycogen, or it is alternatively converted by glycolysis to pyruvate, which enters the mitochondrion where it is converted into acetyl-CoA and then into citrate. Excess citrate is exported from the mitochondrion back into the cytosol, where ATP citrate lyase regenerates acetyl-CoA and oxaloacetate (OAA). The acetyl-CoA is then used for fatty acid synthesis and cholesterol synthesis, two important ways of utilizing excess glucose when its concentration is high in blood. The regulated enzymes catalyzing these reactions perform these functions when they have been dephosphorylated through the action of insulin on the liver cells. Between meals, during fasting, exercise or hypoglycemia, glucagon and epinephrine are released into the blood. This causes liver glycogen to be converted back to G6P, and then converted to glucose by the liver-specific enzyme glucose 6-phosphatase and released into the blood. Glucagon and epinephrine also stimulate gluconeogenesis, which converts non-carbohydrate substrates into G6P, which joins the G6P derived from glycogen, or substitutes for it when the liver glycogen stores have been depleted. This is critical for brain function, since the brain utilizes glucose as an energy source under most conditions. The simultaneous phosphorylation of, particularly, phosphofructokinase, but also, to a certain extent, pyruvate kinase, prevents glycolysis from occurring at the same time as gluconeogenesis and glycogenolysis.
Hexokinase and glucokinase
All cells contain the enzyme hexokinase, which catalyzes the conversion of glucose that has entered the cell into glucose-6-phosphate (G6P). Since the cell membrane is impervious to G6P, hexokinase essentially acts to transport glucose into the cells from which it can then no longer escape. Hexokinase is inhibited by high levels of G6P in the cell. Thus the rate of entry of glucose into cells partially depends on how fast G6P can be disposed of by glycolysis, and by glycogen synthesis (in the cells which store glycogen, namely liver and muscles).
Glucokinase, unlike hexokinase, is not inhibited by G6P. It occurs in liver cells, and will only phosphorylate the glucose entering the cell to form G6P, when the glucose in the blood is abundant. This being the first step in the glycolytic pathway in the liver, it therefore imparts an additional layer of control of the glycolytic pathway in this organ.
Phosphofructokinase
Phosphofructokinase is an important control point in the glycolytic pathway, since it is one of the irreversible steps and has key allosteric effectors, AMP and fructose 2,6-bisphosphate (F2,6BP).
F2,6BP is a very potent activator of phosphofructokinase (PFK-1) that is synthesized when F6P is phosphorylated by a second phosphofructokinase (PFK2). In the liver, when blood sugar is low and glucagon elevates cAMP, PFK2 is phosphorylated by protein kinase A. The phosphorylation inactivates PFK2, and another domain on this protein becomes active as fructose bisphosphatase-2, which converts F2,6BP back to F6P. Both glucagon and epinephrine cause high levels of cAMP in the liver. The result of lower levels of liver F2,6BP is a decrease in activity of phosphofructokinase and an increase in activity of fructose 1,6-bisphosphatase, so that gluconeogenesis (in essence, "glycolysis in reverse") is favored. This is consistent with the role of the liver in such situations, since the response of the liver to these hormones is to release glucose to the blood.
ATP competes with AMP for the allosteric effector site on the PFK enzyme. ATP concentrations in cells are much higher than those of AMP, typically 100-fold higher, but the concentration of ATP does not change more than about 10% under physiological conditions, whereas a 10% drop in ATP results in a 6-fold increase in AMP. Thus, the relevance of ATP as an allosteric effector is questionable. An increase in AMP is a consequence of a decrease in energy charge in the cell.
Citrate inhibits phosphofructokinase when tested in vitro by enhancing the inhibitory effect of ATP. However, it is doubtful that this is a meaningful effect in vivo, because citrate in the cytosol is utilized mainly for conversion to acetyl-CoA for fatty acid and cholesterol synthesis.
TIGAR, a p53-induced enzyme, is responsible for the regulation of phosphofructokinase and acts to protect against oxidative stress. TIGAR is a single enzyme with dual function that regulates F2,6BP. It can behave as a phosphatase (fructose-2,6-bisphosphatase), which cleaves the phosphate at carbon-2, producing F6P. It can also behave as a kinase (PFK2), adding a phosphate onto carbon-2 of F6P, which produces F2,6BP. In humans, the TIGAR protein is encoded by the C12orf5 gene. The TIGAR enzyme hinders the forward progression of glycolysis by creating a build-up of fructose-6-phosphate (F6P), which is isomerized into glucose-6-phosphate (G6P). The accumulation of G6P shunts carbons into the pentose phosphate pathway.
Pyruvate kinase
The final step of glycolysis is catalysed by pyruvate kinase to form pyruvate and another ATP. It is regulated by a range of different transcriptional, covalent and non-covalent regulation mechanisms, which can vary widely in different tissues. For example, in the liver, pyruvate kinase is regulated based on glucose availability. During fasting (no glucose available), glucagon activates protein kinase A, which phosphorylates pyruvate kinase to inhibit it. An increase in blood sugar leads to secretion of insulin, which activates protein phosphatase 1, leading to dephosphorylation and re-activation of pyruvate kinase. These controls prevent pyruvate kinase from being active at the same time as the enzymes that catalyze the reverse reaction (pyruvate carboxylase and phosphoenolpyruvate carboxykinase), preventing a futile cycle. Conversely, the isoform of pyruvate kinase found in muscle is not affected by protein kinase A (which is activated by adrenaline in that tissue), so glycolysis remains active in muscles even during fasting.
Post-glycolysis processes
The overall process of glycolysis is:
Glucose + 2 NAD+ + 2 ADP + 2 Pi → 2 Pyruvate + 2 NADH + 2 H+ + 2 ATP + 2 H2O
If glycolysis were to continue indefinitely, all of the NAD+ would be used up, and glycolysis would stop. To allow glycolysis to continue, organisms must be able to oxidize NADH back to NAD+. How this is performed depends on which external electron acceptor is available.
Anoxic regeneration of NAD+
One method of doing this is to simply have the pyruvate do the oxidation; in this process, pyruvate is converted to lactate (the conjugate base of lactic acid) in a process called lactic acid fermentation:
Pyruvate + NADH + H+ → Lactate + NAD+
This process occurs in the bacteria involved in making yogurt (the lactic acid causes the milk to curdle). This process also occurs in animals under hypoxic (or partially anaerobic) conditions, found, for example, in overworked muscles that are starved of oxygen. In many tissues, this is a cellular last resort for energy; most animal tissue cannot tolerate anaerobic conditions for an extended period of time.
Some organisms, such as yeast, convert NADH back to NAD+ in a process called ethanol fermentation. In this process, the pyruvate is converted first to acetaldehyde and carbon dioxide, and then to ethanol.
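Written as reactions, in the same style as the lactate equation above, these two steps are (the enzymes named here, pyruvate decarboxylase and alcohol dehydrogenase, are the ones conventionally credited with them):

Pyruvate → Acetaldehyde + CO2 (pyruvate decarboxylase)
Acetaldehyde + NADH + H+ → Ethanol + NAD+ (alcohol dehydrogenase)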
Lactic acid fermentation and ethanol fermentation can occur in the absence of oxygen. This anaerobic fermentation allows many single-cell organisms to use glycolysis as their only energy source.
Anoxic regeneration of NAD+ is only an effective means of energy production during short, intense exercise in vertebrates, for a period ranging from 10 seconds to 2 minutes during a maximal effort in humans. (At lower exercise intensities it can sustain muscle activity in diving animals, such as seals, whales and other aquatic vertebrates, for very much longer periods of time.) Under these conditions NAD+ is replenished by NADH donating its electrons to pyruvate to form lactate. This produces 2 ATP molecules per glucose molecule, or about 5% of glucose's energy potential (38 ATP molecules in bacteria). But the speed at which ATP is produced in this manner is about 100 times that of oxidative phosphorylation. The pH in the cytoplasm quickly drops when hydrogen ions accumulate in the muscle, eventually inhibiting the enzymes involved in glycolysis.
The burning sensation in muscles during hard exercise can be attributed to the release of hydrogen ions during the shift from glucose oxidation to carbon dioxide and water towards glucose fermentation, when aerobic metabolism can no longer keep pace with the energy demands of the muscles. These hydrogen ions form a part of lactic acid. The body falls back on this less efficient but faster method of producing ATP under low oxygen conditions. This is thought to have been the primary means of energy production in earlier organisms before oxygen reached high concentrations in the atmosphere between 2000 and 2500 million years ago, and thus would represent a more ancient form of energy production than the aerobic replenishment of NAD+ in cells.
The liver in mammals gets rid of this excess lactate by transforming it back into pyruvate under aerobic conditions; see Cori cycle.
Fermentation of pyruvate to lactate is sometimes also called "anaerobic glycolysis"; however, glycolysis ends with the production of pyruvate regardless of the presence or absence of oxygen.
In the above two examples of fermentation, NADH is oxidized by transferring two electrons to pyruvate. However, anaerobic bacteria use a wide variety of compounds as the terminal electron acceptors in cellular respiration: nitrogenous compounds, such as nitrates and nitrites; sulfur compounds, such as sulfates, sulfites, sulfur dioxide, and elemental sulfur; carbon dioxide; iron compounds; manganese compounds; cobalt compounds; and uranium compounds.
Aerobic regeneration of NAD+ and further catabolism of pyruvate
In aerobic eukaryotes, a complex mechanism has developed to use the oxygen in air as the final electron acceptor, in a process called oxidative phosphorylation. Aerobic prokaryotes, which lack mitochondria, use a variety of simpler mechanisms.
Firstly, the NADH + H+ generated by glycolysis has to be transferred to the mitochondrion to be oxidized, and thus to regenerate the NAD+ necessary for glycolysis to continue. However, the inner mitochondrial membrane is impermeable to NADH and NAD+. Use is therefore made of two "shuttles" to transport the electrons from NADH across the mitochondrial membrane. They are the malate-aspartate shuttle and the glycerol phosphate shuttle. In the former, the electrons from NADH are transferred to cytosolic oxaloacetate to form malate. The malate then traverses the inner mitochondrial membrane into the mitochondrial matrix, where it is reoxidized by NAD+, forming intra-mitochondrial oxaloacetate and NADH. The oxaloacetate is then re-cycled to the cytosol via its conversion to aspartate, which is readily transported out of the mitochondrion. In the glycerol phosphate shuttle, electrons from cytosolic NADH are transferred to dihydroxyacetone phosphate to form glycerol-3-phosphate, which readily traverses the outer mitochondrial membrane. Glycerol-3-phosphate is then reoxidized to dihydroxyacetone phosphate, donating its electrons to FAD instead of NAD+. This reaction takes place on the inner mitochondrial membrane, allowing FADH2 to donate its electrons directly to coenzyme Q (ubiquinone), which is part of the electron transport chain, which ultimately transfers electrons to molecular oxygen (O2), with the formation of water and the release of energy eventually captured in the form of ATP.
The glycolytic end-product, pyruvate (plus NAD+), is converted to acetyl-CoA, CO2 and NADH + H+ within the mitochondria in a process called pyruvate decarboxylation.
The resulting acetyl-CoA enters the citric acid cycle (or Krebs Cycle), where the acetyl group of the acetyl-CoA is converted into carbon dioxide by two decarboxylation reactions with the formation of yet more intra-mitochondrial NADH + H+.
The intra-mitochondrial NADH + H+ is oxidized to NAD+ by the electron transport chain, using oxygen as the final electron acceptor to form water. The energy released during this process is used to create a hydrogen ion (or proton) gradient across the inner membrane of the mitochondrion.
Finally, the proton gradient is used to produce about 2.5 ATP for every NADH + H+ oxidized in a process called oxidative phosphorylation.
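Putting these pieces together gives the commonly quoted, approximate bookkeeping for the complete aerobic oxidation of one glucose molecule; the sketch below assumes about 2.5 ATP per NADH and 1.5 ATP per FADH2 (the older whole-number ratios of 3 and 2 give the traditional figure of roughly 36–38 ATP instead):

```python
# Approximate whole-glucose ATP bookkeeping under aerobic conditions, using the
# ~2.5 ATP/NADH figure from the text and ~1.5 ATP/FADH2 (both approximate).
ATP_PER_NADH, ATP_PER_FADH2 = 2.5, 1.5

substrate_level = 2 + 2    # 2 ATP (glycolysis) + 2 GTP (citric acid cycle)
cytosolic_nadh = 2         # from glycolysis, imported via a shuttle
mito_nadh = 2 + 6          # pyruvate decarboxylation + citric acid cycle
mito_fadh2 = 2             # citric acid cycle

def total_atp(shuttle_yield):
    """ATP per glucose; shuttle_yield is the ATP per cytosolic NADH (2.5 or 1.5)."""
    return (substrate_level
            + cytosolic_nadh * shuttle_yield
            + mito_nadh * ATP_PER_NADH
            + mito_fadh2 * ATP_PER_FADH2)

print(total_atp(ATP_PER_NADH))    # ~32 ATP (malate-aspartate shuttle)
print(total_atp(ATP_PER_FADH2))   # ~30 ATP (glycerol phosphate shuttle)
```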
Conversion of carbohydrates into fatty acids and cholesterol
The pyruvate produced by glycolysis is an important intermediary in the conversion of carbohydrates into fatty acids and cholesterol. This occurs via the conversion of pyruvate into acetyl-CoA in the mitochondrion. However, this acetyl-CoA needs to be transported into the cytosol, where the synthesis of fatty acids and cholesterol occurs. This cannot occur directly. To obtain cytosolic acetyl-CoA, citrate (produced by the condensation of acetyl-CoA with oxaloacetate) is removed from the citric acid cycle and carried across the inner mitochondrial membrane into the cytosol. There it is cleaved by ATP citrate lyase into acetyl-CoA and oxaloacetate. The oxaloacetate is returned to the mitochondrion as malate (and then converted back into oxaloacetate to transfer more acetyl-CoA out of the mitochondrion). The cytosolic acetyl-CoA can be carboxylated by acetyl-CoA carboxylase into malonyl-CoA, the first committed step in the synthesis of fatty acids, or it can be combined with acetoacetyl-CoA to form 3-hydroxy-3-methylglutaryl-CoA (HMG-CoA), which is the rate-limiting step controlling the synthesis of cholesterol. Cholesterol can be used as is, as a structural component of cellular membranes, or it can be used to synthesize the steroid hormones, bile salts, and vitamin D.
Conversion of pyruvate into oxaloacetate for the citric acid cycle
Pyruvate molecules produced by glycolysis are actively transported across the inner mitochondrial membrane and into the matrix, where they can either be oxidized and combined with coenzyme A to form CO2, acetyl-CoA, and NADH, or they can be carboxylated (by pyruvate carboxylase) to form oxaloacetate. This latter reaction "fills up" the amount of oxaloacetate in the citric acid cycle and is therefore an anaplerotic reaction (from the Greek meaning to "fill up"), increasing the cycle's capacity to metabolize acetyl-CoA when the tissue's energy needs (e.g. in heart and skeletal muscle) are suddenly increased by activity.
In the citric acid cycle all the intermediates (e.g. citrate, iso-citrate, alpha-ketoglutarate, succinate, fumarate, malate and oxaloacetate) are regenerated during each turn of the cycle. Adding more of any of these intermediates to the mitochondrion therefore means that that additional amount is retained within the cycle, increasing all the other intermediates as one is converted into the other. Hence the addition of oxaloacetate greatly increases the amounts of all the citric acid intermediates, thereby increasing the cycle's capacity to metabolize acetyl-CoA, converting its acetate component into CO2 and water, with the release of enough energy to form 11 ATP and 1 GTP molecule for each additional molecule of acetyl-CoA that combines with oxaloacetate in the cycle.
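The "11 ATP and 1 GTP" figure quoted above can be reproduced by assuming the older whole-number P/O ratios (3 ATP per NADH and 2 ATP per FADH2), since each acetyl-CoA oxidized in the cycle yields 3 NADH, 1 FADH2 and 1 GTP:

```python
# How the "11 ATP + 1 GTP per acetyl-CoA" figure in the text arises, assuming
# the older whole-number P/O ratios (3 ATP per NADH, 2 ATP per FADH2).
nadh_per_turn, fadh2_per_turn, gtp_per_turn = 3, 1, 1
atp_equivalents = nadh_per_turn * 3 + fadh2_per_turn * 2
print(atp_equivalents, "ATP +", gtp_per_turn, "GTP")   # 11 ATP + 1 GTP
```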
To cataplerotically remove oxaloacetate from the citric acid cycle, malate can be transported from the mitochondrion into the cytoplasm, decreasing the amount of oxaloacetate that can be regenerated. Furthermore, citric acid cycle intermediates are constantly used to form a variety of substances such as the purines, pyrimidines and porphyrins.
Intermediates for other pathways
This article concentrates on the catabolic role of glycolysis with regard to converting potential chemical energy to usable chemical energy during the oxidation of glucose to pyruvate. Many of the metabolites in the glycolytic pathway are also used by anabolic pathways, and, as a consequence, flux through the pathway is critical to maintain a supply of carbon skeletons for biosynthesis.
The following metabolic pathways, among many others, are all strongly reliant on glycolysis as a source of metabolites:
Pentose phosphate pathway, which begins with the dehydrogenation of glucose-6-phosphate, the first intermediate to be produced by glycolysis, produces various pentose sugars, and NADPH for the synthesis of fatty acids and cholesterol.
Glycogen synthesis also starts with glucose-6-phosphate at the beginning of the glycolytic pathway.
Glycerol, for the formation of triglycerides and phospholipids, is produced from the glycolytic intermediate glyceraldehyde-3-phosphate.
Various post-glycolytic pathways:
Fatty acid synthesis
Cholesterol synthesis
The citric acid cycle which in turn leads to:
Amino acid synthesis
Nucleotide synthesis
Tetrapyrrole synthesis
Although gluconeogenesis and glycolysis share many intermediates, the one is not functionally a branch or tributary of the other. There are two regulatory steps in both pathways which, when active in the one pathway, are automatically inactive in the other. The two processes can therefore not be simultaneously active. Indeed, if both sets of reactions were highly active at the same time, the net result would be the hydrolysis of four high-energy phosphate bonds (two ATP and two GTP) per reaction cycle.
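A short sketch of this accounting, using standard textbook stoichiometry for the two pathways (an illustration consistent with the "two ATP and two GTP" figure above):

```python
# Net cost of one "futile cycle" of glycolysis plus gluconeogenesis running
# simultaneously, using standard textbook stoichiometry per glucose.
glycolysis_atp_gain = 2        # net ATP produced by glycolysis
gluconeogenesis_atp_cost = 4   # ATP consumed (pyruvate carboxylase and the PGK step)
gluconeogenesis_gtp_cost = 2   # GTP consumed (PEP carboxykinase)

net_atp = gluconeogenesis_atp_cost - glycolysis_atp_gain   # 2 ATP
net_gtp = gluconeogenesis_gtp_cost                         # 2 GTP
print(f"Net hydrolysis per cycle: {net_atp} ATP + {net_gtp} GTP "
      f"= {net_atp + net_gtp} high-energy phosphate bonds")
```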
NAD+ is the oxidizing agent in glycolysis, as it is in most other energy-yielding metabolic reactions (e.g. beta-oxidation of fatty acids, and during the citric acid cycle). The NADH thus produced is primarily used to ultimately transfer electrons to O2 to produce water, or, when O2 is not available, to produce compounds such as lactate or ethanol (see Anoxic regeneration of NAD+ above). NADH is rarely used for synthetic processes, the notable exception being gluconeogenesis. During fatty acid and cholesterol synthesis the reducing agent is NADPH. This difference exemplifies a general principle that NADPH is consumed during biosynthetic reactions, whereas NADH is generated in energy-yielding reactions. The source of the NADPH is two-fold. When malate is oxidatively decarboxylated by "NADP+-linked malic enzyme", pyruvate, CO2 and NADPH are formed. NADPH is also formed by the pentose phosphate pathway, which converts glucose into ribose, which can be used in the synthesis of nucleotides and nucleic acids, or it can be catabolized to pyruvate.
Glycolysis in disease
Diabetes
Cellular uptake of glucose occurs in response to insulin signals, and glucose is subsequently broken down through glycolysis, lowering blood sugar levels. However, the insulin resistance or low insulin levels seen in diabetes result in hyperglycemia, where glucose levels in the blood rise and glucose is not properly taken up by cells. Hepatocytes further contribute to this hyperglycemia through gluconeogenesis. Glycolysis in hepatocytes controls hepatic glucose production, and when the liver produces glucose faster than the body can break it down, hyperglycemia results.
Genetic diseases
Glycolytic mutations are generally rare due to the importance of the metabolic pathway; the majority of mutations that do occur result in an inability of the cell to respire, and therefore cause the death of the cell at an early stage. However, some mutations (glycogen storage diseases and other inborn errors of carbohydrate metabolism) are seen, one notable example being pyruvate kinase deficiency, which leads to chronic hemolytic anemia.
In combined malonic and methylmalonic aciduria (CMAMMA) due to ACSF3 deficiency, glycolysis is reduced by about 50%, caused by reduced lipoylation of mitochondrial enzymes such as the pyruvate dehydrogenase complex and the α-ketoglutarate dehydrogenase complex.
Cancer
Malignant tumor cells perform glycolysis at a rate that is ten times faster than their noncancerous tissue counterparts. During their genesis, limited capillary support often results in hypoxia (decreased O2 supply) within the tumor cells. Thus, these cells rely on anaerobic metabolic processes such as glycolysis for ATP (adenosine triphosphate). Some tumor cells overexpress specific glycolytic enzymes which result in higher rates of glycolysis. Often these enzymes are isoenzymes of traditional glycolytic enzymes that vary in their susceptibility to traditional feedback inhibition. The increase in glycolytic activity ultimately counteracts the effects of hypoxia by generating sufficient ATP from this anaerobic pathway. This phenomenon was first described in 1930 by Otto Warburg and is referred to as the Warburg effect. The Warburg hypothesis claims that cancer is primarily caused by dysfunctionality in mitochondrial metabolism, rather than by the uncontrolled growth of cells.
A number of theories have been advanced to explain the Warburg effect. One such theory suggests that the increased glycolysis is a normal protective process of the body and that malignant change could be primarily caused by energy metabolism.
This high glycolysis rate has important medical applications, as high aerobic glycolysis by malignant tumors is utilized clinically to diagnose and monitor treatment responses of cancers by imaging uptake of 2-18F-2-deoxyglucose (FDG) (a radioactive modified hexokinase substrate) with positron emission tomography (PET).
There is ongoing research to affect mitochondrial metabolism and treat cancer by reducing glycolysis and thus starving cancerous cells in various new ways, including a ketogenic diet.
Interactive pathway map
The diagram below shows human protein names. Names in other organisms may be different and the number of isozymes (such as HK1, HK2, ...) is likely to be different too.
Alternative nomenclature
Some of the metabolites in glycolysis have alternative names and nomenclature. In part, this is because some of them are common to other pathways, such as the Calvin cycle.
1. Glucose (Glc); also called dextrose
2. Glucose-6-phosphate (G6P)
3. Fructose-6-phosphate (F6P)
4. Fructose-1,6-bisphosphate (F1,6BP); also called fructose 1,6-diphosphate (FBP; FDP; F1,6DP)
5. Dihydroxyacetone phosphate (DHAP); also called glycerone phosphate
6. Glyceraldehyde-3-phosphate (GADP); also called 3-phosphoglyceraldehyde (PGAL; G3P; GALP; GAP; TP)
7. 1,3-Bisphosphoglycerate (1,3BPG); also called glycerate-1,3-bisphosphate, glycerate-1,3-diphosphate or 1,3-diphosphoglycerate (PGAP; BPG; DPG)
8. 3-Phosphoglycerate (3PG); also called glycerate-3-phosphate (PGA; GP)
9. 2-Phosphoglycerate (2PG); also called glycerate-2-phosphate
10. Phosphoenolpyruvate (PEP)
11. Pyruvate (Pyr); the conjugate base of pyruvic acid
Structure of glycolysis components in Fischer projections and polygonal model
The intermediates of glycolysis depicted in Fischer projections show the chemical changes step by step. Such images can be compared with the polygonal model representation.
Structure of glycolysis components in skeletal diagram and conservation-of-matter model
The intermediates of glycolysis depicted in a skeletal diagram show the chemical structures changing step by step, with cofactors such as NADH, ATP, water and phosphate included to balance the stoichiometry of the reactions. The enzyme that mediates each reaction is indicated on a reversible arrow, as most of these enzymes catalyze bidirectional chemical reactions. Duplicates, such as the reversible re-arrangement between dihydroxyacetone phosphate and glyceraldehyde-3-phosphate on the bottom row of reactions, represent the two moles of C3 fragments derived from a single mole of the preceding C6 fragment of fructose bisphosphate, giving a net of two ATP generated. Thus the diagram must be read with the rules of stoichiometry and balance-of-matter principles in mind. Follow the green "START" button to the red "END" button to trace the pathway through the structural pathway diagram.
See also
Carbohydrate catabolism
Citric acid cycle
Cori cycle
Fermentation (biochemistry)
Gluconeogenesis
Glycolytic oscillation
Glycogenoses (glycogen storage diseases)
Inborn errors of carbohydrate metabolism
Pentose phosphate pathway
Pyruvate decarboxylation
Triose kinase
References
External links
A Detailed Glycolysis Animation provided by IUBMB (Adobe Flash Required)
The Glycolytic enzymes in Glycolysis at RCSB PDB
Glycolytic cycle with animations at wdv.com
Metabolism, Cellular Respiration and Photosynthesis – The Virtual Library of Biochemistry, Molecular Biology and Cell Biology
The chemical logic behind glycolysis at ufp.pt
Expasy biochemical pathways poster at ExPASy
metpath: Interactive representation of glycolysis
Category:Biochemical reactions
Category:Carbohydrates
Category:Cellular respiration
Category:Metabolic pathways
Gene therapy
https://en.wikipedia.org/wiki/Gene_therapy
Gene therapy is medical technology that aims to produce a therapeutic effect through the manipulation of gene expression or through altering the biological properties of living cells.
The first attempt at modifying human DNA was performed in 1980, by Martin Cline, but the first successful nuclear gene transfer in humans, approved by the National Institutes of Health, was performed in May 1989. The first therapeutic use of gene transfer as well as the first direct insertion of human DNA into the nuclear genome was performed by French Anderson in a trial starting in September 1990. Between 1989 and December 2018, over 2,900 clinical trials were conducted, with more than half of them in phase I. In 2003, Gendicine became the first gene therapy to receive regulatory approval. Since that time, further gene therapy drugs were approved, such as alipogene tiparvovec (2012), Strimvelis (2016), tisagenlecleucel (2017), voretigene neparvovec (2017), patisiran (2018), onasemnogene abeparvovec (2019), idecabtagene vicleucel (2021), nadofaragene firadenovec, valoctocogene roxaparvovec and etranacogene dezaparvovec (all 2022). Most of these approaches utilize adeno-associated viruses (AAVs) and lentiviruses for performing gene insertions, in vivo and ex vivo, respectively. AAVs are characterized by a stable viral capsid, lower immunogenicity, the ability to transduce both dividing and nondividing cells, the potential to integrate site-specifically, and long-term expression with in vivo treatment. ASO / siRNA approaches such as those conducted by Alnylam and Ionis Pharmaceuticals require non-viral delivery systems, and utilize alternative mechanisms for trafficking to liver cells by way of GalNAc transporters.
Not all medical procedures that introduce alterations to a patient's genetic makeup can be considered gene therapy. Bone marrow transplantation and organ transplants in general have been found to introduce foreign DNA into patients.
Background
Gene therapy was first conceptualized in the 1960s, when the feasibility of adding new genetic functions to mammalian cells began to be researched. Several methods to do so were tested, including injecting genes with a micropipette directly into a living mammalian cell, and exposing cells to a precipitate of DNA that contained the desired genes. Scientists theorized that a virus could also be used as a vehicle, or vector, to deliver new genes into cells.
One of the first scientists to report the successful direct incorporation of functional DNA into a mammalian cell was biochemist Dr. Lorraine Marquardt Kraus (6 September 1922 – 1 July 2016) at the University of Tennessee Health Science Center in Memphis, Tennessee. In 1961, she managed to genetically alter the hemoglobin of cells from bone marrow taken from a patient with sickle cell anaemia. She did this by incubating the patient's cells in tissue culture with DNA extracted from a donor with normal hemoglobin. In 1968, researchers Theodore Friedmann, Jay Seegmiller, and John Subak-Sharpe at the National Institutes of Health (NIH), Bethesda, in the United States successfully corrected genetic defects associated with Lesch-Nyhan syndrome, a debilitating neurological disease, by adding foreign DNA to cultured cells collected from patients suffering from the disease.
The first attempt, an unsuccessful one, at gene therapy (as well as the first case of medical transfer of foreign genes into humans not counting organ transplantation) was performed by geneticist Martin Cline of the University of California, Los Angeles in California, United States on 10 July 1980. Cline claimed that one of the genes in his patients was active six months later, though he never published this data or had it verified.
After extensive research on animals throughout the 1980s and a 1989 bacterial gene tagging trial on humans, the first gene therapy widely accepted as a success was demonstrated in a trial that started on 14 September 1990, when Ashanthi DeSilva was treated for ADA-SCID.
The first somatic treatment that produced a permanent genetic change was initiated in 1993. The goal was to cure malignant brain tumors by using recombinant DNA to transfer a gene making the tumor cells sensitive to a drug that in turn would cause the tumor cells to die.
Gene therapies deliver nucleic acid polymers into cells; the polymers are either translated into proteins, interfere with target gene expression, or possibly correct genetic mutations. The most common form uses DNA that encodes a functional, therapeutic gene to replace a mutated gene. The polymer molecule is packaged within a "vector", which carries the molecule inside cells.
Early clinical failures led to dismissals of gene therapy. Clinical successes since 2006 regained researchers' attention, although it remained largely an experimental technique. These include treatment of the retinal diseases Leber's congenital amaurosis and choroideremia, X-linked SCID, ADA-SCID, adrenoleukodystrophy, chronic lymphocytic leukemia (CLL), acute lymphocytic leukemia (ALL), multiple myeloma, haemophilia, and Parkinson's disease. Between 2013 and April 2014, US companies invested over $600 million in the field.
The first commercial gene therapy, Gendicine, was approved in China in 2003, for the treatment of certain cancers. In 2011, Neovasculgen was registered in Russia as the first-in-class gene-therapy drug for treatment of peripheral artery disease, including critical limb ischemia.
In 2012, alipogene tiparvovec, a treatment for a rare inherited disorder, lipoprotein lipase deficiency, became the first treatment to be approved for clinical use in either the European Union or the United States after its endorsement by the European Commission.
Following early advances in genetic engineering of bacteria, cells, and small animals, scientists started considering how to apply it to medicine. Two main approaches were considered – replacing or disrupting defective genes. Scientists focused on diseases caused by single-gene defects, such as cystic fibrosis, haemophilia, muscular dystrophy, thalassemia, and sickle cell anemia. Alipogene tiparvovec treats one such disease, caused by a defect in lipoprotein lipase.
DNA must be administered, reach the damaged cells, enter the cell and either express or disrupt a protein. Multiple delivery techniques have been explored. The initial approach incorporated DNA into an engineered virus to deliver the DNA into a chromosome. Naked DNA approaches have also been explored, especially in the context of vaccine development.
Generally, efforts focused on administering a gene that causes a needed protein to be expressed. More recently, increased understanding of nuclease function has led to more direct DNA editing, using techniques such as zinc finger nucleases and CRISPR. The vector incorporates genes into chromosomes. The expressed nucleases then knock out and replace genes in the chromosome. These approaches involve removing cells from patients, editing a chromosome and returning the transformed cells to patients.
Gene editing is a potential approach to alter the human genome to treat genetic diseases, viral diseases, and cancer. These approaches are being studied in clinical trials.
Classification
Breadth of definition
In 1986, a meeting at the Institute Of Medicine defined gene therapy as the addition or replacement of a gene in a targeted cell type. In the same year, the FDA announced that it had jurisdiction over approving "gene therapy" without defining the term. The FDA added a very broad definition in 1993 of any treatment that would 'modify or manipulate the expression of genetic material or to alter the biological properties of living cells'. In 2018 this was narrowed to 'products that mediate their effects by transcription or translation of transferred genetic material or by specifically altering host (human) genetic sequences'.
Writing in 2018, in the Journal of Law and the Biosciences, Sherkow et al. argued for a narrower definition of gene therapy than the FDA's in light of new technology that would consist of any treatment that intentionally and permanently modified a cell's genome, with the definition of genome including episomes outside the nucleus but excluding changes due to episomes that are lost over time. This definition would also exclude introducing cells that did not derive from a patient themselves, but include ex vivo approaches, and would not depend on the vector used.
During the COVID-19 pandemic, some academics insisted that the mRNA vaccines for COVID-19 were not gene therapy, in order to prevent the spread of incorrect information that the vaccines could alter DNA, while other academics maintained that the vaccines were a gene therapy because they introduced genetic material into a cell. Fact-checkers, such as Full Fact, Reuters, PolitiFact, and FactCheck.org, said that calling the vaccines a gene therapy was incorrect. Podcast host Joe Rogan was criticized for calling mRNA vaccines gene therapy, as was British politician Andrew Bridgen, with fact checker Full Fact calling for Bridgen to be removed from the Conservative Party for this and other statements.
Genes present or added
Gene therapy encapsulates many forms of adding different nucleic acids to a cell. Gene augmentation adds a new protein-coding gene to a cell. One form of gene augmentation is gene replacement therapy, a treatment for monogenic recessive disorders where a single gene is not functional; an additional functional gene is added. For diseases caused by multiple genes or a dominant gene, gene silencing or gene editing approaches are more appropriate, but gene addition, a form of gene augmentation where a new gene is added, may improve a cell's function without modifying the genes that cause a disorder.
Cell types
Gene therapy may be classified into two types by the type of cell it affects: somatic cell and germline gene therapy.
In somatic cell gene therapy (SCGT), the therapeutic genes are transferred into any cell other than a gamete, germ cell, gametocyte, or undifferentiated stem cell. Any such modifications affect the individual patient only, and are not inherited by offspring. Somatic gene therapy represents mainstream basic and clinical research, in which therapeutic DNA (either integrated in the genome or as an external episome or plasmid) is used to treat disease. Over 600 clinical trials utilizing SCGT are underway in the US. Most focus on severe genetic disorders, including immunodeficiencies, haemophilia, thalassaemia, and cystic fibrosis. Such single gene disorders are good candidates for somatic cell therapy. The complete correction of a genetic disorder or the replacement of multiple genes is not yet possible. Only a few of the trials are in the advanced stages.
In germline gene therapy (GGT), germ cells (sperm or egg cells) are modified by the introduction of functional genes into their genomes. Modifying a germ cell causes all the organism's cells to contain the modified gene. The change is therefore heritable and passed on to later generations. Australia, Canada, Germany, Israel, Switzerland, and the Netherlands prohibit GGT for application in human beings, for technical and ethical reasons, including insufficient knowledge about possible risks to future generations and higher risks versus SCGT. The US has no federal controls specifically addressing human genetic modification (beyond FDA regulations for therapies in general).
In vivo versus ex vivo therapies
[Image: Ex vivo gene therapy]
In in vivo gene therapy, a vector (typically, a virus) is introduced to the patient, which then achieves the desired biological effect by passing the genetic material (e.g. for a missing protein) into the patient's cells. In ex vivo gene therapies, such as CAR-T therapeutics, the patient's own cells (autologous) or healthy donor cells (allogeneic) are modified outside the body (hence, ex vivo) using a vector to express a particular protein, such as a chimeric antigen receptor.
In vivo gene therapy is seen as simpler, since it does not require the harvesting of mitotic cells. However, ex vivo gene therapies are better tolerated and less associated with severe immune responses. The death of Jesse Gelsinger in a trial of an adenovirus-vectored treatment for ornithine transcarbamylase deficiency due to a systemic inflammatory reaction led to a temporary halt on gene therapy trials across the United States. In vivo and ex vivo therapeutics are now both seen as safe.
Gene editing
While the concept of gene replacement therapy is mostly suitable for recessive diseases, novel strategies have been suggested that are capable of also treating conditions with a dominant pattern of inheritance.
The introduction of CRISPR gene editing has opened new doors for its application and utilization in gene therapy, as instead of pure replacement of a gene, it enables correction of the particular genetic defect. Solutions to medical hurdles, such as the eradication of latent human immunodeficiency virus (HIV) reservoirs and correction of the mutation that causes sickle cell disease, may be available as a therapeutic option in the future.
Prosthetic gene therapy aims to enable cells of the body to take over functions they physiologically do not carry out. One example is so-called vision restoration gene therapy, which aims to restore vision in patients with end-stage retinal diseases (Patent: US7824869B2). In end-stage retinal diseases, the photoreceptors, the primary light-sensitive cells of the retina, are irreversibly lost. By means of prosthetic gene therapy, light-sensitive proteins are delivered into the remaining cells of the retina to render them light sensitive and thereby enable them to signal visual information towards the brain.
In vivo, gene editing systems using CRISPR have been used in studies with mice to treat cancer and have been effective at reducing tumors. In vitro, the CRISPR system has been used to treat HPV cancer tumors. Adeno-associated virus and lentivirus-based vectors have been used to introduce the genes for the CRISPR system.
Vectors
The delivery of DNA into cells can be accomplished by multiple methods. The two major classes are recombinant viruses (sometimes called biological nanoparticles or viral vectors) and naked DNA or DNA complexes (non-viral methods).
Viruses
In order to replicate, viruses introduce their genetic material into the host cell, tricking the host's cellular machinery into using it as blueprints for viral proteins. Retroviruses go a stage further by having their genetic material copied into the nuclear genome of the host cell. Scientists exploit this by substituting part of a virus's genetic material with therapeutic DNA or RNA. Like the genetic material (DNA or RNA) in viruses, therapeutic genetic material can be designed to simply serve as a temporary blueprint that degrades naturally, as in non-integrative vectors, or to enter the host's nucleus, becoming a permanent part of the host's nuclear DNA in infected cells.
A number of viruses have been used for human gene therapy, including viruses such as lentivirus, adenoviruses, herpes simplex, vaccinia, and adeno-associated virus.
Adenovirus viral vectors (Ad) temporarily modify a cell's genetic expression with genetic material that is not integrated into the host cell's DNA. As of 2017, such vectors were used in 20% of trials for gene therapy. Adenovirus vectors are mostly used in cancer treatments and novel genetic vaccines such as the Ebola vaccine, vaccines used in clinical trials for HIV and SARS-CoV-2, or cancer vaccines.
Lentiviral vectors based on lentivirus, a retrovirus, can modify a cell's nuclear genome to permanently express a gene, although vectors can be modified to prevent integration. Retroviruses were used in 18% of trials before 2018. Libmeldy is an ex vivo stem cell treatment for metachromatic leukodystrophy which uses a lentiviral vector and was authorized by the European Medicines Agency in 2020.
Adeno-associated virus (AAV) is a virus that is incapable of transmission between cells unless the cell is infected by another virus, a helper virus. Adenovirus and the herpes viruses act as helper viruses for AAV. AAV persists within the cell outside of the cell's nuclear genome for an extended period of time through the formation of concatemers mostly organized as episomes. Genetic material from AAV vectors is integrated into the host cell's nuclear genome at a low frequency and likely mediated by the DNA-modifying enzymes of the host cell. Animal models suggest that integration of AAV genetic material into the host cell's nuclear genome may cause hepatocellular carcinoma, a form of liver cancer. Several AAV investigational agents have been explored in treatment of wet age related macular degeneration by both intravitreal and subretinal approaches as a potential application of AAV gene therapy for human disease.
Non-viral
Non-viral vectors for gene therapy present certain advantages over viral methods, such as large scale production and low host immunogenicity. However, non-viral methods initially produced lower levels of transfection and gene expression, and thus lower therapeutic efficacy. Newer technologies offer promise of solving these problems, with the advent of increased cell-specific targeting and subcellular trafficking control.
Methods for non-viral gene therapy include the injection of naked DNA, electroporation, the gene gun, sonoporation, magnetofection, the use of oligonucleotides, lipoplexes, dendrimers, and inorganic nanoparticles. These therapeutics can be administered directly or through scaffold enrichment.
More recent approaches, such as those performed by companies such as Ligandal, offer the possibility of creating cell-specific targeting technologies for a variety of gene therapy modalities, including RNA, DNA and gene editing tools such as CRISPR. Other companies, such as Arbutus Biopharma and Arcturus Therapeutics, offer non-viral, non-cell-targeted approaches that mainly exhibit liver trophism. In more recent years, startups such as Sixfold Bio, GenEdit, and Spotlight Therapeutics have begun to address the non-viral gene delivery problem. Non-viral techniques offer the possibility of repeat dosing and greater tailorability of genetic payloads, which may make them more likely to displace viral delivery systems in the future.
Companies such as Editas Medicine, Intellia Therapeutics, CRISPR Therapeutics, Casebia, Cellectis, Precision Biosciences, bluebird bio, Excision BioTherapeutics, and Sangamo have developed non-viral gene editing techniques; however, they frequently still use viruses for delivering gene insertion material following genomic cleavage by guided nucleases. These companies focus on gene editing, and still face major delivery hurdles.
BioNTech, Moderna Therapeutics and CureVac focus on delivery of mRNA payloads, which are necessarily non-viral delivery problems.
Alnylam, Dicerna Pharmaceuticals, and Ionis Pharmaceuticals focus on delivery of siRNA and antisense oligonucleotides for gene suppression, which also necessitate non-viral delivery systems.
In academic contexts, a number of laboratories are working on delivery of PEGylated particles, which form serum protein coronas and chiefly exhibit LDL receptor mediated uptake in cells in vivo.
Treatment
Cancer
[Image: Suicide gene therapy graphic used to treat cancer]
There have been attempts to treat cancer using gene therapy. As of 2017, 65% of gene therapy trials were for cancer treatment.
In 2025, a consortium of researchers, in partnership with the National Science Foundation of Iran, conducted an in vitro study to develop a novel formulation of an anti-breast cancer medication that employs gene therapy and intelligent nanocarriers for the first time. This approach enabled researchers to inhibit the proliferation of over 90% of breast cancer cells by concurrently silencing two critical genes (Integrin β3 and IGF-1R) and initiating programmed cell death within 48 hours.
Adenovirus vectors are useful for some cancer gene therapies because adenovirus can transiently insert genetic material into a cell without permanently altering the cell's nuclear genome. These vectors can be used to cause antigens to be added to cancers causing an immune response, or hinder angiogenesis by expressing certain proteins. An Adenovirus vector is used in the commercial products Gendicine and Oncorine. Another commercial product, Rexin G, uses a retrovirus-based vector and selectively binds to receptors that are more expressed in tumors.
One approach, suicide gene therapy, works by introducing genes encoding enzymes that will cause a cancer cell to die. Another approach is the use of oncolytic viruses, such as Oncorine, which are viruses that selectively reproduce in cancerous cells, leaving other cells unaffected.
mRNA has been suggested as a non-viral vector for cancer gene therapy that would temporarily change a cancerous cell's function to create antigens or kill the cancerous cells and there have been several trials.
Afamitresgene autoleucel, sold under the brand name Tecelra, is an autologous T cell immunotherapy used for the treatment of synovial sarcoma. It is a T cell receptor (TCR) gene therapy. It is the first FDA-approved engineered cell therapy for a solid tumor. It uses a self-inactivating lentiviral vector to express a T-cell receptor specific for MAGE-A4, a melanoma-associated antigen.
Genetic diseases
Gene therapy approaches to replace a faulty gene with a healthy gene have been proposed and are being studied for treating some genetic diseases. As of 2017, 11.1% of gene therapy clinical trials targeted monogenic diseases.
Diseases such as sickle cell disease that are caused by autosomal recessive mutations, for which a person's normal phenotype or cell function may be restored in affected cells by a normal copy of the mutated gene, may be good candidates for gene therapy treatment. The risks and benefits related to gene therapy for sickle cell disease are not known.
Gene therapy has been used in the eye. The eye is especially suitable for adeno-associated virus vectors. Voretigene neparvovec is an approved gene therapy to treat patients with vision impairment due to mutations in the RPE65 gene. Alipogene tiparvovec, a treatment for familial lipoprotein lipase (LPL) deficiency, and Zolgensma, for the treatment of spinal muscular atrophy, both use an adeno-associated virus vector.
Infectious diseases
As of 2017, 7% of genetic therapy trials targeted infectious diseases. 69.2% of trials targeted HIV, 11% hepatitis B or C, and 7.1% malaria.
List of gene therapies for treatment of disease
Some genetic therapies have been approved by the US Food and Drug Administration (FDA), the European Medicines Agency (EMA), and for use in Russia and China.
List of approved gene therapies for the treatment of disease, giving the INN, brand name, delivery type, manufacturer, target, and the date of US Food and Drug Administration (FDA) approval or European Medicines Agency (EMA) authorization, where stated:
afamitresgene autoleucel (Tecelra): ex vivo; Adaptimmune; synovial sarcoma; FDA approved August 2024
alipogene tiparvovec (Glybera): in vivo; Chiesi Farmaceutici; lipoprotein lipase deficiency; EMA authorization withdrawn
atidarsagene autotemcel (Libmeldy, Lenmeldy; arylsulfatase A gene encoding autologous CD34+ cells): ex vivo; Orchard Therapeutics; metachromatic leukodystrophy; FDA approved March 2024; EMA authorized December 2020
autologous CD34+ cells (Strimvelis): adenosine deaminase deficiency (ADA-SCID); EMA authorized May 2016
axicabtagene ciloleucel (Yescarta): ex vivo; Kite Pharma; large B-cell lymphoma; FDA approved October 2017; EMA authorized August 2018
beremagene geperpavec (Vyjuvek): in vivo; Krystal Biotech; dystrophic epidermolysis bullosa (DEB); FDA approved May 2023
betibeglogene autotemcel (Zynteglo): beta thalassemia; FDA approved August 2022; EMA authorized May 2019
brexucabtagene autoleucel (Tecartus): ex vivo; Kite Pharma; mantle cell lymphoma and acute lymphoblastic leukemia; FDA approved July 2020; EMA authorized December 2020
cambiogenplasmid (Neovasculgen): vascular endothelial growth factor gene therapy for peripheral artery disease; registered in Russia
delandistrogene moxeparvovec (Elevidys): in vivo; Catalent; Duchenne muscular dystrophy; FDA approved June 2023
eladocagene exuparvovec (Kebilidi, Upstaza): in vivo; PTC Therapeutics; aromatic L-amino acid decarboxylase (AADC) deficiency; FDA approved November 2024; EMA authorized July 2022
elivaldogene autotemcel (Skysona): cerebral adrenoleukodystrophy; EMA authorized July 2021
exagamglogene autotemcel (Casgevy): ex vivo; Vertex Pharmaceuticals; sickle cell disease; FDA approved December 2023
gendicine: head and neck squamous cell carcinoma; approved in China
idecabtagene vicleucel (Abecma): ex vivo; Celgene; multiple myeloma; FDA approved March 2021
lisocabtagene maraleucel (Breyanzi): ex vivo; Juno Therapeutics; B-cell lymphoma; FDA approved February 2021
lovotibeglogene autotemcel (Lyfgenia): ex vivo; Bluebird Bio; sickle cell disease; FDA approved December 2023
nadofaragene firadenovec (Adstiladrin): Ferring Pharmaceuticals; high-risk Bacillus Calmette-Guérin (BCG)-unresponsive non-muscle-invasive bladder cancer (NMIBC) with carcinoma in situ (CIS); FDA approved
obecabtagene autoleucel (Aucatzyl): Autolus Therapeutics; acute lymphoblastic leukemia; FDA approved November 2024
onasemnogene abeparvovec (Zolgensma): in vivo; Novartis Gene Therapies; spinal muscular atrophy type I; FDA approved May 2019; EMA authorized March 2020
prademagene zamikeracel (Zevaskyn): recessive dystrophic epidermolysis bullosa; FDA approved April 2025
revakinagene taroretcel (Encelto): Neurotech Pharmaceuticals; macular telangiectasia type 2; FDA approved March 2025
talimogene laherparepvec (Imlygic): in vivo; Amgen; melanoma; FDA approved October 2015; EMA authorized December 2015
tisagenlecleucel (Kymriah): B-cell lymphoblastic leukemia; EMA authorized August 2018
valoctocogene roxaparvovec (Roctavian): BioMarin International Limited; hemophilia A; EMA authorized August 2022
voretigene neparvovec (Luxturna): in vivo; Spark Therapeutics; biallelic RPE65 mutation-associated Leber congenital amaurosis; FDA approved December 2017; EMA authorized November 2018
Adverse effects, contraindications and hurdles for use
Some of the unsolved problems include:
Off-target effects – The possibility of unwanted, likely harmful, changes to the genome presents a large barrier to the widespread implementation of this technology. Improvements to the specificity of gRNAs and Cas enzymes present viable solutions to this issue, as does refinement of the delivery method of CRISPR. It is likely that different diseases will benefit from different delivery methods.
Short-lived nature – Before gene therapy can become a permanent cure for a condition, the therapeutic DNA introduced into target cells must remain functional and the cells containing the therapeutic DNA must be stable. Problems with integrating therapeutic DNA into the nuclear genome and the rapidly dividing nature of many cells prevent it from achieving long-term benefits. Patients require multiple treatments.
Immune response – Any time a foreign object is introduced into human tissues, the immune system is stimulated to attack the invader. Stimulating the immune system in a way that reduces gene therapy effectiveness is possible. The immune system's enhanced response to viruses that it has seen before reduces the effectiveness of repeated treatments.
Problems with viral vectors – Viral vectors carry the risks of toxicity, inflammatory responses, and gene control and targeting issues.
Multigene disorders – Some commonly occurring disorders, such as heart disease, high blood pressure, Alzheimer's disease, arthritis, and diabetes, are affected by variations in multiple genes, which complicate gene therapy.
Some therapies may breach the Weismann barrier (between soma and germ-line) protecting the testes, potentially modifying the germline, falling afoul of regulations in countries that prohibit the latter practice.
Insertional mutagenesis – If the DNA is integrated in a sensitive spot in the genome, for example in a tumor suppressor gene, the therapy could induce a tumor. This has occurred in clinical trials for X-linked severe combined immunodeficiency (X-SCID) patients, in which hematopoietic stem cells were transduced with a corrective transgene using a retrovirus, and this led to the development of T cell leukemia in 3 of 20 patients. One possible solution is to add a functional tumor suppressor gene to the DNA to be integrated. This may be problematic since the longer the DNA is, the harder it is to integrate into cell genomes. CRISPR technology allows researchers to make much more precise genome changes at exact locations.
Cost – alipogene tiparvovec (Glybera), for example, at a cost of $1.6 million per patient, was reported in 2013, to be the world's most expensive drug.
Deaths
Three patients' deaths have been reported in gene therapy trials, putting the field under close scrutiny. The first was that of Jesse Gelsinger, who died in 1999, because of immune rejection response. One X-SCID patient died of leukemia in 2003. In 2007, a rheumatoid arthritis patient died from an infection; the subsequent investigation concluded that the death was not related to gene therapy.
Regulations
Regulations covering genetic modification are part of general guidelines about human-involved biomedical research. There are no international treaties which are legally binding in this area, but there are recommendations for national laws from various bodies.
The Helsinki Declaration (Ethical Principles for Medical Research Involving Human Subjects) was amended by the World Medical Association's General Assembly in 2008. This document provides principles physicians and researchers must consider when involving humans as research subjects. The Statement on Gene Therapy Research initiated by the Human Genome Organization (HUGO) in 2001, provides a legal baseline for all countries. HUGO's document emphasizes human freedom and adherence to human rights, and offers recommendations for somatic gene therapy, including the importance of recognizing public concerns about such research.
United States
No federal legislation lays out protocols or restrictions about human genetic engineering. This subject is governed by overlapping regulations from local and federal agencies, including the Department of Health and Human Services, the FDA and NIH's Recombinant DNA Advisory Committee. Researchers seeking federal funds for an investigational new drug application (commonly the case for somatic human genetic engineering) must obey international and federal guidelines for the protection of human subjects.
NIH serves as the main gene therapy regulator for federally funded research. Privately funded research is advised to follow these regulations. NIH provides funding for research that develops or enhances genetic engineering techniques and to evaluate the ethics and quality in current research. The NIH maintains a mandatory registry of human genetic engineering research protocols that includes all federally funded projects.
An NIH advisory committee published a set of guidelines on gene manipulation. The guidelines discuss lab safety as well as human test subjects and various experimental types that involve genetic changes. Several sections specifically pertain to human genetic engineering, including Section III-C-1. This section describes required review processes and other aspects when seeking approval to begin clinical research involving genetic transfer into a human patient. The protocol for a gene therapy clinical trial must be approved by the NIH's Recombinant DNA Advisory Committee prior to any clinical trial beginning; this is different from any other kind of clinical trial.
As with other kinds of drugs, the FDA regulates the quality and safety of gene therapy products and supervises how these products are used clinically. Therapeutic alteration of the human genome falls under the same regulatory requirements as any other medical treatment. Research involving human subjects, such as clinical trials, must be reviewed and approved by the FDA and an Institutional Review Board.
Gene doping
Athletes may adopt gene therapy technologies to improve their performance. Gene doping is not known to occur, but multiple gene therapies may have such effects. Kayser et al. argue that gene doping could level the playing field if all athletes receive equal access. Critics claim that any therapeutic intervention for non-therapeutic/enhancement purposes compromises the ethical foundations of medicine and sports.
Genetic enhancement
Genetic engineering could be used to cure diseases, but also to change physical appearance, metabolism, and even improve physical capabilities and mental faculties such as memory and intelligence. Ethical claims about germline engineering include beliefs that every fetus has a right to remain genetically unmodified, that parents hold the right to genetically modify their offspring, and that every child has the right to be born free of preventable diseases. For parents, genetic engineering could be seen as another child enhancement technique to add to diet, exercise, education, training, cosmetics, and plastic surgery. Another theorist claims that moral concerns limit but do not prohibit germline engineering.
A 2020 issue of the journal Bioethics was devoted to moral issues surrounding germline genetic engineering in people.
Possible regulatory schemes include a complete ban, provision to everyone, or professional self-regulation. The American Medical Association's Council on Ethical and Judicial Affairs stated that "genetic interventions to enhance traits should be considered permissible only in severely restricted situations: (1) clear and meaningful benefits to the fetus or child; (2) no trade-off with other characteristics or traits; and (3) equal access to the genetic technology, irrespective of income or other socioeconomic characteristics."
As early in the history of biotechnology as 1990, there have been scientists opposed to attempts to modify the human germline using these new tools, and such concerns have continued as technology progressed. With the advent of new techniques like CRISPR, in March 2015 a group of scientists urged a worldwide moratorium on clinical use of gene editing technologies to edit the human genome in a way that can be inherited. In April 2015, researchers sparked controversy when they reported results of basic research to edit the DNA of non-viable human embryos using CRISPR. A committee of the American National Academy of Sciences and National Academy of Medicine gave qualified support to human genome editing in 2017 once answers have been found to safety and efficiency problems "but only for serious conditions under stringent oversight."
History
1970s and earlier
In 1972, Friedmann and Roblin authored a paper in Science titled "Gene therapy for human genetic disease?". Rogers (1970) was cited for proposing that exogenous good DNA be used to replace the defective DNA in those with genetic defects (Rogers S, New Scientist 1970, p. 194).
1980s
In 1984, a retrovirus vector system was designed that could efficiently insert foreign genes into mammalian chromosomes.
1990s
The first approved gene therapy clinical research in the US took place in September 1990, at the National Institutes of Health (NIH), under the direction of William French Anderson. Four-year-old Ashanti DeSilva received treatment for a genetic defect that left her with adenosine deaminase deficiency (ADA-SCID), a severe immune system deficiency. The defective gene in the patient's blood cells was replaced by the functional variant. Ashanti's immune system was partially restored by the therapy. Production of the missing enzyme was temporarily stimulated, but the new cells with functional genes were not generated. She was able to lead a normal life only with regular injections performed every two months. The effects were successful, but temporary.
Cancer gene therapy was introduced in 1992/93 (Trojan et al. 1993). The treatment of glioblastoma multiforme, the malignant brain tumor whose outcome is always fatal, was done using a vector expressing antisense IGF-I RNA (clinical trial approved by NIH protocol no.1602 24 November 1993, and by the FDA in 1994). This therapy also represents the beginning of cancer immunogene therapy, a treatment which proves to be effective due to the anti-tumor mechanism of IGF-I antisense, which is related to strong immune and apoptotic phenomena.
In 1992, Claudio Bordignon, working at the Vita-Salute San Raffaele University, performed the first gene therapy procedure using hematopoietic stem cells as vectors to deliver genes intended to correct hereditary diseases. In 2002, this work led to the publication of the first successful gene therapy treatment for ADA-SCID. The success of a multi-center trial for treating children with SCID (severe combined immune deficiency or "bubble boy" disease) between 2000 and 2002 was questioned when two of the ten children treated at the trial's Paris center developed a leukemia-like condition. Clinical trials were halted temporarily in 2002, but resumed after regulatory review of the protocol in the US, the United Kingdom, France, Italy, and Germany.
In 1993, Andrew Gobea was born with SCID following prenatal genetic screening. Blood was removed from his mother's placenta and umbilical cord immediately after birth, to acquire stem cells. The allele that codes for adenosine deaminase (ADA) was obtained and inserted into a retrovirus. Retroviruses and stem cells were mixed, after which the viruses inserted the gene into the stem cell chromosomes. Stem cells containing the working ADA gene were injected into Andrew's blood. Injections of the ADA enzyme were also given weekly. For four years T cells (white blood cells), produced by stem cells, made ADA enzymes using the ADA gene. After four years more treatment was needed.
In 1996, Luigi Naldini and Didier Trono developed a new class of gene therapy vectors based on HIV that are capable of infecting non-dividing cells and have since been widely used in clinical and research settings, pioneering the use of lentiviral vectors in gene therapy.
Jesse Gelsinger's death in 1999 impeded gene therapy research in the US. As a result, the FDA suspended several clinical trials pending the reevaluation of ethical and procedural practices.
2000s
The modified gene therapy strategy of antisense IGF-I RNA (NIH no. 1602), using an antisense/triple-helix anti-IGF-I approach, was registered in 2002 as Wiley gene therapy clinical trials no. 635 and 636. The approach has shown promising results in the treatment of six different malignant tumors: glioblastoma and cancers of the liver, colon, prostate, uterus, and ovary (Collaborative NATO Science Programme on Gene Therapy, USA, France, Poland, no. LST 980517, conducted by J. Trojan) (Trojan et al., 2012). This anti-gene antisense/triple-helix therapy has proven to be efficient due to its mechanism of stopping IGF-I expression simultaneously at the translation and transcription levels, strengthening anti-tumor immune and apoptotic phenomena.
2002
Sickle cell disease can be treated in mice. The mice – which have essentially the same defect that causes human cases – were treated with a viral vector to induce production of fetal hemoglobin (HbF), which normally ceases to be produced shortly after birth. In humans, the use of hydroxyurea to stimulate the production of HbF temporarily alleviates sickle cell symptoms. The researchers demonstrated this treatment to be a more permanent means to increase therapeutic HbF production.
A new gene therapy approach repaired errors in messenger RNA derived from defective genes. This technique has the potential to treat thalassaemia, cystic fibrosis and some cancers.
Researchers created liposomes 25 nanometers across that can carry therapeutic DNA through pores in the nuclear membrane.
2003
In 2003, a research team inserted genes into the brain for the first time. They used liposomes coated in a polymer called polyethylene glycol, which unlike viral vectors, are small enough to cross the blood–brain barrier.
Short pieces of double-stranded RNA (short, interfering RNAs or siRNAs) are used by cells to degrade RNA of a particular sequence. If a siRNA is designed to match the RNA copied from a faulty gene, then the abnormal protein product of that gene will not be produced.
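As a minimal illustration of this matching requirement, the guide strand of an siRNA is essentially the reverse complement of the targeted mRNA region. The sketch below uses a made-up sequence, and real siRNA design involves many additional rules (length, overhangs, GC content, off-target screening):

```python
# Minimal illustration of siRNA/target matching: the guide strand is the
# reverse complement of the targeted mRNA region. The sequence below is a
# made-up example; real siRNA design involves many additional rules
# (length, 3' overhangs, GC content, off-target screening, etc.).
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def sirna_guide_for(target_mrna: str) -> str:
    """Return the RNA reverse complement of a target mRNA region."""
    return "".join(COMPLEMENT[base] for base in reversed(target_mrna.upper()))

target = "AUGGCUAGCUAGGAAGCUCGA"   # hypothetical 21-nt region of a faulty gene's mRNA
guide = sirna_guide_for(target)
print(guide)                        # base-pairs with the target, marking it for degradation
```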
Gendicine is a cancer gene therapy that delivers the tumor suppressor gene p53 using an engineered adenovirus. In 2003, it was approved in China for the treatment of head and neck squamous cell carcinoma.
2006
In March, researchers announced the successful use of gene therapy to treat two adult patients for X-linked chronic granulomatous disease, a disease which affects myeloid cells and damages the immune system. The study is the first to show that gene therapy can treat the myeloid system.
In May, a team reported a way to prevent the immune system from rejecting a newly delivered gene. Similar to organ transplantation, gene therapy has been plagued by this problem. The immune system normally recognizes the new gene as foreign and rejects the cells carrying it. The research utilized a newly uncovered network of genes regulated by molecules known as microRNAs. This natural function selectively obscured their therapeutic gene in immune system cells and protected it from discovery. Mice infected with the gene containing an immune-cell microRNA target sequence did not reject the gene.
In August, scientists successfully treated metastatic melanoma in two patients using killer T cells genetically retargeted to attack the cancer cells.
In November, researchers reported on the use of VRX496, a gene-based immunotherapy for the treatment of HIV that uses a lentiviral vector to deliver an antisense gene against the HIV envelope. In a phase I clinical trial, five subjects with chronic HIV infection who had failed to respond to at least two antiretroviral regimens were treated. A single intravenous infusion of autologous CD4 T cells genetically modified with VRX496 was well tolerated. All patients had stable or decreased viral load; four of the five patients had stable or increased CD4 T cell counts. All five patients had stable or increased immune response to HIV antigens and other pathogens. This was the first evaluation of a lentiviral vector administered in a US human clinical trial.
2007
In May 2007, researchers announced the first gene therapy trial for inherited retinal disease. The first operation was carried out on a 23-year-old British male, Robert Johnson, in early 2007.
2008
Leber's congenital amaurosis is an inherited blinding disease caused by mutations in the RPE65 gene. The results of a small clinical trial in children were published in April. Delivery of recombinant adeno-associated virus (AAV) carrying RPE65 yielded positive results. In May, two more groups reported positive results in independent clinical trials using gene therapy to treat the condition. In all three clinical trials, patients recovered functional vision without apparent side-effects.
2009
In September researchers were able to give trichromatic vision to squirrel monkeys. In November 2009, researchers halted a fatal genetic disorder called adrenoleukodystrophy in two children using a lentivirus vector to deliver a functioning version of ABCD1, the gene that is mutated in the disorder.
2010s
2010
An April paper reported that gene therapy addressed achromatopsia (color blindness) in dogs by targeting cone photoreceptors. Cone function and day vision were restored for at least 33 months in two young specimens. The therapy was less efficient for older dogs.
In September it was announced that an 18-year-old male patient in France with beta thalassemia major had been successfully treated. Beta thalassemia major is an inherited blood disease in which beta haemoglobin is missing and patients are dependent on regular lifelong blood transfusions. The technique used a lentiviral vector to transduce the human β-globin gene into purified blood and marrow cells obtained from the patient in June 2007. The patient's haemoglobin levels were stable at 9 to 10 g/dL. About a third of the hemoglobin contained the form introduced by the viral vector and blood transfusions were not needed. Further clinical trials were planned. Bone marrow transplants are the only cure for thalassemia, but 75% of patients do not find a matching donor.
Cancer immunogene therapy using a modified antigene (antisense/triple-helix) approach was introduced in South America in 2010/11 at La Sabana University, Bogotá (Ethical Committee approval 14 December 2010, no. P-004-10). Taking into account the ethical aspects of gene diagnostics and gene therapy targeting IGF-I, tumors expressing IGF-I, i.e. lung and epidermis cancers, were treated (Trojan et al. 2016).
2011
In 2007 and 2008, a man (Timothy Ray Brown) was cured of HIV by repeated hematopoietic stem cell transplantation (see also allogeneic stem cell transplantation, allogeneic bone marrow transplantation, allotransplantation) from a donor with a homozygous delta-32 mutation, which disables the CCR5 receptor. This cure was accepted by the medical community in 2011. It required complete ablation of existing bone marrow, which is very debilitating.
In August two of three subjects of a pilot study were confirmed to have been cured from chronic lymphocytic leukemia (CLL). The therapy used genetically modified T cells to attack cells that expressed the CD19 protein to fight the disease. In 2013, the researchers announced that 26 of 59 patients had achieved complete remission and the original patient had remained tumor-free.
Human HGF plasmid DNA therapy of cardiomyocytes is being examined as a potential treatment for coronary artery disease as well as treatment for the damage that occurs to the heart after myocardial infarction.
In 2011, Neovasculgen was registered in Russia as the first-in-class gene-therapy drug for treatment of peripheral artery disease, including critical limb ischemia; it delivers the gene encoding VEGF. Neovasculgen is a plasmid encoding the CMV promoter and the 165 amino acid form of VEGF.
2012
The FDA approved Phase I clinical trials on thalassemia major patients in the US for 10 participants in July. The study was expected to continue until 2015.
In July 2012, the European Medicines Agency recommended approval of a gene therapy treatment for the first time in either Europe or the United States. The treatment used alipogene tiparvovec (Glybera) to compensate for lipoprotein lipase deficiency, which can cause severe pancreatitis. The recommendation was endorsed by the European Commission in November 2012, and commercial rollout began in late 2014. Alipogene tiparvovec was expected to cost around $1.6 million per treatment in 2012, revised to $1 million in 2015, making it the most expensive medicine in the world at the time. Since then, only the patients treated in clinical trials and one patient who paid the full price for treatment have received the drug.
In December 2012, it was reported that 10 of 13 patients with multiple myeloma were in remission "or very close to it" three months after being injected with a treatment involving genetically engineered T cells to target proteins NY-ESO-1 and LAGE-1, which exist only on cancerous myeloma cells.
2013
In March researchers reported that three of five adult subjects who had acute lymphocytic leukemia (ALL) had been in remission for five months to two years after being treated with genetically modified T cells which attacked cells with the CD19 protein on their surface, i.e. all B cells, cancerous or not. The researchers believed that the patients' immune systems would make normal T cells and B cells after a couple of months. They were also given bone marrow. One patient relapsed and died and one died of a blood clot unrelated to the disease.
Following encouraging Phase I trials, in April, researchers announced they were starting Phase II clinical trials (called CUPID2 and SERCA-LVAD) on 250 patients at several hospitals to combat heart disease. The therapy was designed to increase the levels of SERCA2, a protein in heart muscles, improving muscle function. The U.S. Food and Drug Administration (FDA) granted this a breakthrough therapy designation to accelerate the trial and approval process. In 2016, it was reported that no improvement was found from the CUPID 2 trial.
In July researchers reported promising results for six children with two severe hereditary diseases who had been treated with a partially deactivated lentivirus to replace a faulty gene; outcomes were assessed 7–32 months after treatment. Three of the children had metachromatic leukodystrophy, which causes children to lose cognitive and motor skills. The other children had Wiskott–Aldrich syndrome, which leaves them open to infection, autoimmune diseases, and cancer. Follow-up trials with gene therapy on another six children with Wiskott–Aldrich syndrome were also reported as promising.
In October researchers reported that two children born with adenosine deaminase severe combined immunodeficiency disease (ADA-SCID) had been treated with genetically engineered stem cells 18 months previously and that their immune systems were showing signs of full recovery. Another three children were making progress. In 2014, a further 18 children with ADA-SCID were cured by gene therapy. ADA-SCID children have no functioning immune system and are sometimes known as "bubble children".
Also in October researchers reported that they had treated six people with haemophilia in early 2011 using an adeno-associated virus. Over two years later all six were producing clotting factor.
2014
In January researchers reported that six choroideremia patients had been treated with adeno-associated virus with a copy of REP1. Over a six-month to two-year period all had improved their sight. By 2016, 32 patients had been treated with positive results and researchers were hopeful the treatment would be long-lasting. Choroideremia is an inherited genetic eye disease with no approved treatment, leading to loss of sight.
In March researchers reported that 12 HIV patients had been treated since 2009 in a trial in which their T cells were genetically engineered to carry a rare mutation (CCR5 deficiency) known to protect against HIV, with promising results.
Clinical trials of gene therapy for sickle cell disease were started in 2014.
2015
In February, LentiGlobin BB305, a gene therapy treatment undergoing clinical trials for treatment of beta thalassemia, gained FDA "breakthrough" status after several patients were able to forgo the frequent blood transfusions usually required to treat the disease.
In March, researchers delivered a recombinant gene encoding a broadly neutralizing antibody into monkeys infected with simian HIV; the monkeys' cells produced the antibody, which cleared them of HIV. The technique is named immunoprophylaxis by gene transfer (IGT). Animal tests for antibodies to Ebola, malaria, influenza, and hepatitis were underway.
In March, scientists, including an inventor of CRISPR, Jennifer Doudna, urged a worldwide moratorium on germline gene therapy, writing "scientists should avoid even attempting, in lax jurisdictions, germline genome modification for clinical application in humans" until the full implications "are discussed among scientific and governmental organizations".
Researchers successfully treated a boy with epidermolysis bullosa using skin grafts grown from his own skin cells, genetically altered to repair the mutation that caused his disease.
In November, researchers announced that they had treated a baby girl, Layla Richards, with an experimental treatment using donor T cells genetically engineered using TALEN to attack cancer cells. One year after the treatment she was still free of her cancer (a highly aggressive form of acute lymphoblastic leukaemia [ALL]). Children with highly aggressive ALL normally have a very poor prognosis, and Layla's disease had been regarded as terminal before the treatment.
In December, scientists of major world academies called for a moratorium on inheritable human genome edits, including those related to CRISPR-Cas9 technologies, but said that basic research, including embryo gene editing, should continue.
2016
In April the Committee for Medicinal Products for Human Use of the European Medicines Agency endorsed a gene therapy treatment called Strimvelis, and the European Commission approved it in June. The treatment is for children born with adenosine deaminase deficiency (ADA-SCID) who have no functioning immune system. This was the second gene therapy treatment to be approved in Europe.
In October, Chinese scientists reported they had started a trial to genetically modify T cells from 10 adult patients with lung cancer and reinject the modified T cells back into their bodies to attack the cancer cells. The T cells had the PD-1 protein (which stops or slows the immune response) removed using CRISPR-Cas9.
A 2016 Cochrane systematic review looking at data from four trials on topical cystic fibrosis transmembrane conductance regulator (CFTR) gene therapy does not support its clinical use as a mist inhaled into the lungs to treat cystic fibrosis patients with lung infections. One of the four trials did find weak evidence that liposome-based CFTR gene transfer therapy may lead to a small respiratory improvement for people with CF. This weak evidence is not enough to make a clinical recommendation for routine CFTR gene therapy.
2017
In February Kite Pharma announced results from a clinical trial of CAR-T cells in around a hundred people with advanced non-Hodgkin lymphoma.
In March, French scientists reported on clinical research of gene therapy to treat sickle cell disease.
In August, the FDA approved tisagenlecleucel for acute lymphoblastic leukemia. Tisagenlecleucel is an adoptive cell transfer therapy for B-cell acute lymphoblastic leukemia; T cells from a person with cancer are removed, genetically engineered to express a chimeric antigen receptor (CAR) that reacts to the cancer, and administered back to the person; such cells are known as CAR-T cells. The T cells are engineered to target a protein called CD19 that is common on B cells. This is the first form of gene therapy to be approved in the United States. In October, a similar therapy called axicabtagene ciloleucel was approved for non-Hodgkin lymphoma.
In October, biophysicist and biohacker Josiah Zayner claimed to have performed the very first in-vivo human genome editing in the form of a self-administered therapy.
On 13 November, medical scientists working with Sangamo Therapeutics, headquartered in Richmond, California, announced the first ever in-body human gene editing therapy. The treatment, designed to permanently insert a healthy version of the flawed gene that causes Hunter syndrome, was given to 44-year-old Brian Madeux and is part of the world's first study to permanently edit DNA inside the human body. The success of the gene insertion was later confirmed. Clinical trials by Sangamo involving gene editing using zinc finger nuclease (ZFN) are ongoing.
In December the results of using an adeno-associated virus carrying blood clotting factor VIII to treat nine haemophilia A patients were published. Six of the seven patients on the high-dose regimen increased their levels of blood clotting factor VIII to normal levels. The low- and medium-dose regimens had no effect on the patients' blood clotting levels.
In December, the FDA approved voretigene neparvovec, the first in vivo gene therapy, for the treatment of blindness due to Leber's congenital amaurosis. The quoted price of this treatment covers both eyes.
2019
In May, the FDA approved onasemnogene abeparvovec (Zolgensma) for treating spinal muscular atrophy in children under two years of age. The list price set for a single dose of Zolgensma made it the most expensive drug ever.
In May, the EMA approved betibeglogene autotemcel (Zynteglo) for treating beta thalassemia for people twelve years of age and older.
In July, Allergan and Editas Medicine announced phase I/II clinical trial of AGN-151587 for the treatment of Leber congenital amaurosis 10. This is one of the first studies of a CRISPR-based in vivo human gene editing therapy, where the editing takes place inside the human body. The first injection of the CRISPR-Cas System was confirmed in March 2020.
Exagamglogene autotemcel, a CRISPR-based human gene editing therapy, was used for sickle cell and thalassemia in clinical trials.
2020s
2020
In May, onasemnogene abeparvovec (Zolgensma) was approved by the European Union for the treatment of spinal muscular atrophy in people who either have clinical symptoms of SMA type 1 or who have no more than three copies of the SMN2 gene, irrespective of body weight or age.
In August, Audentes Therapeutics reported that three out of 17 children with X-linked myotubular myopathy participating in the clinical trial of AT132, an AAV8-based gene therapy treatment, had died. It was suggested that the treatment, whose dosage is based on body weight, exerts a disproportionately toxic effect on heavier patients, since the three patients who died were heavier than the others. The trial was put on clinical hold.
On 15 October, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorisation for the medicinal product Libmeldy (autologous CD34+ cell enriched population that contains hematopoietic stem and progenitor cells transduced ex vivo using a lentiviral vector encoding the human arylsulfatase A gene), a gene therapy for the treatment of children with the "late infantile" (LI) or "early juvenile" (EJ) forms of metachromatic leukodystrophy (MLD). The active substance of Libmeldy consists of the child's own stem cells which have been modified to contain working copies of the ARSA gene. When the modified cells are injected back into the patient as a one-time infusion, the cells are expected to start producing the ARSA enzyme that breaks down the build-up of sulfatides in the nerve cells and other cells of the patient's body. Libmeldy was approved for medical use in the EU in December 2020.
On 15 October, Lysogene, a French biotechnology company, reported the death of a patient who had received LYS-SAF302, an experimental gene therapy treatment for mucopolysaccharidosis type IIIA (Sanfilippo syndrome type A).
2021
In May, a new method using an altered version of HIV as a lentiviral vector was reported in the treatment of 50 children with ADA-SCID, with positive results in 48 of them. This method is expected to be safer than the retroviral vectors commonly used in earlier studies of SCID, in which the development of leukemia was sometimes observed; it had already been used in 2019, but in a smaller group of patients with X-SCID.
In June a clinical trial on six patients affected with transthyretin amyloidosis reported a reduction in the serum concentration of misfolded transthyretin (TTR) protein through CRISPR-based inactivation of the TTR gene in liver cells, with mean reductions of 52% and 87% in the lower- and higher-dose groups, respectively. This was done in vivo, without taking cells out of the patient to edit them and reinfuse them later.
In July, results of a small phase I gene therapy study were published, reporting restoration of dopamine production in seven patients between 4 and 9 years old affected by aromatic L-amino acid decarboxylase (AADC) deficiency.
2022
In February, the first ever gene therapy for Tay–Sachs disease was announced. It uses an adeno-associated virus to deliver a correct copy of the HEXA gene, whose mutation causes the disease, to brain cells. Only two children were part of a compassionate-use trial; both showed improvements over the natural course of the disease and no vector-related adverse events.
In May, eladocagene exuparvovec was recommended for approval by the European Commission.
In July, results were announced for FLT180, a gene therapy candidate for haemophilia B. It uses an adeno-associated virus (AAV) to restore production of the clotting factor IX (FIX) protein; normal levels of the protein were observed with low doses of the therapy, but immunosuppression was needed to decrease the risk of vector-related immune responses.
In December, a 13-year-old girl who had been diagnosed with T-cell acute lymphoblastic leukaemia was successfully treated at Great Ormond Street Hospital (GOSH), in the first documented use of therapeutic gene editing for this purpose, after undergoing six months of an experimental treatment when all other treatments had failed. The procedure included reprogramming healthy T cells to destroy the cancerous T cells, first to rid her of leukaemia, and then rebuilding her immune system using healthy immune cells. The GOSH team used base editing, and had previously treated a case of acute lymphoblastic leukaemia in 2015 using TALENs.
2023
In May 2023, the FDA approved beremagene geperpavec (Vyjuvek) for the treatment of wounds in people with dystrophic epidermolysis bullosa (DEB). It is applied as a topical gel that delivers a herpes simplex virus type 1 (HSV-1) vector encoding the collagen type VII alpha 1 chain (COL7A1) gene, which is dysfunctional in those affected by DEB. One trial found that 65% of the Vyjuvek-treated wounds had completely closed at 24 weeks, compared with 26% of the placebo-treated wounds. Its use as an eyedrop has also been reported, with good results, in a patient with DEB who had vision loss due to widespread blistering.
In June 2023, the FDA gave accelerated approval to Elevidys for Duchenne muscular dystrophy (DMD), but only for boys 4 to 5 years old, as they are more likely to benefit from the therapy. The therapy consists of a one-time intravenous infusion of a virus (AAV rh74 vector) that delivers a functioning "microdystrophin" gene (138 kDa) into the muscle cells, to act in place of the normal dystrophin (427 kDa) that is mutated in this disease.
In July 2023, it was reported that a new method had been developed to modulate gene expression using direct electric current.
In December 2023, two gene therapies were approved for sickle cell disease, exagamglogene autotemcel and lovotibeglogene autotemcel.
2024
In November 2024, the FDA granted accelerated approval to eladocagene exuparvovec-tneq (Kebilidi, PTC Therapeutics), a direct-to-brain gene therapy for aromatic L-amino acid decarboxylase deficiency. It uses a recombinant adeno-associated virus serotype 2 (rAAV2) to deliver a functioning DOPA decarboxylase (DDC) gene directly into the putamen, increasing the AADC enzyme and restoring dopamine production. It is administered through a stereotactic surgical procedure.
List of gene therapies
Gene therapy for color blindness
Gene therapy for epilepsy
Gene therapy for osteoarthritis
Gene therapy in Parkinson's disease
Gene therapy of the human retina
See also
Molecular oncology
References
Further reading
External links
Category:Applied genetics
Category:Approved gene therapies
Category:Bioethics
Category:Biotechnology
Category:Medical genetics
Category:Molecular biology
Category:Molecular genetics
Category:Gene delivery
Category:1989 introductions
Category:1996 introductions
Category:1989 in biotechnology
Category:Genetic engineering
medicine_health | 9,432
13308 | Hittites | https://en.wikipedia.org/wiki/Hittites
The Hittites () were an Anatolian Indo-European people who formed one of the first major civilizations of the Bronze Age in West Asia. Possibly originating from beyond the Black Sea, they settled in modern-day Turkey in the early 2nd millennium BC. The Hittites formed a series of polities in north-central Anatolia, including the kingdom of Kussara (before 1750 BC), the Kanesh or Nesha Kingdom (–1650 BC), and an empire centered on their capital, Hattusa (around 1650 BC). Known in modern times as the Hittite Empire, it reached its peak during the mid-14th century BC under Šuppiluliuma I, when it encompassed most of Anatolia and parts of the northern Levant and Upper Mesopotamia, bordering the rival empires of the Hurri-Mitanni and Assyrians.
Between the 15th and 13th centuries BC, the Hittites were one of the dominant powers of the Near East, coming into conflict with the New Kingdom of Egypt, the Middle Assyrian Empire, and the Empire of Mitanni. By the 12th century BC, much of the Hittite Empire had been annexed by the Middle Assyrian Empire, with the remainder being sacked by Phrygian newcomers to the region. From the late 12th century BC, during the Late Bronze Age collapse, the Hittites splintered into several small independent states, some of which survived until the eighth century BC before succumbing to the Neo-Assyrian Empire; lacking a unifying continuity, their descendants scattered and ultimately merged into the modern populations of the Levant and Mesopotamia."Sea Peoples". Ancient History Encyclopedia. September 2009.
The Hittite language—referred to by its speakers as "the language of Nesa"—was a distinct member of the Anatolian branch of the Indo-European language family; along with the closely related Luwian language, it is the oldest historically attested Indo-European language. The history of the Hittite civilization is known mostly from cuneiform texts found in their former territories, and from diplomatic and commercial correspondence found in the various archives of Assyria, Babylonia, Egypt and the broader Middle East; the decipherment of these texts was a key event in the history of Indo-European studies.
Scholars once attributed the development of iron-smelting to the Hittites, who were believed to have monopolized ironworking during the Bronze Age. This theory has been increasingly contested in the 21st century, with the Late Bronze Age collapse, and subsequent Iron Age, seeing the slow, comparatively continuous spread of ironworking technology across the region. While there are some iron objects from Bronze Age Anatolia, the number is comparable to that of iron objects found in Egypt, Mesopotamia and in other places from the same period; and only a small number of these objects are weapons.Waldbaum, Jane C. (1978). From Bronze to Iron. Gothenburg: Paul Astöms Förlag. pp. 56–58. X-ray fluorescence spectrometry suggests that most or all irons from the Bronze Age are derived from meteorites. The Hittite military also made successful use of chariots.
Modern interest in the Hittites increased with the founding of the Republic of Turkey in 1923. The Hittites attracted the attention of Turkish archaeologists such as Halet Çambel and Tahsin Özgüç. During this period, the new field of Hittitology also influenced the naming of Turkish institutions, such as the state-owned Etibank ("Hittite bank"), and the foundation of the Museum of Anatolian Civilizations in Ankara, built west of the Hittite capital of Hattusa, which houses the world's most comprehensive exhibition of Hittite art and artifacts.
Etymology
The Hittites called their kingdom Hattusa (Hatti in Akkadian), a name received from the Hattians, an earlier people who had inhabited and ruled the central Anatolian region until the beginning of the second millennium BC, and who spoke an unrelated language known as Hattic.Ardzinba, Vladislav. (1974): Some Notes on the Typological Affinity Between Hattian and Northwest Caucasian (Abkhazo-Adygian) Languages. In: "Internationale Tagung der Keilschriftforscher der sozialistischen Länder", Budapest, 23–25. April 1974. Zusammenfassung der Vorträge (Assyriologica 1), pp. 10–15. The modern conventional name "Hittites" is due to the initial identification of the people of Hattusa with the Biblical Hittites by 19th-century archaeologists. The Hittites would have called themselves something closer to "Neshites" or "Neshians" after the city of Nesha, which flourished for some two hundred years until a king named Labarna renamed himself Hattusili I (meaning "the man of Hattusa") sometime around 1650 BC and established his capital city at Hattusa.
Archeological discovery
Biblical background
Before the archeological discoveries that revealed the Hittite civilization, the only source of information about the Hittites had been the Hebrew Bible. Francis William Newman expressed the critical view, common in the early 19th century, that, "no Hittite king could have compared in power to the King of Judah...".
As the discoveries in the second half of the 19th century revealed the scale of the Hittite kingdom, Archibald Sayce asserted that, rather than being compared to Judah, the Anatolian civilization "[was] worthy of comparison to the divided Kingdom of Egypt", and was "infinitely more powerful than that of Judah".The Hittites: the story of a forgotten empire By Archibald Henry Sayce Queen's College, Oxford. October 1888. Introduction Sayce and other scholars also noted that Judah and the Hittites were never enemies in the Hebrew texts; in the Book of Kings, they supplied the Israelites with cedar, chariots, and horses, and in the Book of Genesis were friends and allies to Abraham. Uriah the Hittite was a captain in King David's army and counted as one of his "mighty men" in 1 Chronicles 11.
Initial discoveries
French scholar Charles Texier found the first Hittite ruins in 1834 but did not identify them as such.
The first archaeological evidence for the Hittites appeared in tablets found at the karum of Kanesh (now called Kültepe), containing records of trade between Assyrian merchants and a certain "land of Hatti". Some names in the tablets were neither Hattic nor Assyrian, but clearly Indo-European.
The script on a monument at Boğazkale by a "People of Hattusas" discovered by William Wright in 1884 was found to match peculiar hieroglyphic scripts from Aleppo and Hama in Northern Syria. In 1887, excavations at Amarna in Egypt uncovered the diplomatic correspondence of Pharaoh Amenhotep III and his son, Akhenaten. Two of the letters from a "kingdom of Kheta"—apparently located in the same general region as the Mesopotamian references to "land of Hatti"—were written in standard Akkadian cuneiform, but in an unknown language; although scholars could interpret its sounds, no one could understand it. Shortly after this, Sayce proposed that Hatti or Khatti in Anatolia was identical with the "kingdom of Kheta" mentioned in these Egyptian texts, as well as with the biblical Hittites. Others, such as Max Müller, agreed that Khatti was probably Kheta, but proposed connecting it with Biblical Kittim rather than with the Biblical Hittites. Sayce's identification came to be widely accepted over the course of the early 20th century; and the name "Hittite" has become attached to the civilization uncovered at Boğazköy.
During sporadic excavations at Boğazköy (Hattusa) that began in 1906, the archaeologist Hugo Winckler found a royal archive with 10,000 tablets, inscribed in cuneiform Akkadian and the same unknown language as the Egyptian letters from Kheta—thus confirming the identity of the two names. He also proved that the ruins at Boğazköy were the remains of the capital of an empire that, at one point, controlled northern Syria.
Under the direction of the German Archaeological Institute, excavations at Hattusa have been under way since 1907, with interruptions during the world wars. Kültepe was successfully excavated by Professor Tahsin Özgüç from 1948 until his death in 2005. Smaller scale excavations have also been carried out in the immediate surroundings of Hattusa, including the rock sanctuary of Yazılıkaya, which contains numerous rock reliefs portraying the Hittite rulers and the gods of the Hittite pantheon.
Writings
The Hittites used a variation of cuneiform called Hittite cuneiform. Archaeological expeditions to Hattusa have discovered entire sets of royal archives on cuneiform tablets, written either in Akkadian, the diplomatic language of the time, or in the various dialects of the Hittite confederation.The Hittite Empire. Chapter V. Vahan Kurkjian
Museums
The Museum of Anatolian Civilizations in Ankara, Turkey houses the richest collection of Hittite and Anatolian artifacts.
Geography
The Hittite kingdom was centered on the lands surrounding Hattusa and Neša (Kültepe), known as "the land Hatti" (). After Hattusa was made the capital, the area encompassed by the bend of the Kızılırmak River (Hittite Marassantiya, Greek Halys) was considered the core of the Empire, and some Hittite laws make a distinction between "this side of the river" and "that side of the river". For example, the bounty for an escaped slave who had fled beyond the river is higher than for a slave caught on the near side.
To the west and south of the core territory lay the region known as Luwiya in the earliest Hittite texts. This terminology was replaced by the names Arzawa and Kizzuwatna with the rise of those kingdoms.John Marangozis (2003) A Short Grammar of Hieroglyphic Luwian Nevertheless, the Hittites continued to refer to the language that originated in these areas as Luwian. Prior to the rise of Kizzuwatna, the heart of that territory in Cilicia was first referred to by the Hittites as Adaniya. Upon its revolt from the Hittites during the reign of Ammuna, it assumed the name of Kizzuwatna and successfully expanded northward to encompass the lower Anti-Taurus Mountains as well. To the north lived the mountain people called the Kaskians. To the southeast of the Hittites lay the Hurrian empire of Mitanni.
At its peak during the reign of Muršili II, the Hittite empire stretched from Arzawa in the west to Mitanni in the east, and included many of the Kaskian territories north as far as Hayasa-Azzi in the far north-east, as well as south into Canaan near the southern border of Lebanon.
History
Origins
The ancestors of the Hittites came into Anatolia between 4400 and 4100 BC, when the Anatolian language family split from (Proto)-Indo-European.Kloekhorst, Alwin, (2022). "Anatolian" , in: Thomas Olander (ed.), The Indo-European Language Family: A Phylogenetic Perspective, Cambridge University Press, p. 78: "...the Anatolian split may be dated to the period between 4400–4100 BCE. If Proto-Anatolian indeed first broke up into its daughter languages around the thirty-first century BCE...it would mean that it had some 1,300–1000 years to undergo the specific innovations that define Anatolian as a separate branch..." Recent genetic and archaeological research has indicated that Proto-Anatolian speakers arrived in this region sometime between 5000 and 3000 BC.Lazaridis, Iosif, et al., (2022). "The genetic history of the Southern Arc: A bridge between West Asia and Europe", in: Science, 26 Aug 2022, Vol 377, Issue 6609, [Research Article Summary, p. 1]: "Around 7000-5000 years ago, people with ancestry from the Caucasus [...] moved west into Anatolia [...] Some of these migrants may have spoken ancestral forms of Anatolian [...]" The Proto-Hittite language developed around 2100 BC,Kloekhorst, Alwin, (2022). "Anatolian" , in: Thomas Olander (ed.), The Indo-European Language Family: A Phylogenetic Perspective, Cambridge University Press, p. 75: "...a Proto-Hittite ancestor language that may have been spoken only a few generations before the oldest attestations of Kanišite Hittite (twentieth century BCE), i.e. around 2100 BCE..." and the Hittite language itself is believed to have been in use in Central Anatolia between the 20th and 12th centuries BC.Kroonen, Guus, et al., (2018). "Linguistic supplement to Damgaard et al. 2018: Early Indo-European languages, Anatolian, Tocharian and Indo-Iranian" , in Zenodo 2018, p. 3: "...The Anatolian branch is an extinct subclade of the Indo-European language family attested from the 25th century BCE onwards (see below) that consists of Hittite (known 20th–12th centuries BCE), Luwian (known 20th–7th centuries BCE), and a number of less well-attested members, such as Carian, Lycian, Lydian, and Palai..."
The Hittites are first associated with the kingdom of Kussara sometime prior to 1750 BC.
Hittites in Anatolia during the Bronze Age coexisted with and gradually absorbed the Hattians and Hurrians, whether by means of conquest or by assimilation... In archaeological terms, the relationship of the Hittites to the Ezero culture of the Balkans and the Maykop culture of the Caucasus had previously been considered within the migration framework.
Analyses by David W. Anthony in 2007 concluded that steppe herders who were archaic Indo-European speakers spread into the lower Danube valley about 4200–4000 BC, either causing or taking advantage of the collapse of Old Europe. He thought their languages "probably included archaic Proto-Indo-European dialects of the kind partly preserved later in Anatolian," and that their descendants later moved into Anatolia at an unknown time but maybe as early as 3000 BC.
J. P. Mallory also thought it was likely that the Anatolians reached the Near East from the north either via the Balkans or the Caucasus in the 3rd millennium BC. According to Parpola, the appearance of Indo-European speakers from Europe into Anatolia, and the appearance of Hittite, was related to later migrations of Proto-Indo-European speakers from the Yamnaya culture into the Danube Valley at c. 2800 BC, which was in line with the "customary" assumption that the Anatolian Indo-European language was introduced into Anatolia sometime in the third millennium BC.
However, Petra Goedegebuure has shown that the Hittite language borrowed many words related to agriculture from cultures on its eastern borders, which points to a route across the Caucasus.
A team at the David Reich Lab demonstrated that the Hittite route must have been via the Caucasus and not the Balkans, since the Yamnaya expansion into the Balkans carried a component of Eastern Hunter-Gatherer ancestry that does not exist in any ancient Anatolian DNA samples. This also indicates that the Hittites and their cousin groups split off from the Proto-Indo-Europeans before the formation of the Yamnaya, who did admix with Eastern Hunter-Gatherers.
The dominant indigenous inhabitants in central Anatolia were Hurrians and Hattians who spoke non-Indo-European languages. Some have argued that Hattic was a Northwest Caucasian language, but its affiliation remains uncertain, whilst the Hurrian language was a near-isolate (i.e. it was one of only two or three languages in the Hurro-Urartian family). There were also Assyrian colonies in the region during the Old Assyrian Empire (2025–1750 BC); it was from the Assyrian speakers of Upper Mesopotamia that the Hittites adopted the cuneiform script. It took some time before the Hittites established themselves following the collapse of the Old Assyrian Empire in the mid-18th century BC, as is clear from some of the surviving texts. For several centuries there were separate Hittite groups, usually centered on various cities. But then strong rulers with their center in Hattusa (modern Boğazkale) succeeded in bringing these together and conquering large parts of central Anatolia to establish the Hittite kingdom.
Early period
The Hittite state was formed from many small polities in North-Central Anatolia, at the banks of the Kızılırmak River, during the Middle Bronze Age (c. 1900–1650 BC).Matessi, Alvise, (2021). "The ways of an empire: Continuity and change of route landscapes across the Taurus during the Hittite Period (ca. 1650–1200 BCE)" , in: Journal of Anthropological Archaeology, Volume 62, June 2021: "...the Hittite state emerged in Hatti, in the bend of the Kızılırmak, from a mosaic of canton polities occupying North-Central Anatolia during the Middle Bronze Age (MBA; ca. 1900–1650 BCE)." The early history of the Hittite kingdom is known through four "cushion-shaped" tablets, (classified as KBo 3.22, KBo 17.21+, KBo 22.1, and KBo 22.2), not made in Ḫattuša, but probably created in Kussara, Nēša, or another site in Anatolia, that may first have been written in the 18th century BC, in Old Hittite language, and three of them using the so-called "Old Script" (OS);Kloekhorst, Alwin, and Willemijn Waal, (2019). "A Hittite scribal tradition predating the tablet collections of Ḫattuša?: The origin of the 'cushion-shaped' tablets KBo 3.22, KBo 17.21+, KBo 22.1, and KBo 22.2.", in: Zeitschrift Für Assyriologie Und Vorderasiatische Archäologie, 109(2), p. 190: "...Three of the four documents that have this peculiar 'cushion-shape' are generally regarded as showing Old Script (OS): KBo 3.22, KBo 17.21+, and KBo 22.1..." although most of the remaining tablets survived only as Akkadian copies made in the 14th and 13th centuries BC. These reveal a rivalry within two branches of the royal family up to the Middle Kingdom; a northern branch first based in Zalpuwa and secondarily Hattusa, and a southern branch based in Kussara (still not found) and the former Assyrian colony of Kanesh. These are distinguishable by their names; the northerners retained language isolate Hattian names, and the southerners adopted Indo-European Hittite and Luwian names.
Zalpuwa first attacked Kanesh under Uhna in 1833 BC. And during this kārum period, when the merchant colony of the Old Assyrian Empire was flourishing in the site, and before the conquest of Pithana, the following local kings reigned in Kaneš: Ḫurmili (prior to 1790 BC), Paḫanu (a short time in 1790 BC), Inar (–1775 BC), and Waršama (–1750 BC).Kloekhorst, Alwin, (2021). "A new interpretation of the Old Hittite Zalpa-text (CTH 3.1): Nēša as the capital under Ḫuzzii̯a I, Labarna I, and Ḫattušili I" , in Journal of the American Oriental Society, Vol.141, No. 3, p. 564.
One set of tablets, known collectively as the Anitta text (ed. StBoT 18), begins by telling how Pithana, the king of Kussara, conquered neighbouring Neša (Kanesh); this conquest took place around 1750 BC.Kloekhorst, Alwin, (2021). "A new interpretation of the Old Hittite Zalpa-text (CTH 3.1): Nēša as the capital under Ḫuzzii̯a I, Labarna I, and Ḫattušili I" , in Journal of the American Oriental Society, Vol. 141, No. 3, p. 564: "...Around 1750 BCE, Pitḫāna, king of Kuššara, conquered Nēša and took over power. He was succeeded by his son Anitta..." However, the real subject of these tablets is Pithana's son Anitta ( BC), who continued where his father left off and conquered several northern cities, including Hattusa, which he cursed, and also Zalpuwa. This was likely propaganda for the southern branch of the royal family against the northern branch, who had fixed on Hattusa as capital. Another set, the Tale of Zalpuwa, supports Zalpuwa and exonerates the later Ḫattušili I from the charge of sacking Kanesh.
Anitta was succeeded by Zuzzu ( BC); but sometime in 1710–1705 BC, Kanesh was destroyed, taking the long-established Assyrian merchant trading system with it. A Kussaran noble family survived to contest the Zalpuwan/Hattusan family, though whether these were of the direct line of Anitta is uncertain.
Meanwhile, the lords of Zalpa lived on. Huzziya I (the "elder" Huzziya), descendant of a Huzziya of Zalpa, took over Hatti. His son-in-law Labarna I, a southerner from Hurma, usurped the throne but made sure to adopt Huzziya's grandson Ḫattušili as his own son and heir. The location of the land of Hurma is believed to be in the mountains south of Kussara.Joost Blasweiler (2020), The kingdom of Hurma during the reign of Labarna and Hattusili. Part I. academia.edu
Old Kingdom
The founding of the Hittite Kingdom is attributed to either Labarna I or Hattusili I (the latter might also have had Labarna as a personal name), who conquered the area south and north of Hattusa. Hattusili I campaigned as far as the Semitic Amorite kingdom of Yamkhad in Syria, where he attacked, but did not capture, its capital of Aleppo. Hattusili I did eventually capture Hattusa and was credited for the foundation of the Hittite Empire.
Hattusili was king, and his sons, brothers, in-laws, family members, and troops were all united. Wherever he went on campaign he controlled the enemy land with force. He destroyed the lands one after the other, took away their power, and made them the borders of the sea. When he came back from campaign, however, each of his sons went somewhere to a country, and in his hand the great cities prospered. But, when later the princes' servants became corrupt, they began to devour the properties, conspired constantly against their masters, and began to shed their blood.
This excerpt from The Edict of Telepinu, dating to the 16th century BC, is supposed to illustrate the unification, growth, and prosperity of the Hittites under his rule. It also illustrates the corruption of "the princes", believed to be his sons. The lack of sources leaves it uncertain how the corruption was addressed. On his deathbed, Hattusili I chose his grandson, Mursili I (or Murshilish I), as his heir.
Mursili continued the conquests of Hattusili I. In 1595 BC (middle chronology) or 1587 BC (low middle chronology), Mursili I conducted a great raid down the Euphrates River, bypassing Assyria and sacking Mari and Babylon, ejecting the Amorite rulers of the Old Babylonian Empire in the process. Rather than incorporate Babylonia into Hittite domains, Mursili seems to have instead turned control of Babylonia over to his Kassite allies, who were to rule it for the next four centuries. Due to fear of revolts at home, he did not remain in Babylon for long. This lengthy campaign strained the resources of Hatti, and left the capital in a state of near-anarchy. Mursili was assassinated by his brother-in-law Hantili I during his journey back to Hattusa or shortly after his return home, and the Hittite Kingdom was plunged into chaos. Hantili took the throne. He was able to escape multiple murder attempts on himself; however, his family did not. His wife, Harapsili, and her son were murdered. In addition, other members of the royal family were killed by Zidanta I, who was then murdered by his own son, Ammuna. All of the internal unrest among the Hittite royal family led to a decline of power. The Hurrians, a people living in the mountainous region along the upper Tigris and Euphrates rivers in modern south east Turkey, took advantage of the situation to seize Aleppo and the surrounding areas for themselves, as well as the coastal region of Adaniya, renaming it Kizzuwatna (later Cilicia). Throughout the remainder of the 16th century BC, the Hittite kings were held to their homelands by dynastic quarrels and warfare with the Hurrians. The Hurrians became the center of power in Anatolia. The campaigns into Amurru and southern Mesopotamia may be responsible for the reintroduction of cuneiform writing into Anatolia, since the Hittite script is quite different from that of the preceding Assyrian colonial period.
The Hittites entered a weak phase of obscure records, insignificant rulers, and reduced domains. This pattern of expansion under strong kings followed by contraction under weaker ones, was to be repeated over and over through the Hittite Kingdom's 500-year history, making events during the waning periods difficult to reconstruct. The political instability of these years of the Old Hittite Kingdom can be explained in part by the nature of the Hittite kingship at that time. During the Old Hittite Kingdom prior to 1400 BC, the king of the Hittites was not viewed by his subjects as a "living god" like the pharaohs of Egypt, but rather as a first among equals. Only in the later period from 1400 BC until 1200 BC did the Hittite kingship become more centralized and powerful. Also in earlier years the succession was not legally fixed, enabling "War of the Roses"-style rivalries between northern and southern branches.
The next monarch of note following Mursili I was Telepinu (), who won a few victories to the southwest, apparently by allying himself with one Hurrian state (Kizzuwatna) against another. Telepinu also attempted to secure the lines of succession.
Middle Kingdom
The last monarch of the Old Kingdom, Telepinu, reigned until about 1500 BC. Telepinu's reign marked the end of the "Old Kingdom" and the beginning of the lengthy weak phase known as the "Middle Kingdom". The period of the 15th century BC is largely unknown with few surviving records. Part of the reason for both the weakness and the obscurity is that the Hittites were under constant attack, mainly from the Kaskians, a non-Indo-European people settled along the shores of the Black Sea. The capital once again went on the move, first to Sapinuwa and then to Samuha. There is an archive in Sapinuwa, but it has not been adequately translated to date.
This period segues into the "Hittite Empire period" proper, which dates from the reign of Tudhaliya I.
One innovation that can be credited to these early Hittite rulers is the practice of conducting treaties and alliances with neighboring states; the Hittites were thus among the earliest known pioneers in the art of international politics and diplomacy. This is also when the Hittite religion adopted several gods and rituals from the Hurrians.
New Kingdom
With the reign of Tudhaliya I (who may actually not have been the first of that name; see also Tudhaliya), the Hittite Kingdom re-emerged from the fog of obscurity and entered the "Hittite Empire period". Many changes were afoot during this time, not the least of which was a strengthening of the kingship. Settlement of the Hittites progressed in the Empire period. However, the Hittite people tended to settle in the older lands of south Anatolia rather than the lands of the Aegean. As this settlement progressed, treaties were signed with neighboring peoples. During the Hittite Empire period the kingship became hereditary and the king took on a "superhuman aura" and began to be referred to by the Hittite citizens as "My Sun". The kings of the Empire period began acting as a high priest for the whole kingdom, making an annual tour of the Hittite holy cities, conducting festivals and supervising the upkeep of the sanctuaries.
During his reign, King Tudhaliya I, again allied with Kizzuwatna, vanquished the Hurrian states of Aleppo and Mitanni, and expanded to the west at the expense of Arzawa (a Luwian state).
Another weak phase followed Tudhaliya I, and the Hittites' enemies from all directions were able to advance even to Hattusa and raze it. However, the kingdom recovered its former glory under Šuppiluliuma I, who again conquered Aleppo. Mitanni was reduced to vassalage by the Assyrians under his son-in-law, and he defeated Carchemish, another Amorite city-state. With his own sons placed over all of these new conquests and Babylonia still in the hands of the allied Kassites, this left Šuppiluliuma the supreme power broker in the known world, alongside Assyria and Egypt, and it was not long before Egypt was seeking an alliance by marriage of another of his sons with the widow of Tutankhamen. That son was evidently murdered before reaching his destination, and this alliance was never consummated. However, the Middle Assyrian Empire (1365–1050 BC) once more began to grow in power with the accession of Ashur-uballit I in 1365 BC. Ashur-uballit I attacked and defeated Mattiwaza the Mitanni king despite attempts by the Hittite king Šuppiluliuma I, now fearful of growing Assyrian power, to preserve his throne with military support. The lands of the Mitanni and Hurrians were duly appropriated by Assyria, enabling it to encroach on Hittite territory in eastern Asia Minor, and Adad-nirari I annexed Carchemish and northeast Syria from the control of the Hittites.
While Šuppiluliuma I reigned, the Hittite Empire was devastated by an epidemic of tularemia. The epidemic afflicted the Hittites for decades and tularemia killed Šuppiluliuma I and his successor, Arnuwanda II. After Šuppiluliuma I's rule, and the brief reign of his eldest son, Arnuwanda II, another son, Mursili II, became king (). Having inherited a position of strength in the east, Mursili was able to turn his attention to the west, where he attacked Arzawa. At a point when the Hittites were weakened by the tularemia epidemic, the Arzawans attacked the Hittites, who repelled the attack by sending infected rams to the Arzawans. This was the first recorded use of biological warfare. Mursili also attacked a city known as Millawanda (Miletus), which was under the control of Ahhiyawa. More recent research based on new readings and interpretations of the Hittite texts, as well as of the material evidence for Mycenaean contacts with the Anatolian mainland, came to the conclusion that Ahhiyawa referred to Mycenaean Greece, or at least to a part of it.
Battle of Kadesh
Hittite prosperity was mostly dependent on control of the trade routes and metal sources. Because of the importance of Northern Syria to the vital routes linking the Cilician gates with Mesopotamia, defense of this area was crucial, and was soon put to the test by Egyptian expansion under Pharaoh Ramesses II. The outcome of the Battle of Kadesh is uncertain, though it seems that the timely arrival of Egyptian reinforcements prevented total Hittite victory. The Egyptians forced the Hittites to take refuge in the fortress of Kadesh, but their own losses prevented them from sustaining a siege. This battle took place in the 5th year of Ramesses ( by the most commonly used chronology).
Downfall and demise of the kingdom
After this date, the power of both the Hittites and Egyptians began to decline yet again because of the power of the Assyrians. The Assyrian king Shalmaneser I had seized the opportunity to vanquish Hurria and Mitanni, occupy their lands, and expand up to the head of the Euphrates, while Muwatalli was preoccupied with the Egyptians. The Hittites had vainly tried to preserve the Mitanni Kingdom with military support. Assyria now posed just as great a threat to Hittite trade routes as Egypt ever had. Muwatalli's son, Urhi-Teshub, took the throne and ruled as king for seven years as Mursili III before being ousted by his uncle, Hattusili III after a brief civil war. In response to increasing Assyrian annexation of Hittite territory, he concluded a peace and alliance with Ramesses II (also fearful of Assyria), presenting his daughter's hand in marriage to the Pharaoh. The Treaty of Kadesh, one of the oldest completely surviving treaties in history, fixed their mutual boundaries in southern Canaan, and was signed in the 21st year of Rameses (c. 1258 BC). Terms of this treaty included the marriage of one of the Hittite princesses to Ramesses.
Hattusili's son, Tudhaliya IV, was the last strong Hittite king able to keep the Assyrians out of the Hittite heartland to some degree at least, though he too lost much territory to them, and was heavily defeated by Tukulti-Ninurta I of Assyria in the Battle of Nihriya. He even temporarily annexed the island of Cyprus, before that too fell to Assyria. The last king, Šuppiluliuma II also managed to win some victories, including a naval battle against Alashiya off the coast of Cyprus.Horst Nowacki, Wolfgang Lefèvre Creating Shapes in Civil and Naval Architecture: A Cross-Disciplinary Comparison Brill, 2009
Bryce sees the Great Kingdom's end as a gradual disintegration, pointing to the death of Hattusili as a starting point. Tudhaliya had to put down rebellions and plots against his rule. This was not abnormal. However, the Hittite military was stretched thin, due to a lack of manpower and to losses among the population of the Empire. Putting down revolts and civil wars with brute force was not something Hatti could do to the same extent anymore. Every soldier was also a worker taken away from the economy, notably from food production. Thus, casualties from war became ever more costly and unsustainable.
The Sea Peoples had already begun their push down the Mediterranean coastline, starting from the Aegean and continuing all the way to Canaan, founding the state of Philistia, taking Cilicia and Cyprus away from the Hittites en route and cutting off their coveted trade routes. This left the Hittite homelands vulnerable to attack from all directions, and Hattusa was burnt to the ground sometime around 1180 BC following a combined onslaught from new waves of invaders: the Kaskians, Phrygians and Bryges. The Hittite Kingdom thus vanished from historical records, much of the territory being seized by Assyria. Alongside these attacks, many internal issues also led to the end of the Hittite Kingdom. The end of the kingdom was part of the larger Bronze Age Collapse. A study of tree rings of juniper trees growing in the region showed a change to drier conditions from the 13th century BC into the 12th century BC, with drought for three consecutive years in 1198, 1197 and 1196 BC.
Post-Hittite period
By 1160 BC, the political situation in Asia Minor looked vastly different from that of only 25 years earlier. In that year, the Assyrian king Tiglath-Pileser I was defeating the Mushki (Phrygians) who had been attempting to press into Assyrian colonies in southern Anatolia from the Anatolian highlands, and the Kaska people, the Hittites' old enemies from the northern hill-country between Hatti and the Black Sea, seem to have joined them soon after. The Phrygians had apparently overrun Cappadocia from the West, with recently discovered epigraphic evidence confirming their origins as the Balkan "Bryges" tribe, forced out by the Macedonians.
Although the Hittite Kingdom disappeared from Anatolia at this point, there emerged a number of so-called Syro-Hittite states in Anatolia and northern Syria. They were the successors of the Hittite Kingdom. The most notable Syro-Hittite kingdoms were those at Carchemish and Melid, with the ruling family in Carchemish believed to have been a cadet branch of the then defunct central ruling Hittite line. These Syro-Hittite states gradually fell under the control of the Neo-Assyrian Empire (911–608 BC). Carchemish and Melid were made vassals of Assyria under Shalmaneser III (858–823 BC), and fully incorporated into Assyria during the reign of Sargon II (722–705 BC).
A large and powerful state known as Tabal occupied much of southern Anatolia. Known to the Greeks as the Tibarenoi, in Latin as the Tibareni, and as Thobeles in Josephus, its people may have spoken Luwian,Barnett, R.D., "Phrygia and the Peoples of Anatolia in the Iron Age", The Cambridge Ancient History, Vol. II, Part 2 (1975) p. 422 as testified by monuments written using Anatolian hieroglyphs.The Georgian historian Ivane Javakhishvili considered Tabal, Tubal, Jabal and Jubal to be ancient Georgian tribal designations, and argued that they spoke Kartvelian languages, which are non-Indo-European. This state too was conquered and incorporated into the vast Neo-Assyrian Empire.
Ultimately, both Luwian hieroglyphs and cuneiform were rendered obsolete by an innovation, the alphabet, which seems to have entered Anatolia simultaneously from the Aegean (with the Bryges, who changed their name to Phrygians), and from the Phoenicians and neighboring peoples in Syria.
Government
The earliest known constitutional monarchy was developed by the Hittites.
The head of the Hittite state was the king, followed by the heir-apparent. The king was the supreme ruler of the land, in charge of being a military commander, judicial authority, as well as a high priest. However, some officials exercised independent authority over various branches of the government. One of the most important of these posts in the Hittite society was that of the gal mesedi (Chief of the Royal Bodyguards). It was superseded by the rank of the gal gestin (chief of the wine stewards), who, like the gal mesedi, was generally a member of the royal family. The kingdom's bureaucracy was headed by the gal dubsar (chief of the scribes), whose authority did not extend over the lugal dubsar, the king's personal scribe.
Egyptian monarchs engaged in diplomacy with two chief Hittite seats, located at Kadesh (a city located on the Orontes River) and Carchemish (located on the Euphrates river in Southern Anatolia).
Religion of the early Hittites
The Central Anatolian settlement of Ankuwa, home of the pre-Hittite goddess Kattaha and of the worship of other Hattic deities, illustrates the ethnic differences in the areas the Hittites tried to control. Kattaha was originally given the name Hannikkun. The usage of the term Kattaha over Hannikkun, according to Ronald Gorny (head of the Alisar regional project in Turkey), was a device to downgrade the pre-Hittite identity of this female deity and to bring her more in touch with the Hittite tradition. The reconfiguration of gods throughout their early history, as with Kattaha, was a way of legitimizing Hittite authority and avoiding conflicting ideologies in newly included regions and settlements. By transforming local deities to fit their own customs, the Hittites hoped that these communities would accept the changes and become better suited to Hittite political and economic goals.
The Pankus
King Telipinu (reigned BC) is considered to be the last king of the Old Kingdom of the Hittites. He seized power during a dynastic power struggle. During his reign, he sought to curb lawlessness and to regulate royal succession. He thus issued the Edict of Telipinus, in which he designated the Pankus, a general assembly, as the high court for constitutional crimes. Crimes such as murder were observed and judged by the Pankus, and kings themselves were also subject to its jurisdiction. The Pankus also served as an advisory council for the king. The rules and regulations set out by the edict, and the establishment of the Pankus, proved very successful and lasted through to the end of the New Kingdom.
The Pankus established a legal code in which violence was not used as a punishment for crime. Crimes such as murder and theft, which at the time were punishable by death in other southwest Asian kingdoms, were not capital crimes under the Hittite law code. Most criminal penalties involved restitution. For example, in cases of theft, the punishment was to repay what was stolen in equal value.
Foreign Policy and Wars
The Hittite Great Kingdom frequently took booty people during its wars; these captives were an important source of labor in food production and a means of replacing population losses. The Hittites had frequent dealings with foreign powers: Bryce thinks they may have had a non-aggression pact with Ahhiyawa, having taken and then traded back Millawanda in negotiations, while their troubled relationship with Egypt culminated in the famous Battle of Kadesh. Hittite queens were often influential wielders of power in foreign policy, for example through the establishment of marriage alliances; Queen Puduhepa is an example. Internationally, the Hittites were part of the club of great powers, with Hatti maintaining an alliance with Egypt after the Egyptian–Hittite peace treaty. Hatti's northern and eastern frontiers were often unstable, as evidenced by the Battle of Ganuvara and the Hittite Wars of Survival, while its relationship with Assyria was often troublesome, as around the time of the Battle of Nihriya, as was its position to the south, as seen in the Battles of Alashiya.
Economy
The Hittite economy was an agro-pastoral one, growing fruits and vegetables, with cattle and sheep being common. Grain silos were usually placed in administrative centers such as Hattusa. In theory the land was owned by the gods; in practice the king controlled the best lands, with a variety of other ownership forms below this. Land could be granted by the king to individuals in exchange for military service. The workforce engaged in food production was critical to the economy, so wars that took men away from it could reduce the food output of the Great Kingdom. Temples were also an important part of the economy.
Shekels, minas and talents were the standard form of 'currency'; they were weights of copper, bronze, silver or gold. Forty shekels equalled one mina, a ratio that differed from other great kingdoms, where it could be 60 to 1. One shekel weighed about 8.3 grams. A silver shekel was worth about 150 litres of wheat; a land plot of about 3,600 square metres could be bought for 2–3 shekels of silver, while a similarly sized vineyard could cost up to 40 shekels of silver. A male laborer could earn one silver shekel per month, with women earning half that. Workers could also be paid in kind, taking a part of the harvest, which could be worth more than a wage. Bryce notes that men also did the most physically demanding work.
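These figures give a rough sense of scale. Using only the values just given (all approximate): 1 mina = 40 shekels ≈ 40 × 8.3 g ≈ 330 g of silver; a 3,600 m² field at 2–3 shekels cost roughly two to three months of a male laborer's cash wage; and a similarly sized vineyard at 40 shekels, about one mina of silver, represented more than three years of that wage.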
Population
Bryce cites a previous population estimate of 9,000–15,000 for Hattuša, but states that research by Jürgen Seeher now suggests the city would have had a population of 2,300–4,600, with a peak of 5,000 during special occasions. The total population of the kingdom was estimated at 140,000–150,000 by Zsolt Simon, whereas Bryce gives a figure of more than 200,000. Hatti was able to muster 47,500 troops for Kadesh and might have mustered as many as 100,000 for military service, assuming that not all would necessarily participate in battle and that some may have provided supporting labour. Military campaigns that were costly in lives therefore made it difficult to maintain Hittite food production and the wider economy. Bryce speculates that 'booty-people' taken from foreign lands during campaigns would have been important in covering population depletion from wars. While Bryce does not give exact dates for his population estimates, they can be understood as covering the reign of Hattusili III, as this is the context in which Bryce provides them.
Language
The Hittite language is recorded fragmentarily from about the 19th century BC (in the Kültepe texts, see Ishara). It remained in use until about 1100 BC. Hittite is the best attested member of the Anatolian branch of the Indo-European language family, and the Indo-European language for which the earliest surviving written attestation exists, with isolated Hittite loanwords and numerous personal names appearing in an Old Assyrian context from as early as the 20th century BC.
The language of the Hattusa tablets was eventually deciphered by a Czech linguist, Bedřich Hrozný (1879–1952), who, on 24 November 1915, announced his results in a lecture at the Near Eastern Society of Berlin. His book about the discovery was printed in Leipzig in 1917, under the title The Language of the Hittites; Its Structure and Its Membership in the Indo-European Linguistic Family.Hrozný, Bedřich, Die Sprache der Hethiter: ihr Bau und ihre Zugehörigkeit zum indogermanischen Sprachstamm: ein Entzifferungsversuch (Leipzig, Germany: J.C. Hinrichs, 1917). The preface of the book begins with:
The present work undertakes to establish the nature and structure of the hitherto mysterious language of the Hittites, and to decipher this language [...] It will be shown that Hittite is in the main an Indo-European language.
The decipherment famously led to the confirmation of the laryngeal theory in Indo-European linguistics, which had been predicted several decades before. Due to marked differences in its structure and phonology, some early philologists, most notably Warren Cowgill, had even argued that it should be classified as a sister language to the Indo-European languages (Indo-Hittite), rather than a daughter language. By the end of the Hittite Empire, the Hittite language had become a written language of administration and diplomatic correspondence. The population of most of the Hittite Empire by this time spoke Luwian, another Indo-European language of the Anatolian family that had originated to the west of the Hittite region.
According to Craig Melchert, the current tendency is to suppose that Proto-Indo-European evolved, and that the "prehistoric speakers" of Anatolian became isolated "from the rest of the PIE speech community, so as not to share in some common innovations." Hittite, as well as its Anatolian cousins, split off from Proto-Indo-European at an early stage, thereby preserving archaisms that were later lost in the other Indo-European languages.
In Hittite there are many loanwords, particularly religious vocabulary, from the non-Indo-European Hurrian and Hattic languages. The latter was the language of the Hattians, the local inhabitants of the land of Hatti before being absorbed or displaced by the Hittites. Sacred and magical texts from Hattusa were often written in Hattic, Hurrian, and Luwian, even after Hittite became the norm for other writings.
Art
Given the size of the empire, there are relatively few remains of Hittite art. These include some impressive monumental carvings, a number of rock reliefs, as well as metalwork, in particular the Alaca Höyük bronze standards, carved ivory, and ceramics, including the Hüseyindede vases. The Sphinx Gates of Alaca Höyük and Hattusa, with the monument at the spring of Eflatun Pınar, are among the largest constructed sculptures, along with a number of large recumbent lions, of which the Lion of Babylon statue at Babylon is the largest, if it is indeed Hittite. Nearly all are notably worn. Rock reliefs include the Hanyeri relief, and Hemite relief. The Niğde Stele from the end of the 8th century BC is a Luwian monument, from the Post-Hittite period, found in the modern Turkish city of Niğde.
Religion and mythology
Hittite religion and mythology were heavily influenced by their Hattic, Mesopotamian, Canaanite, and Hurrian counterparts. In earlier times, Indo-European elements may still be clearly discerned.
Storm gods were prominent in the Hittite pantheon. Tarhunt (Hurrian's Teshub) was referred to as 'The Conqueror', 'The king of Kummiya', 'King of Heaven', 'Lord of the land of Hatti'. He was chief among the gods and his symbol is the bull. As Teshub he was depicted as a bearded man astride two mountains and bearing a club. He was the god of battle and victory, especially when the conflict involved a foreign power. Teshub was also known for his conflict with the serpent Illuyanka.
The Hittite gods are also honoured with festivals, such as Puruli in the spring, the nuntarriyashas festival in the autumn, and the KI.LAM festival of the gate house where images of the Storm God and up to thirty other idols were paraded through the streets.
Law
Hittite laws, much like other records of the empire, are recorded on cuneiform tablets made from baked clay. What is understood to be the Hittite Law Code comes mainly from two clay tablets, each containing 186 articles, which together form a collection of practiced laws from across the early Hittite Kingdom. In addition to the tablets, monuments bearing Hittite cuneiform inscriptions can be found in central Anatolia describing the government and law codes of the empire. The tablets and monuments date from the Old Hittite Kingdom (1650–1500 BC) to what is known as the New Hittite Kingdom (1500–1180 BC). Between these time periods, different translations can be found that modernize the language and create a series of legal reforms in which many crimes are given more humane punishments. These changes could possibly be attributed to the rise of new and different kings throughout the history of the empire, or to new translations that changed the language used in the law codes. In either case, the law codes of the Hittites provide very specific fines or punishments that are to be issued for specific crimes and have many similarities to Biblical laws found in the books of Exodus and Deuteronomy. In addition to criminal punishments, the law codes also provide instruction on certain situations such as inheritance and death.
Use of laws
The law articles used by the Hittites most often outline very specific crimes or offenses, either against the state or against other individuals, and provide a sentence for these offenses. The laws carved in the tablets are an assembly of established social conventions from across the empire. Hittite laws of this period show a prominent lack of equality in punishment: in many cases, distinct punishments or compensations for men and women are listed. Free men most often received more compensation for offenses against them than free women did. Slaves, male or female, had very few rights, and could easily be punished or executed by their masters for crimes. Most articles describe destruction of property and personal injury, for which the most common sentence was payment in compensation for the lost property. Again, in these cases men oftentimes received a greater amount of compensation than women. Other articles describe how marriages between slaves and free individuals should be handled. In any case of separation or estrangement, the free individual, male or female, would keep all but one child that resulted from the marriage.
Cases in which capital punishment is recommended in the articles most often seem to come from pre-reform sentences for severe crimes and prohibited sexual pairings. Many of these cases include public torture and execution as punishment for serious crimes against religion. Most of these sentences would begin to go away in the later stages of the Hittite Empire as major law reforms began to occur.
Law reform
While different translations of laws can be seen throughout the history of the empire, the Hittite outlook on law was originally founded on religion and was intended to preserve the authority of the state. Additionally, punishments had the goal of crime prevention and the protection of individual property rights. The goal of crime prevention can be seen in the severity of the punishments issued for certain crimes: capital punishment and torture are specifically mentioned as punishments for more severe crimes against religion, and harsh fines for the loss of private property or life. The tablets also describe the ability of the king to pardon certain crimes, but specifically prohibit an individual from being pardoned for murder.
At some point in the 16th or 15th century BC, Hittite law codes moved away from torture and capital punishment toward more humanitarian forms of punishment, such as fines. Where the old law system was based on retaliation and retribution for crimes, the new system saw punishments that were much milder, favoring monetary compensation over physical or capital punishment. Why these drastic reforms happened is not exactly clear, but it is likely that punishing murder with execution was deemed not to benefit any individual or family involved. These reforms were not seen only in the realm of capital punishment. Where major fines were to be paid, a severe reduction in penalty can be seen. For example, prior to these major reforms, the payment to be made for the theft of an animal was thirty times the animal's value; after the reforms, the penalty was reduced to half the original fine. At the same time, attempts to modernize the language and change the wording used in the law codes can be seen during this period of reform.
Examples of laws
Under both the old and reformed Hittite law codes, three main types of punishment can be seen: death, torture, or compensation/fines. The articles outlined on the cuneiform tablets provide very specific punishments for crimes committed against the Hittite religion or against individuals. In many, but not all, cases, articles describing similar laws are grouped together. More than a dozen consecutive articles describe what are known to be permitted and prohibited sexual pairings. These pairings mostly describe men (sometimes specifically referred to as free men, sometimes just men in general) having relations, be they consensual or not, with animals, step-family, relatives of spouses, or concubines. Many of these articles do not provide specific punishments but, prior to the law reforms, crimes against religion were most often punishable by death. These include incestuous marriages and sexual relations with certain animals. For example, one article states, "If a man has sexual relations with a cow, it is an unpermitted sexual pairing: he will be put to death." Similar relations with horses and mules were not subject to capital punishment, but the offender could not become a priest afterwards. Actions at the expense of other individuals most often saw the offender paying some sort of compensation, whether in the form of money, animals, or land. These actions could include the destruction of farmlands, death or injury of livestock, or assault of an individual. Several articles also specifically mention acts of the gods. If an animal died under certain circumstances, the individual responsible could claim that it died by the hand of a god; if they swore that this claim was true, they appear to have been exempt from paying compensation to the animal's owner. Injuries inflicted upon animals owned by another individual were almost always compensated with either direct payment or the trading of the injured animal for a healthy one owned by the offender.
Not all laws prescribed in the tablets deal with criminal punishment. For example, instructions for how the marriage of slaves and the division of their children are to be handled are given in a group of articles: "The slave woman shall take most of the children, with the male slave taking one child." Similar instructions are given for marriages between free individuals and slaves. Other articles describe how the breaking of engagements is to be handled.
Biblical Hittites
The Bible refers to people as "Hittites" in several passages. The relationship between these peoples and the Bronze Age Hittite Empire is unclear. In some passages, the Biblical Hittites appear to have their own kingdoms, apparently located outside geographic Canaan, and were powerful enough to defeat Syrian armies in battle; in these passages they appear to refer to the Iron Age Syro-Hittite states. In most of their appearances, however, the Biblical Hittites are depicted as a people living among the Israelites: Abraham purchases the patriarchal burial-plot of Machpelah from Ephron the Hittite, and Hittites serve as high military officers in David's army. The nature of this ethnic group is unclear, but it has sometimes been interpreted as a local Canaanite tribe that had absorbed Hittite cultural influence from the Syro-Hittite kingdoms to the north.
Other biblical scholars (following Max Müller) have argued that the Bronze Age Hittites appear in Hebrew Bible literature and apocrypha as "Kittim", a people said to be named for a son of Javan.
In ancient Greek mythology
A single mention of Trojan allies named the Keteians is made by Homer in the Odyssey. Some scholars have proposed that the Homeric Keteians correspond to the Bronze Age Hittites.
See also
Hittite plague
List of Hittite kings
List of artifacts significant to the Bible
Short chronology timeline
References
Sources
Further reading
Jacques Freu and Michel Mazoyer, Des origines à la fin de l'ancien royaume hittite, Les Hittites et leur histoire Tome 1, Collection Kubaba, L'Harmattan, Paris, 2007
Jacques Freu et Michel Mazoyer, Les débuts du nouvel empire hittite, Les Hittites et leur histoire Tome 2, Collection Kubaba, L'Harmattan, Paris, 2007
Jacques Freu et Michel Mazoyer, L'apogée du nouvel empire hittite, Les Hittites et leur histoire Tome 3, Collection Kubaba, L'Harmattan, Paris, 2008
Jacques Freu et Michel Mazoyer, Le déclin et la chute de l'empire Hittite, Les Hittites et leur histoire Tome 4, Collection Kubaba, L'Harmattan, Paris, 2010
Jacques Freu et Michel Mazoyer, Les royaumes Néo-Hittites, Les Hittites et leur histoire Tome 5, Collection Kubaba, L'Harmattan, Paris, 2012
Imparati, Fiorella. "Aspects De L'organisation De L'État Hittite Dans Les Documents Juridiques Et Administratifs." Journal of the Economic and Social History of the Orient 25, no. 3 (1982): 225–67.
Stone, Damien. The Hittites: Lost Civilizations. United Kingdom, Reaktion Books, 2023.
External links
New research suggests drought accelerated Hittite Empire collapse - Phys.org February 8, 2023
Video lecture at Oriental Institute – Tracking the Frontiers of the Hittite Empire
Pictures of Boğazköy, one of a group of important sites
Pictures of Yazılıkaya, one of a group of important sites
Der Anitta Text (at TITUS)
Tahsin Ozguc
Hethitologieportal Mainz, by the Akademie der Wissenschaften, Mainz, corpus of texts and extensive bibliographies on all things Hittite
Map of Hittite Anatolia
Hormone
https://en.wikipedia.org/wiki/Hormone
A hormone (from a Greek participle meaning "setting in motion") is a class of signaling molecules in multicellular organisms that are sent to distant organs or tissues by complex biological processes to regulate physiology and behavior. Hormones are required for the normal development of animals, plants and fungi. Due to the broad definition of a hormone (as a signaling molecule that exerts its effects far from its site of production), numerous kinds of molecules can be classified as hormones. Among the substances that can be considered hormones are eicosanoids (e.g. prostaglandins and thromboxanes), steroids (e.g. oestrogen and brassinosteroid), amino acid derivatives (e.g. epinephrine and auxin), proteins or peptides (e.g. insulin and CLE peptides), and gases (e.g. ethylene and nitric oxide).
Hormones are used to communicate between organs and tissues. In vertebrates, hormones are responsible for regulating a wide range of processes including both physiological processes and behavioral activities such as digestion, metabolism, respiration, sensory perception, sleep, excretion, lactation, stress induction, growth and development, movement, reproduction, and mood manipulation. In plants, hormones modulate almost all aspects of development, from germination to senescence.
Hormones affect distant cells by binding to specific receptor proteins in the target cell, resulting in a change in cell function. When a hormone binds to the receptor, it results in the activation of a signal transduction pathway that typically activates gene transcription, resulting in increased expression of target proteins. Hormones can also act in non-genomic pathways that synergize with genomic effects. Water-soluble hormones (such as peptides and amines) generally act on the surface of target cells via second messengers. Lipid-soluble hormones (such as steroids) generally pass through the plasma membranes of target cells (both cytoplasmic and nuclear) to act within their nuclei. Brassinosteroids, a type of polyhydroxysteroids, are a sixth class of plant hormones and may be useful as an anticancer drug for endocrine-responsive tumors to cause apoptosis and limit plant growth. Despite being lipid soluble, they nevertheless attach to their receptor at the cell surface.
In vertebrates, endocrine glands are specialized organs that secrete hormones into the endocrine signaling system. Hormone secretion occurs in response to specific biochemical signals and is often subject to negative feedback regulation. For instance, high blood sugar (serum glucose concentration) promotes insulin synthesis. Insulin then acts to reduce glucose levels and maintain homeostasis, leading to reduced insulin levels. Upon secretion, water-soluble hormones are readily transported through the circulatory system. Lipid-soluble hormones must bind to carrier plasma glycoproteins (e.g., thyroxine-binding globulin (TBG)) to form ligand-protein complexes. Some hormones, such as insulin and growth hormones, can be released into the bloodstream already fully active. Other hormones, called prohormones, must be activated in certain cells through a series of steps that are usually tightly controlled. The endocrine system secretes hormones directly into the bloodstream, typically via fenestrated capillaries, whereas the exocrine system secretes its hormones indirectly using ducts. Hormones with paracrine function diffuse through the interstitial spaces to nearby target tissue.
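The insulin example above is a negative feedback loop. The toy Python simulation below is a minimal sketch of such a loop, with invented rate constants rather than physiological values: secretion rises when the regulated variable exceeds a set point, and the hormone's action pushes the variable back down.

# Toy negative-feedback loop: a regulated variable (think blood glucose) drives
# secretion of a hormone (think insulin) whose action lowers the variable again.
# All constants are illustrative only.
def simulate(steps=200, dt=0.1):
    glucose, hormone = 9.0, 0.0      # arbitrary units; start above the set point
    set_point = 5.0
    secretion_gain = 0.8             # hormone secreted per unit of excess glucose
    clearance = 0.3                  # fraction of hormone degraded per time unit
    action = 0.5                     # glucose lowered per unit hormone per time unit
    influx = 0.2                     # constant glucose input (meals, liver output)
    history = []
    for _ in range(steps):
        excess = max(glucose - set_point, 0.0)
        hormone += dt * (secretion_gain * excess - clearance * hormone)
        glucose += dt * (influx - action * hormone)
        history.append((round(glucose, 2), round(hormone, 2)))
    return history

if __name__ == "__main__":
    trace = simulate()
    print(trace[0], trace[-1])  # glucose drifts back toward the set point

At steady state the simulated hormone level settles where its glucose-lowering action balances the constant glucose input, which is the qualitative behavior described for insulin above.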
Plants lack specialized organs for the secretion of hormones, although there is a spatial distribution of hormone production. For example, the hormone auxin is produced mainly at the tips of young leaves and in the shoot apical meristem. The lack of specialised glands means that the main site of hormone production can change throughout the life of a plant, and the site of production is dependent on the plant's age and environment.
Introduction and overview
Hormone producing cells are found in the endocrine glands, such as the thyroid gland, ovaries, and testes. Hormonal signaling involves the following steps:
Biosynthesis of a particular hormone in a particular tissue.
Storage and secretion of the hormone.
Transport of the hormone to the target cell(s).
Recognition of the hormone by an associated cell membrane or intracellular receptor protein.
Relay and amplification of the received hormonal signal via a signal transduction process: This then leads to a cellular response. The reaction of the target cells may then be recognized by the original hormone-producing cells, leading to a downregulation in hormone production. This is an example of a homeostatic negative feedback loop.
Breakdown of the hormone.
Exocytosis and other methods of membrane transport are used to secrete hormones when the endocrine glands are signaled. The hierarchical model is an oversimplification of the hormonal signaling process. Cellular recipients of a particular hormonal signal may be one of several cell types that reside within a number of different tissues, as is the case for insulin, which triggers a diverse range of systemic physiological effects. Different tissue types may also respond differently to the same hormonal signal.
Discovery
Arnold Adolph Berthold (1849)
Arnold Adolph Berthold was a German physiologist and zoologist who, in 1849, had a question about the function of the testes. He noticed that castrated roosters did not have the same sexual behaviors as roosters with their testes intact. He decided to run an experiment on male roosters to examine this phenomenon. He kept a group of roosters with their testes intact, and saw that they had normally sized wattles and combs (secondary sexual organs), a normal crow, and normal sexual and aggressive behaviors. He also had a group with their testes surgically removed, and noticed that their secondary sexual organs were decreased in size, that they had a weak crow, did not have sexual attraction towards females, and were not aggressive. He realized that this organ was essential for these behaviors, but he did not know how. To test this further, he removed one testis and placed it in the abdominal cavity. These roosters behaved and developed normally, showing that the location of the testes does not matter. He then wanted to see whether a genetic factor in the testes provided these functions. He transplanted a testis from another rooster to a rooster with one testis removed, and saw that it too had normal behavior and physical anatomy. Berthold concluded that neither the location nor genetic factors of the testes determine sexual organs and behaviors, but that some chemical secreted by the testes was causing this phenomenon. It was later identified that this factor was the hormone testosterone.
Charles and Francis Darwin (1880)
Although known primarily for his work on the Theory of Evolution, Charles Darwin was also keenly interested in plants. Through the 1870s, he and his son Francis studied the movement of plants towards light. They were able to show that light is perceived at the tip of a young stem (the coleoptile), whereas the bending occurs lower down the stem. They proposed that a 'transmissible substance' communicated the direction of light from the tip down to the stem. The idea of a 'transmissible substance' was initially dismissed by other plant biologists, but their work later led to the discovery of the first plant hormone. In the 1920s Dutch scientist Frits Warmolt Went and Russian scientist Nikolai Cholodny (working independently of each other) conclusively showed that asymmetric accumulation of a growth hormone was responsible for this bending. In 1933 this hormone was finally isolated by Kögl, Haagen-Smit and Erxleben and given the name 'auxin'.
Oliver and Schäfer (1894)
British physician George Oliver and physiologist Edward Albert Schäfer, professor at University College London, collaborated on the physiological effects of adrenal extracts. They first published their findings in two reports in 1894; a full publication followed in 1895. Although the distinction is frequently, and wrongly, given to secretin, found in 1902 by Bayliss and Starling, the adrenaline in Oliver and Schäfer's adrenal extract, the substance causing the physiological changes, was the first hormone to be discovered. The term hormone would later be coined by Starling.
Bayliss and Starling (1902)
William Bayliss and Ernest Starling, a physiologist and biologist, respectively, wanted to see if the nervous system had an impact on the digestive system. From the work of Martin Heidenhain and Claude Bernard,W M Bayliss, E H Starling The mechanism of pancreatic secretion J Physiol. 1902 Sep 12;28(5):325–353 they knew that the pancreas was involved in the secretion of digestive fluids after the passage of food from the stomach to the intestines, which they believed to be due to the nervous system. They cut the nerves to the pancreas in an animal model and discovered that it was not nerve impulses that controlled secretion from the pancreas. It was determined that a factor secreted from the intestines into the bloodstream was stimulating the pancreas to secrete digestive fluids. This was named secretin: a hormone.
In 1905 Starling coined the word hormone, from the Greek for "to arouse or excite", which he defined as "the chemical messengers which, speeding from cell to cell along the blood stream, may coordinate the activities and growth of different parts of the body".Jamshed R Tata, One hundred years of hormones, EMBO Rep. 2005 Jun;6(6):490–496. doi: 10.1038/sj.embor.7400444
Types of signaling
Hormonal effects depend on where hormones are released, as they can be released in different manners. Not all hormones are released from a cell into the bloodstream; some act on nearby cells, or on the secreting cell itself, before ever entering general circulation. The major types of hormone signaling are:
Signaling types of hormones:
1. Endocrine – acts on the target cells after being released into the bloodstream.
2. Paracrine – acts on nearby cells and does not have to enter general circulation.
3. Autocrine – affects the cell type that secreted it and causes a biological effect.
4. Intracrine – acts intracellularly on the cells that synthesized it.
Chemical classes
As hormones are defined functionally, not structurally, they may have diverse chemical structures. Hormones occur in multicellular organisms (plants, animals, fungi, brown algae, and red algae). These compounds also occur in unicellular organisms and may act as signaling molecules; however, there is no agreement that these molecules can be called hormones.
Vertebrates
Hormone types in vertebrates:
1. Proteins/Peptides – Peptide hormones are made of a chain of amino acids that can range from just 3 to hundreds. Examples include oxytocin and insulin. Their sequences are encoded in DNA and can be modified by alternative splicing and/or post-translational modification. They are packed in vesicles and are hydrophilic, meaning that they are soluble in water. Due to their hydrophilicity, they can only bind to receptors on the membrane, as travelling through the membrane is unlikely. However, some hormones can bind to intracellular receptors through an intracrine mechanism.
2. Amino acid derivatives – Amino acid hormones are derived from amino acids, most commonly tyrosine. They are stored in vesicles. Examples include melatonin and thyroxine.
3. Steroids – Steroid hormones are derived from cholesterol. Examples include the sex hormones estradiol and testosterone as well as the stress hormone cortisol. Steroids contain four fused rings. They are lipophilic and hence can cross membranes to bind to intracellular nuclear receptors.
4. Eicosanoids – Eicosanoid hormones are derived from lipids such as arachidonic acid, lipoxins, thromboxanes and prostaglandins. Examples include prostaglandin and thromboxane. These hormones are produced by cyclooxygenases and lipoxygenases. They are hydrophobic and act on membrane receptors.
5. Gases – Examples include ethylene and nitric oxide.
(Image caption: Different types of hormones are secreted in the human body, with different biological roles and functions.)
Invertebrates
Compared with vertebrates, insects and crustaceans possess a number of structurally unusual hormones such as the juvenile hormone, a sesquiterpenoid.
Plants
Examples include abscisic acid, auxin, cytokinin, ethylene, and gibberellin.
Receptors
Receptors for most peptide as well as many eicosanoid hormones are embedded in the cell membrane as cell surface receptors, and the majority of these belong to the G protein-coupled receptor (GPCR) class of seven alpha helix transmembrane proteins. The interaction of hormone and receptor typically triggers a cascade of secondary effects within the cytoplasm of the cell, described as signal transduction, often involving phosphorylation or dephosphorylation of various other cytoplasmic proteins, changes in ion channel permeability, or increased concentrations of intracellular molecules that may act as secondary messengers (e.g., cyclic AMP). Some protein hormones also interact with intracellular receptors located in the cytoplasm or nucleus by an intracrine mechanism.
For steroid or thyroid hormones, their receptors are located inside the cell within the cytoplasm of the target cell. These receptors belong to the nuclear receptor family of ligand-activated transcription factors. To bind their receptors, these hormones must first cross the cell membrane. They can do so because they are lipid-soluble. The combined hormone-receptor complex then moves across the nuclear membrane into the nucleus of the cell, where it binds to specific DNA sequences, regulating the expression of certain genes, and thereby increasing the levels of the proteins encoded by these genes. However, it has been shown that not all steroid receptors are located inside the cell. Some are associated with the plasma membrane.
Effects in humans
Hormones have the following effects on the body:
stimulation or inhibition of growth
wake-sleep cycle and other circadian rhythms
mood swings
induction or suppression of apoptosis (programmed cell death)
activation or inhibition of the immune system
regulation of metabolism
preparation of the body for mating, fighting, fleeing, and other activity
preparation of the body for a new phase of life, such as puberty, parenting, and menopause
control of the reproductive cycle
hunger cravings
A hormone may also regulate the production and release of other hormones. Hormone signals control the internal environment of the body through homeostasis.
Regulation
The rate of hormone biosynthesis and secretion is often regulated by a homeostatic negative feedback control mechanism. Such a mechanism depends on factors that influence the metabolism and excretion of hormones. Thus, higher hormone concentration alone cannot trigger the negative feedback mechanism. Negative feedback must be triggered by overproduction of an "effect" of the hormone.
Hormone secretion can be stimulated and inhibited by:
Other hormones (stimulating- or releasing -hormones)
Plasma concentrations of ions or nutrients, as well as binding globulins
Neurons and mental activity
Environmental changes, e.g., of light or temperature
One special group of hormones is the tropic hormones that stimulate the hormone production of other endocrine glands. For example, thyroid-stimulating hormone (TSH) causes growth and increased activity of another endocrine gland, the thyroid, which increases output of thyroid hormones.
To release active hormones quickly into the circulation, hormone biosynthetic cells may produce and store biologically inactive hormones in the form of pre- or prohormones. These can then be quickly converted into their active hormone form in response to a particular stimulus.
Eicosanoids are considered to act as local hormones. They are considered to be "local" because they possess specific effects on target cells close to their site of formation. They also have a rapid degradation cycle, making sure they do not reach distant sites within the body."Eicosanoids". www.rpi.edu. Retrieved 2017-02-08.
Hormones are also regulated by receptor agonists. Hormones are ligands, which are any kinds of molecules that produce a signal by binding to a receptor site on a protein. Hormone effects can be inhibited, thus regulated, by competing ligands that bind to the same target receptor as the hormone in question. When a competing ligand is bound to the receptor site, the hormone is unable to bind to that site and is unable to elicit a response from the target cell. These competing ligands are called antagonists of the hormone.
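A minimal sketch of this competitive effect, assuming simple reversible 1:1 binding and the standard competitive-binding occupancy relation; the concentrations and affinities used are made up for illustration:

# Fractional receptor occupancy by a hormone in the presence of a competitive
# antagonist, using the competitive-binding relation:
#   occupancy = [H] / ([H] + Kd * (1 + [A] / Ki))
# All concentrations and affinities below are illustrative values.
def occupancy(hormone_nM: float, kd_nM: float,
              antagonist_nM: float = 0.0, ki_nM: float = 1.0) -> float:
    apparent_kd = kd_nM * (1.0 + antagonist_nM / ki_nM)
    return hormone_nM / (hormone_nM + apparent_kd)

if __name__ == "__main__":
    # With no competing ligand, 10 nM hormone at Kd = 10 nM occupies half the receptors.
    print(round(occupancy(10, 10), 2))                               # 0.5
    # A competitive antagonist raises the apparent Kd and lowers occupancy.
    print(round(occupancy(10, 10, antagonist_nM=30, ki_nM=10), 2))   # 0.2

The antagonist does not change the hormone's intrinsic affinity; it makes the receptor appear harder to occupy, which is why sufficiently high hormone concentrations can overcome a purely competitive antagonist.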
Therapeutic use
Many hormones and their structural and functional analogs are used as medication. The most commonly prescribed hormones are estrogens and progestogens (as methods of hormonal contraception and as HRT), thyroxine (as levothyroxine, for hypothyroidism) and steroids (for autoimmune diseases and several respiratory disorders). Insulin is used by many diabetics. Local preparations for use in otolaryngology often contain pharmacologic equivalents of adrenaline, while steroid and vitamin D creams are used extensively in dermatological practice.
A "pharmacologic dose" or "supraphysiological dose" of a hormone is a medical usage referring to an amount of a hormone far greater than naturally occurs in a healthy body. The effects of pharmacologic doses of hormones may be different from responses to naturally occurring amounts and may be therapeutically useful, though not without potentially adverse side effects. An example is the ability of pharmacologic doses of glucocorticoids to suppress inflammation.
Hormone-behavior interactions
At the neurological level, behavior can be inferred from hormone concentration, which in turn is influenced by hormone-release patterns, the numbers and locations of hormone receptors, and the efficiency of hormone receptors for those involved in gene transcription. Hormone concentration does not by itself incite behavior, as that would undermine other external stimuli; rather, it influences the system by increasing the probability that a certain event will occur.Nelson, R. J. (2021). Hormones & behavior. In R. Biswas-Diener & E. Diener (Eds), Noba textbook series: Psychology. Champaign, IL: DEF publishers. Retrieved from http://noba.to/c6gvwu9m
Not only can hormones influence behavior, but also behavior and the environment can influence hormone concentration. Thus, a feedback loop is formed, meaning behavior can affect hormone concentration, which in turn can affect behavior, which in turn can affect hormone concentration, and so on. For example, hormone-behavior feedback loops are essential in providing constancy to episodic hormone secretion, as the behaviors affected by episodically secreted hormones directly prevent the continuous release of said hormones.
Three broad stages of reasoning may be used to determine if a specific hormone-behavior interaction is present within a system:Nelson, R. J. (2011). An Introduction to Behavioral Endocrinology (4th ed.). Sinauer Associates. ISBN 978-0-87893-244-6.
The frequency of occurrence of a hormonally dependent behavior should correspond to that of its hormonal source.
A hormonally dependent behavior is not expected if the hormonal source (or its types of action) is non-existent.
The reintroduction of a missing behaviorally dependent hormonal source (or its types of action) is expected to bring back the absent behavior.
Comparison with neurotransmitters
Though colloquially oftentimes used interchangeably, there are various clear distinctions between hormones and neurotransmitters:
A hormone can perform functions over a larger spatial and temporal scale than can a neurotransmitter, which often acts in micrometer-scale distances.
Hormonal signals can travel virtually anywhere in the circulatory system, whereas neural signals are restricted to pre-existing nerve tracts.
Assuming the travel distance is equivalent, neural signals can be transmitted much more quickly (in the range of milliseconds) than can hormonal signals (in the range of seconds, minutes, or hours). Neural signals can be sent at speeds up to 100 meters per second.
Neural signalling is an all-or-nothing (digital) action, whereas hormonal signalling is an action that can be continuously variable as it is dependent upon hormone concentration.
Neurohormones are a type of hormone that share a commonality with neurotransmitters. They are produced by endocrine cells that receive input from neurons, or neuroendocrine cells. Both classic hormones and neurohormones are secreted by endocrine tissue; however, neurohormones are the result of a combination between endocrine reflexes and neural reflexes, creating a neuroendocrine pathway. While endocrine pathways produce chemical signals in the form of hormones, the neuroendocrine pathway involves the electrical signals of neurons. In this pathway, the result of the electrical signal produced by a neuron is the release of a chemical, which is the neurohormone. Finally, like a classic hormone, the neurohormone is released into the bloodstream to reach its target.
Binding proteins
Hormone transport and the involvement of binding proteins is an essential aspect when considering the function of hormones. The formation of a complex with a binding protein has several benefits: the effective half-life of the bound hormone is increased, and a reservoir of bound hormone is created, which evens out variations in the concentration of unbound hormone (bound hormone replaces unbound hormone as the latter is eliminated).Boron WF, Boulpaep EL. Medical physiology: a cellular and molecular approach. Updated 2. Philadelphia, Pa: Saunders Elsevier; 2012. An example of the use of hormone-binding proteins is the thyroxine-binding protein, which carries up to 80% of all thyroxine in the body, a crucial element in regulating the metabolic rate.
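As a rough sketch of how such a reservoir behaves, the Python snippet below solves the standard 1:1 binding equilibrium (hormone + protein ⇌ complex) for the free and bound fractions; the concentrations and dissociation constant are arbitrary illustrative values, not measured figures for thyroxine and TBG.

# Free vs protein-bound hormone for simple 1:1 binding at equilibrium,
# obtained from the mass-balance quadratic. Numbers are illustrative only.
import math

def free_hormone(total_hormone: float, total_protein: float, kd: float) -> float:
    b = total_protein - total_hormone + kd
    return (-b + math.sqrt(b * b + 4.0 * kd * total_hormone)) / 2.0

if __name__ == "__main__":
    h_total, p_total, kd = 100.0, 500.0, 1.0   # arbitrary concentration units
    free = free_hormone(h_total, p_total, kd)
    bound = h_total - free
    print(f"free: {free:.2f}, bound: {bound:.2f} ({bound / h_total:.0%} protein-bound)")

With the binding protein in excess and binding tight relative to the free hormone, almost all of the hormone sits in the bound reservoir, so losses of free hormone are quickly buffered by release from the complex.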
See also
Adipokine
Autocrine signaling
Cytokine
Endocrine disease
Endocrine system
Endocrinology
Environmental hormones
Growth factor
Hepatokine
Intracrine
List of human hormones
List of investigational sex-hormonal agents
Metabolomics
Myokine
Neohormone
Neuroendocrinology
Paracrine signaling
Plant hormones, a.k.a. plant growth regulators
Semiochemical
Sex-hormonal agent
Sexual motivation and hormones
Xenohormone
References
External links
HMRbase: A database of hormones and their receptors
Heat engine
https://en.wikipedia.org/wiki/Heat_engine
A heat engine is a system that transfers thermal energy to do mechanical or electrical work.Fundamentals of Classical Thermodynamics, 3rd ed. p. 159, (1985) by G. J. Van Wylen and R. E. Sonntag: "A heat engine may be defined as a device that operates in thermodynamic cycle and does a certain amount of net positive work as a result of heat transfer from a high-temperature body to a low-temperature body. Often the term heat engine is used in a broader sense to include all devices that produce work, either through heat transfer or combustion, even though the device does not operate in a thermodynamic cycle. The internal-combustion engine and the gas turbine are examples of such devices, and calling these heat engines is an acceptable use of the term."Mechanical efficiency of heat engines, p. 1 (2007) by James R. Senf: "Heat engines are made to provide mechanical energy from thermal energy." While originally conceived in the context of mechanical energy, the concept of the heat engine has been applied to various other kinds of energy, particularly electrical, since at least the late 19th century. The heat engine does this by bringing a working substance from a higher state temperature to a lower state temperature. A heat source generates thermal energy that brings the working substance to the higher temperature state. The working substance generates work in the working body of the engine while transferring heat to the colder sink until it reaches a lower temperature state. During this process some of the thermal energy is converted into work by exploiting the properties of the working substance. The working substance can be any system with a non-zero heat capacity, but it usually is a gas or liquid. During this process, some heat is normally lost to the surroundings and is not converted to work. Also, some energy is unusable because of friction and drag.
In general, an engine is any machine that converts energy to mechanical work. Heat engines distinguish themselves from other types of engines by the fact that their efficiency is fundamentally limited by Carnot's theorem of thermodynamics.Thermal physics: entropy and free energies, by Joon Chang Lee (2002), Appendix A, p. 183: "A heat engine absorbs energy from a heat source and then converts it into work for us.... When the engine absorbs heat energy, the absorbed heat energy comes with entropy." (heat energy ), "When the engine performs work, on the other hand, no entropy leaves the engine. This is problematic. We would like the engine to repeat the process again and again to provide us with a steady work source. ... to do so, the working substance inside the engine must return to its initial thermodynamic condition after a cycle, which requires to remove the remaining entropy. The engine can do this only in one way. It must let part of the absorbed heat energy leave without converting it into work. Therefore the engine cannot convert all of the input energy into work!" Although this efficiency limitation can be a drawback, an advantage of heat engines is that most forms of energy can be easily converted to heat by processes like exothermic reactions (such as combustion), nuclear fission, absorption of light or energetic particles, friction, dissipation and resistance. Since the heat source that supplies thermal energy to the engine can thus be powered by virtually any kind of energy, heat engines cover a wide range of applications.
Heat engines are often confused with the cycles they attempt to implement. Typically, the term "engine" is used for a physical device and "cycle" for the models.
Overview
In thermodynamics, heat engines are often modeled using a standard engineering model such as the Otto cycle. The theoretical model can be refined and augmented with actual data from an operating engine, using tools such as an indicator diagram. Since very few actual implementations of heat engines exactly match their underlying thermodynamic cycles, one could say that a thermodynamic cycle is an ideal case of a mechanical engine. In any case, fully understanding an engine and its efficiency requires a good understanding of the (possibly simplified or idealised) theoretical model, the practical nuances of an actual mechanical engine and the discrepancies between the two.
In general terms, the larger the difference in temperature between the hot source and the cold sink, the larger is the potential thermal efficiency of the cycle. On Earth, the cold side of any heat engine is limited to being close to the ambient temperature of the environment, or not much lower than 300 kelvin, so most efforts to improve the thermodynamic efficiencies of various heat engines focus on increasing the temperature of the source, within material limits. The maximum theoretical efficiency of a heat engine (which no engine ever attains) is equal to the temperature difference between the hot and cold ends divided by the temperature at the hot end, each expressed in absolute temperature.
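For a quick sense of scale (illustrative numbers only): a heat source at 800 K rejecting heat to surroundings at 300 K has a maximum theoretical efficiency of (800 − 300)/800 = 62.5%, whereas the same source paired with a 600 K sink could at best reach (800 − 600)/800 = 25%.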
The efficiency of various heat engines proposed or used today has a large range:
3% (97 percent waste heat using low quality heat) for the ocean thermal energy conversion (OTEC) ocean power proposal
25% for most automotive gasoline enginesWhere the Energy Goes: Gasoline Vehicles, US Dept of Energy
49% for a supercritical coal-fired power station such as the Avedøre Power Station
50%+ for long stroke marine Diesel engines
60% for a combined cycle gas turbine
The efficiency of these processes is roughly proportional to the temperature drop across them. Significant energy may be consumed by auxiliary equipment, such as pumps, which effectively reduces efficiency.
Examples
Although some cycles have a typical combustion location (internal or external), they can often be implemented with the other. For example, John Ericsson developed an externally heated engine running on a cycle very much like the earlier Diesel cycle. In addition, externally heated engines can often be implemented in open or closed cycles. In a closed cycle the working fluid is retained within the engine at the completion of the cycle, whereas in an open cycle the working fluid is either exchanged with the environment together with the products of combustion, in the case of the internal combustion engine, or simply vented to the environment, in the case of external combustion engines like steam engines and turbines.
Everyday examples
Everyday examples of heat engines include the thermal power station, internal combustion engine, firearms, refrigerators and heat pumps. Power stations are examples of heat engines run in a forward direction in which heat flows from a hot reservoir and flows into a cool reservoir to produce work as the desired product. Refrigerators, air conditioners and heat pumps are examples of heat engines that are run in reverse, i.e. they use work to take heat energy at a low temperature and raise its temperature in a more efficient way than the simple conversion of work into heat (either through friction or electrical resistance). Refrigerators remove heat from within a thermally sealed chamber at low temperature and vent waste heat at a higher temperature to the environment and heat pumps take heat from the low temperature environment and 'vent' it into a thermally sealed chamber (a house) at higher temperature.
In general heat engines exploit the thermal properties associated with the expansion and compression of gases according to the gas laws or the properties associated with phase changes between gas and liquid states.
Earth's heat engine
Earth's atmosphere and hydrosphere—Earth's heat engine—are coupled processes that constantly even out solar heating imbalances through evaporation of surface water, convection, rainfall, winds and ocean circulation, when distributing heat around the globe.
A Hadley cell is an example of a heat engine. It involves the rising of warm and moist air in the earth's equatorial region and the descent of colder air in the subtropics creating a thermally driven direct circulation, with consequent net production of kinetic energy.
Phase-change cycles
In phase change cycles and engines, the working fluids are gases and liquids. The engine converts the working fluid from a gas to a liquid, from liquid to gas, or both, generating work from the fluid expansion or compression.
Rankine cycle (classical steam engine)
Regenerative cycle (steam engine more efficient than Rankine cycle)
Organic Rankine cycle (Coolant changing phase in temperature ranges of ice and hot liquid water)
Vapor to liquid cycle (drinking bird, injector, Minto wheel)
Liquid to solid cycle (frost heaving – water changing from ice to liquid and back again can lift rock up to 60 cm.)
Solid to gas cycle (firearms – solid propellants combust to hot gases.)
Gas-only cycles
In these cycles and engines the working fluid is always a gas (i.e., there is no phase change):
Carnot cycle (Carnot heat engine)
Ericsson cycle (Caloric Ship John Ericsson)
Stirling cycle (Stirling engine, thermoacoustic devices)
Internal combustion engine (ICE):
Otto cycle (e.g. gasoline/petrol engine)
Diesel cycle (e.g. Diesel engine)
Atkinson cycle (Atkinson engine)
Brayton cycle or Joule cycle originally Ericsson cycle (gas turbine)
Lenoir cycle (e.g., pulse jet engine)
Miller cycle (Miller engine)
Liquid-only cycles
In these cycles and engines the working fluid is always a liquid:
Stirling cycle (Malone engine)
Electron cycles
Johnson thermoelectric energy converter
Thermoelectric (Peltier–Seebeck effect)
Thermogalvanic cell
Thermionic emission
Thermotunnel cooling
Magnetic cycles
Thermo-magnetic motor (Tesla)
Cycles used for refrigeration
A domestic refrigerator is an example of a heat pump: a heat engine in reverse. Work is used to create a heat differential. Many cycles can run in reverse to move heat from the cold side to the hot side, making the cold side cooler and the hot side hotter. Internal combustion engine versions of these cycles are, by their nature, not reversible.
Refrigeration cycles include:
Air cycle machine
Gas-absorption refrigerator
Magnetic refrigeration
Stirling cryocooler
Vapor-compression refrigeration
Vuilleumier cycle
Evaporative heat engines
The Barton evaporation engine is a heat engine based on a cycle producing power and cooled moist air from the evaporation of water into hot dry air.
Mesoscopic heat engines
Mesoscopic heat engines are nanoscale devices that may serve the goal of processing heat fluxes and performing useful work at small scales. Potential applications include, for example, electric cooling devices. In such mesoscopic heat engines, the work per cycle of operation fluctuates due to thermal noise. There is an exact equality that relates the average of exponentials of the work performed by any heat engine to the heat transfer from the hotter heat bath. This relation transforms Carnot's inequality into an exact equality, and it also applies to the Carnot cycle.
Efficiency
The efficiency of a heat engine relates how much useful work is output for a given amount of heat energy input.
From the laws of thermodynamics, after a completed cycle:
$W + Q_h + Q_c = \Delta U_{\text{cycle}} = 0$
and therefore
$W = -(Q_h + Q_c)$
where
$W$ is the net work extracted from the engine in one cycle. (It is negative, in the IUPAC convention, since work is done by the engine.)
$Q_h$ is the heat energy taken from the high temperature heat source in the surroundings in one cycle. (It is positive since heat energy is added to the engine.)
$Q_c$ is the waste heat given off by the engine to the cold temperature heat sink. (It is negative since heat is lost by the engine to the sink.)
In other words, a heat engine absorbs heat energy from the high temperature heat source, converting part of it to useful work and giving off the rest as waste heat to the cold temperature heat sink.
In general, the efficiency of a given heat transfer process is defined by the ratio of "what is taken out" to "what is put in". (For a refrigerator or heat pump, which can be considered as a heat engine run in reverse, this is the coefficient of performance and it is ≥ 1.) In the case of an engine, one desires to extract work $-W$ and has to put in heat $Q_h$, for instance from combustion of a fuel, so the engine efficiency is reasonably defined as
$\eta = \frac{-W}{Q_h} = \frac{Q_h + Q_c}{Q_h} = 1 - \frac{|Q_c|}{Q_h}$
The efficiency is less than 100% because of the waste heat unavoidably lost to the cold sink (and corresponding compression work put in) during the required recompression at the cold temperature before the power stroke of the engine can occur again.
The theoretical maximum efficiency of any heat engine depends only on the temperatures it operates between. This efficiency is usually derived using an ideal imaginary heat engine such as the Carnot heat engine, although other engines using different cycles can also attain maximum efficiency. Mathematically, after a full cycle, the overall change of entropy is zero:
$\Delta S_h + \Delta S_c = \Delta S_{\text{cycle}} = 0$
Note that $\Delta S_h$ is positive because isothermal expansion in the power stroke increases the multiplicity of the working fluid, while $\Delta S_c$ is negative since recompression decreases the multiplicity. If the engine is ideal and runs reversibly, $\Delta S_h = Q_h/T_h$ and $\Delta S_c = Q_c/T_c$, and thus
$\frac{Q_h}{T_h} + \frac{Q_c}{T_c} = 0$,
which gives $Q_c/Q_h = -T_c/T_h$ and thus the Carnot limit for heat-engine efficiency,
$\eta_{\text{max}} = 1 - \frac{T_c}{T_h}$
where $T_h$ is the absolute temperature of the hot source and $T_c$ that of the cold sink, usually measured in kelvins.
The reasoning behind this being the maximal efficiency goes as follows. It is first assumed that if a more efficient heat engine than a Carnot engine is possible, then it could be driven in reverse as a heat pump. Mathematical analysis can be used to show that this assumed combination would result in a net decrease in entropy. Since, by the second law of thermodynamics, this is statistically improbable to the point of exclusion, the Carnot efficiency is a theoretical upper bound on the reliable efficiency of any thermodynamic cycle.
Empirically, no heat engine has ever been shown to run at a greater efficiency than a Carnot cycle heat engine.
Figure 2 and Figure 3 show variations on Carnot cycle efficiency with temperature. Figure 2 indicates how efficiency changes with an increase in the heat addition temperature for a constant compressor inlet temperature. Figure 3 indicates how the efficiency changes with an increase in the heat rejection temperature for a constant turbine inlet temperature.
Endo-reversible heat-engines
By its nature, any maximally efficient Carnot cycle must operate at an infinitesimal temperature gradient; this is because any transfer of heat between two bodies of differing temperatures is irreversible, therefore the Carnot efficiency expression applies only to the infinitesimal limit. The major problem is that the objective of most heat-engines is to output power, and infinitesimal power is seldom desired.
A different measure of ideal heat-engine efficiency is given by considerations of endoreversible thermodynamics, where the system is broken into reversible subsystems, but with non-reversible interactions between them. A classical example is the Curzon–Ahlborn engine,F. L. Curzon, B. Ahlborn (1975). "Efficiency of a Carnot Engine at Maximum Power Output". Am. J. Phys., Vol. 43, pp. 24. very similar to a Carnot engine, but where the thermal reservoirs at temperatures $T_H$ and $T_C$ are allowed to be different from the temperatures of the substance going through the reversible Carnot cycle: $T'_H$ and $T'_C$. The heat transfers between the reservoirs and the substance are considered as conductive (and irreversible), in the form $\frac{dQ}{dt} = \alpha (T - T')$. In this case, a tradeoff has to be made between power output and efficiency. If the engine is operated very slowly, the heat flux is low, and the classical Carnot result is found
$$\eta = 1 - \frac{T_C}{T_H},$$
but at the price of a vanishing power output. If instead one chooses to operate the engine at its maximum output power, the efficiency becomes
$$\eta = 1 - \sqrt{\frac{T_C}{T_H}}$$
(Note: T in units of K or °R)
This model does a better job of predicting how well real-world heat-engines can do (Callen 1985, see also endoreversible thermodynamics):
Efficiencies of power stations
Power station | $T_C$ (°C) | $T_H$ (°C) | $\eta$ (Carnot) | $\eta$ (Endoreversible) | $\eta$ (Observed)
West Thurrock (UK) coal-fired power station | 25 | 565 | 0.64 | 0.40 | 0.36
CANDU (Canada) nuclear power station | 25 | 300 | 0.48 | 0.28 | 0.30
Larderello (Italy) geothermal power station | 80 | 250 | 0.33 | 0.178 | 0.16
As shown, the Curzon–Ahlborn efficiency much more closely models that observed.
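As a quick check, both efficiency formulas can be evaluated directly from the reservoir temperatures listed in the table; the short Python sketch below assumes only those temperatures (converted to kelvins) and abbreviates the station names.

```python
# Sketch: reproduce the Carnot and Curzon-Ahlborn (endoreversible) columns above.
from math import sqrt

stations = [
    # (name, T_C in deg C, T_H in deg C), taken from the table above
    ("West Thurrock coal", 25, 565),
    ("CANDU nuclear", 25, 300),
    ("Larderello geothermal", 80, 250),
]

for name, tc_celsius, th_celsius in stations:
    t_c = tc_celsius + 273.15        # cold-sink temperature in kelvins
    t_h = th_celsius + 273.15        # hot-source temperature in kelvins
    eta_carnot = 1 - t_c / t_h       # reversible (Carnot) upper bound
    eta_ca = 1 - sqrt(t_c / t_h)     # Curzon-Ahlborn efficiency at maximum power
    print(f"{name:22s}  Carnot = {eta_carnot:.2f}  Curzon-Ahlborn = {eta_ca:.2f}")
```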
History
Heat engines have been known since antiquity but were only made into useful devices at the time of the industrial revolution in the 18th century. They continue to be developed today.
Enhancements
Engineers have studied the various heat-engine cycles to improve the amount of usable work they could extract from a given power source. The Carnot cycle limit cannot be reached with any gas-based cycle, but engineers have found at least two ways to bypass that limit and one way to get better efficiency without bending any rules:
Increase the temperature difference in the heat engine. The simplest way to do this is to increase the hot side temperature, which is the approach used in modern combined-cycle gas turbines. Unfortunately, physical limits (such as the melting point of the materials used to build the engine) and environmental concerns regarding NOx production (if the heat source is combustion with ambient air) restrict the maximum temperature on workable heat-engines. Modern gas turbines run at temperatures as high as possible within the range of temperatures necessary to maintain acceptable NOx output . Another way of increasing efficiency is to lower the output temperature. One new method of doing so is to use mixed chemical working fluids, then exploit the changing behavior of the mixtures. One of the most famous is the so-called Kalina cycle, which uses a 70/30 mix of ammonia and water as its working fluid. This mixture allows the cycle to generate useful power at considerably lower temperatures than most other processes.
Exploit the physical properties of the working fluid. The most common such exploitation is the use of water above the critical point (supercritical water). The behavior of fluids above their critical point changes radically, and with materials such as water and carbon dioxide it is possible to exploit those changes in behavior to extract greater thermodynamic efficiency from the heat engine, even if it is using a fairly conventional Brayton or Rankine cycle. A newer and very promising material for such applications is supercritical CO2. SO2 and xenon have also been considered for such applications. Downsides include issues of corrosion and erosion, the different chemical behavior above and below the critical point, the needed high pressures and – in the case of sulfur dioxide and to a lesser extent carbon dioxide – toxicity. Among the mentioned compounds xenon is least suitable for use in a nuclear reactor due to the high neutron absorption cross section of almost all isotopes of xenon, whereas carbon dioxide and water can also double as a neutron moderator for a thermal spectrum reactor.
Exploit the chemical properties of the working fluid. A fairly new and novel exploit is to use exotic working fluids with advantageous chemical properties. One such is nitrogen dioxide (NO2), a toxic component of smog, which has a natural dimer as di-nitrogen tetraoxide (N2O4). At low temperature, the N2O4 is compressed and then heated. The increasing temperature causes each N2O4 to break apart into two NO2 molecules. This lowers the molecular weight of the working fluid, which drastically increases the efficiency of the cycle. Once the NO2 has expanded through the turbine, it is cooled by the heat sink, which makes it recombine into N2O4. This is then fed back by the compressor for another cycle. Such species as aluminium bromide (Al2Br6), NOCl, and Ga2I6 have all been investigated for such uses. To date, their drawbacks have not warranted their use, despite the efficiency gains that can be realized.
Heat engine processes
Each process is one of the following:
isothermal (at constant temperature, maintained with heat added or removed from a heat source or sink)
isobaric (at constant pressure)
isometric/isochoric (at constant volume), also referred to as iso-volumetric
adiabatic (no heat is added or removed from the system during adiabatic process)
isentropic (reversible adiabatic process, no heat is added or removed during isentropic process)
See also
Carnot heat engine
Cogeneration
Einstein refrigerator
Heat pump
Reciprocating engine for a general description of the mechanics of piston engines
Stirling engine
Thermosynthesis
Timeline of heat engine technology
References
Category:Energy conversion
Category:Engine technology
Category:Engines
Category:Heating, ventilation, and air conditioning
Category:Thermodynamics
Hash table
https://en.wikipedia.org/wiki/Hash_table
In computer science, a hash table is a data structure that implements an associative array, also called a dictionary or simply map; an associative array is an abstract data type that maps keys to values. A hash table uses a hash function to compute an index, also called a hash code, into an array of buckets or slots, from which the desired value can be found. During lookup, the key is hashed and the resulting hash indicates where the corresponding value is stored. A map implemented by a hash table is called a hash map.
Most hash table designs employ an imperfect hash function. Hash collisions, where the hash function generates the same index for more than one key, therefore typically must be accommodated in some way. Common strategies to handle hash collisions include chaining, which stores multiple elements in the same slot using linked lists, and open addressing, which searches for the next available slot according to a probing sequence.
In a well-dimensioned hash table, the average time complexity for each lookup is independent of the number of elements stored in the table. Many hash table designs also allow arbitrary insertions and deletions of key–value pairs, at amortized constant average cost per operation.
Hashing is an example of a space–time tradeoff. If memory is infinite, the entire key can be used directly as an index to locate its value with a single memory access. On the other hand, if infinite time is available, values can be stored without regard for their keys, and a binary search or linear search can be used to retrieve the element.
In many situations, hash tables turn out to be on average more efficient than search trees or any other table lookup structure. For this reason, they are widely used in many kinds of modern software, particularly for associative arrays, database indexing, caches, and sets, owing to their fast average-case performance. Many programming languages provide built-in hash table structures, such as Python's dictionaries, Java's HashMap, and C++'s unordered_map, which abstract the complexity of hashing from the programmer.
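As a brief illustration of one such built-in structure, the snippet below uses Python's dict; the keys and values are arbitrary examples chosen for this sketch.

```python
# Python's dict is a hash map: each key is hashed to find the slot holding its value.
capitals = {"France": "Paris", "Japan": "Tokyo"}   # insert key-value pairs
capitals["Kenya"] = "Nairobi"                      # amortized O(1) average insertion
print(capitals["Japan"])                           # O(1) average lookup -> "Tokyo"
print("Brazil" in capitals)                        # membership test -> False
del capitals["France"]                             # O(1) average deletion
```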
History
The idea of hashing arose independently in different places. In January 1953, Hans Peter Luhn wrote an internal IBM memorandum that used hashing with chaining. The first example of open addressing was proposed by A. D. Linh, building on Luhn's memorandum. Around the same time, Gene Amdahl, Elaine M. McGraw, Nathaniel Rochester, and Arthur Samuel of IBM Research implemented hashing for the IBM 701 assembler. Open addressing with linear probing is credited to Amdahl, although Andrey Ershov independently had the same idea. The term "open addressing" was coined by W. Wesley Peterson in his article which discusses the problem of search in large files.
The first published work on hashing with chaining is credited to Arnold Dumey, who discussed the idea of using remainder modulo a prime as a hash function. The word "hashing" was first published in an article by Robert Morris. A theoretical analysis of linear probing was submitted originally by Konheim and Weiss.
Overview
An associative array stores a set of (key, value) pairs and allows insertion, deletion, and lookup (search), with the constraint of unique keys. In the hash table implementation of associative arrays, an array $A$ of length $m$ is partially filled with $n$ elements, where $n \le m$. A key $x$ is hashed using a hash function $h$ to compute an index location $h(x)$ in the hash table, where $h(x) < m$. The efficiency of a hash table depends on the load factor, defined as the ratio of the number of stored elements to the number of available slots, with lower load factors generally yielding faster operations. At this index, both the key and its associated value are stored. Storing the key alongside the value ensures that lookups can verify the key at the index to retrieve the correct value, even in the presence of collisions. Under reasonable assumptions, hash tables have better time complexity bounds on search, delete, and insert operations in comparison to self-balancing binary search trees.
Hash tables are also commonly used to implement sets, by omitting the stored value for each key and merely tracking whether the key is present.
Load factor
A load factor $\alpha$ is a critical statistic of a hash table, and is defined as follows:
$$\alpha = \frac{n}{m}$$
where
$n$ is the number of entries occupied in the hash table.
$m$ is the number of buckets.
The performance of the hash table deteriorates in relation to the load factor $\alpha$. In the limit of large $n$ and $m$, the number of entries in each bucket statistically follows a Poisson distribution with expectation $\alpha$ for an ideally random hash function.
The software typically ensures that the load factor $\alpha$ remains below a certain constant, $\alpha_{\max}$. This helps maintain good performance. Therefore, a common approach is to resize or "rehash" the hash table whenever the load factor $\alpha$ reaches $\alpha_{\max}$. Similarly the table may also be resized if the load factor drops below $\alpha_{\max}/4$.
Load factor for separate chaining
With separate chaining hash tables, each slot of the bucket array stores a pointer to a list or array of data.
Separate chaining hash tables suffer gradually declining performance as the load factor grows, and no fixed point beyond which resizing is absolutely needed.
With separate chaining, the value of $\alpha_{\max}$ that gives best performance is typically between 1 and 3.
Load factor for open addressing
With open addressing, each slot of the bucket array holds exactly one item. Therefore an open-addressed hash table cannot have a load factor greater than 1.James S. Plank and Brad Vander Zanden.
"CS140 Lecture notes -- Hashing".
The performance of open addressing becomes very bad when the load factor approaches 1.
Therefore a hash table that uses open addressing must be resized or rehashed if the load factor approaches 1.
With open addressing, acceptable figures of max load factor should range around 0.6 to 0.75.
Hash function
A hash function $h$ maps the universe $U$ of keys to indices or slots within the table, that is, $h : U \rightarrow \{0, \dots, m-1\}$ for a table of size $m$. The conventional implementations of hash functions are based on the integer universe assumption that all elements of the table stem from the universe $U = \{0, \dots, u-1\}$, where the bit length of $u$ is confined within the word size of a computer architecture.
A hash function $h$ is said to be perfect for a given set $S$ if it is injective on $S$, that is, if each element $x \in S$ maps to a different value in $\{0, \dots, m-1\}$. A perfect hash function can be created if all the keys are known ahead of time.
Integer universe assumption
The schemes of hashing used in integer universe assumption include hashing by division, hashing by multiplication, universal hashing, dynamic perfect hashing, and static perfect hashing. However, hashing by division is the commonly used scheme.
Hashing by division
The scheme in hashing by division is as follows:
$$h(x) = x \bmod m$$
where $h(x)$ is the hash value of $x$ and $m$ is the size of the table.
Hashing by multiplication
The scheme in hashing by multiplication is as follows:
$$h(x) = \lfloor m\,((x A) \bmod 1) \rfloor$$
where $A$ is a non-integer real-valued constant and $m$ is the size of the table. An advantage of hashing by multiplication is that the value of $m$ is not critical. Although any value of $A$ produces a hash function, Donald Knuth suggests using the golden ratio.
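Both schemes can be sketched in a few lines; in the snippet below, the table size m and the constant A (the golden-ratio reciprocal commonly associated with Knuth's suggestion) are illustrative choices rather than values prescribed by the text.

```python
import math

def hash_by_division(x: int, m: int) -> int:
    # h(x) = x mod m; m is often chosen to be a prime number
    return x % m

def hash_by_multiplication(x: int, m: int, A: float = (math.sqrt(5) - 1) / 2) -> int:
    # h(x) = floor(m * ((x * A) mod 1)); the fractional part of x*A spreads keys over [0, m)
    return math.floor(m * ((x * A) % 1.0))

m = 13
for key in (17, 42, 1000003):
    print(key, hash_by_division(key, m), hash_by_multiplication(key, m))
```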
String hashing
Commonly a string is used as a key to the hash function. Stroustrup describes a simple hash function in which an unsigned integer that is initially zero is repeatedly left shifted one bit and then xor'ed with the integer value of the next character. This hash value is then taken modulo the table size. If the left shift is not circular, then the string length should be at least eight bits less than the size of the unsigned integer in bits. Another common way to hash a string to an integer is with a polynomial rolling hash function.
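A minimal sketch of both string-hashing ideas follows; the 64-bit mask, the base 31 and the Mersenne-prime modulus are illustrative assumptions of the sketch, not values fixed by the text.

```python
def shift_xor_hash(s: str, table_size: int) -> int:
    # Left-shift the accumulator one bit and XOR in each character's integer value,
    # in the spirit of the simple hash described above; the result is taken modulo
    # the table size (a 64-bit accumulator is assumed here).
    h = 0
    for ch in s:
        h = ((h << 1) ^ ord(ch)) & 0xFFFFFFFFFFFFFFFF
    return h % table_size

def polynomial_rolling_hash(s: str, table_size: int, base: int = 31,
                            mod: int = 2**61 - 1) -> int:
    # h = (s[0]*base^(n-1) + s[1]*base^(n-2) + ... + s[n-1]) mod `mod`
    h = 0
    for ch in s:
        h = (h * base + ord(ch)) % mod
    return h % table_size

print(shift_xor_hash("hash table", 64), polynomial_rolling_hash("hash table", 64))
```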
Choosing a hash function
Uniform distribution of the hash values is a fundamental requirement of a hash function. A non-uniform distribution increases the number of collisions and the cost of resolving them. Uniformity is sometimes difficult to ensure by design, but may be evaluated empirically using statistical tests, e.g., a Pearson's chi-squared test for discrete uniform distributions.
The distribution needs to be uniform only for table sizes that occur in the application. In particular, if one uses dynamic resizing with exact doubling and halving of the table size, then the hash function needs to be uniform only when the size is a power of two. Here the index can be computed as some range of bits of the hash function. On the other hand, some hashing algorithms prefer to have the size be a prime number.
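For a power-of-two table size, the index can be taken from the low-order bits of the hash with a single mask instead of a division; the 64-bit hash value below is an arbitrary illustrative number.

```python
m = 1 << 16                  # table size: a power of two (65,536 buckets)
h = 0x9E3779B97F4A7C15       # some 64-bit hash value (illustrative)
index = h & (m - 1)          # equivalent to h % m when m is a power of two
print(index)
```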
For open addressing schemes, the hash function should also avoid clustering, the mapping of two or more keys to consecutive slots. Such clustering may cause the lookup cost to skyrocket, even if the load factor is low and collisions are infrequent. The popular multiplicative hash is claimed to have particularly poor clustering behavior.
K-independent hashing offers a way to prove a certain hash function does not have bad keysets for a given type of hashtable. A number of K-independence results are known for collision resolution schemes such as linear probing and cuckoo hashing. Since K-independence can prove a hash function works, one can then focus on finding the fastest possible such hash function.
Collision resolution
A search algorithm that uses hashing consists of two parts. The first part is computing a hash function which transforms the search key into an array index. The ideal case is such that no two search keys hash to the same array index. However, this is not always the case and is impossible to guarantee for arbitrary unseen data. Hence the second part of the algorithm is collision resolution. The two common methods for collision resolution are separate chaining and open addressing.
Separate chaining
In separate chaining, the process involves building a linked list of key–value pairs for each array index. The collided items are chained together through a single linked list, which can be traversed to access the item with a unique search key. Collision resolution through chaining with a linked list is a common method of implementation of hash tables. Let $T$ be the hash table and $x$ a node containing a key and a value; the operations are as follows:
Chained-Hash-Insert(T, x)
insert x at the head of linked list T[h(x.key)]
Chained-Hash-Search(T, k)
search for an element with key k in linked list T[h(k)]
Chained-Hash-Delete(T, x)
delete x from the linked list T[h(x.key)]
If the element is comparable either numerically or lexically, and inserted into the list by maintaining the total order, it results in faster termination of the unsuccessful searches.
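A minimal separate-chaining table might look as follows; Python lists serve as the per-bucket chains and the built-in hash() stands in for the hash function, both of which are assumptions of this sketch (resizing is omitted).

```python
class ChainedHashTable:
    def __init__(self, num_buckets: int = 8):
        self.buckets = [[] for _ in range(num_buckets)]   # one chain per slot

    def _index(self, key) -> int:
        return hash(key) % len(self.buckets)

    def insert(self, key, value) -> None:
        chain = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(chain):
            if k == key:                  # key already present: overwrite its value
                chain[i] = (key, value)
                return
        chain.append((key, value))        # otherwise append a new node to the chain

    def search(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return None                       # unsuccessful search

    def delete(self, key) -> None:
        i = self._index(key)
        self.buckets[i] = [(k, v) for k, v in self.buckets[i] if k != key]

table = ChainedHashTable()
table.insert("a", 1)
table.insert("b", 2)
print(table.search("a"), table.search("missing"))
```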
Other data structures for separate chaining
If the keys are ordered, it could be efficient to use "self-organizing" concepts such as using a self-balancing binary search tree, through which the theoretical worst case could be brought down to $O(\log n)$, although it introduces additional complexities.
In dynamic perfect hashing, two-level hash tables are used to reduce the look-up complexity to be a guaranteed $O(1)$ in the worst case. In this technique, the buckets of $k$ entries are organized as perfect hash tables with $k^2$ slots providing constant worst-case lookup time, and low amortized time for insertion. A study shows array-based separate chaining to be 97% more performant when compared to the standard linked list method under heavy load.
Techniques such as using a fusion tree for each bucket also result in constant time for all operations with high probability.
Caching and locality of reference
The linked list of the separate chaining implementation may not be cache-conscious due to poor spatial locality—locality of reference—when the nodes of the linked list are scattered across memory, thus the list traversal during insert and search may entail CPU cache inefficiencies.
In cache-conscious variants of collision resolution through separate chaining, a dynamic array found to be more cache-friendly is used in the place where a linked list or self-balancing binary search trees is usually deployed, since the contiguous allocation pattern of the array could be exploited by hardware-cache prefetchers—such as translation lookaside buffer—resulting in reduced access time and memory consumption.
Open addressing
Open addressing is another collision resolution technique in which every entry record is stored in the bucket array itself, and the hash resolution is performed through probing. When a new entry has to be inserted, the buckets are examined, starting with the hashed-to slot and proceeding in some probe sequence, until an unoccupied slot is found. When searching for an entry, the buckets are scanned in the same sequence, until either the target record is found, or an unused array slot is found, which indicates an unsuccessful search.
Well-known probe sequences include:
Linear probing, in which the interval between probes is fixed (usually 1).
Quadratic probing, in which the interval between probes is increased by adding the successive outputs of a quadratic polynomial to the value given by the original hash computation.
Double hashing, in which the interval between probes is computed by a secondary hash function.
The performance of open addressing may be slower compared to separate chaining since the probe sequence increases when the load factor approaches 1. The probing results in an infinite loop if the load factor reaches 1, in the case of a completely filled table. The average cost of linear probing depends on the hash function's ability to distribute the elements uniformly throughout the table to avoid clustering, since formation of clusters would result in increased search time.
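A sketch of open addressing with linear probing is shown below; the fixed capacity and the use of Python's built-in hash() are assumptions of the sketch, and deletion (which normally requires tombstones) is omitted.

```python
class LinearProbingTable:
    EMPTY = object()                        # sentinel marking an unused slot

    def __init__(self, capacity: int = 16):
        self.keys = [self.EMPTY] * capacity
        self.values = [None] * capacity

    def _probe(self, key):
        # Yield slot indices starting at the hashed slot, stepping by 1 (linear probing).
        start = hash(key) % len(self.keys)
        for offset in range(len(self.keys)):
            yield (start + offset) % len(self.keys)

    def insert(self, key, value) -> None:
        for i in self._probe(key):
            if self.keys[i] is self.EMPTY or self.keys[i] == key:
                self.keys[i], self.values[i] = key, value
                return
        raise RuntimeError("table full; a real implementation would resize here")

    def search(self, key):
        for i in self._probe(key):
            if self.keys[i] is self.EMPTY:
                return None                 # empty slot reached: unsuccessful search
            if self.keys[i] == key:
                return self.values[i]
        return None

t = LinearProbingTable()
t.insert("x", 10)
print(t.search("x"), t.search("y"))
```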
Caching and locality of reference
Since the slots are located in successive locations, linear probing could lead to better utilization of CPU cache due to locality of references resulting in reduced memory latency.
Other collision resolution techniques based on open addressing
Coalesced hashing
Coalesced hashing is a hybrid of both separate chaining and open addressing in which the buckets or nodes link within the table. The algorithm is ideally suited for fixed memory allocation. The collision in coalesced hashing is resolved by identifying the largest-indexed empty slot on the hash table, then the colliding value is inserted into that slot. The bucket is also linked to the inserted node's slot which contains its colliding hash address.
Cuckoo hashing
Cuckoo hashing is a form of open addressing collision resolution technique which guarantees $O(1)$ worst-case lookup complexity and constant amortized time for insertions. The collision is resolved through maintaining two hash tables, each having its own hashing function, and a collided slot gets replaced with the given item, and the preoccupied element of the slot gets displaced into the other hash table. The process continues until every key has its own spot in the empty buckets of the tables; if the procedure enters into an infinite loop—which is identified through maintaining a threshold loop counter—both hash tables get rehashed with newer hash functions and the procedure continues.
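The displacement loop can be sketched as follows; the two hash functions (derived here from Python's hash()), the displacement threshold, and the stubbed rehash path are all assumptions of the sketch.

```python
class CuckooHashTable:
    def __init__(self, capacity: int = 11, max_displacements: int = 32):
        self.tables = [[None] * capacity, [None] * capacity]   # two tables
        self.max_displacements = max_displacements

    def _slot(self, which: int, key) -> int:
        # One hash function per table (an illustrative choice for this sketch).
        return hash((which, key)) % len(self.tables[which])

    def lookup(self, key):
        # At most two probes, giving O(1) worst-case lookups.
        for which in (0, 1):
            entry = self.tables[which][self._slot(which, key)]
            if entry is not None and entry[0] == key:
                return entry[1]
        return None

    def insert(self, key, value) -> None:
        # Duplicate keys are not handled in this sketch.
        entry, which = (key, value), 0
        for _ in range(self.max_displacements):
            slot = self._slot(which, entry[0])
            entry, self.tables[which][slot] = self.tables[which][slot], entry
            if entry is None:               # nothing was displaced: done
                return
            which = 1 - which               # reinsert the displaced entry in the other table
        raise RuntimeError("displacement loop detected; rehash with new hash functions")

t = CuckooHashTable()
t.insert("a", 1)
t.insert("b", 2)
print(t.lookup("a"), t.lookup("c"))
```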
Hopscotch hashing
Hopscotch hashing is an open addressing based algorithm which combines the elements of cuckoo hashing, linear probing and chaining through the notion of a neighbourhood of buckets—the subsequent buckets around any given occupied bucket, also called a "virtual" bucket. The algorithm is designed to deliver better performance when the load factor of the hash table grows beyond 90%; it also provides high throughput in concurrent settings, thus well suited for implementing a resizable concurrent hash table. The neighbourhood characteristic of hopscotch hashing guarantees the property that the cost of finding the desired item in any given bucket within the neighbourhood is very close to the cost of finding it in the bucket itself; the algorithm attempts to move an item back into its neighbourhood—with a possible cost involved in displacing other items.
Each bucket within the hash table includes an additional "hop-information"—an H-bit bit array for indicating the relative distance of the item which was originally hashed into the current virtual bucket within H − 1 entries. Let $k$ and $B_k$ be the key to be inserted and the bucket to which the key is hashed, respectively; several cases are involved in the insertion procedure such that the neighbourhood property of the algorithm is maintained: if $B_k$ is empty, the element is inserted, and the leftmost bit of the bitmap is set to 1; if not empty, linear probing is used for finding an empty slot in the table, the bitmap of the bucket gets updated followed by the insertion; if the empty slot is not within the range of the neighbourhood, i.e. H − 1, subsequent swap and hop-info bit array manipulation of each bucket is performed in accordance with its neighbourhood invariant properties.
Robin Hood hashing
Robin Hood hashing is an open addressing based collision resolution algorithm; the collisions are resolved through favouring the displacement of the element that is farthest—or longest probe sequence length (PSL)—from its "home location" i.e. the bucket to which the item was hashed into. It is named after Robin Hood, a mythical heroic outlaw who stole from the rich to give to the poor.
Although Robin Hood hashing does not change the theoretical search cost, it significantly affects the variance of the distribution of the items on the buckets, i.e. dealing with cluster formation in the hash table. Each node within the hash table that uses Robin Hood hashing should be augmented to store an extra PSL value. Let $x$ be the key to be inserted, $x.psl$ be the (incremental) PSL length of $x$, $T$ be the hash table and $j$ be the index; the insertion procedure is as follows:
If $x.psl \le T[j].psl$: the iteration goes into the next bucket without attempting an external probe.
If $x.psl > T[j].psl$: insert the item $x$ into the bucket $j$; swap $x$ with $T[j]$—let it be $x'$; continue the probe from the $(j+1)$th bucket to insert $x'$; repeat the procedure until every element is inserted.
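The displacement rule above can be sketched directly; the parallel-array layout, Python's built-in hash(), and the assumption that the table never fills up are all simplifications of this sketch.

```python
def robin_hood_insert(keys, values, psls, key, value):
    # keys/values/psls are parallel lists; empty slots hold None in `keys`.
    # Assumes the table has at least one free slot (no resizing in this sketch).
    m = len(keys)
    j = hash(key) % m
    psl = 0                                 # probe sequence length of the incoming item
    while True:
        if keys[j] is None:                 # empty slot: place the item here
            keys[j], values[j], psls[j] = key, value, psl
            return
        if psl > psls[j]:                   # resident is "richer": take its slot
            keys[j], key = key, keys[j]
            values[j], value = value, values[j]
            psls[j], psl = psl, psls[j]     # carry on inserting the displaced item
        j = (j + 1) % m
        psl += 1

m = 8
keys, values, psls = [None] * m, [None] * m, [None] * m
for k, v in [("a", 1), ("b", 2), ("c", 3)]:
    robin_hood_insert(keys, values, psls, k, v)
print(list(zip(keys, psls)))
```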
Dynamic resizing
Repeated insertions cause the number of entries in a hash table to grow, which consequently increases the load factor; to maintain the amortized performance of the lookup and insertion operations, a hash table is dynamically resized and the items of the tables are rehashed into the buckets of the new hash table, since the items cannot simply be copied over, as varying table sizes result in different hash values due to the modulo operation. If a hash table becomes "too empty" after deleting some elements, resizing may be performed to avoid excessive memory usage.
Resizing by moving all entries
Generally, a new hash table with a size double that of the original hash table gets allocated privately and every item in the original hash table gets moved to the newly allocated one by computing the hash values of the items followed by the insertion operation. Rehashing is simple, but computationally expensive.
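All-at-once rehashing for a separate-chaining table can be sketched in a few lines; the doubling factor and the use of Python's built-in hash() are the usual choices assumed here.

```python
def rehash_all(buckets):
    # Allocate a table twice as large and re-insert every (key, value) pair, since
    # each item's bucket index depends on the new table size (index = hash mod size).
    new_buckets = [[] for _ in range(2 * len(buckets))]
    for chain in buckets:
        for key, value in chain:
            new_buckets[hash(key) % len(new_buckets)].append((key, value))
    return new_buckets

buckets = [[("a", 1)], [("b", 2), ("c", 3)]]   # a tiny separate-chaining table
buckets = rehash_all(buckets)
print(len(buckets))                            # 4 buckets after doubling
```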
Alternatives to all-at-once rehashing
Some hash table implementations, notably in real-time systems, cannot pay the price of enlarging the hash table all at once, because it may interrupt time-critical operations. If one cannot avoid dynamic resizing, a solution is to perform the resizing gradually to avoid a storage blip—typically at 50% of the new table's size—during rehashing and to avoid memory fragmentation that triggers heap compaction due to deallocation of large memory blocks caused by the old hash table. In such a case, the rehashing operation is done incrementally through extending the prior memory block allocated for the old hash table such that the buckets of the hash table remain unaltered. A common approach for amortized rehashing involves maintaining two hash functions $h_{\text{old}}$ and $h_{\text{new}}$. The process of rehashing a bucket's items in accordance with the new hash function is termed cleaning, which is implemented through the command pattern by encapsulating operations such as add, get and delete through a lookup wrapper such that each element in the bucket gets rehashed; the procedure involves the following:
Clean the bucket indexed by $h_{\text{old}}(\text{key})$.
Clean the bucket indexed by $h_{\text{new}}(\text{key})$.
The command gets executed.
Linear hashing
Linear hashing is an implementation of the hash table which enables dynamic growths or shrinks of the table one bucket at a time.
Performance
The performance of a hash table is dependent on the hash function's ability to generate quasi-random numbers ($\sigma$) for entries in the hash table, where $K$, $n$ and $h$ denote the key, the number of buckets and the hash function, such that $\sigma = h(K) \bmod n$. If the hash function generates the same $\sigma$ for distinct keys ($K_1 \ne K_2$ with $h(K_1) = h(K_2)$), this results in a collision, which is dealt with in a variety of ways. The constant time complexity ($O(1)$) of the operation in a hash table is presupposed on the condition that the hash function doesn't generate colliding indices; thus, the performance of the hash table is directly proportional to the chosen hash function's ability to disperse the indices. However, construction of such a hash function is practically infeasible; that being so, implementations depend on case-specific collision resolution techniques in achieving higher performance.
The best performance is obtained when the hash function distributes the elements of the universe uniformly, and the elements stored in the table are drawn at random from the universe. In this case, in hashing with chaining, the expected time for a successful search is $\Theta(1 + \alpha/2)$, and the expected time for an unsuccessful search is $\Theta(1 + \alpha)$, where $\alpha$ is the load factor.
Applications
Associative arrays
Hash tables are commonly used to implement many types of in-memory tables. They are used to implement associative arrays.
Database indexing
Hash tables may also be used as disk-based data structures and database indices (such as in dbm) although B-trees are more popular in these applications.
Caches
Hash tables can be used to implement caches, auxiliary data tables that are used to speed up the access to data that is primarily stored in slower media. In this application, hash collisions can be handled by discarding one of the two colliding entries—usually erasing the old item that is currently stored in the table and overwriting it with the new item, so every item in the table has a unique hash value.
Sets
Hash tables can be used in the implementation of set data structure, which can store unique values without any particular order; set is typically used in testing the membership of a value in the collection, rather than element retrieval.
Transposition table
A transposition table is a complex hash table which stores information about each section that has been searched; it is commonly used in game-playing programs to avoid analysing the same position more than once.
Implementations
Many programming languages provide hash table functionality, either as built-in associative arrays or as standard library modules.
In JavaScript, an "object" is a mutable collection of key–value pairs (called "properties"), where each key is either a string or a guaranteed-unique "symbol"; any other value, when used as a key, is first coerced to a string. Aside from the seven "primitive" data types, every value in JavaScript is an object. ECMAScript 2015 also added the Map data structure, which accepts arbitrary values as keys.
C++11 includes unordered_map in its standard library for storing keys and values of arbitrary types.
Go's built-in map implements an associative array in the form of a built-in type, which is typically (but not guaranteed to be) implemented as a hash table.
Java programming language includes the HashSet, HashMap, LinkedHashSet, and LinkedHashMap generic collections.
Python's built-in dict implements a hash table in the form of a type.
Ruby's built-in Hash uses the open addressing model from Ruby 2.4 onwards.
Rust programming language includes HashMap, HashSet as part of the Rust Standard Library.
The .NET standard library includes HashSet and Dictionary, so they can be used from languages such as C# and VB.NET.
See also
Bloom filter
Consistent hashing
Distributed hash table
Extendible hashing
Hash array mapped trie
Lazy deletion
Pearson hashing
PhotoDNA
Rabin–Karp string search algorithm
Search data structure
Stable hashing
Succinct hash table
Hash function
Notes
References
Further reading
External links
NIST entry on hash tables
Open Data Structures – Chapter 5 – Hash Tables, Pat Morin
MIT's Introduction to Algorithms: Hashing 1 MIT OCW lecture Video
MIT's Introduction to Algorithms: Hashing 2 MIT OCW lecture Video
Category:Articles with example C code
Category:Hash-based data structures
Category:1953 in computing
Isaac Newton
https://en.wikipedia.org/wiki/Isaac_Newton
Sir Isaac Newton (25 December 1642 – 20 March 1726/27) was an English polymath active as a mathematician, physicist, astronomer, alchemist, theologian, author, and inventor. He was a key figure in the Scientific Revolution and the Enlightenment that followed. His book Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), first published in 1687, achieved the first great unification in physics and established classical mechanics. Newton also made seminal contributions to optics, and shares credit with German mathematician Gottfried Wilhelm Leibniz for formulating infinitesimal calculus, though he developed calculus years before Leibniz. Newton contributed to and refined the scientific method, and his work is considered the most influential in bringing forth modern science.
In the Principia, Newton formulated the laws of motion and universal gravitation that formed the dominant scientific viewpoint for centuries until it was superseded by the theory of relativity. He used his mathematical description of gravity to derive Kepler's laws of planetary motion, account for tides, the trajectories of comets, the precession of the equinoxes and other phenomena, eradicating doubt about the Solar System's heliocentricity. Newton solved the two-body problem and introduced the three-body problem. He demonstrated that the motion of objects on Earth and celestial bodies could be accounted for by the same principles. Newton's inference that the Earth is an oblate spheroid was later confirmed by the geodetic measurements of Alexis Clairaut, Charles Marie de La Condamine, and others, convincing most European scientists of the superiority of Newtonian mechanics over earlier systems. He was also the first to calculate the age of Earth by experiment, and described a precursor to the modern wind tunnel. Further, he was the first to provide a quantitative estimate of the solar mass.
Newton built the first reflecting telescope and developed a sophisticated theory of colour based on the observation that a prism separates white light into the colours of the visible spectrum. His work on light was collected in his book Opticks, published in 1704. He originated prisms as beam expanders and multiple-prism arrays, which would later become integral to the development of tunable lasers. He also anticipated wave–particle duality and was the first to theorise the Goos–Hänchen effect. He further formulated an empirical law of cooling, which was the first heat transfer formulation and serves as the formal basis of convective heat transfer, made the first theoretical calculation of the speed of sound, and introduced the notions of a Newtonian fluid and a black body. He was also the first to explain the Magnus effect. Furthermore, he made early studies into electricity. In addition to his creation of calculus, Newton's work on mathematics was extensive. He generalised the binomial theorem to any real number, introduced the Puiseux series, was the first to state Bézout's theorem, classified most of the cubic plane curves, contributed to the study of Cremona transformations, developed a method for approximating the roots of a function, and originated the Newton–Cotes formulas for numerical integration and the polar coordinate system in its analytic form. He also initiated the field of calculus of variations, devised an early form of regression analysis, and was a pioneer of vector analysis.
Newton was a fellow of Trinity College and the second Lucasian Professor of Mathematics at the University of Cambridge; he was appointed at the age of 26. He was a devout but unorthodox Christian who privately rejected the doctrine of the Trinity. He refused to take holy orders in the Church of England, unlike most members of the Cambridge faculty of the day. Beyond his work on the mathematical sciences, Newton dedicated much of his time to the study of alchemy and biblical chronology, but most of his work in those areas remained unpublished until long after his death. Politically and personally tied to the Whig party, Newton served two brief terms as Member of Parliament for the University of Cambridge, in 1689–1690 and 1701–1702. He was knighted by Queen Anne in 1705 and spent the last three decades of his life in London, serving as Warden (1696–1699) and Master (1699–1727) of the Royal Mint, in which he increased the accuracy and security of British coinage. He was the president of the Royal Society (1703–1727).
Early life
Isaac Newton was born (according to the Julian calendar in use in England at the time) on Christmas Day, 25 December 1642 (NS 4 January 1643) at Woolsthorpe Manor in Woolsthorpe-by-Colsterworth, a hamlet in the county of Lincolnshire. His father, also named Isaac Newton, had died three months before. Born prematurely, Newton was a small child; his mother Hannah Ayscough reportedly said that he could have fit inside a quart mug. When Newton was three, his mother remarried and went to live with her new husband, the Reverend Barnabas Smith, leaving her son in the care of his maternal grandmother, Margery Ayscough (née Blythe). Newton disliked his stepfather and maintained some enmity towards his mother for marrying him, as revealed by this entry in a list of sins committed up to the age of 19: "Threatening my father and mother Smith to burn them and the house over them." Newton's mother had three children (Mary, Benjamin, and Hannah) from her second marriage.
The King's School
From the age of about twelve until he was seventeen, Newton was educated at The King's School in Grantham, which taught Latin and Ancient Greek and probably imparted a significant foundation of mathematics. He was removed from school by his mother and returned to Woolsthorpe-by-Colsterworth by October 1659. His mother, widowed for the second time, attempted to make him a farmer, an occupation he hated. Henry Stokes, master at The King's School, and Reverend William Ayscough (Newton's Uncle) persuaded his mother to send him back to school. Motivated partly by a desire for revenge against a schoolyard bully, he became the top-ranked student, distinguishing himself mainly by building sundials and models of windmills.
University of Cambridge
In June 1661, Newton was admitted to Trinity College at the University of Cambridge. His uncle the Reverend William Ayscough, who had studied at Cambridge, recommended him to the university. At Cambridge, Newton started as a subsizar, paying his way by performing valet duties until he was awarded a scholarship in 1664, which covered his university costs for four more years until the completion of his MA. At the time, Cambridge's teachings were based on those of Aristotle, whom Newton read along with then more modern philosophers, including René Descartes and astronomers such as Galileo Galilei and Thomas Street. He set down in his notebook a series of "Quaestiones" about mechanical philosophy as he found it. In 1665, he discovered the generalised binomial theorem and began to develop a mathematical theory that later became calculus. Soon after Newton obtained his BA degree at Cambridge in August 1665, the university temporarily closed as a precaution against the Great Plague.
Although he had been undistinguished as a Cambridge student, his private studies and the years following his bachelor's degree have been described as "the richest and most productive ever experienced by a scientist". The next two years alone saw the development of theories on calculus, optics, and the law of gravitation, at his home in Woolsthorpe. The physicist Louis Trenchard More suggested that "There are no other examples of achievement in the history of science to compare with that of Newton during those two golden years."
Newton has been described as an "exceptionally organized" person when it came to note-taking, further dog-earing pages he saw as important. Furthermore, Newton's "indexes look like present-day indexes: They are alphabetical, by topic." His books showed his interests to be wide-ranging, with Newton himself described as a "Janusian thinker, someone who could mix and combine seemingly disparate fields to stimulate creative breakthroughs." William Stukeley wrote that Newton "was not only very expert with his mechanical tools, but he was equally so with his pen", and further illustrated how Newton's lodging room wall at Grantham was covered in drawings of "birds, beasts, men, ships & mathematical schemes. & very well designed". He also noted his "uncommon skill & industry in mechanical works".
In April 1667, Newton returned to the University of Cambridge, and in October he was elected as a fellow of Trinity. Fellows were required to take holy orders and be ordained as Anglican priests, although this was not enforced in the Restoration years, and an assertion of conformity to the Church of England was sufficient. He made the commitment that "I will either set Theology as the object of my studies and will take holy orders when the time prescribed by these statutes [7 years] arrives, or I will resign from the college." Up until this point he had not thought much about religion and had twice signed his agreement to the Thirty-nine Articles, the basis of Church of England doctrine. By 1675 the issue could not be avoided, and his unconventional views stood in the way.
His academic work impressed the Lucasian Professor Isaac Barrow, who was anxious to develop his own religious and administrative potential (he became master of Trinity College two years later); in 1669, Newton succeeded him, only one year after receiving his MA. Newton argued that this should exempt him from the ordination requirement, and King Charles II, whose permission was needed, accepted this argument; thus, a conflict between Newton's religious views and Anglican orthodoxy was averted. He was appointed at the age of 26.
Accomplished as Newton was as a theoretician, he was less effective as a teacher, as his classes were almost always empty. Humphrey Newton, his sizar (assistant), noted that Newton would arrive on time and, if the room was empty, he would reduce his lecture time in half from 30 to 15 minutes, talk to the walls, then retreat to his experiments, thus fulfilling his contractual obligations. For his part, Newton enjoyed neither teaching nor students. Over his career he was assigned only three students to tutor, and none were noteworthy.
Newton was elected a Fellow of the Royal Society (FRS) in 1672.
Revision of Geographia Generalis
The Lucasian Professor of Mathematics at Cambridge position included the responsibility of instructing geography. In 1672, and again in 1681, Newton published a revised, corrected, and amended edition of the Geographia Generalis, a geography textbook first published in 1650 by the then-deceased Bernhardus Varenius. In the Geographia Generalis, Varenius attempted to create a theoretical foundation linking scientific principles to classical concepts in geography, and considered geography to be a mix between science and pure mathematics applied to quantifying features of the Earth. While it is unclear if Newton ever lectured in geography, the 1733 Dugdale and Shaw English translation of the book stated Newton published the book to be read by students while he lectured on the subject. The Geographia Generalis is viewed by some as the dividing line between ancient and modern traditions in the history of geography, and Newton's involvement in the subsequent editions is thought to be a large part of the reason for this enduring legacy.
Scientific studies
Mathematics
Newton's work has been said "to distinctly advance every branch of mathematics then studied". His work on calculus, usually referred to as fluxions, began in 1664, and by 20 May 1665 as seen in a manuscript, Newton "had already developed the calculus to the point where he could compute the tangent and the curvature at any point of a continuous curve". Another manuscript of October 1666, is now published among Newton's mathematical papers. He recorded a definitive tract of calculus in what is called his "Waste Book". Newton was self-taught in mathematics and did his research without help, as according to Richard S. Westfall, "By every indication we have, Newton carried out his education in mathematics and his program of research entirely on his own." His work De analysi per aequationes numero terminorum infinitas, sent by Isaac Barrow to John Collins in June 1669, was identified by Barrow in a letter sent to Collins that August as the work "of an extraordinary genius and proficiency in these things". Newton later became involved in a dispute with the German polymath Gottfried Wilhelm Leibniz over priority in the development of calculus. Both are now credited with independently developing calculus, though with very different mathematical notations. However, it is established that Newton came to develop calculus much earlier than Leibniz. The notation of Leibniz is recognised as the more convenient notation, being adopted by continental European mathematicians, and after 1820, by British mathematicians.
Historian of science A. Rupert Hall notes that while Leibniz deserves credit for his independent formulation of calculus, Newton was undoubtedly the first to develop it. Hall further notes that in Principia, Newton was able to "formulate and resolve problems by the integration of differential equations" and "in fact, he anticipated in his book many results that later exponents of the calculus regarded as their own novel achievements." Hall notes Newton's rapid development of calculus in comparison to his contemporaries, stating that Newton "well before 1690 . . . had reached roughly the point in the development of the calculus that Leibniz, the two Bernoullis, L’Hospital, Hermann and others had by joint efforts reached in print by the early 1700s".
Despite the convenience of Leibniz's notation, it has been noted that Newton's notation could also have developed multivariate techniques, with his dot notation still widely used in physics. Some academics have noted the richness and depth of Newton's work, such as physicist Roger Penrose, stating "in most cases Newton’s geometrical methods are not only more concise and elegant, they reveal deeper principles than would become evident by the use of those formal methods of calculus that nowadays would seem more direct." Mathematician Vladimir Arnold states "Comparing the texts of Newton with the comments of his successors, it is striking how Newton’s original presentation is more modern, more understandable and richer in ideas than the translation due to commentators of his geometrical ideas into the formal language of the calculus of Leibniz."
His work extensively uses calculus in geometric form based on limiting values of the ratios of vanishingly small quantities: in the Principia itself, Newton gave demonstration of this under the name of "the method of first and last ratios"Newton, Principia, 1729 English translation, p. 41 . and explained why he put his expositions in this form,Newton, Principia, 1729 English translation, p. 54 . remarking also that "hereby the same thing is performed as by the method of indivisibles." Because of this, the Principia has been called "a book dense with the theory and application of the infinitesimal calculus" in modern times and in Newton's time "nearly all of it is of this calculus."In the preface to the Marquis de L'Hospital's Analyse des Infiniment Petits (Paris, 1696). His use of methods involving "one or more orders of the infinitesimally small" is present in his De motu corporum in gyrum of 1684Starting with De motu corporum in gyrum, see also (Latin) Theorem 1 . and in his papers on motion "during the two decades preceding 1684".Whiteside, D.T., ed. (1970). "The Mathematical principles underlying Newton's Principia Mathematica". Journal for the History of Astronomy. 1. Cambridge University Press. pp. 116–138.
Newton had been reluctant to publish his calculus because he feared controversy and criticism. He was close to the Swiss mathematician Nicolas Fatio de Duillier. In 1691, Duillier started to write a new version of Newton's Principia, and corresponded with Leibniz. In 1693, the relationship between Duillier and Newton deteriorated and the book was never completed. Starting in 1699, Duillier accused Leibniz of plagiarism. Mathematician John Keill accused Leibniz of plagiarism in 1708 in the Royal Society journal, thereby deteriorating the situation even more. The dispute then broke out in full force in 1711 when the Royal Society proclaimed in a study that it was Newton who was the true discoverer and labelled Leibniz a fraud; it was later found that Newton wrote the study's concluding remarks on Leibniz. Thus began the bitter controversy which marred the lives of both men until Leibniz's death in 1716.
Newton's first major mathematical discovery was the generalised binomial theorem, valid for any exponent, in 1664-5, which has been called "one of the most powerful and significant in the whole of mathematics." He discovered Newton's identities (probably without knowing of earlier work by Albert Girard in 1629), Newton's method, the Newton polygon, and classified cubic plane curves (polynomials of degree three in two variables). Newton is also a founder of the theory of Cremona transformations, and he made substantial contributions to the theory of finite differences, with Newton regarded as "the single most significant contributor to finite difference interpolation", with many formulas created by Newton. He was the first to state Bézout's theorem, and was also the first to use fractional indices and to employ coordinate geometry to derive solutions to Diophantine equations. He approximated partial sums of the harmonic series by logarithms (a precursor to Euler's summation formula) and was the first to use power series with confidence and to revert power series. He introduced the Puiseux series. He also provided the earliest explicit formulation of the general Taylor series, which appeared in a 1691-1692 draft of his De Quadratura Curvarum. He originated the Newton–Cotes formulas for numerical integration. Newton's work on infinite series was inspired by Simon Stevin's decimals. He also initiated the field of calculus of variations, being the first to clearly formulate and correctly solve a problem in the field, that being Newton's minimal resistance problem, which he posed and solved in 1685, and then later published in Principia in 1687. It is regarded as one of the most difficult problems tackled by variational methods prior to the twentieth century. He then used calculus of variations in his solving of the brachistochrone curve problem in 1697, which was posed by Johann Bernoulli in 1696, and which he famously solved in a night, thus pioneering the field with his work on the two problems. He was also a pioneer of vector analysis, as he demonstrated how to apply the parallelogram law for adding various physical quantities and realised that these quantities could be broken down into components in any direction. He is credited with introducing the notion of the vector in his Principia, by proposing that physical quantities like velocity, acceleration, momentum, and force be treated as directed quantities, thereby making Newton the "true originator of this mathematical object".
Newton was the first to develop a system of polar coordinates in a strictly analytic sense, with his work in relation to the topic being superior, in both generality and flexibility, to any other during his lifetime. His 1671 Method of Fluxions work preceded the earliest publication on the subject by Jacob Bernoulli in 1691. He is also credited as the originator of bipolar coordinates in a strict sense.
A private manuscript of Newton's which dates to 1664-1666, contains what is the earliest known problem in the field of geometric probability. The problem dealt with the likelihood of a negligible ball landing in one of two unequal sectors of a circle. In analyzing this problem, he proposed substituting the enumeration of occurrences with their quantitative assessment, and replacing the estimation of an area's proportion with a tally of points, which has led to him being credited as founding stereology.
Newton was responsible for the modern origin of Gaussian elimination in Europe. In 1669 to 1670, Newton wrote that all the algebra books known to him lacked a lesson for solving simultaneous equations, which he then supplied. His notes lay unpublished for decades, but once released, his textbook became the most influential of its kind, establishing the method of substitution and the key terminology of 'extermination' (now known as elimination).
In the 1660s and 1670s, Newton found 72 of the 78 "species" of cubic curves and categorised them into four types, systemising his results in later publications. However, a 1690s manuscript later analyzed showed that Newton had identified all 78 cubic curves, but chose not to publish the remaining six for unknown reasons. In 1717, and probably with Newton's help, James Stirling proved that every cubic was one of these four types. He claimed that the four types could be obtained by plane projection from one of them, and this was proved in 1731, four years after his death.
Newton briefly dabbled in probability. In letters with Samuel Pepys in 1693, they corresponded over the Newton–Pepys problem, which concerned the probability of throwing sixes with a certain number of dice. For it, outcome A was that six dice are tossed with at least one six appearing, outcome B that twelve dice are tossed with at least two sixes appearing, and outcome C that eighteen dice are tossed with at least three sixes appearing. Newton solved it correctly, choosing outcome A as the most likely; Pepys incorrectly chose outcome C. However, Newton's intuitive explanation for the problem was flawed.
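The three probabilities follow directly from the binomial distribution and confirm Newton's choice of outcome A; the short sketch below is a modern illustration, not Newton's own method.

```python
from math import comb

def prob_at_least(k_sixes: int, dice: int) -> float:
    # P(at least k sixes when `dice` fair dice are thrown), via the binomial distribution.
    p = 1 / 6
    return sum(comb(dice, k) * p**k * (1 - p)**(dice - k)
               for k in range(k_sixes, dice + 1))

print(f"A (at least 1 six in 6 dice):    {prob_at_least(1, 6):.4f}")    # ~0.6651
print(f"B (at least 2 sixes in 12 dice): {prob_at_least(2, 12):.4f}")   # ~0.6187
print(f"C (at least 3 sixes in 18 dice): {prob_at_least(3, 18):.4f}")   # ~0.5973
```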
Optics
In 1666, Newton observed that the spectrum of colours exiting a prism in the position of minimum deviation is oblong, even when the light ray entering the prism is circular, which is to say, the prism refracts different colours by different angles. This led him to conclude that colour is a property intrinsic to light – a point which had, until then, been a matter of debate.
From 1670 to 1672, Newton lectured on optics. During this period he investigated the refraction of light, demonstrating that the multicoloured image produced by a prism, which he named a spectrum, could be recomposed into white light by a lens and a second prism. Modern scholarship has revealed that Newton's analysis and resynthesis of white light owes a debt to corpuscular alchemy.William R. Newman, "Newton's Early Optical Theory and its Debt to Chymistry", in Danielle Jacquart and Michel Hochmann, eds., Lumière et vision dans les sciences et dans les arts (Geneva: Droz, 2010), pp. 283–307. (PDF)
In his work on Newton's rings in 1671, he used a method that was unprecedented in the 17th century, as "he averaged all of the differences, and he then calculated the difference between the average and the value for the first ring", in effect introducing a now standard method for reducing noise in measurements, and which does not appear elsewhere at the time. He extended his "error-slaying method" to studies of equinoxes in 1700, which was described as an "altogether unprecedented method" but differed in that here "Newton required good values for each of the original equinoctial times, and so he devised a method that allowed them to, as it were, self-correct." Newton wrote down the first of the two 'normal equations' known from ordinary least squares, and devised an early form of regression analysis, as he averaged a set of data, 50 years before Tobias Mayer and he also summed the residuals to zero, forcing the regression line through the average point. He differentiated between two uneven sets of data and may have considered an optimal solution regarding bias, although not in terms of effectiveness.
He showed that coloured light does not change its properties by separating out a coloured beam and shining it on various objects, and that regardless of whether reflected, scattered, or transmitted, the light remains the same colour. Thus, he observed that colour is the result of objects interacting with already-coloured light rather than objects generating the colour themselves. This is known as Newton's theory of colour. His 1672 paper on the nature of white light and colours forms the basis for all work that followed on colour and colour vision.
From this work, he concluded that the lens of any refracting telescope would suffer from the dispersion of light into colours (chromatic aberration). As a proof of the concept, he constructed a telescope using reflective mirrors instead of lenses as the objective to bypass that problem. Building the design, the first known functional reflecting telescope, today known as a Newtonian telescope, involved solving the problem of a suitable mirror material and shaping technique. Previous designs for the reflecting telescope were never put into practice or ended in failure, thereby making Newton's telescope the first one truly created. Newton ground his own mirrors out of a custom composition of highly reflective speculum metal, using Newton's rings to judge the quality of the optics for his telescopes. In late 1668, he was able to produce this first reflecting telescope. It was about eight inches long and it gave a clearer and larger image. In 1671, he was asked for a demonstration of his reflecting telescope by the Royal Society. Their interest encouraged him to publish his notes, Of Colours, which he later expanded into the work Opticks. When Robert Hooke criticised some of Newton's ideas, Newton was so offended that he withdrew from public debate. However, the two had brief exchanges in 1679–80, when Hooke, who had been appointed Secretary of the Royal Society, opened a correspondence intended to elicit contributions from Newton to Royal Society transactions, which had the effect of stimulating Newton to work out a proof that the elliptical form of planetary orbits would result from a centripetal force inversely proportional to the square of the radius vector.
In astronomy, Newton is further credited with the realization that high-altitude sites are superior for observation because they provide the "most serene and quiet Air" above the dense, turbulent atmosphere ("grosser Clouds"), thereby reducing star twinkling.
Newton argued that light is composed of particles or corpuscles, which were refracted by accelerating into a denser medium. He verged on soundlike waves to explain the repeated pattern of reflection and transmission by thin films (Opticks Bk. II, Props. 12), but still retained his theory of 'fits' that disposed corpuscles to be reflected or transmitted (Props. 13). Physicists later favoured a purely wavelike explanation of light to account for the interference patterns and the general phenomenon of diffraction. Despite his known preference for a particle theory, Newton in fact noted that light had both particle-like and wave-like properties in Opticks, and was the first to attempt to reconcile the two theories, thereby anticipating later developments of wave-particle duality, which is the modern understanding of light. Physicist David Finkelstein called him "the first quantum physicist" as a result.
In his Hypothesis of Light of 1675, Newton posited the existence of the ether to transmit forces between particles. The contact with the Cambridge Platonist philosopher Henry More revived his interest in alchemy. He replaced the ether with occult forces based on Hermetic ideas of attraction and repulsion between particles. His contributions to science cannot be isolated from his interest in alchemy. This was at a time when there was no clear distinction between alchemy and science.
Newton contributed to the study of astigmatism by helping to erect its mathematical foundation through his discovery that when oblique pencils of light undergo refraction, two distinct image points are created. This would later stimulate the work of Thomas Young.
In 1704, Newton published Opticks, in which he expounded his corpuscular theory of light, and included a set of queries at the end, which were posed as unanswered questions and positive assertions. In line with his corpuscle theory, he thought that normal matter was made of grosser corpuscles and speculated that, through a kind of alchemical transmutation, gross bodies and light might be converted into one another, with Query 30 asking "Are not gross Bodies and Light convertible into one another, and may not Bodies receive much of their Activity from the Particles of Light which enter their Composition?" Query 6 introduced the concept of a black body.
In 1699, Newton presented to the Royal Society an improved version of the reflecting quadrant, or octant, that he had previously designed. His design was probably built as early as 1677. It is notable for being the first quadrant to use two mirrors, which greatly improved the accuracy of measurements since it provided a stable view of both the horizon and the celestial body at the same time. His quadrant was built but appears not to have survived to the present. John Hadley would later construct his own double-reflecting quadrant that was nearly identical to the one invented by Newton. However, Hadley likely did not know of Newton's original invention, causing later confusion regarding originality.
In 1704, Newton constructed and presented a burning mirror to the Royal Society. It consisted of seven concave glass mirrors, each about one foot in diameter. It is estimated that it reached a maximum possible radiant intensity of 460 W cm⁻², which has been described as "certainly brighter thermally than a thousand Suns (1,000 × 0.065 W cm⁻²)" based on estimating that the intensity of the Sun's radiation in London in May of 1704 was 0.065 W cm⁻². As a result of the maximum radiant intensity possibly achieved with his mirror he "may have produced the greatest intensity of radiation brought about by human agency before the arrival of nuclear weapons in 1945." David Gregory reported that it caused metals to smoke, boiled gold and brought about the vitrification of slate. William Derham thought it to be the most powerful burning mirror in Europe at the time.
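Taking the figures quoted above at face value, the mirror's estimated peak intensity corresponds to roughly

$460 \,/\, 0.065 \approx 7{,}000$

times the assumed direct solar intensity, comfortably above the "thousand Suns" benchmark of $1{,}000 \times 0.065 = 65\ \mathrm{W\,cm^{-2}}$.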
Newton also made early studies into electricity, as he constructed a primitive form of a frictional electrostatic generator using a glass globe,Opticks, 2nd Ed 1706. Query 8. the first to do so with glass instead of sulfur, which had previously been used by scientists such as Otto von Guericke to construct their globes. He detailed an experiment in 1675 that showed when one side of a glass sheet is rubbed to create an electric charge, it attracts "light bodies" to the opposite side. He interpreted this as evidence that electric forces could pass through glass. Newton also reported to the Royal Society that glass was effective for generating static electricity, classifying it as a "good electric" decades before this property was widely known. His idea in Opticks that optical reflection and refraction arise from interactions across the entire surface is seen as a precursor to the field theory of the electric force. He also recognised the crucial role of electricity in nature, believing it to be responsible for various phenomena, including the emission, reflection, refraction, inflection, and heating effects of light. He proposed that electricity was involved in the sensations experienced by the human body, affecting everything from muscle movement to brain function. His theory of nervous transmission had an immense influence on the work of Luigi Galvani, as Newton's theory focused on electricity as a possible mediator of nervous transmission, which went against the prevailing Cartesian hydraulic theory of the time. He was also the first to present a clear and balanced theory for how both electrical and chemical mechanisms could work together in the nervous system. Newton's mass-dispersion model, ancestral to the successful use of the least action principle, provided a credible framework for understanding refraction, particularly in its approach to refraction in terms of momentum.
In Opticks, he was the first to show a diagram using a prism as a beam expander, and also the use of multiple-prism arrays. Some 278 years after Newton's discussion, multiple-prism beam expanders became central to the development of narrow-linewidth tunable lasers. The use of these prismatic beam expanders led to the multiple-prism dispersion theory.
Newton was also the first to propose the Goos–Hänchen effect, an optical phenomenon in which linearly polarised light undergoes a small lateral shift when totally internally reflected. He provided both experimental and theoretical explanations for the effect using a mechanical model.
Science came to realise the difference between perception of colour and mathematisable optics. The German poet and scientist, Johann Wolfgang von Goethe, could not shake the Newtonian foundation but "one hole Goethe did find in Newton's armour, ... Newton had committed himself to the doctrine that refraction without colour was impossible. He, therefore, thought that the object-glasses of telescopes must forever remain imperfect, achromatism and refraction being incompatible. This inference was proved by Dollond to be wrong."Tyndall, John. (1880). Popular Science Monthly Volume 17, July. s:Popular Science Monthly/Volume 17/July 1880/Goethe's Farbenlehre: Theory of Colors II
Gravity
Newton had been developing his theory of gravitation as far back as 1665. In 1679, he returned to his work on celestial mechanics by considering gravitation and its effect on the orbits of planets with reference to Kepler's laws of planetary motion. Newton's reawakening interest in astronomical matters received further stimulus from the appearance of a comet in the winter of 1680–1681, on which he corresponded with John Flamsteed. After the exchanges with Hooke, Newton worked out a proof that the elliptical form of planetary orbits would result from a centripetal force inversely proportional to the square of the radius vector. He shared his results with Edmond Halley and the Royal Society in De motu corporum in gyrum, a tract written on about nine sheets which was copied into the Royal Society's Register Book in December 1684.Whiteside, D. T., ed. (1974). Mathematical Papers of Isaac Newton, 1684–1691. 6. Cambridge University Press. p. 30. This tract contained the nucleus that Newton developed and expanded to form the Principia.
The Principia was published on 5 July 1687 with encouragement and financial help from Halley. In this work, Newton stated the three universal laws of motion. Together, these laws describe the relationship between any object, the forces acting upon it and the resulting motion, laying the foundation for classical mechanics. They contributed to numerous advances during the Industrial Revolution and were not improved upon for more than 200 years. Many of these advances still underpin non-relativistic technologies today. Newton used the Latin word gravitas (weight) for the effect that would become known as gravity, and defined the law of universal gravitation. His work achieved the first great unification in physics. He solved the two-body problem, and introduced the three-body problem.
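In modern notation (Newton's own presentation was largely geometric, and the gravitational constant $G$ was only introduced and measured much later), the law of universal gravitation and the second law of motion are commonly written as

$F = G\,\frac{m_1 m_2}{r^2} \qquad \text{and} \qquad F = \frac{d(mv)}{dt} = ma \ \text{(for constant mass)},$

where $m_1$ and $m_2$ are two masses separated by a distance $r$.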
In the same work, Newton presented a calculus-like method of geometrical analysis using 'first and last ratios', gave the first analytical determination (based on Boyle's law) of the speed of sound in air, inferred the oblateness of Earth's spheroidal figure, accounted for the precession of the equinoxes as a result of the Moon's gravitational attraction on the Earth's oblateness, initiated the gravitational study of the irregularities in the motion of the Moon, provided a theory for the determination of the orbits of comets, and much more. Newton's biographer David Brewster reported that the complexity of applying his theory of gravity to the motion of the moon was so great it affected Newton's health: "[H]e was deprived of his appetite and sleep" during his work on the problem in 1692–93, and told the astronomer John Machin that "his head never ached but when he was studying the subject". According to Brewster, Halley also told John Conduitt that when pressed to complete his analysis Newton "always replied that it made his head ache, and kept him awake so often, that he would think of it no more". [Emphasis in original] He provided the first calculation of the age of Earth by experiment, and also described a precursor to the modern wind tunnel.
In Principia, Newton provided the first quantitative estimate of the solar mass, with later editions incorporating more accurate measurements, bringing his Sun-to-Earth mass ratio calculation close to the modern value. He further determined the masses and densities of Jupiter and Saturn, putting all four celestial bodies (Sun, Earth, Jupiter, and Saturn) on the same comparative scale. This achievement by Newton has been called "a supreme expression of the doctrine that one set of physical concepts and principles applies to all bodies on earth, the earth itself, and bodies anywhere throughout the universe".
Newton made clear his heliocentric view of the Solar System—developed in a somewhat modern way because already in the mid-1680s he recognised the "deviation of the Sun" from the centre of gravity of the Solar System. For Newton, it was not precisely the centre of the Sun or any other body that could be considered at rest, but rather "the common centre of gravity of the Earth, the Sun and all the Planets is to be esteem'd the Centre of the World", and this centre of gravity "either is at rest or moves uniformly forward in a right line". (Newton adopted the "at rest" alternative in view of common consent that the centre, wherever it was, was at rest.)Text quotations are from 1729 translation of Newton's Principia, Book 3 (1729 vol.2) at pp. 232–33 [233].
Newton was criticised for introducing "occult agencies" into science because of his postulate of an invisible force able to act over vast distances.Edelglass et al., Matter and Mind, p. 54 Later, in the second edition of the Principia (1713), Newton firmly rejected such criticisms in a concluding General Scholium, writing that it was enough that the phenomena implied a gravitational attraction, as they did; but they did not so far indicate its cause, and it was both unnecessary and improper to frame hypotheses of things that were not implied by the phenomena. (Here he used what became his famous expression Hypotheses non fingo.On the meaning and origins of this expression, see Kirsten Walsh, Does Newton feign an hypothesis?, Early Modern Experimental Philosophy, 18 October 2010.)
With the Principia, Newton became internationally recognised. He acquired a circle of admirers, including the Swiss-born mathematician Nicolas Fatio de Duillier.
Other significant work
Newton studied heat and energy flow, formulating an empirical law of cooling which states that the rate at which an object cools is proportional to the temperature difference between the object and its surrounding environment. First formulated in 1701, it was the earliest formulation of heat transfer and serves as the formal basis of convective heat transfer; it was later incorporated by Joseph Fourier into his work.
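In modern notation (a later formalisation rather than Newton's original wording), the law of cooling is written as a simple differential equation with an exponential solution:

$\frac{dT}{dt} = -k\,(T - T_{\mathrm{env}}), \qquad T(t) = T_{\mathrm{env}} + (T_0 - T_{\mathrm{env}})\,e^{-kt},$

where $T$ is the object's temperature, $T_{\mathrm{env}}$ the temperature of the surroundings, $T_0$ the initial temperature, and $k$ a positive constant characterising the rate of heat loss.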
Newton introduced the notion of a Newtonian fluid with his formulation of his law of viscosity in Principia in 1687. It states that the shear stress between two fluid layers is directly proportional to the velocity gradient between them. He also discussed the circular motion of fluids and was the first to discuss Couette flow.
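In modern notation (a later formalisation of the idea stated in the Principia), this proportionality is written as

$\tau = \mu\,\frac{du}{dy},$

where $\tau$ is the shear stress between adjacent fluid layers, $\mu$ the dynamic viscosity, and $du/dy$ the velocity gradient perpendicular to the flow; fluids obeying this linear relation are now called Newtonian fluids.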
Newton was the first to observe and qualitatively describe what would much later be formalised as the Magnus effect, nearly two centuries before Heinrich Magnus's experimental studies. In a 1672 text, Newton recounted watching tennis players at a Cambridge college and noted how a tennis ball struck obliquely with a spinning motion curved in flight. He explained that the ball's combination of circular and progressive motion caused one side to "press and beat the contiguous air more violently" than the other, thereby producing "a reluctancy and reaction of the air proportionably greater", an astute observation of the pressure differential responsible for lateral deflection.Newton I. 40. Newton to Oldenburg, 6 February 1671/2. In: Turnbull HW, ed. The Correspondence of Isaac Newton. Cambridge University Press; 1959:92–107.
Philosophy of science
Newton's role as a philosopher was deeply influential, and understanding the philosophical landscape of the late seventeenth and early eighteenth centuries requires recognising his central contributions. Historically, Newton was widely regarded as a core figure in modern philosophy. For example, Johann Jakob Brucker’s Historia Critica Philosophiae (1744), considered the first comprehensive modern history of philosophy, prominently positioned Newton as a central philosophical figure. This portrayal notably shaped the perception of modern philosophy among leading Enlightenment intellectuals, including figures such as Denis Diderot, Jean le Rond d'Alembert, and Immanuel Kant.Andrew Janiak, "Newton's Philosophy," Stanford Encyclopedia of Philosophy (2023). https://plato.stanford.edu/entries/newton-philosophy/
Starting with the second edition of his Principia, Newton included a final section on the philosophy of science, or method. It was here that he wrote his famous line, in Latin, "hypotheses non fingo", which can be translated as "I don't make hypotheses" (the direct translation of "fingo" is "frame", but in context he was advocating against the use of hypotheses in science).
Newton's rejection of hypotheses ("hypotheses non fingo") emphasised that he refused to speculate on causes not directly supported by phenomena. Harper explains that Newton's experimental philosophy involves clearly distinguishing hypotheses (unverified conjectures) from propositions established through phenomena and generalised by induction. According to Newton, true scientific inquiry requires grounding explanations strictly on observable data rather than speculative reasoning. Thus, for Newton, proposing hypotheses without empirical backing undermines the integrity of experimental philosophy, as hypotheses should serve merely as tentative suggestions subordinate to observational evidence.
In Latin, he wrote: "Hypotheses non fingo. Quicquid enim ex phaenomenis non deducitur, hypothesis vocanda est; et hypotheses seu metaphysicae, seu physicae, seu qualitatum occultarum, seu mechanicae, in philosophia experimentali locum non habent."

This is translated as: "I frame no hypotheses; for whatever is not deduced from the phenomena is to be called a hypothesis, and hypotheses, whether metaphysical or physical, whether of occult qualities or mechanical, have no place in experimental philosophy."
Newton contributed to and refined the scientific method. His work on the properties of light in the 1670s displayed his rigorous method: he conducted experiments, took detailed notes, made measurements, carried out further experiments that grew out of the initial ones, formulated a theory, devised more experiments to test it, and finally described the entire process so that other scientists could replicate every step.
In his 1687 Principia, he outlined four rules: the first is, 'Admit no more causes of natural things than are both true and sufficient to explain their appearances'; the second is, 'To the same natural effect, assign the same causes'; the third is, 'Qualities of bodies, which are found to belong to all bodies within experiments, are to be esteemed universal'; and lastly, 'Propositions collected from observation of phenomena should be viewed as accurate or very nearly true until contradicted by other phenomena'. These rules have become the basis of the modern approaches to science.
Newton's scientific method went beyond simple prediction in three critical ways, thereby enriching the basic hypothetico-deductive model. First, it established a richer ideal of empirical success, requiring phenomena to accurately measure theoretical parameters. Second, it transformed theoretical questions into ones empirically solvable by measurement. Third, it used provisionally accepted propositions to guide research, enabling the method of successive approximations where deviations drive the creation of more accurate models. This robust method of theory-mediated measurements was adopted by his successors for extensions of his theory to astronomy and remains a foundational element in modern physics.
Later life
Royal Mint
In the 1690s, Newton wrote a number of religious tracts dealing with the literal and symbolic interpretation of the Bible. A manuscript Newton sent to John Locke, in which he disputed the authenticity of 1 John 5:7 (the Johannine Comma) and its fidelity to the original manuscripts of the New Testament, remained unpublished until 1785.; and John C. Attig, John Locke Bibliography — Chapter 5, Religion, 1751–1900
Newton was also a member of the Parliament of England for Cambridge University in 1689 and 1701, but according to some accounts his only comments were to complain about a cold draught in the chamber and request that the window be closed. He was, however, noted by Cambridge diarist Abraham de la Pryme to have rebuked students who were frightening locals by claiming that a house was haunted.
Newton moved to London to take up the post of warden of the Royal Mint during the reign of King William III in 1696, a position that he had obtained through the patronage of Charles Montagu, 1st Earl of Halifax, then Chancellor of the Exchequer. He took charge of England's great recoining, fought Lord Lucas, Governor of the Tower, and secured the job of deputy comptroller of the temporary Chester branch for Edmond Halley. Newton became perhaps the best-known Master of the Mint upon the death of Thomas Neale in 1699, a position he held for the last 30 years of his life. These appointments were intended as sinecures, but Newton took them seriously. He retired from his Cambridge duties in 1701, and exercised his authority to reform the currency and punish clippers and counterfeiters.
As Warden, and afterwards as Master, of the Royal Mint, Newton estimated that 20 percent of the coins taken in during the Great Recoinage of 1696 were counterfeit. Counterfeiting was high treason, punishable by the felon being hanged, drawn and quartered. Despite this, convicting even the most flagrant criminals could be extremely difficult, but Newton proved equal to the task.
Disguised as a habitué of bars and taverns, he gathered much of that evidence himself. For all the barriers placed to prosecution, and separating the branches of government, English law still had ancient and formidable customs of authority. Newton had himself made a justice of the peace in all the home counties. A draft letter regarding the matter is included in Newton's personal first edition of Philosophiæ Naturalis Principia Mathematica, which he must have been amending at the time. Then he conducted more than 100 cross-examinations of witnesses, informers, and suspects between June 1698 and Christmas 1699. He successfully prosecuted 28 coiners, including serial counterfeiter William Chaloner, who was hanged.
Beyond prosecuting counterfeiters, he improved minting technology and reduced the standard deviation of the weight of guineas from 1.3 grams to 0.75 grams. Starting in 1707, Newton introduced the practice of testing a small sample of coins, a pound in weight, in the trial of the pyx, which helped to reduce the size of admissible error. He ultimately saved the Treasury £41,510 (roughly £3 million in 2012 terms), with his improvements lasting until the 1770s, thereby increasing the accuracy of British coinage. He greatly increased the productivity of the Mint, as he raised the weekly output of coin from 15,000 pounds to 100,000 pounds. Newton has also been credited with pioneering time and motion studies, although his work was a theoretical calculation of physical capability rather than a standardized industrial productivity model.
Newton's activities at the Mint influenced rising scientific and commercial interests in fields such as numismatics, geology, mining, metallurgy, and metrology in the early 18th century.
Newton held a surprisingly modern view on economics, believing that paper credit, such as government debt, was a practical and wise solution to the limitations of a currency based solely on metal. He argued that increasing the supply of this paper credit could lower interest rates, which would in turn stimulate trade and create employment. Newton also held a radical minority opinion that the value of both metal and paper currency was set by public opinion and trust.
Newton was made president of the Royal Society in 1703 and an associate of the French Académie des Sciences. In his position at the Royal Society, Newton made an enemy of John Flamsteed, the Astronomer Royal, by prematurely publishing Flamsteed's Historia Coelestis Britannica, which Newton had used in his studies.
Knighthood
In April 1705, Queen Anne knighted Newton during a royal visit to Trinity College, Cambridge. The knighthood is likely to have been motivated by political considerations connected with the parliamentary election in May 1705, rather than any recognition of Newton's scientific work or services as Master of the Mint."The Queen's 'great Assistance' to Newton's election was his knighting, an honor bestowed not for his contributions to science, nor for his service at the Mint, but for the greater glory of party politics in the election of 1705." Newton was the second scientist to be knighted, after Francis Bacon.
As a result of a report written by Newton on 21 September 1717 to the Lords Commissioners of His Majesty's Treasury, the bimetallic relationship between gold coins and silver coins was changed by royal proclamation on 22 December 1717, forbidding the exchange of gold guineas for more than 21 silver shillings.On the Value of Gold and Silver in European Currencies and the Consequences on the Worldwide Gold- and Silver-Trade , Sir Isaac Newton, 21 September 1717; "By The King, A Proclamation Declaring the Rates at which Gold shall be current in Payments". Royal Numismatic Society. V. April 1842 – January 1843. This inadvertently resulted in a silver shortage as silver coins were used to pay for imports, while exports were paid for in gold, effectively moving Britain from the silver standard to its first gold standard. It is a matter of debate as to whether he intended to do this or not. It has been argued that Newton viewed his work at the Mint as a continuation of his alchemical work.
Newton was invested in the South Sea Company and lost at least £10,000, and plausibly more than £20,000 (£4.4 million in 2020Eric W. Nye, Pounds Sterling to Dollars: Historical Conversion of Currency. Retrieved: 5 October 2020) when it collapsed around 1720. Since he was already rich before the bubble, he still died rich, with an estate valued at around £30,000.
Toward the end of his life, Newton spent some time at Cranbury Park, near Winchester, the country residence of his niece and her husband, though he primarily lived in London. His half-niece, Catherine Barton, served as his hostess in social affairs at his house on Jermyn Street in London. In a surviving letter written in 1700 while she was recovering from smallpox, Newton closed with the phrase "your very loving uncle", expressing familial concern in a manner typical of seventeenth-century epistolary style. Historian Patricia Fara notes that the letter's tone is warm and paternal, including medical advice and attention to her appearance during convalescence, rather than conveying any romantic implication.
Death
alt=Isaac Newton's death mask|thumb|upright|Death mask of Newton, photographed
Newton died in his sleep in London on 20 March 1727 (NS 31 March 1727). He was given a ceremonial funeral, attended by nobles, scientists, and philosophers, and was buried in Westminster Abbey among kings and queens. He was the first scientist to be buried in the abbey. Voltaire may have been present at his funeral.Dobre and Nyden suggest that there is no clear evidence that Voltaire was present; see p. 89 of A bachelor, he had divested much of his estate to relatives during his last years, and died intestate. His papers went to John Conduitt and Catherine Barton.
Shortly after his death, a plaster death mask was moulded of Newton. It was used by Flemish sculptor John Michael Rysbrack in making a sculpture of Newton. It is now held by the Royal Society.
Newton's hair was posthumously examined and found to contain mercury, probably resulting from his alchemical pursuits. Mercury poisoning could explain Newton's eccentricity in late life.
Personality
Newton has been described as an incredibly driven and disciplined man who dedicated his life to his work. He is known for having a prodigious appetite for work, which he prioritized above his personal health. Newton also maintained strict control over his physical appetites, being sparing with food and drink and becoming a vegetarian later in life. While Newton was a secretive and neurotic individual, he is not considered to have been psychotic or bipolar. He has been described as an "incredible polymath" who was "immensely versatile", with some of his earliest investigations involving a phonetic alphabet and a universal language.
Although it was claimed that he was once engaged, Newton never married. The French writer and philosopher Voltaire, who was in London at the time of Newton's funeral, said that he "was never sensible to any passion, was not subject to the common frailties of mankind, nor had any commerce with women—a circumstance which was assured me by the physician and surgeon who attended him in his last moments."
Newton had a close friendship with the Swiss mathematician Nicolas Fatio de Duillier, whom he met in London around 1689; some of their correspondence has survived. Their relationship came to an abrupt and unexplained end in 1693, and at the same time Newton suffered a nervous breakdown (on the friendship with Fatio; pp. 531–540 on Newton's breakdown), which included sending wild accusatory letters to his friends Samuel Pepys and John Locke. His note to the latter included the charge that Locke had endeavoured to "embroil" him with "woemen & by other means".
Newton appeared to be relatively modest about his achievements, writing in a later memoir, "I do not know what I may appear to the world, but to myself I seem to have been only like a boy playing on the sea-shore, and diverting myself in now and then finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before me."Memoirs of the Life, Writings, and Discoveries of Sir Isaac Newton (1855) by Sir David Brewster (Volume II. Ch. 27) Nonetheless, he could be fiercely competitive and did on occasion hold grudges against his intellectual rivals, not abstaining from personal attacks when it suited him—a common trait found in many of his contemporaries. In a letter to Robert Hooke in February 1675, for instance, he confessed "If I have seen further it is by standing on the shoulders of giants." Some historians have argued that this, written at a time when Newton and Hooke were disputing over optical discoveries, was an oblique attack on Hooke, who was presumably short and hunchbacked, rather than (or in addition to) a statement of modesty. On the other hand, the widely known proverb about standing on the shoulders of giants, found in the 17th-century poet George Herbert's Jacula Prudentum (1651) among others, had as its main point that "a dwarf on a giant's shoulders sees farther of the two", and so in effect placed Newton himself rather than Hooke as the 'dwarf' who saw farther.
Theology
Religious views
Although born into an Anglican family, by his thirties Newton had developed unorthodox beliefs,Richard S. Westfall – Indiana University with historian Stephen Snobelen labelling him a heretic. Despite this, Newton in his time was considered a knowledgeable and insightful theologian who was respected by his contemporaries, with Thomas Tenison, the then Archbishop of Canterbury, telling him "You know more divinity than all of us put together", and philosopher John Locke describing him as "a very valuable man not onely for his wonderful skill in Mathematicks but in divinity too and his great knowledg in the Scriptures where in I know few his equals".
By 1672, he had started to record his theological researches in notebooks which he showed to no one and which have only been available for public examination since 1972. Over half of what Newton wrote concerned theology and alchemy, and most has never been printed. His writings show extensive knowledge of early Church texts and reveal that he sided with Arius, who rejected the conventional view of the Trinity and was the losing party in the conflict with Athanasius over the Creed. Newton "recognized Christ as a divine mediator between God and man, who was subordinate to the Father who created him." He was especially interested in prophecy, but for him, "the great apostasy was trinitarianism."
Newton tried unsuccessfully to obtain one of the two fellowships that exempted the holder from the ordination requirement. At the last moment in 1675, he received a government dispensation that excused him and all future holders of the Lucasian chair.
Worshipping Jesus Christ as God was, in Newton's eyes, idolatry, an act he believed to be the fundamental sin. In 1999, Snobelen wrote that "Isaac Newton was a heretic. But ... he never made a public declaration of his private faith—which the orthodox would have deemed extremely radical. He hid his faith so well that scholars are still unraveling his personal beliefs." Snobelen concludes that Newton was at least a Socinian sympathiser (he owned and had thoroughly read at least eight Socinian books), possibly an Arian and almost certainly an anti-trinitarian.
Although the laws of motion and universal gravitation became Newton's best-known discoveries, he warned against using them to view the Universe as a mere machine, as if akin to a great clock. He said, "So then gravity may put the planets into motion, but without the Divine Power it could never put them into such a circulating motion, as they have about the sun".
Along with his scientific fame, Newton's studies of the Bible and of the early Church Fathers were also noteworthy. Newton wrote works on textual criticism, most notably An Historical Account of Two Notable Corruptions of Scripture and Observations upon the Prophecies of Daniel, and the Apocalypse of St. John.Observations upon the Prophecies of Daniel, and the Apocalypse of St. John 1733 He placed the crucifixion of Jesus Christ at 3 April, AD 33, which agrees with one traditionally accepted date.John P. Meier, A Marginal Jew, I, pp. 382–402, Yale University Press, 1991. After narrowing the years to 30 or 33, provisionally judges 30 most likely.
He believed in a rationally immanent world, but he rejected the hylozoism implicit in Gottfried Wilhelm Leibniz and Baruch Spinoza. The ordered and dynamically informed Universe could be understood, and must be understood, by an active reason. In his correspondence, he claimed that in writing the Principia "I had an eye upon such Principles as might work with considering men for the belief of a Deity".Newton to Richard Bentley 10 December 1692, in Turnbull et al. (1959–77), vol 3, p. 233. He saw evidence of design in the system of the world: "Such a wonderful uniformity in the planetary system must be allowed the effect of choice". But Newton insisted that divine intervention would eventually be required to reform the system, due to the slow growth of instabilities.Opticks, 2nd Ed 1706. Query 31. For this, Leibniz lampooned him: "God Almighty wants to wind up his watch from time to time: otherwise it would cease to move. He had not, it seems, sufficient foresight to make it a perpetual motion."
Newton's position was defended by his follower Samuel Clarke in a famous correspondence. A century later, Pierre-Simon Laplace's work Celestial Mechanics offered a natural explanation for why the planetary orbits do not require periodic divine intervention. The contrast between Laplace's mechanistic worldview and Newton's is most striking in the famous answer which the French scientist gave Napoleon, who had criticised him for the absence of the Creator in the Mécanique céleste: "Sire, j'ai pu me passer de cette hypothèse" ("Sire, I was able to do without that hypothesis").Dijksterhuis, E. J. The Mechanization of the World Picture, IV 329–330, Oxford University Press, 1961. The author's final comment on this episode is: "The mechanization of the world picture led with irresistible coherence to the conception of God as a sort of 'retired engineer', and from here to God's complete elimination it took just one more step".
Scholars long debated whether Newton disputed the doctrine of the Trinity. His first biographer, David Brewster, who compiled his manuscripts, interpreted Newton as questioning the veracity of some passages used to support the Trinity, but never denying the doctrine of the Trinity as such.Brewster states that Newton was never known as an Arian during his lifetime, it was William Whiston, an Arian, who first argued that "Sir Isaac Newton was so hearty for the Baptists, as well as for the Eusebians or Arians, that he sometimes suspected these two were the two witnesses in the Revelations," while others like Hopton Haynes (a Mint employee and Humanitarian), "mentioned to Richard Baron, that Newton held the same doctrine as himself". David Brewster. Memoirs of the Life, Writings, and Discoveries of Sir Isaac Newton. p. 268. In the twentieth century, encrypted manuscripts written by Newton and bought by John Maynard Keynes (among others) were deciphered and it became known that Newton did indeed reject Trinitarianism.
Religious thought
Newton and Robert Boyle's approach to mechanical philosophy was promoted by rationalist pamphleteers as a viable alternative to pantheism and enthusiasm. It was accepted hesitantly by orthodox preachers as well as dissident preachers like the latitudinarians. The clarity and simplicity of science was seen as a way to combat the emotional and metaphysical superlatives of both superstitious enthusiasm and the threat of atheism, and at the same time, the second wave of English deists used Newton's discoveries to demonstrate the possibility of a "Natural Religion".
The attacks made against pre-Enlightenment "magical thinking", and the mystical elements of Christianity, were given their foundation with Boyle's mechanical conception of the universe. Newton gave Boyle's ideas their completion through mathematical proofs and, perhaps more importantly, was very successful in popularising them.
Alchemy
Of an estimated ten million words of writing in Newton's papers, about one million deal with alchemy. Many of Newton's writings on alchemy are copies of other manuscripts, with his own annotations. Alchemical texts mix artisanal knowledge with philosophical speculation, often hidden behind layers of wordplay, allegory, and imagery to protect craft secrets. Some of the content contained in Newton's papers could have been considered heretical by the church.
In 1888, after spending sixteen years cataloguing Newton's papers, Cambridge University kept a small number and returned the rest to the Earl of Portsmouth. In 1936, a descendant offered the papers for sale at Sotheby's. The collection was broken up and sold for a total of about £9,000. John Maynard Keynes was one of about three dozen bidders who obtained part of the collection at auction. Keynes went on to reassemble an estimated half of Newton's collection of papers on alchemy before donating his collection to Cambridge University in 1946.
All of Newton's known writings on alchemy are currently being put online in a project undertaken by Indiana University, "The Chymistry of Isaac Newton", and the material has been summarised in a book.
In June 2020, two unpublished pages of Newton's notes on Jan Baptist van Helmont's book on plague, De Peste, were being auctioned online by Bonhams. Newton's analysis of this book, which he made in Cambridge while protecting himself from London's 1665–1666 infection, is the most substantial written statement he is known to have made about the plague, according to Bonhams. As far as the therapy is concerned, Newton writes that "the best is a toad suspended by the legs in a chimney for three days, which at last vomited up earth with various insects in it, on to a dish of yellow wax, and shortly after died. Combining powdered toad with the excretions and serum made into lozenges and worn about the affected area drove away the contagion and drew out the poison".
Legacy
Recognition
The mathematician and astronomer Joseph-Louis Lagrange frequently asserted that Newton was the greatest genius who ever lived, and once added that Newton was also "the most fortunate, for we cannot find more than once a system of the world to establish."Fred L. Wilson, History of Science: Newton citing: Delambre, M. "Notice sur la vie et les ouvrages de M. le comte J.L. Lagrange", Oeuvres de Lagrange I. Paris, 1867, p. xx. English poet Alexander Pope wrote the famous epitaph: "Nature and Nature's laws lay hid in night: / God said, Let Newton be! and all was light."
But this was not allowed to be inscribed in Newton's monument at Westminster. The epitaph added is as follows:
which can be translated as follows:
Newton has been called "the most influential figure in the history of Western science", and has been regarded as "the central figure in the history of science", who "more than anyone else is the source of our great confidence in the power of science." New Scientist called Newton "the supreme genius and most enigmatic character in the history of science". The philosopher and historian David Hume also declared that Newton was "the greatest and rarest genius that ever arose for the ornament and instruction of the species". In his home of Monticello, Thomas Jefferson, a Founding Father and President of the United States, kept portraits of John Locke, Sir Francis Bacon, and Newton, whom he described as "the three greatest men that have ever lived, without any exception", and who he credited with laying "the foundation of those superstructures which have been raised in the Physical and Moral sciences". The writer and philosopher Voltaire wrote of Newton that "If all the geniuses of the universe were assembled, Newton should lead the band". The neurologist and psychoanalyst Ernest Jones wrote of Newton as "the greatest genius of all times". The mathematician Guillaume de l'Hôpital had a mythical reverence for Newton, which he expressed with a profound question and statement: "Does Mr. Newton eat, or drink, or sleep like other men? I represent him to myself as a celestial genius, entirely disengaged from matter."
Newton has further been called "the towering figure of the Scientific Revolution", and it has been said that "In a period rich with outstanding thinkers, Newton was simply the most outstanding." The polymath Johann Wolfgang von Goethe labelled the year in which Galileo Galilei died and Newton was born, 1642, as the "Christmas of the modern age". In the Italian polymath Vilfredo Pareto's estimation, Newton was the greatest human being who ever lived. On the bicentennial of Newton's death in 1927, astronomer James Jeans stated that he "was certainly the greatest man of science, and perhaps the greatest intellect, the human race has seen". Physicist Peter Rowlands also notes that Newton was "possibly possessed of the most powerful intellect in the whole of human history". Newton ultimately conceived four revolutions—in optics, mathematics, mechanics, and gravity—but also foresaw a fifth in electricity, though he lacked the time and energy in old age to fully accomplish it. Newton's work is considered the most influential in bringing forth modern science.
The physicist Ludwig Boltzmann called Newton's Principia "the first and greatest work ever written about theoretical physics". Physicist Stephen Hawking similarly called Principia "probably the most important single work ever published in the physical sciences". Lagrange called Principia "the greatest production of the human mind", and noted that "he felt dazed at such an illustration of what man's intellect might be capable".
Physicist Edward Andrade stated that Newton "was capable of greater sustained mental effort than any man, before or since", and he also noted Newton's place in history. The French physicist and mathematician Jean-Baptiste Biot likewise praised Newton's genius.
Despite his rivalry with Gottfried Wilhelm Leibniz, Leibniz still praised the work of Newton; responding at a dinner in 1701 to a question from Sophia Charlotte, the Queen of Prussia, about his view of Newton, he replied that, taking mathematics from the beginning of the world to the time of Newton, what Newton had done was much the better half.
Mathematician E.T. Bell ranked Newton alongside Carl Friedrich Gauss and Archimedes as the three greatest mathematicians of all time, with the mathematician Donald M. Davis also noting that Newton is generally ranked with the other two as the greatest mathematicians ever. In his 1962 paper from the journal The Mathematics Teacher, the mathematician Walter Crosby Eells sought to objectively create a list that classified the most eminent mathematicians of all time; Newton was ranked first out of a list of the top 100, a position that was statistically confirmed even after taking probable error into account in the study. In his book Wonders of Numbers in 2001, the science editor and author Clifford A. Pickover ranked his top ten most influential mathematicians that ever lived, placing Newton first in the list. In The Cambridge Companion to Isaac Newton (2016), he is described as being "from a very young age, an extraordinary problem-solver, as good, it would appear, as humanity has ever produced". He is ultimately ranked among the top two or three greatest theoretical scientists ever, alongside James Clerk Maxwell and Albert Einstein, the greatest mathematician ever alongside Carl F. Gauss, and in the first rank of experimentalists, thereby putting "Newton in a class by himself among empirical scientists, for one has trouble in thinking of any other candidate who was in the first rank of even two of these categories." Also noted is "At least in comparison to subsequent scientists, Newton was also exceptional in his ability to put his scientific effort in much wider perspective". Gauss himself had Archimedes and Newton as his heroes, and used terms such as clarissimus or magnus to describe other intellectuals such as great mathematicians and philosophers, but reserved summus for Newton only, and once realizing the immense influence of Newton's work on scientists such as Lagrange and Pierre-Simon Laplace, Gauss then exclaimed that "Newton remains forever the master of all masters!"
In his book Great Physicists, chemist William H. Cropper highlighted the unparalleled genius of Newton, stating:
Albert Einstein kept a picture of Newton on his study wall alongside ones of Michael Faraday and of James Clerk Maxwell. Einstein stated that Newton's creation of calculus in relation to his laws of motion was "perhaps the greatest advance in thought that a single individual was ever privileged to make", and he also noted Newton's broader influence.

In 1999, an opinion poll of 100 of the day's leading physicists voted Einstein the "greatest physicist ever," with Newton the runner-up, while a parallel survey of rank-and-file physicists ranked Newton as the greatest. In 2005, a dual survey of the public and members of Britain's Royal Society asked two questions: who made the bigger overall contributions to science and who made the bigger positive contributions to humankind, with the candidates being Newton or Einstein. In both groups, and for both questions, the consensus was that Newton had made the greater overall contributions.
In 1999, Time named Newton the Person of the Century for the 17th century. Newton placed sixth in the 100 Greatest Britons poll conducted by BBC in 2002. However, in 2003, he was voted as the greatest Briton in a poll conducted by BBC World, with Winston Churchill second. He was voted as the greatest Cantabrigian by University of Cambridge students in 2009.
Physicist Lev Landau ranked physicists on a logarithmic scale of productivity and genius ranging from 0 to 5. The highest ranking, 0, was assigned to Newton. Einstein was ranked 0.5. A rank of 1 was awarded to the fathers of quantum mechanics, such as Werner Heisenberg and Paul Dirac. Landau, a Nobel prize winner and the discoverer of superfluidity, ranked himself as 2.
The SI derived unit of force is named the newton in his honour.
Most of Newton's surviving scientific and technical papers are kept at Cambridge University. Cambridge University Library has the largest collection and there are also papers in King's College, Trinity College, and the Fitzwilliam Museum. There is an archive of theological and alchemical papers in the National Library of Israel, and smaller collections at the Smithsonian Institution, Stanford University Library, and the Huntington Library. The Royal Society in London also has some manuscripts. The Israel collection was inscribed by UNESCO on its Memory of the World International Register in 2015, recognising the global significance of the documents. The Cambridge and Royal Society collections were added to this inscription in 2017.
Apple incident
Newton often told the story that he was inspired to formulate his theory of gravitation by watching the fall of an apple from a tree. The story is believed to have passed into popular knowledge after being related by Catherine Barton, Newton's niece, to Voltaire. Voltaire then wrote in his Essay on Epic Poetry (1727), "Sir Isaac Newton walking in his gardens, had the first thought of his system of gravitation, upon seeing an apple falling from a tree." From p. 104: 'In the like Manner Pythagoras ow'd the Invention of Musik to the noise of the Hammer of a Blacksmith. And thus in our Days Sir Isaak Newton walking in his Garden had the first Thought of his System of Gravitation, upon seeing an apple falling from a Tree.'Voltaire (1786) heard the story of Newton and the apple tree from Newton's niece, Catherine Conduit (née Barton) (1679–1740): From p. 175: "Un jour en l'année 1666, Newton retiré à la campagne, et voyant tomber des fruits d'un arbre, à ce que m'a conté sa nièce, (Mme Conduit) se laissa aller à une méditation profonde sur la cause qui entraine ainsi tous les corps dans une ligne, qui, si elle était prolongée, passerait à peu près par le centre de la terre." (One day in the year 1666 Newton withdrew to the country, and seeing the fruits of a tree fall, according to what his niece (Madame Conduit) told me, he entered into a deep meditation on the cause that draws all bodies in a [straight] line, which, if it were extended, would pass very near to the centre of the Earth.)
Although some question the veracity of the apple story, acquaintances of Newton attribute the story to Newton himself, though not the apocryphal version that the apple actually hit Newton's head. William Stukeley, whose manuscript account of 1752 has been made available by the Royal Society, recorded a conversation with Newton in Kensington on 15 April 1726:
John Conduitt, Newton's assistant at the Royal Mint and husband of Newton's niece, also described the event when he wrote about Newton's life:
It is known from his notebooks that Newton was grappling in the late 1660s with the idea that terrestrial gravity extends, in an inverse-square proportion, to the Moon,I. Bernard Cohen and George E. Smith, eds. The Cambridge Companion to Newton (2002) p. 6 as other scientists had already conjectured. Around 1665, Newton made a quantitative analysis, considering the period and distance of the Moon's orbit and the timing of objects falling on Earth. Newton did not publish these results at the time because he could not prove that the Earth's gravity acts as if all its mass were concentrated at its center. That proof took him twenty years.
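A modern reconstruction of this "Moon test", using present-day values rather than the figures available to Newton, illustrates the comparison: with the Moon about 60 Earth radii away, inverse-square gravity predicts an acceleration at the Moon of

$\frac{g}{60^2} \approx \frac{9.8}{3600} \approx 2.7 \times 10^{-3}\ \mathrm{m\,s^{-2}},$

which agrees with the Moon's observed centripetal acceleration

$a = \frac{4\pi^2 r}{T^2} \approx \frac{4\pi^2 \times 3.84 \times 10^{8}\ \mathrm{m}}{(2.36 \times 10^{6}\ \mathrm{s})^2} \approx 2.7 \times 10^{-3}\ \mathrm{m\,s^{-2}}.$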
Detailed analysis of historical accounts, backed up by dendrochronology and DNA analysis, indicates that the sole apple tree in a garden at Woolsthorpe Manor was the tree Newton described. The tree blew over in a storm sometime around 1816, regrew from its roots, and continues as a tourist attraction under the care of the National Trust.
A descendant of the original tree can be seen growing outside the main gate of Trinity College, Cambridge, below the room Newton lived in when he studied there. The National Fruit Collection at Brogdale in Kent can supply grafts from their tree, which appears identical to Flower of Kent, a coarse-fleshed cooking variety.
Commemorations
Newton's monument (1731) can be seen in Westminster Abbey, at the north of the entrance to the choir against the choir screen, near his tomb. It was executed by the sculptor Michael Rysbrack (1694–1770) in white and grey marble with design by the architect William Kent.'The Abbey Scientists' Hall, A.R. p13: London; Roger & Robert Nicholson; 1966 The monument features a figure of Newton reclining on top of a sarcophagus, his right elbow resting on several of his great books and his left hand pointing to a scroll with a mathematical design. Above him is a pyramid and a celestial globe showing the signs of the Zodiac and the path of the comet of 1680. A relief panel depicts putti using instruments such as a telescope and prism.
From 1978 until 1988, an image of Newton designed by Harry Ecclestone appeared on Series D £1 banknotes issued by the Bank of England (the last £1 notes to be issued by the Bank of England). Newton was shown on the reverse of the notes holding a book and accompanied by a telescope, a prism and a map of the Solar System.
A statue of Isaac Newton, looking at an apple at his feet, can be seen at the Oxford University Museum of Natural History. A large bronze statue, Newton, after William Blake, by Eduardo Paolozzi, dated 1995 and inspired by Blake's etching, dominates the piazza of the British Library in London. A bronze statue of Newton was erected in 1858 in the centre of Grantham where he went to school, prominently standing in front of Grantham Guildhall.
The still-surviving farmhouse at Woolsthorpe-by-Colsterworth has been designated a Grade I listed building by Historic England as his birthplace and the place "where he discovered gravity and developed his theories regarding the refraction of light".
The Institute of Physics, or IOP, has its highest and most prestigious award, the Isaac Newton Medal, named after Newton, which is given for world-leading contributions to physics. It was first awarded in 2008.
The Enlightenment
It is held by European philosophers of the Enlightenment and by historians of the Enlightenment that Newton's publication of the Principia was a turning point in the Scientific Revolution and started the Enlightenment. It was Newton's conception of the universe based upon natural and rationally understandable laws that became one of the seeds for Enlightenment ideology. John Locke and Voltaire applied concepts of natural law to political systems advocating intrinsic rights; the physiocrats and Adam Smith applied natural conceptions of psychology and self-interest to economic systems; and sociologists criticised the current social order for trying to fit history into natural models of progress. James Burnett, Lord Monboddo and Samuel Clarke resisted elements of Newton's work, but eventually rationalised it to conform with their strong religious views of nature.
Works
Published in his lifetime
De analysi per aequationes numero terminorum infinitas (1669, published 1711)Anders Hald 2003 – A history of probability and statistics and their applications before 1750 – 586 pages Volume 501 of Wiley series in probability and statistics Wiley-IEEE, 2003 Retrieved 27 January 2012
Of Natures Obvious Laws & Processes in Vegetation (unpublished, –75) Transcribed and online at Indiana University.
De motu corporum in gyrum (1684)Whiteside, D.T., ed. (1974). Mathematical Papers of Isaac Newton, 1684–1691. 6. Cambridge University Press. pp. 30–91.
Philosophiæ Naturalis Principia Mathematica (1687)
Scala graduum Caloris. Calorum Descriptiones & signa (1701)Published anonymously as "Scala graduum Caloris. Calorum Descriptiones & signa." in Philosophical Transactions, 1701, 824 –829;
ed. Joannes Nichols, Isaaci Newtoni Opera quae exstant omnia, vol. 4 (1782), 403 –407.
Mark P. Silverman, A Universe of Atoms, An Atom in the Universe, Springer, 2002, p. 49.
Opticks (1704)
Reports as Master of the Mint (1701–1725)
Arithmetica Universalis (1707)
Published posthumously
De mundi systemate (The System of the World) (1728)
Optical Lectures (1728)
The Chronology of Ancient Kingdoms Amended (1728)
Observations on Daniel and The Apocalypse of St. John (1733)
Method of Fluxions (1671, published 1736)
An Historical Account of Two Notable Corruptions of Scripture (1754)
See also
Elements of the Philosophy of Newton, a book by Voltaire
List of multiple discoveries: seventeenth century
List of presidents of the Royal Society
List of things named after Isaac Newton
References
Notes
Citations
Bibliography
Reprinted, Dover Publications, 1960, , and Project Gutenberg, 2010.
Further reading
Primary
Newton, Isaac. The Principia: Mathematical Principles of Natural Philosophy. University of California Press, (1999)
Brackenridge, J. Bruce. The Key to Newton's Dynamics: The Kepler Problem and the Principia: Containing an English Translation of Sections 1, 2, and 3 of Book One from the First (1687) Edition of Newton's Mathematical Principles of Natural Philosophy, University of California Press (1996)
Newton, Isaac. The Optical Papers of Isaac Newton. Vol. 1: The Optical Lectures, 1670–1672, Cambridge University Press (1984)
Newton, Isaac. Opticks (4th ed. 1730) online edition
Newton, I. (1952). Opticks, or A Treatise of the Reflections, Refractions, Inflections & Colours of Light. New York: Dover Publications.
Newton, I. Sir Isaac Newton's Mathematical Principles of Natural Philosophy and His System of the World, tr. A. Motte, rev. Florian Cajori. Berkeley: University of California Press (1934)
– 8 volumes.
Newton, Isaac. The correspondence of Isaac Newton, ed. H.W. Turnbull and others, 7 vols (1959–77)
Newton's Philosophy of Nature: Selections from His Writings edited by H.S. Thayer (1953; online edition)
Isaac Newton, Sir; J Edleston; Roger Cotes, Correspondence of Sir Isaac Newton and Professor Cotes, including letters of other eminent men, London, John W. Parker, West Strand; Cambridge, John Deighton (1850, Google Books)
Maclaurin, C. (1748). An Account of Sir Isaac Newton's Philosophical Discoveries, in Four Books. London: A. Millar and J. Nourse
Newton, I. (1958). Isaac Newton's Papers and Letters on Natural Philosophy and Related Documents, eds. I.B. Cohen and R.E. Schofield. Cambridge: Harvard University Press
Newton, I. (1962). The Unpublished Scientific Papers of Isaac Newton: A Selection from the Portsmouth Collection in the University Library, Cambridge, ed. A.R. Hall and M.B. Hall. Cambridge: Cambridge University Press
Newton, I. (1975). Isaac Newton's 'Theory of the Moon's Motion''' (1702). London: Dawson
Alchemy further reading
Keynes took a close interest in Newton and owned many of Newton's private papers.
Religion
Dobbs, Betty Jo Tetter. The Janus Faces of Genius: The Role of Alchemy in Newton's Thought. (1991), links the alchemy to Arianism
Force, James E., and Richard H. Popkin, eds. Newton and Religion: Context, Nature, and Influence. (1999), pp. xvii, 325.; 13 papers by scholars using newly opened manuscripts
Science
Berlinski, David. Newton's Gift: How Sir Isaac Newton Unlocked the System of the World. (2000);
Cohen, I. Bernard and Smith, George E., ed. The Cambridge Companion to Newton. (2002). Focuses on philosophical issues only; excerpt and text search; complete edition online
This well-documented work provides, in particular, valuable information regarding Newton's knowledge of Patristics.
Hawking, Stephen, ed. On the Shoulders of Giants. Places selections from Newton's Principia in the context of selected writings by Copernicus, Kepler, Galileo and Einstein
Newton, Isaac. Papers and Letters in Natural Philosophy, edited by I. Bernard Cohen. Harvard University Press, 1958, 1978; .
Reprinted, Dover Publications, 1987, .
External links
Digital archives
The Newton Project from University of Oxford
Newton's papers in the Royal Society archives
The Newton Manuscripts at the National Library of Israel
Newton Papers (currently offline) from Cambridge Digital Library
Bernhardus Varenius, Geographia Generalis, ed. Isaac Newton, 2nd ed. (Cambridge: Joann. Hayes, 1681) from the Internet Archive
Category:1642 births
Category:1727 deaths
Category:17th-century alchemists
Category:17th-century apocalypticists
Category:17th-century English astronomers
Category:17th-century English mathematicians
Category:17th-century English male writers
Category:17th-century English writers
Category:17th-century writers in Latin
Category:18th-century alchemists
Category:18th-century apocalypticists
Category:18th-century English astronomers
Category:18th-century British scientists
Category:18th-century English mathematicians
Category:18th-century English male writers
Category:18th-century English writers
Category:18th-century writers in Latin
Category:Alumni of Trinity College, Cambridge
Category:Antitrinitarians
Category:Ballistics experts
Category:British critics of atheism
Category:British experimental physicists
Category:British geometers
Category:British optical physicists
Category:British science communicators
Category:Latin-language British writers
Category:Burials at Westminster Abbey
Category:Color scientists
Category:Copernican Revolution
Category:Creators of temperature scales
Category:English alchemists
Category:English Anglicans
Category:English Christians
Category:English inventors
Category:English justices of the peace
Category:17th-century English knights
Category:English MPs 1689–1690
Category:English MPs 1701–1702
Category:English philosophers of science
Category:English physicists
Category:English scientific instrument makers
Category:Enlightenment scientists
Category:Fellows of the Royal Society
Category:Fellows of Trinity College, Cambridge
Category:Fluid dynamicists
Category:Linear algebraists
Category:Hermeticists
Category:History of calculus
Category:Knights Bachelor
Category:Lucasian Professors of Mathematics
Category:Masters of the Mint
Category:Members of the pre-1707 Parliament of England for the University of Cambridge
Category:Natural philosophers
Category:Nontrinitarian Christians
Category:People educated at The King's School, Grantham
Category:People from South Kesteven District
Category:Post-Reformation Arian Christians
Category:Presidents of the Royal Society
Category:Theoretical physicists
Category:Writers about religion and science
Category:Independent scientists
Category:English theologians
Category:17th-century theologians
Category:18th-century theologians
Category:17th-century inventors
Category:18th-century inventors
|
biographies
| 13,610
|
14838
|
Inertial frame of reference
|
https://en.wikipedia.org/wiki/Inertial_frame_of_reference
|
In classical physics and special relativity, an inertial frame of reference (also called an inertial space or a Galilean reference frame) is a frame of reference in which objects exhibit inertia: they remain at rest or in uniform motion relative to the frame until acted upon by external forces. In such a frame, the laws of nature can be observed without the need to correct for acceleration.
All frames of reference with zero acceleration are in a state of constant rectilinear motion (straight-line motion) with respect to one another. In such a frame, an object with zero net force acting on it is perceived to move with a constant velocity, or, equivalently, Newton's first law of motion holds. Such frames are known as inertial. Some physicists, like Isaac Newton, originally thought that one of these frames was absolute: the one approximated by the fixed stars. However, this is not required for the definition, and it is now known that those stars are in fact moving, relative to one another.
According to the principle of special relativity, all physical laws look the same in all inertial reference frames, and no inertial frame is privileged over another. Measurements of objects in one inertial frame can be converted to measurements in another by a simple transformation — the Galilean transformation in Newtonian physics or the Lorentz transformation (combined with a translation) in special relativity; these approximately match when the relative speed of the frames is low, but differ as it approaches the speed of light.
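As a concrete illustration of the preceding paragraph, here is a minimal sketch (Python; the event coordinates and speeds are invented for illustration) comparing the Galilean and Lorentz coordinate transformations: they nearly coincide at everyday speeds and diverge as the relative speed approaches the speed of light.

import math

C = 299_792_458.0  # speed of light in m/s

def galilean(x, t, v):
    # x' = x - v t, t' = t (Newtonian kinematics)
    return x - v * t, t

def lorentz(x, t, v):
    # x' = gamma (x - v t), t' = gamma (t - v x / c^2)
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * (x - v * t), gamma * (t - v * x / C**2)

x, t = 1_000.0, 1.0e-3            # an event: 1 km away, 1 ms after the origin event
for v in (30.0, 3.0e7, 0.9 * C):  # a car, 10% of c, 90% of c
    print(v, galilean(x, t, v), lorentz(x, t, v))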
By contrast, a non-inertial reference frame is accelerating. In such a frame, the interactions between physical objects vary depending on the acceleration of that frame with respect to an inertial frame. Viewed from the perspective of classical mechanics and special relativity, the usual physical forces caused by the interaction of objects have to be supplemented by fictitious forces caused by inertia.
Viewed from the perspective of general relativity theory, the fictitious (i.e. inertial) forces are attributed to geodesic motion in spacetime.
Due to Earth's rotation, its surface is not an inertial frame of reference. The Coriolis effect can deflect certain forms of motion as seen from Earth, and the centrifugal force will reduce the effective gravity at the equator. Nevertheless, for many applications the Earth is an adequate approximation of an inertial reference frame.
Introduction
The motion of a body can only be described relative to something else—other bodies, observers, or a set of spacetime coordinates. These are called frames of reference. According to the first postulate of special relativity, all physical laws take their simplest form in an inertial frame, and there exist multiple inertial frames interrelated by uniform translation:
This simplicity manifests itself in that inertial frames have self-contained physics without the need for external causes, while physics in non-inertial frames has external causes. The principle of simplicity can be used within Newtonian physics as well as in special relativity:
However, this definition of inertial frames is understood to apply in the Newtonian realm and ignores relativistic effects.
In practical terms, the equivalence of inertial reference frames means that scientists within a box moving with a constant absolute velocity cannot determine this velocity by any experiment. Otherwise, the differences would set up an absolute standard reference frame. According to this definition, supplemented with the constancy of the speed of light, inertial frames of reference transform among themselves according to the Poincaré group of symmetry transformations, of which the Lorentz transformations are a subgroup. In Newtonian mechanics, inertial frames of reference are related by the Galilean group of symmetries.
Newton's inertial frame of reference
Absolute space
Newton posited an absolute space considered well-approximated by a frame of reference stationary relative to the fixed stars. An inertial frame was then one in uniform translation relative to absolute space. However, some "relativists", even at the time of Newton, felt that absolute space was a defect of the formulation, and should be replaced.
The expression inertial frame of reference was coined by Ludwig Lange in 1885, to replace Newton's definitions of "absolute space and time" with a more operational definition:
The inadequacy of the notion of "absolute space" in Newtonian mechanics is spelled out by Blagojevich:
The utility of operational definitions was carried much further in the special theory of relativity. Some historical background including Lange's definition is provided by DiSalle, who says in summary:
Newtonian mechanics
Classical theories that use the Galilean transformation postulate the equivalence of all inertial reference frames. The Galilean transformation transforms coordinates from one inertial reference frame, S, to another, S′, by simple addition or subtraction of coordinates:
r′ = r − r0 − v t,  t′ = t − t0,
where r0 and t0 represent shifts in the origin of space and time, and v is the relative velocity of the two inertial reference frames. Under Galilean transformations, the time t2 − t1 between two events is the same for all reference frames and the distance between two simultaneous events (or, equivalently, the length of any object, |r2 − r1|) is also the same.
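A minimal sketch (Python with NumPy; the events and the frame velocity are arbitrary examples, not taken from the text) checking the two invariances just stated: the time between two events and the distance between two simultaneous events are unchanged by a Galilean transformation.

import numpy as np

def galilean(r, t, v, r0=np.zeros(3), t0=0.0):
    # r' = r - r0 - v t, t' = t - t0
    return r - r0 - v * t, t - t0

v = np.array([5.0, 0.0, 0.0])           # relative frame velocity (arbitrary)
e1 = (np.array([1.0, 2.0, 0.0]), 3.0)   # event 1: position, time
e2 = (np.array([4.0, 6.0, 0.0]), 3.0)   # event 2, simultaneous with event 1

(r1p, t1p), (r2p, t2p) = galilean(*e1, v), galilean(*e2, v)
print(t2p - t1p)                                  # 0.0: simultaneity preserved
print(np.linalg.norm(e2[0] - e1[0]),              # 5.0 in both frames
      np.linalg.norm(r2p - r1p))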
Figure 1: Two frames of reference moving with relative velocity v. Frame S′ has an arbitrary but fixed rotation with respect to frame S. They are both inertial frames provided a body not subject to forces appears to move in a straight line. If that motion is seen in one frame, it will also appear that way in the other.
Within the realm of Newtonian mechanics, an inertial frame of reference, or inertial reference frame, is one in which Newton's first law of motion is valid. However, the principle of special relativity generalizes the notion of an inertial frame to include all physical laws, not simply Newton's first law.
Newton viewed the first law as valid in any reference frame that is in uniform motion (neither rotating nor accelerating) relative to absolute space; as a practical matter, "absolute space" was considered to be the fixed stars. In the theory of relativity the notion of absolute space or a privileged frame is abandoned, and an inertial frame in the field of classical mechanics is defined as:
Hence, with respect to an inertial frame, an object or body accelerates only when a physical force is applied, and (following Newton's first law of motion), in the absence of a net force, a body at rest will remain at rest and a body in motion will continue to move uniformly—that is, in a straight line and at constant speed. Newtonian inertial frames transform among each other according to the Galilean group of symmetries.
If this rule is interpreted as saying that straight-line motion is an indication of zero net force, the rule does not identify inertial reference frames because straight-line motion can be observed in a variety of frames. If the rule is interpreted as defining an inertial frame, then being able to determine when zero net force is applied is crucial. The problem was summarized by Einstein:
There are several approaches to this issue. One approach is to argue that all real forces drop off with distance from their sources in a known manner, so it is only needed that a body is far enough away from all sources to ensure that no force is present. A possible issue with this approach is the historically long-lived view that the distant universe might affect matters (Mach's principle). Another approach is to identify all real sources for real forces and account for them. A possible issue with this approach is the possibility of missing something, or accounting inappropriately for their influence, perhaps, again, due to Mach's principle and an incomplete understanding of the universe. A third approach is to look at the way the forces transform when shifting reference frames. Fictitious forces, those that arise due to the acceleration of a frame, disappear in inertial frames and have complicated rules of transformation in general cases. Based on the universality of physical law and the request for frames where the laws are most simply expressed, inertial frames are distinguished by the absence of such fictitious forces.
Newton enunciated a principle of relativity himself in one of his corollaries to the laws of motion (see the Principia online in the Andrew Motte translation):
This principle differs from the special principle in two ways: first, it is restricted to mechanics, and second, it makes no mention of simplicity. It shares with the special principle the invariance of the form of the description among mutually translating reference frames. However, in the Newtonian system the Galilean transformation connects these frames, and in the special theory of relativity the Lorentz transformation connects them. The two transformations agree for speeds of translation much less than the speed of light. The role of fictitious forces in classifying reference frames is pursued further below.
Special relativity
Einstein's theory of special relativity, like Newtonian mechanics, postulates the equivalence of all inertial reference frames. However, because special relativity postulates that the speed of light in free space is invariant, the transformation between inertial frames is the Lorentz transformation, not the Galilean transformation which is used in Newtonian mechanics.
The invariance of the speed of light leads to counter-intuitive phenomena, such as time dilation, length contraction, and the relativity of simultaneity. The predictions of special relativity have been extensively verified experimentally. The Lorentz transformation reduces to the Galilean transformation as the speed of light approaches infinity or as the relative velocity between frames approaches zero.
Examples
Simple example
Consider a situation common in everyday life. Two cars travel along a road, both moving at constant velocities. See Figure 1. At some particular moment, they are separated by 200 meters. The car in front is traveling at 22 meters per second and the car behind is traveling at 30 meters per second. If we want to find out how long it will take the second car to catch up with the first, there are three obvious "frames of reference" that we could choose.
First, we could observe the two cars from the side of the road. We define our "frame of reference" S as follows. We stand on the side of the road and start a stop-clock at the exact moment that the second car passes us, which happens to be when they are a distance of 200 m apart. Since neither of the cars is accelerating, we can determine their positions by the following formulas, where x1(t) is the position in meters of car one after time t in seconds and x2(t) is the position of car two after time t:
x1(t) = 200 + 22 t,  x2(t) = 30 t.
Notice that these formulas predict at t = 0 s the first car is 200 m down the road and the second car is right beside us, as expected. We want to find the time at which x1(t) = x2(t). Therefore, we set x1(t) = x2(t) and solve for t, that is:
200 + 22 t = 30 t,  so 8 t = 200 and t = 25 s.
Alternatively, we could choose a frame of reference S′ situated in the first car. In this case, the first car is stationary and the second car is approaching from behind at a speed of 30 − 22 = 8 m/s. To catch up to the first car, it will take a time of (200 m)/(8 m/s), that is, 25 seconds, as before. Note how much easier the problem becomes by choosing a suitable frame of reference. The third possible frame of reference would be attached to the second car. That example resembles the case just discussed, except the second car is stationary and the first car moves backward towards it at 8 m/s.
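The arithmetic of this example can be checked directly; the short sketch below (Python, using the numbers given above) computes the catch-up time in the roadside frame S and in the frame S′ of the first car.

# Roadside frame S: x1(t) = 200 + 22 t, x2(t) = 30 t (metres, seconds)
d0, v1, v2 = 200.0, 22.0, 30.0
t_catch_S = d0 / (v2 - v1)        # 200 + 22 t = 30 t  ->  t = 200 / 8
print(t_catch_S)                  # 25.0 s

# Frame S' of the first car: it is at rest, the second car closes at v2 - v1
t_catch_Sprime = d0 / (v2 - v1)
print(t_catch_Sprime)             # 25.0 s, the same answer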
It would have been possible to choose a rotating, accelerating frame of reference, moving in a complicated manner, but this would have served to complicate the problem unnecessarily. One can convert measurements made in one coordinate system to another. For example, suppose that your watch is running five minutes fast compared to the local standard time. If you know that this is the case, when somebody asks you what time it is, you can deduct five minutes from the time displayed on your watch to obtain the correct time. The measurements that an observer makes about a system depend therefore on the observer's frame of reference (you might say that the bus arrived at 5 past three, when in fact it arrived at three).
Additional example
For a simple example involving only the orientation of two observers, consider two people standing, facing each other on either side of a north-south street. See Figure 2. A car drives past them heading south. For the person facing east, the car was moving to the right. However, for the person facing west, the car was moving to the left. This discrepancy is because the two people used two different frames of reference from which to investigate this system.
For a more complex example involving observers in relative motion, consider Alfred, who is standing on the side of a road watching a car drive past him from left to right. In his frame of reference, Alfred defines the spot where he is standing as the origin, the road as the x-axis, and the direction in front of him as the positive y-axis. To him, the car moves along the x-axis with some velocity v in the positive x-direction. Alfred's frame of reference is considered an inertial frame because he is not accelerating, ignoring effects such as Earth's rotation and gravity.
Now consider Betsy, the person driving the car. Betsy, in choosing her frame of reference, defines her location as the origin, the direction to her right as the positive x-axis, and the direction in front of her as the positive y-axis. In this frame of reference, it is Betsy who is stationary and the world around her that is moving – for instance, as she drives past Alfred, she observes him moving with velocity v in the negative y-direction. If she is driving north, then north is the positive y-direction; if she turns east, east becomes the positive y-direction.
Finally, as an example of non-inertial observers, assume Candace is accelerating her car. As she passes by him, Alfred measures her acceleration and finds it to be a in the negative x-direction. Assuming Candace's acceleration is constant, what acceleration does Betsy measure? If Betsy's velocity is constant, she is in an inertial frame of reference, and she will find the acceleration to be the same as Alfred in her frame of reference, a in the negative y-direction. However, if she is accelerating at rate A in the negative y-direction (in other words, slowing down), she will find Candace's acceleration to be a − A in the negative y-direction, a smaller value than Alfred has measured. Similarly, if she is accelerating at rate A in the positive y-direction (speeding up), she will observe Candace's acceleration as a + A in the negative y-direction, a larger value than Alfred's measurement.
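A short sketch of the bookkeeping described above (Python; the numerical values of a and A are placeholders chosen here): an observer who is herself accelerating along the line of motion at rate A measures Candace's acceleration shifted by A.

def measured_acceleration(a, A):
    # a: Candace's acceleration in an inertial frame (negative = slowing down)
    # A: the observer's own acceleration along the same axis
    return a - A  # acceleration of Candace as seen by the accelerating observer

a = -3.0   # m/s^2, Candace decelerating (the negative direction of travel)
print(measured_acceleration(a, 0.0))   # -3.0: inertial Betsy agrees with Alfred
print(measured_acceleration(a, -1.0))  # -2.0: Betsy braking -> smaller magnitude
print(measured_acceleration(a, +1.0))  # -4.0: Betsy speeding up -> larger magnitude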
Non-inertial frames
Here the relation between inertial and non-inertial observational frames of reference is considered. The basic difference between these frames is the need in non-inertial frames for fictitious forces, as described below.
General relativity
General relativity is based upon the principle of equivalence:
This idea was introduced in Einstein's 1907 article "Principle of Relativity and Gravitation" and later developed in 1911 (A. Einstein, "On the influence of gravitation on the propagation of light", Annalen der Physik, vol. 35 (1911): 898–908). Support for this principle is found in the Eötvös experiment, which determines whether the ratio of inertial to gravitational mass is the same for all bodies, regardless of size or composition. To date no difference has been found to a few parts in 10¹¹. For some discussion of the subtleties of the Eötvös experiment, such as the local mass distribution around the experimental site (including a quip about the mass of Eötvös himself), see Franklin.
Einstein's general theory modifies the distinction between nominally "inertial" and "non-inertial" effects by replacing special relativity's "flat" Minkowski Space with a metric that produces non-zero curvature. In general relativity, the principle of inertia is replaced with the principle of geodesic motion, whereby objects move in a way dictated by the curvature of spacetime. As a consequence of this curvature, it is not a given in general relativity that inertial objects moving at a particular rate with respect to each other will continue to do so. This phenomenon of geodesic deviation means that inertial frames of reference do not exist globally as they do in Newtonian mechanics and special relativity.
However, the general theory reduces to the special theory over sufficiently small regions of spacetime, where curvature effects become less important and the earlier inertial frame arguments can come back into play. Consequently, modern special relativity is now sometimes described as only a "local theory". "Local" can encompass, for example, the entire Milky Way galaxy: the astronomer Karl Schwarzschild observed the motion of pairs of stars orbiting each other. He found that the two orbits of the stars of such a system lie in a plane, and the perihelion of the orbits of the two stars remains pointing in the same direction with respect to the Solar System. Schwarzschild pointed out that this was invariably seen: the direction of the angular momentum of all observed double star systems remains fixed with respect to the direction of the angular momentum of the Solar System. These observations allowed him to conclude that inertial frames inside the galaxy do not rotate with respect to one another, and that the space of the Milky Way is approximately Galilean or Minkowskian (In the Shadow of the Relativity Revolution, Section 3: The Work of Karl Schwarzschild).
Inertial frames and rotation
In an inertial frame, Newton's first law, the law of inertia, is satisfied: any free motion has a constant magnitude and direction. Newton's second law for a particle takes the form:
F = m a,
with F the net force (a vector), m the mass of a particle and a the acceleration of the particle (also a vector) which would be measured by an observer at rest in the frame. The force F is the vector sum of all "real" forces on the particle, such as contact forces, electromagnetic, gravitational, and nuclear forces.
In contrast, Newton's second law in a rotating frame of reference (a non-inertial frame of reference), rotating at angular rate Ω about an axis, takes the form:
F′ = m aB,
which looks the same as in an inertial frame, but now the force F′ is the resultant of not only F, but also additional terms (the paragraph following this equation presents the main points without detailed mathematics):
F′ = F − 2m Ω × vB − m Ω × (Ω × xB) − m (dΩ/dt) × xB,
where the angular rotation of the frame is expressed by the vector Ω pointing in the direction of the axis of rotation, and with magnitude equal to the angular rate of rotation Ω, symbol × denotes the vector cross product, vector xB locates the body and vector vB is the velocity of the body according to a rotating observer (different from the velocity seen by the inertial observer).
The extra terms in the force F′ are the "fictitious" forces for this frame, whose causes are external to the system in the frame. The first extra term is the Coriolis force, the second the centrifugal force, and the third the Euler force. These terms all have these properties: they vanish when Ω = 0; that is, they are zero for an inertial frame (which, of course, does not rotate); they take on a different magnitude and direction in every rotating frame, depending upon its particular value of Ω; they are ubiquitous in the rotating frame (affect every particle, regardless of circumstance); and they have no apparent source in identifiable physical sources, in particular, matter. Also, fictitious forces do not drop off with distance (unlike, for example, nuclear forces or electrical forces). For example, the centrifugal force that appears to emanate from the axis of rotation in a rotating frame increases with distance from the axis.
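The three extra terms can be evaluated explicitly; the sketch below (Python with NumPy; the rotation rate, position and velocity are invented values) computes the Coriolis, centrifugal and Euler terms for a frame rotating at constant Ω, in which case the Euler term vanishes.

import numpy as np

def fictitious_forces(m, omega, domega_dt, x_b, v_b):
    """Coriolis, centrifugal and Euler terms seen in a rotating frame."""
    coriolis = -2.0 * m * np.cross(omega, v_b)
    centrifugal = -m * np.cross(omega, np.cross(omega, x_b))
    euler = -m * np.cross(domega_dt, x_b)
    return coriolis, centrifugal, euler

m = 1.0
omega = np.array([0.0, 0.0, 0.5])   # rotation about the z-axis, rad/s
domega_dt = np.zeros(3)             # constant rotation rate -> no Euler force
x_b = np.array([2.0, 0.0, 0.0])     # body position in the rotating frame
v_b = np.array([0.0, 1.0, 0.0])     # body velocity seen by the rotating observer

for name, f in zip(("Coriolis", "centrifugal", "Euler"),
                   fictitious_forces(m, omega, domega_dt, x_b, v_b)):
    print(name, f)   # the centrifugal term points away from the rotation axis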
All observers agree on the real forces, F; only non-inertial observers need fictitious forces. The laws of physics in the inertial frame are simpler because unnecessary forces are not present.
In Newton's time the fixed stars were invoked as a reference frame, supposedly at rest relative to absolute space. In reference frames that were either at rest with respect to the fixed stars or in uniform translation relative to these stars, Newton's laws of motion were supposed to hold. In contrast, in frames accelerating with respect to the fixed stars, an important case being frames rotating relative to the fixed stars, the laws of motion did not hold in their simplest form, but had to be supplemented by the addition of fictitious forces, for example, the Coriolis force and the centrifugal force. Two experiments were devised by Newton to demonstrate how these forces could be discovered, thereby revealing to an observer that they were not in an inertial frame: the example of the tension in the cord linking two spheres rotating about their center of gravity, and the example of the curvature of the surface of water in a rotating bucket. In both cases, application of Newton's second law would not work for the rotating observer without invoking centrifugal and Coriolis forces to account for their observations (tension in the case of the spheres; parabolic water surface in the case of the rotating bucket).
As now known, the fixed stars are not fixed. Those that reside in the Milky Way turn with the galaxy, exhibiting proper motions. Those that are outside our galaxy (such as nebulae once mistaken to be stars) participate in their own motion as well, partly due to expansion of the universe, and partly due to peculiar velocities. For instance, the Andromeda Galaxy is on a collision course with the Milky Way at a speed of 117 km/s. The concept of inertial frames of reference is no longer tied to either the fixed stars or to absolute space. Rather, the identification of an inertial frame is based on the simplicity of the laws of physics in the frame.
The laws of nature take a simpler form in inertial frames of reference because in these frames one does not have to introduce inertial forces when writing down Newton's law of motion.
In practice, using a frame of reference based upon the fixed stars as though it were an inertial frame of reference introduces little discrepancy. For example, the centrifugal acceleration of the Earth because of its rotation about the Sun is about thirty million times greater than that of the Sun about the galactic center.
To illustrate further, consider the question: "Does the Universe rotate?" An answer might explain the shape of the Milky Way galaxy using the laws of physics, although other observations might be more definitive; that is, provide larger discrepancies or less measurement uncertainty, like the anisotropy of the microwave background radiation or Big Bang nucleosynthesis. The flatness of the Milky Way depends on its rate of rotation in an inertial frame of reference. If its apparent rate of rotation is attributed entirely to rotation in an inertial frame, a different "flatness" is predicted than if it is supposed that part of this rotation is actually due to rotation of the universe and should not be included in the rotation of the galaxy itself. Based upon the laws of physics, a model is set up in which one parameter is the rate of rotation of the Universe. If the laws of physics agree more accurately with observations in a model with rotation than without it, we are inclined to select the best-fit value for rotation, subject to all other pertinent experimental observations. If no value of the rotation parameter is successful and theory is not within observational error, a modification of physical law is considered; for example, dark matter is invoked to explain the galactic rotation curve. So far, observations show any rotation of the universe is very slow, no faster than about once every 6×10¹³ years (10⁻¹³ rad/yr), and debate persists over whether there is any rotation. However, if rotation were found, interpretation of observations in a frame tied to the universe would have to be corrected for the fictitious forces inherent in such rotation in classical physics and special relativity, or interpreted as the curvature of spacetime and the motion of matter along the geodesics in general relativity.
When quantum effects are important, there are additional conceptual complications that arise in quantum reference frames.
Primed frames
An accelerated frame of reference is often delineated as being the "primed" frame, and all variables that are dependent on that frame are notated with primes, e.g. x′, y′, a′.
The vector from the origin of an inertial reference frame to the origin of an accelerated reference frame is commonly notated as R. Given a point of interest that exists in both frames, the vector from the inertial origin to the point is called r, and the vector from the accelerated origin to the point is called r′.
From the geometry of the situation,
r = R + r′.
Taking the first and second derivatives of this with respect to time gives
v = V + v′ and a = A + a′,
where V and A are the velocity and acceleration of the accelerated system with respect to the inertial system and v and a are the velocity and acceleration of the point of interest with respect to the inertial frame.
These equations allow transformations between the two coordinate systems; for example, Newton's second law can be written as
F = m a = m (A + a′).
When there is accelerated motion due to a force being exerted, there is a manifestation of inertia. If an electric car designed to recharge its battery system when decelerating is switched to braking, the batteries are recharged, illustrating the physical strength of the manifestation of inertia. However, the manifestation of inertia does not prevent acceleration (or deceleration), for the manifestation of inertia occurs in response to a change in velocity due to a force. Seen from the perspective of a rotating frame of reference, the manifestation of inertia appears to exert a force (either in the centrifugal direction, or in a direction orthogonal to an object's motion, the Coriolis effect).
A common sort of accelerated reference frame is a frame that is both rotating and translating (an example is a frame of reference attached to a CD which is playing while the player is carried).
This arrangement leads to the equation (see Fictitious force for a derivation):
a = a′ + 2 Ω × v′ + Ω × (Ω × r′) + (dΩ/dt) × r′ + A,
or, to solve for the acceleration in the accelerated frame,
a′ = a − 2 Ω × v′ − Ω × (Ω × r′) − (dΩ/dt) × r′ − A.
Multiplying through by the mass m gives
m a′ = F_physical + F′_Euler + F′_Coriolis + F′_centrifugal − m A,
where
F′_Euler = −m (dΩ/dt) × r′ (Euler force),
F′_Coriolis = −2m Ω × v′ (Coriolis force),
F′_centrifugal = −m Ω × (Ω × r′) (centrifugal force).
Separating non-inertial from inertial reference frames
Theory
Inertial and non-inertial reference frames can be distinguished by the absence or presence of fictitious forces.
The presence of fictitious forces indicates that the physical laws are not the simplest laws available, so, in terms of the special principle of relativity, a frame where fictitious forces are present is not an inertial frame:
Bodies in non-inertial reference frames are subject to so-called fictitious forces (pseudo-forces); that is, forces that result from the acceleration of the reference frame itself and not from any physical force acting on the body. Examples of fictitious forces are the centrifugal force and the Coriolis force in rotating reference frames.
To apply the Newtonian definition of an inertial frame, the separation between "fictitious" forces and "real" forces must be made clear.
For example, consider a stationary object in an inertial frame. Being at rest, no net force is applied. But in a frame rotating about a fixed axis, the object appears to move in a circle, and is subject to centripetal force. How can it be decided that the rotating frame is a non-inertial frame? There are two approaches to this resolution: one approach is to look for the origin of the fictitious forces (the Coriolis force and the centrifugal force). It will be found there are no sources for these forces, no associated force carriers, no originating bodies. (For example, there is no body providing a gravitational or electrical attraction.) A second approach is to look at a variety of frames of reference. For any inertial frame, the Coriolis force and the centrifugal force disappear, so application of the principle of special relativity would identify these frames where the forces disappear as sharing the same and the simplest physical laws, and hence rule that the rotating frame is not an inertial frame.
Newton examined this problem himself using rotating spheres, as shown in Figure 2 and Figure 3. He pointed out that if the spheres are not rotating, the tension in the tying string is measured as zero in every frame of reference. (That is, the universality of the laws of physics requires the same tension to be seen by everybody. For example, it cannot happen that the string breaks under extreme tension in one frame of reference and remains intact in another frame of reference, just because we choose to look at the string from a different frame.) If the spheres only appear to rotate (that is, we are watching stationary spheres from a rotating frame), the zero tension in the string is accounted for by observing that the centripetal force is supplied by the centrifugal and Coriolis forces in combination, so no tension is needed. If the spheres really are rotating, the tension observed is exactly the centripetal force required by the circular motion. Thus, measurement of the tension in the string identifies the inertial frame: it is the one where the tension in the string provides exactly the centripetal force demanded by the motion as it is observed in that frame, and not a different value. That is, the inertial frame is the one where the fictitious forces vanish.
For linear acceleration, Newton expressed the idea of undetectability of straight-line accelerations held in common:
This principle generalizes the notion of an inertial frame. For example, an observer confined in a free-falling lift will assert that he himself is a valid inertial frame, even if he is accelerating under gravity, so long as he has no knowledge about anything outside the lift. So, strictly speaking, an inertial frame is a relative concept. With this in mind, inertial frames can collectively be defined as a set of frames which are stationary or moving at constant velocity with respect to each other, so that a single inertial frame is defined as an element of this set.
For these ideas to apply, everything observed in the frame has to be subject to a base-line, common acceleration shared by the frame itself. That situation would apply, for example, to the elevator example, where all objects are subject to the same gravitational acceleration, and the elevator itself accelerates at the same rate.
Applications
Inertial navigation systems use a cluster of gyroscopes and accelerometers to determine accelerations relative to inertial space. After a gyroscope is spun up in a particular orientation in inertial space, the law of conservation of angular momentum requires that it retain that orientation as long as no external forces are applied to it. Three orthogonal gyroscopes establish an inertial reference frame, and the accelerometers measure acceleration relative to that frame. The accelerations, along with a clock, can then be used to calculate the change in position. Thus, inertial navigation is a form of dead reckoning that requires no external input, and therefore cannot be jammed by any external or internal signal source.
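As a toy illustration of the dead-reckoning idea (Python with NumPy; the acceleration profile and time step are invented, and a real system would also use the gyroscopes to track orientation), position is recovered by integrating the measured acceleration twice over time.

import numpy as np

dt = 0.01                       # clock tick, s
t = np.arange(0.0, 10.0, dt)
accel = np.zeros((t.size, 3))   # accelerometer output in the inertial frame
accel[:, 0] = 0.2               # constant 0.2 m/s^2 along x, for illustration

vel = np.cumsum(accel, axis=0) * dt   # first integration: velocity
pos = np.cumsum(vel, axis=0) * dt     # second integration: position

print(pos[-1])   # ~[10, 0, 0]: close to 0.5 * 0.2 * 10**2 = 10 m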
A gyrocompass, employed for navigation of seagoing vessels, finds the geometric north. It does so, not by sensing the Earth's magnetic field, but by using inertial space as its reference. The outer casing of the gyrocompass device is held in such a way that it remains aligned with the local plumb line. When the gyroscope wheel inside the gyrocompass device is spun up, the way the gyroscope wheel is suspended causes the gyroscope wheel to gradually align its spinning axis with the Earth's axis. Alignment with the Earth's axis is the only direction for which the gyroscope's spinning axis can be stationary with respect to the Earth and not be required to change direction with respect to inertial space. After being spun up, a gyrocompass can reach the direction of alignment with the Earth's axis in as little as a quarter of an hour.
See also
Absolute rotation
Diffeomorphism
Galilean invariance
General covariance
Local reference frame
Lorentz covariance
Newton's first law
Quantum reference frame
References
Further reading
Edwin F. Taylor and John Archibald Wheeler, Spacetime Physics, 2nd ed. (Freeman, NY, 1992)
Albert Einstein, Relativity, the special and the general theories, 15th ed. (1954)
Albert Einstein, On the Electrodynamics of Moving Bodies, included in The Principle of Relativity, page 38. Dover 1923
Rotation of the Universe
B Ciobanu, I Radinchi Modeling the electric and magnetic fields in a rotating universe Rom. Journ. Phys., Vol. 53, Nos. 1–2, P. 405–415, Bucharest, 2008
Yuri N. Obukhov, Thoralf Chrobok, Mike Scherfner Shear-free rotating inflation Phys. Rev. D 66, 043518 (2002) [5 pages]
Yuri N. Obukhov On physical foundations and observational effects of cosmic rotation (2000)
Li-Xin Li Effect of the Global Rotation of the Universe on the Formation of Galaxies General Relativity and Gravitation, 30 (1998)
P Birch Is the Universe rotating? Nature 298, 451 – 454 (29 July 1982)
Kurt Gödel An example of a new type of cosmological solutions of Einstein's field equations of gravitation Rev. Mod. Phys., Vol. 21, p. 447, 1949.
External links
Stanford Encyclopedia of Philosophy entry
showing scenes as viewed from both an inertial frame and a rotating frame of reference, visualizing the Coriolis and centrifugal forces.
Category:Classical mechanics
Category:Frames of reference
Category:Theory of relativity
Category:Orbits
|
physics
| 5,549
|
15112
|
Wave interference
|
https://en.wikipedia.org/wiki/Wave_interference
|
In physics, interference is a phenomenon in which two coherent waves are combined by adding their intensities or displacements with due consideration for their phase difference. The resultant wave may have greater amplitude (constructive interference) or lower amplitude (destructive interference) if the two waves are in phase or out of phase, respectively.
Interference effects can be observed with all types of waves, for example, light, radio, acoustic, surface water waves, gravity waves, or matter waves as well as in loudspeakers as electrical waves.
Etymology
The word interference is derived from the Latin words inter which means "between" and fere which means "hit or strike", and was used in the context of wave superposition by Thomas Young in 1801.
Mechanisms
Interference of right-traveling (green) and left-traveling (blue) waves in two-dimensional space, resulting in the final (red) wave
The principle of superposition of waves states that when two or more propagating waves of the same type are incident on the same point, the resultant amplitude at that point is equal to the vector sum of the amplitudes of the individual waves (Ockenga, Wymke, "Phase contrast", Leica Science Lab, 9 June 2011: "If two waves interfere, the amplitude of the resulting light wave will be equal to the vector sum of the amplitudes of the two interfering waves."). If a crest of a wave meets a crest of another wave of the same frequency at the same point, then the amplitude is the sum of the individual amplitudes—this is constructive interference. If a crest of one wave meets a trough of another wave, then the amplitude is equal to the difference in the individual amplitudes—this is known as destructive interference. In ideal media (water and air are almost ideal), energy is always conserved; at points of destructive interference the wave amplitudes cancel each other out, and the energy is redistributed to other areas. For example, when two pebbles are dropped in a pond, a pattern is observable; but eventually the waves continue on, and only when they reach the shore is the energy absorbed away from the medium.
Constructive interference occurs when the phase difference between the waves is an even multiple of π (180°), whereas destructive interference occurs when the difference is an odd multiple of π. If the difference between the phases is intermediate between these two extremes, then the magnitude of the displacement of the summed waves lies between the minimum and maximum values.
Consider, for example, what happens when two identical stones are dropped into a still pool of water at different locations. Each stone generates a circular wave propagating outwards from the point where the stone was dropped. When the two waves overlap, the net displacement at a particular point is the sum of the displacements of the individual waves. At some points, these will be in phase, and will produce a maximum displacement. In other places, the waves will be in anti-phase, and there will be no net displacement at these points. Thus, parts of the surface will be stationary—these are seen in the figure above and to the right as stationary blue-green lines radiating from the centre.
Interference of light is a unique phenomenon in that we can never observe superposition of the EM field directly as we can, for example, in water. Superposition in the EM field is an assumed phenomenon and necessary to explain how two light beams pass through each other and continue on their respective paths. Prime examples of light interference are the famous double-slit experiment, laser speckle, anti-reflective coatings and interferometers.
In addition to the classical wave model for understanding optical interference, quantum matter waves also demonstrate interference.
Real-valued wave functions
The above can be demonstrated in one dimension by deriving the formula for the sum of two waves. The equation for the amplitude of a sinusoidal wave traveling to the right along the x-axis is
W1(x, t) = A cos(kx − ωt),
where A is the peak amplitude, k is the wavenumber and ω is the angular frequency of the wave. Suppose a second wave of the same frequency and amplitude but with a different phase is also traveling to the right,
W2(x, t) = A cos(kx − ωt + φ),
where φ is the phase difference between the waves in radians. The two waves will superpose and add: the sum of the two waves is
W1 + W2 = A [cos(kx − ωt) + cos(kx − ωt + φ)].
Using the trigonometric identity for the sum of two cosines, cos a + cos b = 2 cos((a − b)/2) cos((a + b)/2), this can be written
W1 + W2 = 2A cos(φ/2) cos(kx − ωt + φ/2).
This represents a wave at the original frequency, traveling to the right like its components, whose amplitude is proportional to the cosine of φ/2.
Constructive interference: If the phase difference is an even multiple of π (φ = …, −4π, −2π, 0, 2π, 4π, …), then |cos(φ/2)| = 1, so the sum of the two waves is a wave with twice the amplitude:
W1 + W2 = 2A cos(kx − ωt).
Destructive interference: If the phase difference is an odd multiple of π (φ = …, −3π, −π, π, 3π, …), then cos(φ/2) = 0, so the sum of the two waves is zero:
W1 + W2 = 0.
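A quick numerical check of this result (Python with NumPy; the amplitude, wavenumber and frequency are arbitrary): the superposition has amplitude 2A|cos(φ/2)|, which gives 2A for φ = 0 and zero for φ = π.

import numpy as np

A, k, w = 1.0, 2.0 * np.pi, 2.0 * np.pi   # amplitude, wavenumber, angular frequency
x = np.linspace(0.0, 1.0, 1000)           # one full spatial period
t = 0.0

for phi in (0.0, np.pi / 2, np.pi):
    w1 = A * np.cos(k * x - w * t)
    w2 = A * np.cos(k * x - w * t + phi)
    measured = np.max(np.abs(w1 + w2))
    predicted = 2.0 * A * abs(np.cos(phi / 2.0))
    print(phi, measured, predicted)       # the two numbers agree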
Between two plane waves
Geometrical arrangement for two plane wave interference
A simple form of interference pattern is obtained if two plane waves of the same frequency intersect at an angle.
One wave is travelling horizontally, and the other is travelling downwards at an angle θ to the first wave. Assuming that the two waves are in phase at the point B, then the relative phase changes along the x-axis. The phase difference at the point A is given by
Δφ = 2πd/λ = 2πx sin θ/λ.
It can be seen that the two waves are in phase when
x sin θ/λ = 0, ±1, ±2, …,
and are half a cycle out of phase when
x sin θ/λ = ±1/2, ±3/2, …
Constructive interference occurs when the waves are in phase, and destructive interference when they are half a cycle out of phase. Thus, an interference fringe pattern is produced, where the separation of the maxima is
df = λ/sin θ,
and is known as the fringe spacing. The fringe spacing increases with increase in wavelength, and with decreasing angle θ.
The fringes are observed wherever the two waves overlap and the fringe spacing is uniform throughout.
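A small worked example of the fringe-spacing relation (Python; the wavelength and angles are illustrative values, not taken from the text):

import math

lam = 633e-9                          # wavelength, m (a typical HeNe laser line)
for theta_deg in (0.5, 1.0, 5.0):
    theta = math.radians(theta_deg)
    spacing = lam / math.sin(theta)   # fringe spacing d_f = lambda / sin(theta)
    print(theta_deg, spacing)         # spacing shrinks as the angle grows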
Between two spherical waves
A point source produces a spherical wave. If the light from two point sources overlaps, the interference pattern maps out the way in which the phase difference between the two waves varies in space. This depends on the wavelength and on the separation of the point sources. The figure to the right shows interference between two spherical waves. The wavelength increases from top to bottom, and the distance between the sources increases from left to right.
When the plane of observation is far enough away, the fringe pattern will be a series of almost straight lines, since the waves will then be almost planar.
Multiple beams
Interference occurs when several waves are added together provided that the phase differences between them remain constant over the observation time.
It is sometimes desirable for several waves of the same frequency and amplitude to sum to zero (that is, interfere destructively, cancel). This is the principle behind, for example, 3-phase power and the diffraction grating. In both of these cases, the result is achieved by uniform spacing of the phases.
It is easy to see that a set of waves will cancel if they have the same amplitude and their phases are spaced equally in angle. Using phasors, each wave can be represented as A e^(i φ_n) for waves n = 0 to N − 1, where
φ_n = φ_0 + 2πn/N.
To show that
Σ (n = 0 to N − 1) A e^(i φ_n) = 0,
one merely assumes the converse, then multiplies both sides by e^(i 2π/N).
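The cancellation can also be verified numerically (Python; N and the overall phase are arbitrary): N phasors of equal amplitude with phases spaced by 2π/N sum to zero.

import cmath

def phasor_sum(N, A=1.0, phi0=0.3):
    # N waves with phases phi0 + 2*pi*n/N, n = 0..N-1
    return sum(A * cmath.exp(1j * (phi0 + 2.0 * cmath.pi * n / N)) for n in range(N))

for N in (2, 3, 7):
    print(N, abs(phasor_sum(N)))   # ~0 in every case (up to rounding error)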
The Fabry–Pérot interferometer uses interference between multiple reflections.
A diffraction grating can be considered to be a multiple-beam interferometer; since the peaks which it produces are generated by interference between the light transmitted by each of the elements in the grating; see interference vs. diffraction for further discussion.
Complex valued wave functions
Mechanical and gravity waves can be directly observed: they are real-valued wave functions; optical and matter waves cannot be directly observed: they are complex valued wave functions. Some of the differences between real valued and complex valued wave interference include:
The interference involves different types of mathematical functions: A classical wave is a real function representing the displacement from an equilibrium position; an optical or quantum wavefunction is a complex function. A classical wave at any point can be positive or negative; the quantum probability function is non-negative.
Any two different real waves in the same medium interfere; complex waves must be coherent to interfere. In practice this means the waves must come from the same source and have similar frequencies.
Real wave interference is obtained simply by adding the displacements from equilibrium (or amplitudes) of the two waves; In complex wave interference, we measure the modulus of the wavefunction squared.
Optical wave interference
Because the frequency of light waves (~10¹⁴ Hz) is too high for currently available detectors to detect the variation of the electric field of the light, it is possible to observe only the intensity of an optical interference pattern. The intensity of the light at a given point is proportional to the square of the average amplitude of the wave. This can be expressed mathematically as follows. The displacement of the two waves at a point r is:
U1(r, t) = A1(r) e^(i[φ1(r) − ωt]),  U2(r, t) = A2(r) e^(i[φ2(r) − ωt]),
where A represents the magnitude of the displacement, φ represents the phase and ω represents the angular frequency.
The displacement of the summed waves is
U(r, t) = A1(r) e^(i[φ1(r) − ωt]) + A2(r) e^(i[φ2(r) − ωt]).
The intensity of the light at r is given by
I(r) ∝ A1² + A2² + 2 A1 A2 cos(φ1 − φ2).
This can be expressed in terms of the intensities of the individual waves as
I(r) = I1 + I2 + 2 √(I1 I2) cos(φ1 − φ2).
Thus, the interference pattern maps out the difference in phase between the two waves, with maxima occurring when the phase difference is a multiple of 2π. If the two beams are of equal intensity, the maxima are four times as bright as the individual beams, and the minima have zero intensity.
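A brief check of the intensity formula (Python with NumPy; equal beam intensities are assumed for the illustration): the maxima reach four times the intensity of a single beam and the minima drop to zero.

import numpy as np

I1 = I2 = 1.0
dphi = np.linspace(0.0, 4.0 * np.pi, 9)   # phase difference between the beams
I = I1 + I2 + 2.0 * np.sqrt(I1 * I2) * np.cos(dphi)
print(I)   # oscillates between 4 (multiples of 2*pi) and 0 (odd multiples of pi)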
Classically the two waves must have the same polarization to give rise to interference fringes since it is not possible for waves of different polarizations to cancel one another out or add together. Instead, when waves of different polarization are added together, they give rise to a wave of a different polarization state.
Quantum mechanically, the theories of Paul Dirac and Richard Feynman offer a more modern approach. Dirac showed that every quantum or photon of light acts on its own, which he famously stated as "every photon interferes with itself". Richard Feynman showed that by evaluating a path integral where all possible paths are considered, a number of higher probability paths will emerge. In thin films, for example, a film thickness that is not a multiple of the light's wavelength will not allow the quanta to traverse; only reflection is possible.
Light source requirements
The discussion above assumes that the waves which interfere with one another are monochromatic, i.e. have a single frequency—this requires that they are infinite in time. This is not, however, either practical or necessary. Two identical waves of finite duration whose frequency is fixed over that period will give rise to an interference pattern while they overlap. Two identical waves which consist of a narrow spectrum of frequency waves of finite duration (but shorter than their coherence time), will give a series of fringe patterns of slightly differing spacings, and provided the spread of spacings is significantly less than the average fringe spacing, a fringe pattern will again be observed during the time when the two waves overlap.
Conventional light sources emit waves of differing frequencies and at different times from different points in the source. If the light is split into two waves and then re-combined, each individual light wave may generate an interference pattern with its other half, but the individual fringe patterns generated will have different phases and spacings, and normally no overall fringe pattern will be observable. However, single-element light sources, such as sodium- or mercury-vapor lamps have emission lines with quite narrow frequency spectra. When these are spatially and colour filtered, and then split into two waves, they can be superimposed to generate interference fringes. All interferometry prior to the invention of the laser was done using such sources and had a wide range of successful applications.
A laser beam generally approximates much more closely to a monochromatic source, and thus it is much more straightforward to generate interference fringes using a laser. The ease with which interference fringes can be observed with a laser beam can sometimes cause problems in that stray reflections may give spurious interference fringes which can result in errors.
Normally, a single laser beam is used in interferometry, though interference has been observed using two independent lasers whose frequencies were sufficiently matched to satisfy the phase requirements.
This has also been observed for widefield interference between two incoherent laser sources.
It is also possible to observe interference fringes using white light. A white light fringe pattern can be considered to be made up of a 'spectrum' of fringe patterns each of slightly different spacing. If all the fringe patterns are in phase in the centre, then the fringes will increase in size as the wavelength increases and the summed intensity will show three to four fringes of varying colour. Young describes this very elegantly in his discussion of two slit interference. Since white light fringes are obtained only when the two waves have travelled equal distances from the light source, they can be very useful in interferometry, as they allow the zero path difference fringe to be identified.
Optical arrangements
To generate interference fringes, light from the source has to be divided into two waves which then have to be re-combined. Traditionally, interferometers have been classified as either amplitude-division or wavefront-division systems.
In an amplitude-division system, a beam splitter is used to divide the light into two beams travelling in different directions, which are then superimposed to produce the interference pattern. The Michelson interferometer and the Mach–Zehnder interferometer are examples of amplitude-division systems.
In wavefront-division systems, the wave is divided in space—examples are Young's double slit interferometer and Lloyd's mirror.
Interference can also be seen in everyday phenomena such as iridescence and structural coloration. For example, the colours seen in a soap bubble arise from interference of light reflecting off the front and back surfaces of the thin soap film. Depending on the thickness of the film, different colours interfere constructively and destructively.
Quantum interference
Quantum interference – the observed wave-behavior of matter (Feynman R, Leighton R, and Sands M, "The Feynman Lectures on Physics, Volume III", The Feynman Lectures Website, September 2013, online edition) – resembles optical interference. Let Ψ be a wavefunction solution of the Schrödinger equation for a quantum mechanical object. Then the probability P(x) of observing the object in the interval dx is P(x) dx = Ψ* Ψ dx, where * indicates complex conjugation. Quantum interference concerns the issue of this probability when the wavefunction is expressed as a sum or linear superposition of two terms:
Ψ(x, t) = ΨA(x, t) + ΨB(x, t).
Usually, ΨA and ΨB correspond to distinct situations A and B. When this is the case, the equation Ψ = ΨA + ΨB indicates that the object can be in situation A or situation B. The probability then expands as
Ψ* Ψ = ΨA* ΨA + ΨB* ΨB + ΨA* ΨB + ΨB* ΨA.
The above equation can then be interpreted as: the probability of finding the object at x is the probability of finding the object at x when it is in situation A plus the probability of finding the object at x when it is in situation B plus an extra term. This extra term, which is called the quantum interference term, is ΨA* ΨB + ΨB* ΨA in the above equation. As in the classical wave case above, the quantum interference term can add (constructive interference) or subtract (destructive interference) from ΨA* ΨA + ΨB* ΨB in the above equation depending on whether the quantum interference term is positive or negative. If this term is absent for all x, then there is no quantum mechanical interference associated with situations A and B.
The best known example of quantum interference is the double-slit experiment. In this experiment, matter waves from electrons, atoms or molecules approach a barrier with two slits in it. The part of the wavefunction going through one slit is associated with while the part going through the other slit is associated with . The interference pattern occurs on the far side, observed by detectors suitable to the particles originating the matter wave. The pattern matches the optical double slit pattern.
Applications
Beat
In acoustics, a beat is an interference pattern between two sounds of slightly different frequencies, perceived as a periodic variation in volume whose rate is the difference of the two frequencies.
With tuning instruments that can produce sustained tones, beats can be readily recognized. Tuning two tones to a unison will present a peculiar effect: when the two tones are close in pitch but not identical, the difference in frequency generates the beating. The volume varies like in a tremolo as the sounds alternately interfere constructively and destructively. As the two tones gradually approach unison, the beating slows down and may become so slow as to be imperceptible. As the two tones get further apart, their beat frequency starts to approach the range of human pitch perception, the beating starts to sound like a note, and a combination tone is produced. This combination tone can also be referred to as a missing fundamental, as the beat frequency of any two tones is equivalent to the frequency of their implied fundamental frequency.
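A sketch of the beat phenomenon (Python with NumPy; the two frequencies are arbitrary choices): the sum of two nearby tones equals a fast carrier multiplied by a slow envelope, and the loudness varies at the difference frequency.

import numpy as np

f1, f2 = 440.0, 444.0                 # two tones, Hz; beat rate = |f1 - f2| = 4 Hz
t = np.linspace(0.0, 1.0, 48000, endpoint=False)
two_tones = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Same signal via the product identity: a slow envelope times a fast carrier
envelope = 2.0 * np.cos(np.pi * (f1 - f2) * t)
carrier = np.sin(np.pi * (f1 + f2) * t)
print(np.allclose(two_tones, envelope * carrier))   # True
print(abs(f1 - f2), "loudness swells per second")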
Interferometry
Interferometry is an experimental technique for measuring or using interference. It can be used with many types of waves. All interferometers require a source of coherent waves.
Optical interferometry
The simplest interferometer has a pinhole to create a coherent source followed by a mask with two holes and a screen to observe the interference. This gives the double-slit experiment. Modern versions replace the initial pinhole with the coherent light of a laser. Other wave-front splitting interferometers use mirrors or prisms to split and recombine waves; amplitude splitting devices use thin dielectric films. Multiple beam interferometers can include lenses.
The results of the Michelson–Morley experiment are generally considered to be the first strong evidence against the theory of a luminiferous aether and in favor of special relativity.
Interferometry has been used in defining and calibrating length standards. When the metre was defined as the distance between two marks on a platinum-iridium bar, Michelson and Benoît used interferometry to measure the wavelength of the red cadmium line in the new standard, and also showed that it could be used as a length standard. Sixty years later, in 1960, the metre in the new SI system was defined to be equal to 1,650,763.73 wavelengths of the orange-red emission line in the electromagnetic spectrum of the krypton-86 atom in a vacuum. This definition was replaced in 1983 by defining the metre as the distance travelled by light in vacuum during a specific time interval. Interferometry is still fundamental in establishing the calibration chain in length measurement.
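Turning the 1960 definition around gives the wavelength it implies (Python; simple arithmetic on the figure quoted above):

n_wavelengths = 1_650_763.73      # wavelengths of the Kr-86 orange-red line per metre
wavelength_m = 1.0 / n_wavelengths
print(wavelength_m * 1e9, "nm")   # ~605.78 nm, in the orange-red part of the spectrum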
Interferometry is used in the calibration of slip gauges (called gauge blocks in the US) and in coordinate-measuring machines. It is also used in the testing of optical components.RS Longhurst, Geometrical and Physical Optics, 1968, Longmans, London.
Radio interferometry
In 1946, a technique called astronomical interferometry was developed. Astronomical radio interferometers usually consist of either arrays of parabolic dishes or two-dimensional arrays of omni-directional antennas. All of the telescopes in the array are widely separated and are usually connected using coaxial cable, waveguide, optical fiber, or another type of transmission line. Interferometry increases the total signal collected, but its primary purpose is to vastly increase the resolution through a process called aperture synthesis. This technique works by superposing (interfering) the signal waves from the different telescopes on the principle that waves with the same phase add to each other while waves with opposite phases cancel each other out. This creates a combined telescope that is equivalent in resolution (though not in sensitivity) to a single antenna whose diameter is equal to the spacing of the antennas farthest apart in the array.
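The resolution argument can be illustrated with the diffraction-limited estimate θ ≈ λ/D. In the Python sketch below, the observing wavelength, dish diameter and maximum baseline are assumed example values, not taken from the text.

```python
import math

# An array with maximum baseline B resolves roughly like a single dish of diameter B.
wavelength = 0.21          # observing wavelength in metres (assumed example: 21 cm line)
baseline = 36_000.0        # longest antenna separation in metres (assumed example)
dish_diameter = 25.0       # diameter of one individual dish in metres (assumed example)

theta_array = wavelength / baseline        # synthesized-beam angular resolution (radians)
theta_dish = wavelength / dish_diameter    # resolution of a single dish (radians)

arcsec_per_radian = math.degrees(1) * 3600
print(f"single dish: {theta_dish * arcsec_per_radian:.0f} arcsec")
print(f"array:       {theta_array * arcsec_per_radian:.2f} arcsec")
```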
Acoustic interferometry
An acoustic interferometer is an instrument for measuring the physical characteristics of sound waves in a gas or liquid, such as velocity, wavelength, absorption, or impedance. A vibrating crystal creates ultrasonic waves that are radiated into the medium. The waves strike a reflector placed parallel to the crystal, are reflected back to the source, and are measured.
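A minimal Python sketch of how such a measurement yields the speed of sound follows; the drive frequency and the measured spacing between standing-wave maxima are assumed example values.

```python
# Moving the reflector between successive standing-wave maxima changes the
# path by half a wavelength, so v = f * lambda follows directly.
frequency = 1.0e6                 # crystal drive frequency in Hz (assumed example)
reflector_step = 0.17e-3          # spacing between successive maxima in m (assumed example)

wavelength = 2 * reflector_step   # maxima are half a wavelength apart
speed_of_sound = frequency * wavelength
print(f"estimated speed of sound: {speed_of_sound:.0f} m/s")   # about 340 m/s here
```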
See also
Active noise control
Beat (acoustics)
Coherence (physics)
Diffraction
Haidinger fringes
Interference lithography
Interference visibility
Interferometer
Lloyd's Mirror
Moiré pattern
Multipath interference
Newton's rings
Optical path length
Thin-film interference
Rayleigh roughness criterion
Upfade
References
External links
Easy JavaScript Simulation Model of One Dimensional Wave Interference
Java simulation of interference of water waves 1
Java simulation of interference of water waves 2
Flash animations demonstrating interference
Category:Wave mechanics
|
physics
| 3,378
|
16217
|
Jaguar
|
https://en.wikipedia.org/wiki/Jaguar
|
The jaguar (Panthera onca) is a large cat species and the only living member of the genus Panthera that is native to the Americas. With a body length of up to and a weight of up to , it is the biggest cat species in the Americas and the third largest in the world. Its distinctively marked coat features pale yellow to tan colored fur covered by spots that transition to rosettes on the sides, although a melanistic black coat appears in some individuals. The jaguar's powerful bite allows it to pierce the carapaces of turtles and tortoises, and to employ an unusual killing method: it bites directly through the skull of mammalian prey between the ears to deliver a fatal blow to the brain.
The modern jaguar's ancestors probably entered the Americas from Eurasia during the Early Pleistocene via the land bridge that once spanned the Bering Strait. Today, the jaguar's range extends from the Southwestern United States across Mexico and much of Central America, the Amazon rainforest and south to Paraguay and northern Argentina. It inhabits a variety of forested and open terrains, but its preferred habitat is tropical and subtropical moist broadleaf forest, wetlands and wooded regions. It is adept at swimming and is largely a solitary, opportunistic, stalk-and-ambush apex predator. As a keystone species, it plays an important role in stabilizing ecosystems and in regulating prey populations.
The jaguar is threatened by habitat loss, habitat fragmentation, poaching for trade with its body parts and killings in human–wildlife conflict situations, particularly with ranchers in Central and South America. It has been listed as Near Threatened on the IUCN Red List since 2002. The wild population is thought to have declined since the late 1990s. Priority areas for jaguar conservation comprise 51 Jaguar Conservation Units (JCUs), defined as large areas inhabited by at least 50 breeding jaguars. The JCUs are located in 36 geographic regions ranging from Mexico to Argentina.
The jaguar has featured prominently in the mythology of indigenous peoples of the Americas, including those of the Aztec and Maya civilizations.
Etymology
The word "jaguar" is possibly derived from the Tupi-Guarani word meaning 'wild beast that overcomes its prey at a bound'. Because jaguar also applies to other animals, indigenous peoples in Guyana call it , with the added suffix eté, meaning "true beast".
"Onca" is derived from the Portuguese name for a spotted cat that is larger than a lynx; cf. ounce. The word "panther" is derived from classical Latin , itself from the ancient Greek ().
In North America, the word is pronounced with two syllables, as , while in British English, it is pronounced with three, as .
Taxonomy and evolution
Taxonomy
In 1758, Carl Linnaeus described the jaguar in his work Systema Naturae and gave it the scientific name Felis onca.
In the 19th and 20th centuries, several jaguar type specimens formed the basis for descriptions of subspecies. In 1939, Reginald Innes Pocock recognized eight subspecies based on the geographic origins and skull morphology of these specimens.
Pocock did not have access to sufficient zoological specimens to critically evaluate their subspecific status but expressed doubt about the status of several. Later consideration of his work suggested only three subspecies should be recognized. The description of P. o. palustris was based on a fossil skull.
By 2005, nine subspecies were considered to be valid taxa:
P. o. onca was a jaguar from Brazil.
P. o. peruviana was a jaguar skull from Peru.
P. o. hernandesii was a jaguar from Mazatlán in Mexico.
P. o. palustris was a fossil jaguar mandible excavated in the Sierras Pampeanas of Córdova District, Argentina.
P. o. centralis was a skull of a male jaguar from Talamanca, Costa Rica.
P. o. goldmani was a jaguar skin from Yohatlan in Campeche, Mexico.
P. o. paraguensis was a skull of a male jaguar from Paraguay.
P. o. arizonensis was a skin and skull of a male jaguar from the vicinity of Cibecue, Arizona.
P. o. veraecrucis was a skull of a male jaguar from San Andrés Tuxtla in Mexico.
Reginald Innes Pocock placed the jaguar in the genus Panthera and observed that it shares several morphological features with the leopard (P. pardus). He, therefore, concluded that they are most closely related to each other. Results of morphological and genetic research indicate a clinal north–south variation between populations, but no evidence for subspecific differentiation. DNA analysis of 84 jaguar samples from South America revealed that the gene flow between jaguar populations in Colombia was high in the past. Since 2017, the jaguar is considered to be a monotypic taxon, though the modern Panthera onca onca is still distinguished from two fossil subspecies, Panthera onca augusta and Panthera onca mesembrina. However, the 2024 study suggested that the validity of subspecific assignments on both P. o. augusta and P. o. mesembrina remains unresolved, since both fossil and living jaguars show a considerable variation in morphometry.
Evolution
The Panthera lineage is estimated to have genetically diverged from the common ancestor of the Felidae around to . Some genetic analyses place the jaguar as a sister species to the lion with which it diverged , but other studies place the lion closer to the leopard.
The lineage of the jaguar appears to have originated in Africa and spread to Eurasia 1.95–1.77 mya. The living jaguar species is often suggested to have descended from the Eurasian Panthera gombaszogensis. The ancestor of the jaguar entered the American continent via Beringia, the land bridge that once spanned the Bering Strait. Some authors have disputed the close relationship between P. gombaszogensis (which is primarily known from Eurasia) and the modern jaguar. The oldest fossils of modern jaguars (P. onca) have been found in North America and date to between 850,000 and 820,000 years ago. Results of mitochondrial DNA analysis of 37 jaguars indicate that current populations evolved between 510,000 and 280,000 years ago in northern South America and subsequently recolonized North and Central America after the extinction of jaguars there during the Late Pleistocene.
Two extinct subspecies of jaguar are recognized in the fossil record: the North American P. o. augusta and South American P. o. mesembrina.
Description
The jaguar is a compact and muscular animal. It is the largest cat native to the Americas and the third largest in the world, exceeded in size only by the tiger and the lion. It stands tall at the shoulders.
Its size and weight vary considerably depending on sex and region: weights in most regions are normally in the range of . Exceptionally big males have been recorded to weigh as much as .
The smallest females from Middle America weigh about . It is sexually dimorphic, with females typically being 10–20% smaller than males. The length from the nose to the base of the tail varies from . The tail is long and the shortest of any big cat.
Its muscular legs are shorter than the legs of other Panthera species with similar body weight.
Size tends to increase from north to south. Jaguars in the Chamela-Cuixmala Biosphere Reserve on the Pacific coast of central Mexico weighed around .
Jaguars in Venezuela and Brazil are much larger, with average weights of about in males and of about in females.
The jaguar's coat ranges from pale yellow to tan or reddish-yellow, with a whitish underside and covered in black spots. The spots and their shapes vary: on the sides, they become rosettes which may include one or several dots. The spots on the head and neck are generally solid, as are those on the tail where they may merge to form bands near the end and create a black tip. They are elongated on the middle of the back, often connecting to create a median stripe, and blotchy on the belly. These patterns serve as camouflage in areas with dense vegetation and patchy shadows.
Jaguars living in forests are often darker and considerably smaller than those living in open areas, possibly due to the smaller numbers of large, herbivorous prey in forest areas.
The jaguar closely resembles the leopard but is generally more robust, with stockier limbs and a more square head. The rosettes on a jaguar's coat are larger, darker, fewer in number and have thicker lines, with a small spot in the middle.
It has powerful jaws with the third-highest bite force of all felids, after the tiger and the lion.
It has an average bite force at the canine tip of 887.0 newtons and a bite force quotient at the canine tip of 118.6.
A jaguar can bite with a force of with the canine teeth and at the carnassial notch.
Color variation
Melanistic jaguars are also known as black panthers. The black morph is less common than the spotted one.
Black jaguars have been documented in Central and South America. Melanism in the jaguar is caused by deletions in the melanocortin 1 receptor gene and inherited through a dominant allele. Black jaguars occur at higher densities in tropical rainforest and are more active during the daytime. This suggests that melanism provides camouflage in dense vegetation with high illumination.
In 2004, a camera trap in the Sierra Madre Occidental mountains photographed the first documented black jaguar in Northern Mexico. Black jaguars were also photographed in Costa Rica's Alberto Manuel Brenes Biological Reserve, in the mountains of the Cordillera de Talamanca, in Barbilla National Park and in eastern Panama.
Distribution and habitat
In 1999, the jaguar's historic range at the turn of the 20th century was estimated at , stretching from the southern United States through Central America to southern Argentina. By the turn of the 21st century, its global range had decreased to about , with most declines occurring in the southern United States, northern Mexico, northern Brazil, and southern Argentina.
Its present range extends from the United States, Mexico, through Central America to South America comprising Belize, Guatemala, Honduras, Nicaragua, Costa Rica, particularly on the Osa Peninsula, Panama, Colombia, Venezuela, Guyana, Suriname, French Guiana, Ecuador, Peru, Bolivia, Brazil, Paraguay and Argentina. It is considered to be locally extinct in El Salvador and Uruguay.
Jaguars have been occasionally sighted in Arizona, New Mexico and Texas, with 62 accounts reported in the 20th century.
Between 2012 and 2015, a male vagrant jaguar was recorded in 23 locations in the Santa Rita Mountains. Eight jaguars were photographed in the southwestern US between 1996 and 2024.
The jaguar prefers dense forest and typically inhabits dry deciduous forests, tropical and subtropical moist broadleaf forests, rainforests and cloud forests in Central and South America; open, seasonally flooded wetlands, dry grassland and historically also oak forests in the United States. It has been recorded at elevations up to but avoids montane forests. It favors riverine habitat and swamps with dense vegetation cover. In the Mayan forests of Mexico and Guatemala, 11 GPS-collared jaguars preferred undisturbed dense habitat away from roads; females avoided even areas with low levels of human activity, whereas males appeared less disturbed by human population density. A young male jaguar was also recorded in the semi-arid Sierra de San Carlos at a waterhole.
Former range
In the 19th century, the jaguar was still sighted at the North Platte River north of Longs Peak in Colorado, in coastal Louisiana, northern Arizona and New Mexico.
Multiple verified zoological reports of the jaguar are known in California, two as far north as Monterey in 1814 and 1826. The only record of an active jaguar den with breeding adults and kittens in the United States was in the Tehachapi Mountains of California prior to 1860. The jaguar persisted in California until about 1860.
The last confirmed jaguar in Texas was shot in 1948, southeast of Kingsville, Texas.
In Arizona, a female was shot in the White Mountains in 1963. By the late 1960s, the jaguar was thought to have been extirpated in the United States. Arizona outlawed jaguar hunting in 1969, but by then no females remained, and over the next 25 years only two males were sighted and killed in the state. In 1996, a rancher and hunting guide from Douglas, Arizona came across a jaguar in the Peloncillo Mountains and became a researcher on jaguars, placing trail cameras, which recorded four more jaguars.
Behavior and ecology
The jaguar is mostly active at night and during twilight.
However, jaguars living in densely forested regions of the Amazon rainforest and the Pantanal are largely active by day, whereas jaguars in the Atlantic Forest are primarily active by night.
The activity pattern of the jaguar coincides with the activity of its main prey species. Jaguars are good swimmers and play and hunt in the water, possibly more than tigers. They have been recorded moving between islands and the shore, swimming distances of at least 1.3 km. Jaguars are also good at climbing trees but do so less often than cougars.
Ecological role
The adult jaguar is an apex predator, meaning it is at the top of the food chain and is not preyed upon in the wild. The jaguar has also been termed a keystone species, as it is assumed that it controls the population levels of prey such as herbivorous and seed-eating mammals and thus maintains the structural integrity of forest systems.
However, field work has shown this may be natural variability, and the population increases may not be sustained. Thus, the keystone predator hypothesis is not accepted by all scientists.
The jaguar is sympatric with the cougar. In central Mexico, both prey on white-tailed deer, which makes up 54% and 66% of jaguar and cougar's prey, respectively. In northern Mexico, the jaguar and the cougar share the same habitat, and their diet overlaps dependent on prey availability. Jaguars seemed to prefer deer and calves. In Mexico and Central America, neither of the two cats are considered to be the dominant predator.
In South America, the jaguar is larger than the cougar and tends to take larger prey, usually over . The cougar's prey usually weighs between , which is thought to be the reason for its smaller size.
This situation may be advantageous to the cougar. Its broader prey niche, including its ability to take smaller prey, may give it an advantage over the jaguar in human-altered landscapes.
Hunting and diet
The jaguar is an obligate carnivore and depends solely on flesh for its nutrient requirements. An analysis of 53 studies documenting the diet of the jaguar revealed that its prey ranges in weight from ; it prefers prey weighing , with the capybara and the giant anteater being the most selected. When available, it also preys on marsh deer, southern tamandua, collared peccary and black agouti. In floodplains, jaguars opportunistically take reptiles such as green anacondas, turtles and caimans. Consumption of reptiles appears to be more frequent in jaguars than in other big cats. One remote population in the Brazilian Pantanal is recorded to primarily feed on aquatic reptiles and fish.
The jaguar also preys on livestock in cattle ranching areas where wild prey is scarce.
The daily food requirement of a captive jaguar weighing was estimated at of meat.
The jaguar's bite force allows it to pierce the carapaces of the yellow-spotted Amazon river turtle and the yellow-footed tortoise. It employs an unusual killing method: it bites mammalian prey directly through the skull between the ears to deliver a fatal bite to the brain. It kills capybara by piercing its canine teeth through the temporal bones of its skull, breaking its zygomatic arch and mandible and penetrating its brain, often through the ears.
It has been hypothesized to be an adaptation to cracking open turtle shells; armored reptiles may have formed an abundant prey base for the jaguar following the late Pleistocene extinctions. However, this is disputed, as even in areas where jaguars prey on reptiles, they are still taken relatively infrequently compared to mammals in spite of their greater abundance.
Between October 2001 and April 2004, 10 jaguars were monitored in the southern Pantanal. In the dry season from April to September, they killed prey at intervals ranging from one to seven days; and ranging from one to 16 days in the wet season from October to March.
The jaguar uses a stalk-and-ambush strategy when hunting rather than chasing prey. The cat will slowly walk down forest paths, listening for and stalking prey before rushing or ambushing. The jaguar attacks from cover and usually from a target's blind spot with a quick pounce; the species' ambushing abilities are considered nearly peerless in the animal kingdom by both indigenous people and field researchers and are probably a product of its role as an apex predator in several different environments. The ambush may include leaping into water after prey, as a jaguar is quite capable of carrying a large kill while swimming; its strength is such that carcasses as large as a heifer can be hauled up a tree to avoid flood levels. After killing prey, the jaguar will drag the carcass to a thicket or other secluded spot. It begins eating at the neck and chest. The heart and lungs are consumed, followed by the shoulders.
Social activity
The jaguar is generally solitary except for females with cubs. In 1977, groups consisting of a male, female and cubs, and two females with two males were sighted several times in a study area in the Paraguay River valley; a radio-collared female moved in a home range of , which partly overlapped with another female. The home range of the male in this study area overlapped with several females. In the Venezuelan Llanos and Brazilian Pantanal, male coalitions were detected, which marked, defended and invaded territories together, hunted together and mated with several females.
The jaguar uses scrape marks, urine, and feces to mark its territory.
The size of home ranges depends on the level of deforestation and human population density. The home ranges of females vary from in the Pantanal to in the Amazon to in the Atlantic Forest. Male jaguar home ranges vary from in the Pantanal to in the Amazon to in the Atlantic Forest and in the Cerrado.
Studies employing GPS telemetry in 2003 and 2004 found densities of only six to seven jaguars per in the Pantanal region, compared with 10 to 11 using traditional methods; this suggests the widely used sampling methods may inflate the actual numbers of individuals in a sampling area. Fights between males occur but are rare, and avoidance behavior has been observed in the wild. In one wetland population with degraded territorial boundaries and more social proximity, adults of the same sex are more tolerant of each other and engage in more friendly and co-operative interactions.
The jaguar roars/grunts for long-distance communication; intensive bouts of counter-calling between individuals have been observed in the wild. This vocalization is described as "hoarse" with five or six guttural notes. Chuffing is produced by individuals when greeting, during courting, or by a mother comforting her cubs. This sound is described as low intensity snorts, possibly intended to signal tranquility and passivity. Cubs have been recorded bleating, gurgling and mewing.
Reproduction and life cycle
In captivity, the female jaguar is recorded to reach sexual maturity at the age of about 2.5 years. Estrus lasts 7–15 days with an estrus cycle of 41.8 to 52.6 days. During estrus, she exhibits increased restlessness with rolling and prolonged vocalizations.
She is an induced ovulator but can also ovulate spontaneously.
Gestation lasts 91 to 111 days.
The male is sexually mature at the age of three to four years.
His mean ejaculate volume is 8.6±1.3 ml.
Generation length of the jaguar is 9.8 years.
In the Pantanal, breeding pairs were observed to stay together for up to five days. Females had one to two cubs.
The young are born with closed eyes but open them after two weeks. Cubs are weaned at the age of three months but remain in the birth den for six months before leaving to accompany their mother on hunts.
Jaguars remain with their mothers for up to two years. They appear to rarely live beyond 11 years, but captive individuals may live 22 years.
In 2001, a male jaguar killed and partially consumed two cubs in Emas National Park. DNA paternity testing of blood samples revealed that the male was the father of the cubs. Two more cases of infanticide were documented in the northern Pantanal in 2013. To defend against infanticide, the female hides her cubs and distracts the male with courtship behavior.
Attacks on humans
The Spanish conquistadors feared the jaguar. According to Charles Darwin, the indigenous peoples of South America stated that people did not need to fear the jaguar as long as capybaras were abundant.
The first official record of a jaguar killing a human in Brazil dates to June 2008.
Two children were attacked by jaguars in Guyana.
The majority of known attacks on people happened when it had been cornered or wounded.
Threats
The jaguar is threatened by loss and fragmentation of habitat, illegal killing in retaliation for livestock depredation and for illegal trade in jaguar body parts. It is listed as Near Threatened on the IUCN Red List since 2002, as the jaguar population has probably declined by 20–25% since the mid-1990s. Deforestation is a major threat to the jaguar across its range. Habitat loss was most rapid in drier regions such as the Argentine pampas, the arid grasslands of Mexico and the southwestern United States.
In 2002, it was estimated that the range of the jaguar had declined to about 46% of its range in the early 20th century. In 2018, it was estimated that its range had declined by 55% in the last century. The only remaining stronghold is the Amazon rainforest, a region that is rapidly being fragmented by deforestation.
Between 2000 and 2012, forest loss in the jaguar range amounted to , with fragmentation increasing in particular in corridors between Jaguar Conservation Units (JCUs).
By 2014, direct linkages between two JCUs in Bolivia were lost, and two JCUs in northern Argentina became completely isolated due to deforestation.
In Mexico, the jaguar is primarily threatened by poaching. Its habitat is fragmented in northern Mexico, in the Gulf of Mexico and the Yucatán Peninsula, caused by changes in land use, construction of roads and tourism infrastructure.
In Panama, 220 of 230 jaguars killed between 1998 and 2014 were killed in retaliation for predation on livestock.
In Venezuela, the jaguar was extirpated in about 26% of its range in the country since 1940, mostly in dry savannas and unproductive scrubland in the northeastern region of Anzoátegui.
In Ecuador, the jaguar is threatened by reduced prey availability in areas where the expansion of the road network facilitated access of human hunters to forests.
In the Alto Paraná Atlantic forests, at least 117 jaguars were killed in Iguaçu National Park and the adjacent Misiones Province between 1995 and 2008.
Some Afro-Colombians in the Colombian Chocó Department hunt jaguars for consumption and sale of meat.
Between 2008 and 2012, at least 15 jaguars were killed by livestock farmers in central Belize.
The international trade of jaguar skins boomed between the end of the Second World War and the early 1970s.
Significant declines occurred in the 1960s, when more than 15,000 jaguars were killed each year for their skins in the Brazilian Amazon alone; the trade in jaguar skins has decreased since 1973, when the Convention on International Trade in Endangered Species was enacted.
Interview surveys with 533 people in the northwestern Bolivian Amazon revealed that local people killed jaguars out of fear, in retaliation, and for trade.
Between August 2016 and August 2019, jaguar skins and body parts were seen for sale in tourist markets in the Peruvian cities of Lima, Iquitos and Pucallpa.
Human-wildlife conflict, opportunistic hunting and hunting for trade in domestic markets are key drivers for killing jaguars in Belize and Guatemala.
Seizure reports indicate that at least 857 jaguars were involved in trade between 2012 and 2018, including 482 individuals in Bolivia alone; 31 jaguars were seized in China.
Between 2014 and early 2019, 760 jaguar fangs were seized that originated in Bolivia and were destined for China. Undercover investigations revealed that the smuggling of jaguar body parts is run by Chinese residents in Bolivia.
Conservation
The jaguar is listed on CITES Appendix I, which means that all international commercial trade in jaguars or their body parts is prohibited. Hunting jaguars is prohibited in Argentina, Brazil, Colombia, French Guiana, Honduras, Nicaragua, Panama, Paraguay, Suriname, the United States, and Venezuela. Hunting jaguars is restricted in Guatemala and Peru. In Ecuador, hunting jaguars is prohibited, and it is classified as threatened with extinction.
In Guyana, it is protected as an endangered species, and hunting it is illegal.
In 1986, the Cockscomb Basin Wildlife Sanctuary was established in Belize as the world's first protected area for jaguar conservation.
Jaguar Conservation Units
In 1999, field scientists from 18 jaguar range countries determined the most important areas for long-term jaguar conservation based on the status of jaguar population units, stability of prey base and quality of habitat. These areas, called "Jaguar Conservation Units" (JCUs), are large enough for at least 50 breeding individuals and range in size from ; 51 JCUs were designated in 36 geographic regions including:
the Sierra Madre Occidental and Sierra de Tamaulipas in Mexico
the Selva Maya tropical forests extending over Mexico, Belize and Guatemala
the Chocó–Darién moist forests from Honduras and Panama to Colombia
Venezuelan Llanos
northern Cerrado and Amazon basin in Brazil
Tropical Andes in Bolivia and Peru
Misiones Province in Argentina
Optimal routes of travel between core jaguar population units were identified across its range in 2010 to implement wildlife corridors that connect JCUs. These corridors represent areas with the shortest distance between jaguar breeding populations, require the least possible energy input of dispersing individuals and pose a low mortality risk. They cover an area of and range in length from in Mexico and Central America and from in South America.
Cooperation with local landowners and municipal, state, or federal agencies is essential to maintain connected populations and prevent fragmentation in both JCUs and corridors.
Seven of 13 corridors in Mexico are functioning with a width of at least and a length of no more than . The other corridors may hamper passage, as they are narrower and longer.
In August 2012, the United States Fish and Wildlife Service set aside in Arizona and New Mexico for the protection of the jaguar. The Jaguar Recovery Plan was published in April 2019, in which Interstate 10 is considered to form the northern boundary of the Jaguar Recovery Unit in Arizona and New Mexico.
In Mexico, a national conservation strategy was developed from 2005 on and published in 2016. The Mexican jaguar population increased from an estimated 4,000 individuals in 2010 to about 4,800 individuals in 2018. This increase is seen as a positive effect of conservation measures that were implemented in cooperation with governmental and non-governmental institutions and landowners.
An evaluation of JCUs from Mexico to Argentina revealed that they overlap with high-quality habitats of about 1,500 mammals to varying degrees. Since co-occurring mammals benefit from the JCU approach, the jaguar has been called an umbrella species.
Central American JCUs overlap with the habitat of 187 of 304 regional endemic amphibian and reptile species, of which 19 amphibians occur only in the jaguar range.
Approaches
In setting up protected reserves, efforts generally also have to be focused on the surrounding areas, as jaguars are unlikely to confine themselves to the bounds of a reservation, especially if the population is increasing in size. Human attitudes in the areas surrounding reserves and laws and regulations to prevent poaching are essential to make conservation areas effective.
To estimate population sizes within specific areas and to keep track of individual jaguars, camera trapping and wildlife tracking telemetry are widely used, and feces are sought out with the help of detection dogs to study jaguar health and diet.
Current conservation efforts often focus on educating ranch owners and promoting ecotourism.
Conservationists and professionals in Mexico and the United States have established the Northern Jaguar Reserve in northern Mexico. Advocacy for reintroduction of the jaguar to its former range in Arizona and New Mexico has been supported by documentation of natural migrations by individual jaguars into the southern reaches of both states, the recency of extirpation from those regions by human action, and supportive arguments pertaining to biodiversity, ecological, human, and practical considerations.
In culture and mythology
In the pre-Columbian Americas, the jaguar was a symbol of power and strength. In the Andes, a jaguar cult disseminated by the early Chavín culture became accepted over most of today's Peru by 900 BC. The later Moche culture in northern Peru used the jaguar as a symbol of power in many of their ceramics. In the Muisca religion in Altiplano Cundiboyacense, the jaguar was considered a sacred animal, and people dressed in jaguar skins during religious rituals.
The skins were traded with peoples in the nearby Orinoquía Region.
The name of the Muisca ruler Nemequene was derived from the Chibcha words nymy and quyne, meaning "force of the jaguar".
Sculptures with "Olmec were-jaguar" motifs were found on the Yucatán Peninsula in Veracruz and Tabasco; they show stylized jaguars with half-human faces. In the later Maya civilization, the jaguar was known as balam or bolom in many of the Mayan languages, and was used to symbolize warriors and the elite class for being brave, fierce and strong. It was associated with the underworld and its image was used to decorate tombs and grave-good vessels.
The Aztec civilization called the jaguar ocelotl and considered it to be the king of the animals. It was believed to be fierce and courageous, but also wise, dignified and careful. The military had two classes of warriors, the ocelotl or jaguar warriors and the cuauhtli or eagle warriors and each dressed like their representative animal. In addition, members of the royal class would decorate in jaguar skins. The jaguar was considered to be the totem animal of the powerful deities Tezcatlipoca and Tepeyollotl.
A conch shell gorget depicting a jaguar was found in a burial mound in Benton County, Missouri. The gorget shows evenly-engraved lines and measures .
Rock drawings made by the Hopi, Anasazi and Pueblo all over the desert and chaparral regions of the American Southwest show an explicitly spotted cat, presumably a jaguar, as it is drawn much larger than an ocelot.
See also
List of largest cats
References
External links
People and Jaguars a Guide for Coexistence
Felidae Conservation Fund
Category:Apex predators
Category:Big cats
Category:Carnivorans of Brazil
Category:ESA endangered species
Category:Extant Middle Pleistocene first appearances
Category:Fauna of the Amazon
Category:Fauna of the Caatinga
Category:Fauna of the Cerrado
Category:Fauna of the Pantanal
Category:Fauna of the Atlantic Forest
Category:Fauna of the Southwestern United States
Category:Felids of Central America
Category:Felids of North America
Category:Felids of South America
Category:Mammals described in 1758
Category:Near threatened animals
Category:Near threatened biota of North America
Category:Near threatened biota of South America
onca
Category:Pleistocene mammals of North America
Category:Pleistocene mammals of South America
Category:Animal taxa named by Carl Linnaeus
|
nature_wildlife
| 5,182
|
18203
|
Lambda calculus
|
https://en.wikipedia.org/wiki/Lambda_calculus
|
In mathematical logic, the lambda calculus (also written as λ-calculus) is a formal system for expressing computation based on function abstraction and application using variable binding and substitution. Untyped lambda calculus, the topic of this article, is a universal machine, a model of computation that can be used to simulate any Turing machine (and vice versa). It was introduced by the mathematician Alonzo Church in the 1930s as part of his research into the foundations of mathematics. In 1936, Church found a formulation which was logically consistent, and documented it in 1940.
Definition
The lambda calculus consists of a language of lambda terms, that are defined by a certain formal syntax, and a set of transformation rules for manipulating the lambda terms. In BNF, the syntax is where variables range over an infinite set of names. Terms range over all lambda terms. This corresponds to the following inductive definition:
A variable is itself a valid lambda term.
An abstraction is a lambda term where is a lambda term and is a variable,
An application is a lambda term where and are lambda terms.
A lambda term is syntactically valid if and only if it can be obtained by repeated application of these three rules. For convenience, parentheses can often be omitted when writing a lambda term—see for details.
In the term , occurrences of within that are under the scope of this λ are termed bound; any occurrence of a variable not bound by an enclosing λ is free. is the set of free variables of . The notation denotes capture-avoiding substitution: substituting for every free occurrence of in , while avoiding variable capture. This operation is defined inductively as follows:
; if .
.
has three cases:
If , becomes ( is bound; no change).
If , becomes .
If , first α-rename to with fresh to avoid name collisions, then continue as above.
There are several notions of "equivalence" and "reduction" that make it possible to reduce lambda terms to equivalent lambda terms.
α-conversion captures the intuition that the particular choice of a bound variable, in an abstraction, does not (usually) matter. If , then the terms and are considered alpha-equivalent, written . The equivalence relation is the smallest congruence relation on lambda terms generated by this rule. For instance, and are alpha-equivalent lambda terms.
The β-reduction rule states that a β-redex, an application of the form , reduces to the term . For example, for every , . This demonstrates that really is the identity. Similarly, , which demonstrates that is a constant function.
η-conversion expresses extensionality and converts between and whenever does not appear free in . It is often omitted in many treatments of lambda calculus.
The term redex, short for reducible expression, refers to subterms that can be reduced by one of the reduction rules. For example, (λx.M) N is a β-redex in expressing the substitution of N for x in M. The expression to which a redex reduces is called its reduct; the reduct of (λx.M) N is M[x := N].
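The substitution rules and β-reduction above can be made concrete with a small interpreter. The Python sketch below implements capture-avoiding substitution and a single normal-order β-step; the class and function names are our own, not notation from the text.

```python
from dataclasses import dataclass
from itertools import count
from typing import Union

@dataclass(frozen=True)
class Var:                 # a variable
    name: str

@dataclass(frozen=True)
class Lam:                 # an abstraction  λname. body
    name: str
    body: "Term"

@dataclass(frozen=True)
class App:                 # an application  func arg
    func: "Term"
    arg: "Term"

Term = Union[Var, Lam, App]
_fresh = count()           # supply of fresh variable names for alpha-renaming

def free_vars(t: Term) -> set:
    if isinstance(t, Var):
        return {t.name}
    if isinstance(t, Lam):
        return free_vars(t.body) - {t.name}
    return free_vars(t.func) | free_vars(t.arg)

def subst(t: Term, x: str, s: Term) -> Term:
    """t[x := s], renaming bound variables where needed to avoid capture."""
    if isinstance(t, Var):
        return s if t.name == x else t
    if isinstance(t, App):
        return App(subst(t.func, x, s), subst(t.arg, x, s))
    if t.name == x:                       # x is bound here; no substitution inside
        return t
    if t.name in free_vars(s):            # alpha-rename the binder to a fresh name
        fresh = f"{t.name}_{next(_fresh)}"
        renamed = subst(t.body, t.name, Var(fresh))
        return Lam(fresh, subst(renamed, x, s))
    return Lam(t.name, subst(t.body, x, s))

def beta_step(t: Term) -> Term:
    """One normal-order step: reduce the leftmost outermost beta-redex, if any."""
    if isinstance(t, App):
        if isinstance(t.func, Lam):       # (λx.M) N  ->  M[x := N]
            return subst(t.func.body, t.func.name, t.arg)
        reduced = beta_step(t.func)
        if reduced != t.func:
            return App(reduced, t.arg)
        return App(t.func, beta_step(t.arg))
    if isinstance(t, Lam):
        return Lam(t.name, beta_step(t.body))
    return t

# Example: (λx.λy.x) y — the bound y must be renamed so the free argument y is not captured.
term = App(Lam("x", Lam("y", Var("x"))), Var("y"))
print(beta_step(term))    # Lam(name='y_0', body=Var(name='y'))
```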
Explanation and applications
Lambda calculus is Turing complete, that is, it is a universal model of computation that can be used to simulate any Turing machine. Its namesake, the Greek letter lambda (λ), is used in lambda expressions and lambda terms to denote binding a variable in a function.
Lambda calculus may be untyped or typed. In typed lambda calculus, functions can be applied only if they are capable of accepting the given input's "type" of data. Typed lambda calculi are strictly weaker than the untyped lambda calculus, which is the primary subject of this article, in the sense that typed lambda calculi can express less than the untyped calculus can. On the other hand, more things can be proven with typed lambda calculi. For example, in simply typed lambda calculus, it is a theorem that every evaluation strategy terminates for every simply typed lambda-term, whereas evaluation of untyped lambda-terms need not terminate (see below). One reason there are many different typed lambda calculi has been the desire to do more (of what the untyped calculus can do) without giving up on being able to prove strong theorems about the calculus.
Lambda calculus has applications in many different areas in mathematics, philosophy, linguistics, and computer science. Lambda calculus has played an important role in the development of the theory of programming languages. Functional programming languages implement lambda calculus. Lambda calculus is also a current research topic in category theory.
History
Lambda calculus was introduced by mathematician Alonzo Church in the 1930s as part of an investigation into the foundations of mathematics. The original system was shown to be logically inconsistent in 1935 when Stephen Kleene and J. B. Rosser developed the Kleene–Rosser paradox.
Subsequently, in 1936 Church isolated and published just the portion relevant to computation, what is now called the untyped lambda calculus. In 1940, he also introduced a computationally weaker, but logically consistent system, known as the simply typed lambda calculus.
Until the 1960s when its relation to programming languages was clarified, the lambda calculus was only a formalism. Thanks to Richard Montague and other linguists' applications in the semantics of natural language, the lambda calculus has begun to enjoy a respectable place in both linguistics and computer science.
Origin of the λ symbol
There is some uncertainty over the reason for Church's use of the Greek letter lambda (λ) as the notation for function-abstraction in the lambda calculus, perhaps in part due to conflicting explanations by Church himself. According to Cardone and Hindley (2006):
By the way, why did Church choose the notation "λ"? In [an unpublished 1964 letter to Harald Dickson] he stated clearly that it came from the notation "" used for class-abstraction by Whitehead and Russell, by first modifying "" to "" to distinguish function-abstraction from class-abstraction, and then changing "" to "λ" for ease of printing.
This origin was also reported in [Rosser, 1984, p.338]. On the other hand, in his later years Church told two enquirers that the choice was more accidental: a symbol was needed and λ just happened to be chosen.
Dana Scott has also addressed this question in various public lectures.Dana Scott, "Looking Backward; Looking Forward", Invited Talk at the Workshop in honour of Dana Scott's 85th birthday and 50 years of domain theory, 7–8 July, FLoC 2018 (talk 7 July 2018). The relevant passage begins at 32:50. (See also this extract of a May 2016 talk at the University of Birmingham, UK.)
Scott recounts that he once posed a question about the origin of the lambda symbol to Church's former student and son-in-law John W. Addison Jr., who then wrote his father-in-law a postcard:
Dear Professor Church,
Russell had the iota operator, Hilbert had the epsilon operator. Why did you choose lambda for your operator?
According to Scott, Church's entire response consisted of returning the postcard with the following annotation: "eeny, meeny, miny, moe".
Motivation
Computable functions are a fundamental concept within computer science and mathematics. The lambda calculus provides simple semantics for computation which are useful for formally studying properties of computation. The lambda calculus incorporates two simplifications that make its semantics simple.
The first simplification is that the lambda calculus treats functions "anonymously"; it does not give them explicit names. For example, the function
can be rewritten in anonymous form as
(which is read as "a tuple of and is mapped to "). Similarly, the function
can be rewritten in anonymous form as
where the input is simply mapped to itself.
The second simplification is that the lambda calculus only uses functions of a single input. An ordinary function that requires two inputs, for instance the function, can be reworked into an equivalent function that accepts a single input, and as output returns another function, that in turn accepts a single input. For example,
can be reworked into
This method, known as currying, transforms a function that takes multiple arguments into a chain of functions each with a single argument.
Function application of the function to the arguments (5, 2), yields at once
,
whereas evaluation of the curried version requires one more step
// the definition of has been used with in the inner expression. This is like β-reduction.
// the definition of has been used with . Again, similar to β-reduction.
to arrive at the same result.
In lambda calculus, functions are taken to be 'first class values', so functions may be used as the inputs, or be returned as outputs from other functions. For example, the lambda term represents the identity function, . Further, represents the constant function , the function that always returns , no matter the input. As an example of a function operating on functions, the function composition can be defined as .
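A hedged Python sketch of currying and of functions as first-class values follows; the helper names (curry, identity, const, compose) are our own, not notation from the text.

```python
def curry(f):
    """Turn a two-argument function into a chain of one-argument functions."""
    return lambda x: lambda y: f(x, y)

add = lambda x, y: x + y
curried_add = curry(add)
assert curried_add(5)(2) == add(5, 2) == 7

identity = lambda x: x                            # λx.x
const = lambda x: lambda y: x                     # λx.λy.x — always returns its first argument
compose = lambda f: lambda g: lambda x: f(g(x))   # function composition as a first-class value

assert compose(lambda n: n + 1)(lambda n: n * 2)(10) == 21
```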
Normal forms and confluence
It can be shown that β-reduction is confluent when working up to α-conversion (i.e. we consider two normal forms to be equal if it is possible to α-convert one into the other). If repeated application of the reduction steps eventually terminates, then by the Church–Rosser theorem it will produce a unique β-normal form. However, the untyped lambda calculus as a rewriting rule under β-reduction is neither strongly normalising nor weakly normalising; there are terms with no normal form such as .
Considering individual terms, both strongly normalising terms and weakly normalising terms have a unique normal form. For strongly normalising terms, any reduction strategy is guaranteed to yield the normal form, whereas for weakly normalising terms, some reduction strategies may fail to find it.
Encoding datatypes
The basic lambda calculus may be used to model arithmetic, Booleans, data structures, and recursion, as illustrated in the following sub-sections.
Arithmetic in lambda calculus
There are several possible ways to define the natural numbers in lambda calculus, but by far the most common are the Church numerals, which can be defined as follows:
and so on. Or using an alternative syntax allowing multiple uncurried arguments to a function:
A Church numeral is a higher-order function—it takes a single-argument function , and returns another single-argument function. The Church numeral is a function that takes a function as argument and returns the -th composition of , i.e. the function composed with itself times. This is denoted and is in fact the -th power of (considered as an operator); is defined to be the identity function. Functional composition is associative, and so, such repeated compositions of a single function obey two laws of exponents, and , which is why these numerals can be used for arithmetic. (In Church's original lambda calculus, the formal parameter of a lambda expression was required to occur at least once in the function body, which made the above definition of impossible.)
One way of thinking about the Church numeral , which is often useful when analyzing programs, is as an instruction 'repeat n times'. For example, using the and functions defined below, one can define a function that constructs a (linked) list of n elements all equal to x by repeating 'prepend another x element' n times, starting from an empty list. The lambda term
creates, given a Church numeral and some , a sequence of n applications
By varying what is being repeated, and what argument(s) that function being repeated is applied to, a great many different effects can be achieved.
We can define a successor function, which takes a Church numeral and returns its successor by performing one additional application of the function it is supplied with, where means "n applications of f starting from x":
Because the -th composition of composed with the -th composition of gives the -th composition of , , addition can be defined as
can be thought of as a function taking two natural numbers as arguments and returning a natural number; it can be verified that
and
are beta-equivalent lambda expressions. Since adding to a number can be accomplished by repeating the successor operation times, an alternative definition is:
Similarly, following , multiplication can be defined as
Thus multiplication of Church numerals is simply their composition as functions. Alternatively
since multiplying and is the same as adding repeatedly, times, starting from zero.
Exponentiation, being the repeated multiplication of a number with itself, translates as a repeated composition of a Church numeral with itself, as a function. And repeated composition is what Church numerals are:
Alternatively here as well,
Simplifying, it becomes
but that is just an eta-expanded version of we already have, above.
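The Church-numeral arithmetic above can be exercised directly in any language with first-class functions. The Python sketch below encodes zero, successor, addition, multiplication and exponentiation, and converts back to ordinary integers for checking; the helper names are our own.

```python
# Church numerals as Python closures (a sketch).
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mult = lambda m: lambda n: lambda f: m(n(f))   # composition of the two numerals
power = lambda m: lambda n: n(m)               # m raised to the n-th power

def to_int(n):
    """Read a Church numeral back as a Python int by counting applications."""
    return n(lambda k: k + 1)(0)

one, two, three = succ(zero), succ(succ(zero)), succ(succ(succ(zero)))
assert to_int(plus(two)(three)) == 5
assert to_int(mult(two)(three)) == 6
assert to_int(power(two)(three)) == 8
assert to_int(three(succ)(two)) == 5    # "repeat succ three times", starting from two
```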
The predecessor function, specified by two equations and , is considerably more involved. The formula
can be validated by showing inductively that if T denotes , then for . Two other definitions of are given below, one using conditionals and the other using pairs. With the predecessor function, subtraction is straightforward. Defining
,
yields when and otherwise.
Logic and predicates
By convention, the following two definitions (known as Church Booleans) are used for the Boolean values and :
Then, with these two lambda terms, we can define some logic operators (these are just possible formulations; other expressions could be equally correct):
We are now able to compute some logic functions, for example:
and we see that is equivalent to .
A predicate is a function that returns a Boolean value. The most fundamental predicate is , which returns if its argument is the Church numeral , but if its argument were any other Church numeral:
The following predicate tests whether the first argument is less-than-or-equal-to the second:
,
and since , if and , it is straightforward to build a predicate for numerical equality.
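The Church Booleans and the is-zero predicate translate directly into Python closures, as in the sketch below; the helper names are our own.

```python
TRUE  = lambda a: lambda b: a        # λa.λb.a
FALSE = lambda a: lambda b: b        # λa.λb.b
AND   = lambda p: lambda q: p(q)(p)
OR    = lambda p: lambda q: p(p)(q)
NOT   = lambda p: lambda a: lambda b: p(b)(a)

# ISZERO n = n (λx.FALSE) TRUE: applying "replace by FALSE" n times to TRUE
ISZERO = lambda n: n(lambda _: FALSE)(TRUE)

zero = lambda f: lambda x: x
one  = lambda f: lambda x: f(x)

def to_bool(p):                      # read a Church Boolean back as a Python bool
    return p(True)(False)

assert to_bool(AND(TRUE)(FALSE)) is False
assert to_bool(OR(TRUE)(FALSE)) is True
assert to_bool(NOT(TRUE)) is False
assert to_bool(ISZERO(zero)) is True
assert to_bool(ISZERO(one)) is False
```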
The availability of predicates and the above definition of and make it convenient to write "if-then-else" expressions in lambda calculus. For example, the predecessor function can be defined as:
which can be verified by showing inductively that is the add − 1 function for > 0.
Pairs
A pair (2-tuple) can be defined in terms of and , by using the Church encoding for pairs. For example, encapsulates the pair (,), returns the first element of the pair, and returns the second.
A linked list can be defined as either NIL for the empty list, or the of an element and a smaller list. The predicate tests for the value . (Alternatively, with , the construct obviates the need for an explicit NULL test).
As an example of the use of pairs, the shift-and-increment function that maps to can be defined as
which allows us to give perhaps the most transparent version of the predecessor function:
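Since the formula itself is omitted in the text above, the following Python sketch reconstructs the standard pair encoding and the pair-based (shift-and-increment) predecessor; the helper names are our own.

```python
PAIR   = lambda a: lambda b: lambda f: f(a)(b)   # λa.λb.λf. f a b
FIRST  = lambda p: p(lambda a: lambda b: a)
SECOND = lambda p: p(lambda a: lambda b: b)

zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

# shift-and-increment: (m, n) -> (n, n + 1)
SHIFT = lambda p: PAIR(SECOND(p))(succ(SECOND(p)))

# pred n: apply SHIFT n times to (0, 0) and take the first component,
# which lags one step behind — so pred 0 = 0 and pred (n + 1) = n.
PRED = lambda n: FIRST(n(SHIFT)(PAIR(zero)(zero)))

to_int = lambda n: n(lambda k: k + 1)(0)
three = succ(succ(succ(zero)))
assert to_int(PRED(three)) == 2
assert to_int(PRED(zero)) == 0
```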
Additional programming techniques
There is a considerable body of programming idioms for lambda calculus. Many of these were originally developed in the context of using lambda calculus as a foundation for programming language semantics, effectively using lambda calculus as a low-level programming language. Because several programming languages include the lambda calculus (or something very similar) as a fragment, these techniques also see use in practical programming, but may then be perceived as obscure or foreign.
Named constants
In lambda calculus, a library would take the form of a collection of previously defined functions, which as lambda-terms are merely particular constants. The pure lambda calculus does not have a concept of named constants since all atomic lambda-terms are variables, but one can emulate having named constants by setting aside a variable as the name of the constant, using abstraction to bind that variable in the main body, and apply that abstraction to the intended definition. Thus to use to mean N (some explicit lambda-term) in M (another lambda-term, the "main program"), one can say
M N
Authors often introduce syntactic sugar, such as , to permit writing the above in the more intuitive order
N M
By chaining such definitions, one can write a lambda calculus "program" as zero or more function definitions, followed by one lambda-term using those functions that constitutes the main body of the program.
A notable restriction of this is that the name may not be referenced in N, for N is outside the scope of the abstraction binding , which is M; this means a recursive function definition cannot be written with . The construction would allow writing recursive function definitions, where the scope of the abstraction binding includes N as well as M. Or self-application a-la that which leads to combinator could be used.
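A one-line Python illustration of this "definition as application" idiom follows; the name double and the body are invented for the example.

```python
# "let double = (λx. x + x) in double(double(7))" written as an application:
result = (lambda double: double(double(7)))(lambda x: x + x)
assert result == 28   # note: the definition of double cannot refer to double itself
```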
Recursion and fixed points
Recursion is when a function invokes itself. What would a value be which were to represent such a function? It has to refer to itself somehow inside itself, just as the definition refers to itself inside itself. If this value were to contain itself by value, it would have to be of infinite size, which is impossible. Other notations, which support recursion natively, overcome this by referring to the function by name inside its definition. Lambda calculus cannot express this, since in it there simply are no names for terms to begin with, only arguments' names, i.e. parameters in abstractions. Thus, a lambda expression can receive itself as its argument and refer to (a copy of) itself via the corresponding parameter's name. This will work fine in case it was indeed called with itself as an argument. For example, will express recursion when E is an abstraction which is applying its parameter to itself inside its body to express a recursive call. Since this parameter receives E as its value, its self-application will be the same again.
As a concrete example, consider the factorial function , recursively defined by
.
In the lambda expression which is to represent this function, a parameter (typically the first one) will be assumed to receive the lambda expression itself as its value, so that calling it with itself as its first argument will amount to the recursive call. Thus to achieve recursion, the intended-as-self-referencing argument (called here, reminiscent of "self", or "self-applying") must always be passed to itself within the function body at a recursive call point:
with to hold, so and
and we have
Here becomes the same inside the result of the application , and using the same function for a call is the definition of what recursion is. The self-application achieves replication here, passing the function's lambda expression on to the next invocation as an argument value, making it available to be referenced there by the parameter name to be called via the self-application , again and again as needed, each time re-creating the lambda-term .
The application is an additional step just as the name lookup would be. It has the same delaying effect. Instead of having inside itself as a whole up-front, delaying its re-creation until the next call makes its existence possible by having two finite lambda-terms inside it re-create it on the fly later as needed.
This self-applicational approach solves the problem, but it requires rewriting each recursive call as a self-application. We would like to have a generic solution, without the need for any re-writes:
with to hold, so and
where
so that
Given a lambda term with first argument representing recursive call (e.g. here), the fixed-point combinator will return a self-replicating lambda expression representing the recursive function (here, ). The function does not need to be explicitly passed to itself at any point, for the self-replication is arranged in advance, when it is created, to be done each time it is called. Thus the original lambda expression is re-created inside itself, at call-point, achieving self-reference.
In fact, there are many possible definitions for this operator, the simplest of them being:
In the lambda calculus, is a fixed-point of , as it expands to:
Now, to perform the recursive call to the factorial function for an argument n, we would simply call . Given n = 4, for example, this gives:
Every recursively defined function can be seen as a fixed point of some suitably defined higher order function (also known as functional) closing over the recursive call with an extra argument. Therefore, using , every recursive function can be expressed as a lambda expression. In particular, we can now cleanly define the subtraction, multiplication, and comparison predicates of natural numbers, using recursion.
When Y combinator is coded directly in a strict programming language, the applicative order of evaluation used in such languages will cause an attempt to fully expand the internal self-application prematurely, causing stack overflow or, in case of tail call optimization, indefinite looping. A delayed variant of Y, the Z combinator, can be used in such languages. It has the internal self-application hidden behind an extra abstraction through eta-expansion, as , thus preventing its premature expansion:
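The Z combinator can be written directly in Python, which is a strict (call-by-value) language; the sketch below uses it to define factorial without named recursion. The names Z, F and factorial are our own.

```python
# Z combinator: the eta-expansion lambda v: x(x)(v) delays the self-application.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# F takes "the recursive call" as its first argument.
F = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)

factorial = Z(F)
assert factorial(4) == 24
assert factorial(6) == 720

# The plain Y combinator, lambda f: (lambda x: f(x(x)))(lambda x: f(x(x))),
# would overflow the stack here, because Python evaluates x(x) eagerly.
```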
Standard terms
Certain terms have commonly accepted names:
is the identity function. and form complete combinator calculus systems that can express any lambda term - see the next section. is , the smallest term that has no normal form. is another such term.
is standard and defined above, and can also be defined as , so that . and defined above are commonly abbreviated as and .
Abstraction elimination
If N is a lambda-term without abstraction, but possibly containing named constants (combinators), then there exists a lambda-term T(,N) which is equivalent to N but lacks abstraction (except as part of the named constants, if these are considered non-atomic). This can also be viewed as anonymising variables, as T(,N) removes all occurrences of from N, while still allowing argument values to be substituted into the positions where N contains an . The conversion function T can be defined by:
T(, ) := I
T(, N) := K N if is not free in N.
T(, M N) := S T(, M) T(, N)
In either case, a term of the form T(,N) P can reduce by having the initial combinator I, K, or S grab the argument P, just like β-reduction of N P would do. I returns that argument. K throws the argument away, just like N would do if has no free occurrence in N. S passes the argument on to both subterms of the application, and then applies the result of the first to the result of the second.
The combinators B and C are similar to S, but pass the argument on to only one subterm of an application (B to the "argument" subterm and C to the "function" subterm), thus saving a subsequent K if there is no occurrence of x in one subterm. In comparison to B and C, the S combinator actually conflates two functionalities: rearranging arguments, and duplicating an argument so that it may be used in two places. The W combinator does only the latter, yielding the B, C, K, W system as an alternative to SKI combinator calculus.
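The conversion rules above can be prototyped directly. The following Python sketch is only illustrative (the term encoding chosen here is an assumption, not part of the standard presentation): variables and combinators are strings, and applications are nested 2-tuples.

# T(x, N) for abstraction-free terms: variables/combinators are strings,
# applications are 2-tuples (function, argument).
def free_in(x, term):
    # With no abstractions present, "free in" simply means "occurs in".
    if isinstance(term, tuple):
        return free_in(x, term[0]) or free_in(x, term[1])
    return term == x

def T(x, term):
    if term == x:
        return 'I'                          # T(x, x) := I
    if not free_in(x, term):
        return ('K', term)                  # T(x, N) := K N if x is not free in N
    f, a = term
    return (('S', T(x, f)), T(x, a))        # T(x, M N) := S T(x, M) T(x, N)

print(T('x', ('y', 'x')))   # (('S', ('K', 'y')), 'I'), i.e. S (K y) I

Applied to an argument P, the result S (K y) I P reduces to (K y P) (I P) = y P, which is exactly what (λx.y x) P would give.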
Typed lambda calculus
A typed lambda calculus is a typed formalism that uses the lambda-symbol (λ) to denote anonymous function abstraction. In this context, types are usually objects of a syntactic nature that are assigned to lambda terms; the exact nature of a type depends on the calculus considered (see Kinds of typed lambda calculi). From a certain point of view, typed lambda calculi can be seen as refinements of the untyped lambda calculus but from another point of view, they can also be considered the more fundamental theory and untyped lambda calculus a special case with only one type (Types and Programming Languages, p. 273, Benjamin C. Pierce).
Typed lambda calculi are foundational programming languages and are the base of typed functional programming languages such as ML and Haskell and, more indirectly, typed imperative programming languages. Typed lambda calculi play an important role in the design of type systems for programming languages; here typability usually captures desirable properties of the program, e.g., the program will not cause a memory access violation.
Typed lambda calculi are closely related to mathematical logic and proof theory via the Curry–Howard isomorphism and they can be considered as the internal language of classes of categories, e.g., the simply typed lambda calculus is the language of a Cartesian closed category (CCC).
Reduction strategies
Whether a term is normalising or not, and how much work needs to be done in normalising it if it is, depends to a large extent on the reduction strategy used. Common lambda calculus reduction strategies include:
Normal order The leftmost outermost redex is reduced first. That is, whenever possible, arguments are substituted into the body of an abstraction before the arguments are reduced. If a term has a beta-normal form, normal order reduction will always reach that normal form.
Applicative order The leftmost innermost redex is reduced first. As a consequence, a function's arguments are always reduced before they are substituted into the function. Unlike normal order reduction, applicative order reduction may fail to find the beta-normal form of an expression, even if such a normal form exists. For example, the term (λx.y) ((λz.z z) (λz.z z)) is reduced to itself by applicative order, while normal order reduces it to its beta-normal form y.
Full β-reductions Any redex can be reduced at any time. This means essentially the lack of any particular reduction strategy—with regard to reducibility, "all bets are off".
Weak reduction strategies do not reduce under lambda abstractions:
Call by value Like applicative order, but no reductions are performed inside abstractions. This is similar to the evaluation order of strict languages like C: the arguments to a function are evaluated before calling the function, and function bodies are not even partially evaluated until the arguments are substituted in.
Call by name Like normal order, but no reductions are performed inside abstractions. For example, λx.(λy.y) x is in normal form according to this strategy, although it contains the redex (λy.y) x. (A Python sketch contrasting call by value and call by name appears after the list of strategies below.)
Strategies with sharing reduce computations that are "the same" in parallel:
Optimal reduction As normal order, but computations that have the same label are reduced simultaneously.
Call by need As call by name (hence weak), but function applications that would duplicate terms instead name the argument. The argument may be evaluated "when needed", at which point the name binding is updated with the reduced value. This can save time compared to normal order evaluation.
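The difference between the two weak strategies can be imitated in Python, itself a call-by-value language, by wrapping arguments in thunks. This is only an illustrative sketch, and the helper names are invented here:

def loop():                     # a diverging computation
    while True:
        pass

def const_k(x):                 # roughly (lambda x. lambda y. x), curried
    return lambda y: x

# const_k(42)(loop())           # call by value: hangs, because loop() is evaluated first

def const_k_by_name(x_thunk):   # arguments passed as zero-argument thunks
    return lambda y_thunk: x_thunk()

print(const_k_by_name(lambda: 42)(lambda: loop()))   # 42; loop() is never forced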
Computability
There is no algorithm that takes as input any two lambda expressions and outputs TRUE or FALSE depending on whether one expression reduces to the other. More precisely, no computable function can decide the question. This was historically the first problem for which undecidability could be proven. As usual for such a proof, computable means computable by any model of computation that is Turing complete. In fact computability can itself be defined via the lambda calculus: a function F: N → N of natural numbers is a computable function if and only if there exists a lambda expression f such that for every pair of x, y in N, F(x) = y if and only if f ⌜x⌝ =β ⌜y⌝, where ⌜x⌝ and ⌜y⌝ are the Church numerals corresponding to x and y, respectively, and =β means equivalence with β-reduction. See the Church–Turing thesis for other approaches to defining computability and their equivalence.
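For concreteness, Church numerals (referred to above) can be written directly as Python functions; this is a routine illustrative sketch, not part of the proof being described:

# A Church numeral n applies its first argument n times to its second.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Decode by applying "add one" to 0.
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(plus(two)(three)))   # 5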
Church's proof of uncomputability first reduces the problem to determining whether a given lambda expression has a normal form. Then he assumes that this predicate is computable, and can hence be expressed in lambda calculus. Building on earlier work by Kleene and constructing a Gödel numbering for lambda expressions, he constructs a lambda expression e that closely follows the proof of Gödel's first incompleteness theorem. If e is applied to its own Gödel number, a contradiction results.
Complexity
The notion of computational complexity for the lambda calculus is a bit tricky, because the cost of a β-reduction may vary depending on how it is implemented.
To be precise, one must somehow find the location of all of the occurrences of the bound variable V in the expression E, implying a time cost, or one must keep track of the locations of free variables in some way, implying a space cost. A naïve search for the locations of V in E is O(n) in the length n of E. Director strings were an early approach that traded this time cost for a quadratic space usage. More generally this has led to the study of systems that use explicit substitution.
In 2014, it was shown that the number of β-reduction steps taken by normal order reduction to reduce a term is a reasonable time cost model, that is, the reduction can be simulated on a Turing machine in time polynomially proportional to the number of steps. This was a long-standing open problem, due to size explosion, the existence of lambda terms which grow exponentially in size for each β-reduction. The result gets around this by working with a compact shared representation. The result makes clear that the amount of space needed to evaluate a lambda term is not proportional to the size of the term during reduction. It is not currently known what a good measure of space complexity would be.
An unreasonable model does not necessarily mean inefficient. Optimal reduction reduces all computations with the same label in one step, avoiding duplicated work, but the number of parallel β-reduction steps to reduce a given term to normal form is approximately linear in the size of the term. This is far too small to be a reasonable cost measure, as any Turing machine may be encoded in the lambda calculus in size linearly proportional to the size of the Turing machine. The true cost of reducing lambda terms is not due to β-reduction per se but rather the handling of the duplication of redexes during β-reduction. It is not known if optimal reduction implementations are reasonable when measured with respect to a reasonable cost model such as the number of leftmost-outermost steps to normal form, but it has been shown for fragments of the lambda calculus that the optimal reduction algorithm is efficient and has at most a quadratic overhead compared to leftmost-outermost. In addition the BOHM prototype implementation of optimal reduction outperformed both Caml Light and Haskell on pure lambda terms.
Lambda calculus and programming languages
As pointed out by Peter Landin's 1965 paper "A Correspondence between ALGOL 60 and Church's Lambda-notation", sequential procedural programming languages can be understood in terms of the lambda calculus, which provides the basic mechanisms for procedural abstraction and procedure (subprogram) application.
Anonymous functions
For example, in Python the "square" function can be expressed as a lambda expression as follows:
(lambda x: x**2)
The above example is an expression that evaluates to a first-class function. The symbol lambda creates an anonymous function, given a list of parameter names—just the single argument x, in this case—and an expression that is evaluated as the body of the function, x**2. Anonymous functions are sometimes called lambda expressions.
Pascal and many other imperative languages have long supported passing subprograms as arguments to other subprograms through the mechanism of function pointers. However, function pointers are an insufficient condition for functions to be first class datatypes, because a function is a first class datatype if and only if new instances of the function can be created at runtime. Such runtime creation of functions is supported in Smalltalk, JavaScript, Wolfram Language, and more recently in Scala, Eiffel (as agents), C# (as delegates) and C++11, among others.
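The distinction can be seen in a few lines of Python (an illustrative sketch; the helper name make_adder is invented here): each call builds a fresh function object at runtime, closing over its own environment, which is precisely what a bare C-style function pointer cannot do.

def make_adder(n):
    # Returns a new function instance, created at runtime, that captures n.
    return lambda x: x + n

add2, add10 = make_adder(2), make_adder(10)
print(add2(5), add10(5))   # 7 15
print(add2 is add10)       # False: two distinct function objects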
Parallelism and concurrency
The Church–Rosser property of the lambda calculus means that evaluation (β-reduction) can be carried out in any order, even in parallel. This means that various nondeterministic evaluation strategies are relevant. However, the lambda calculus does not offer any explicit constructs for parallelism. One can add constructs such as futures to the lambda calculus. Other process calculi have been developed for describing communication and concurrency.
Semantics
The fact that lambda calculus terms act as functions on other lambda calculus terms, and even on themselves, led to questions about the semantics of the lambda calculus. Could a sensible meaning be assigned to lambda calculus terms? The natural semantics was to find a set D isomorphic to the function space D → D, of functions on itself. However, no nontrivial such D can exist, by cardinality constraints because the set of all functions from D to D has greater cardinality than D, unless D is a singleton set.
In the 1970s, Dana Scott showed that if only continuous functions were considered, a set or domain D with the required property could be found, thus providing a model for the lambda calculus (Scott's work was written in 1969 and widely circulated as an unpublished manuscript).
This work also formed the basis for the denotational semantics of programming languages.
Variations and extensions
These extensions are in the lambda cube:
Typed lambda calculus – Lambda calculus with typed variables (and functions)
System F – A typed lambda calculus with type-variables
Calculus of constructions – A typed lambda calculus with types as first-class values
These formal systems are extensions of lambda calculus that are not in the lambda cube:
Binary lambda calculus – A version of lambda calculus with binary input/output (I/O), a binary encoding of terms, and a designated universal machine.
Lambda-mu calculus – An extension of the lambda calculus for treating classical logic
These formal systems are variations of lambda calculus:
Kappa calculus – A first-order analogue of lambda calculus
These formal systems are related to lambda calculus:
Combinatory logic – A notation for mathematical logic without variables
SKI combinator calculus – A computational system based on the S, K and I combinators, equivalent to lambda calculus, but reducible without variable substitutions
See also
Applicative computing systems – Treatment of objects in the style of the lambda calculus
Cartesian closed category – A setting for lambda calculus in category theory
Categorical abstract machine – A model of computation applicable to lambda calculus
Clojure, programming language
Curry–Howard isomorphism – The formal correspondence between programs and proofs
De Bruijn index – notation disambiguating alpha conversions
De Bruijn notation – notation using postfix modification functions
Domain theory – Study of certain posets giving denotational semantics for lambda calculus
Evaluation strategy – Rules for the evaluation of expressions in programming languages
Explicit substitution – The theory of substitution, as used in β-reduction
Harrop formula – A kind of constructive logical formula such that proofs are lambda terms
Interaction nets
Kleene–Rosser paradox – A demonstration that some form of lambda calculus is inconsistent
Knights of the Lambda Calculus – A semi-fictional organization of LISP and Scheme hackers
Krivine machine – An abstract machine to interpret call-by-name in lambda calculus
Lambda calculus definition – Formal definition of the lambda calculus.
Let expression – An expression closely related to an abstraction.
Minimalism (computing)
Rewriting – Transformation of formulæ in formal systems
SECD machine – A virtual machine designed for the lambda calculus
Scott–Curry theorem – A theorem about sets of lambda terms
To Mock a Mockingbird – An introduction to combinatory logic
Universal Turing machine – A formal computing machine equivalent to lambda calculus
Unlambda – A functional esoteric programming language based on combinatory logic
Further reading
Abelson, Harold & Gerald Jay Sussman. Structure and Interpretation of Computer Programs. The MIT Press.
Barendregt, Hendrik Pieter, Introduction to Lambda Calculus.
Barendregt, Hendrik Pieter, The Impact of the Lambda Calculus in Logic and Computer Science. The Bulletin of Symbolic Logic, Volume 3, Number 2, June 1997.
Barendregt, Hendrik Pieter, The Type Free Lambda Calculus, pp. 1091–1132 of Handbook of Mathematical Logic, North-Holland (1977)
Cardone, Felice and Hindley, J. Roger, 2006. History of Lambda-calculus and Combinatory Logic . In Gabbay and Woods (eds.), Handbook of the History of Logic, vol. 5. Elsevier.
Church, Alonzo, An unsolvable problem of elementary number theory, American Journal of Mathematics, 58 (1936), pp. 345–363. This paper contains the proof that the equivalence of lambda expressions is in general not decidable.
Kleene, Stephen, A theory of positive integers in formal logic, American Journal of Mathematics, 57 (1935), pp. 153–173 and 219–244. Contains the lambda calculus definitions of several familiar functions.
Landin, Peter, A Correspondence Between ALGOL 60 and Church's Lambda-Notation, Communications of the ACM, vol. 8, no. 2 (1965), pages 89–101. Available from the ACM site. A classic paper highlighting the importance of lambda calculus as a basis for programming languages.
Larson, Jim, An Introduction to Lambda Calculus and Scheme. A gentle introduction for programmers.
Schalk, A. and Simmons, H. (2005) An introduction to λ-calculi and arithmetic with a decent selection of exercises. Notes for a course in the Mathematical Logic MSc at Manchester University.
A paper giving a formal underpinning to the idea of 'meaning-is-use' which, even if based on proofs, is different from proof-theoretic semantics as in the Dummett–Prawitz tradition, since it takes reduction as the rules giving meaning.
Hankin, Chris, An Introduction to Lambda Calculi for Computer Scientists.
Monographs/textbooks for graduate students
Sørensen, Morten Heine and Urzyczyn, Paweł (2006), Lectures on the Curry–Howard isomorphism, Elsevier, is a recent monograph that covers the main topics of lambda calculus from the type-free variety, to most typed lambda calculi, including more recent developments like pure type systems and the lambda cube. It does not cover subtyping extensions.
Pierce, Benjamin C. (2002), Types and Programming Languages, MIT Press, covers lambda calculi from a practical type system perspective; some topics like dependent types are only mentioned, but subtyping is an important topic.
Documents
A Short Introduction to the Lambda Calculus (PDF) by Achim Jung
A timeline of lambda calculus (PDF) by Dana Scott
A Tutorial Introduction to the Lambda Calculus (PDF) by Raúl Rojas
Lecture Notes on the Lambda Calculus (PDF) by Peter Selinger
Graphic lambda calculus by Marius Buliga
Lambda Calculus as a Workflow Model by Peter Kelly, Paul Coddington, and Andrew Wendelborn; mentions graph reduction as a common means of evaluating lambda expressions and discusses the applicability of lambda calculus for distributed computing (due to the Church–Rosser property, which enables parallel graph reduction for lambda expressions).
Notes
References
Some parts of this article are based on material from FOLDOC, used with permission.
External links
Graham Hutton, Lambda Calculus, a short (12 minutes) Computerphile video on the Lambda Calculus
Helmut Brandl, Step by Step Introduction to Lambda Calculus
David C. Keenan, To Dissect a Mockingbird: A Graphical Notation for the Lambda Calculus with Animated Reduction
L. Allison, Some executable λ-calculus examples
Georg P. Loczewski, The Lambda Calculus and A++
Bret Victor, Alligator Eggs: A Puzzle Game Based on Lambda Calculus
Lambda Calculus on Safalra's Website
LCI Lambda Interpreter a simple yet powerful pure calculus interpreter
Lambda Calculus links on Lambda-the-Ultimate
Mike Thyer, Lambda Animator, a graphical Java applet demonstrating alternative reduction strategies.
Implementing the Lambda calculus using C++ Templates
Shane Steinert-Threlkeld, "Lambda Calculi", Internet Encyclopedia of Philosophy
Anton Salikhmetov, Macro Lambda Calculus
Mitochondrion
A mitochondrion () is an organelle found in the cells of most eukaryotes, such as animals, plants and fungi. Mitochondria have a double membrane structure and use aerobic respiration to generate adenosine triphosphate (ATP), which is used throughout the cell as a source of chemical energy. They were discovered by Albert von Kölliker in 1857 in the voluntary muscles of insects. The term mitochondrion, meaning a thread-like granule, was coined by Carl Benda in 1898. The mitochondrion is popularly nicknamed the "powerhouse of the cell", a phrase popularized by Philip Siekevitz in a 1957 Scientific American article of the same name.
Some cells in some multicellular organisms lack mitochondria (for example, mature mammalian red blood cells). The multicellular animal Henneguya salminicola is known to have retained mitochondrion-related organelles despite a complete loss of their mitochondrial genome. A large number of unicellular organisms, such as microsporidia, parabasalids and diplomonads, have reduced or transformed their mitochondria into other structures, e.g. hydrogenosomes and mitosomes. The oxymonads Monocercomonoides, Streblomastix, and Blattamonas completely lost their mitochondria.
Mitochondria are commonly between 0.75 and 3 μm in cross section, but vary considerably in size and structure. Unless specifically stained, they are not visible. The mitochondrion is composed of compartments that carry out specialized functions. These compartments or regions include the outer membrane, intermembrane space, inner membrane, cristae, and matrix.
In addition to supplying cellular energy, mitochondria are involved in other tasks, such as signaling, cellular differentiation, and cell death, as well as maintaining control of the cell cycle and cell growth. Mitochondrial biogenesis is in turn temporally coordinated with these cellular processes.
Mitochondria are implicated in human disorders and conditions such as mitochondrial diseases, cardiac dysfunction, heart failure, and autism.
The number of mitochondria in a cell varies widely by organism, tissue, and cell type. A mature red blood cell has no mitochondria, whereas a liver cell can have more than 2000.
Although most of a eukaryotic cell's DNA is contained in the cell nucleus, the mitochondrion has its own genome ("mitogenome") that is similar to bacterial genomes. This finding has led to general acceptance of symbiogenesis (endosymbiotic theory) – that free-living prokaryotic ancestors of modern mitochondria permanently fused with eukaryotic cells in the distant past, evolving such that modern animals, plants, fungi, and other eukaryotes respire to generate cellular energy.
Structure
Mitochondria may have a number of different shapes. A mitochondrion contains outer and inner membranes composed of phospholipid bilayers and proteins. The two membranes have different properties. Because of this double-membraned organization, there are five distinct parts to a mitochondrion:
The outer mitochondrial membrane,
The intermembrane space (the space between the outer and inner membranes),
The inner mitochondrial membrane,
The cristae space (formed by infoldings of the inner membrane), and
The matrix (space within the inner membrane), which is a fluid.
Mitochondria have folding to increase surface area, which in turn increases ATP (adenosine triphosphate) production.
Mitochondria stripped of their outer membrane are called mitoplasts.
Outer membrane
The outer mitochondrial membrane, which encloses the entire organelle, is 60 to 75 angstroms (Å) thick. It has a protein-to-phospholipid ratio similar to that of the cell membrane (about 1:1 by weight). It contains large numbers of integral membrane proteins called porins. A major trafficking protein is the pore-forming voltage-dependent anion channel (VDAC). The VDAC is the primary transporter of nucleotides, ions and metabolites between the cytosol and the intermembrane space. It is formed as a beta barrel that spans the outer membrane, similar to that in the gram-negative bacterial outer membrane. Larger proteins can enter the mitochondrion if a signaling sequence at their N-terminus binds to a large multisubunit protein called translocase in the outer membrane, which then actively moves them across the membrane. Mitochondrial pro-proteins are imported through specialised translocation complexes.
The outer membrane also contains enzymes involved in such diverse activities as the elongation of fatty acids, oxidation of epinephrine, and the degradation of tryptophan. These enzymes include monoamine oxidase, rotenone-insensitive NADH-cytochrome c-reductase, kynurenine hydroxylase and fatty acid Co-A ligase. Disruption of the outer membrane permits proteins in the intermembrane space to leak into the cytosol, leading to cell death. The outer mitochondrial membrane can associate with the endoplasmic reticulum (ER) membrane, in a structure called MAM (mitochondria-associated ER-membrane). This is important in the ER-mitochondria calcium signaling and is involved in the transfer of lipids between the ER and mitochondria. Outside the outer membrane are small (diameter: 60 Å) particles named sub-units of Parson.
Intermembrane space
The mitochondrial intermembrane space is the space between the outer membrane and the inner membrane. It is also known as perimitochondrial space. Because the outer membrane is freely permeable to small molecules, the concentrations of small molecules, such as ions and sugars, in the intermembrane space is the same as in the cytosol. However, large proteins must have a specific signaling sequence to be transported across the outer membrane, so the protein composition of this space is different from the protein composition of the cytosol. One protein that is localized to the intermembrane space in this way is cytochrome c.
Inner membrane
The inner mitochondrial membrane contains proteins with three types of functions:
Those that perform the electron transport chain redox reactions
ATP synthase, which generates ATP in the matrix
Specific transport proteins that regulate metabolite passage into and out of the mitochondrial matrix
It contains more than 151 different polypeptides, and has a very high protein-to-phospholipid ratio (more than 3:1 by weight, which is about 1 protein for 15 phospholipids). The inner membrane is home to around 1/5 of the total protein in a mitochondrion. Additionally, the inner membrane is rich in an unusual phospholipid, cardiolipin. This phospholipid was originally discovered in cow hearts in 1942, and is usually characteristic of mitochondrial and bacterial plasma membranes. Cardiolipin contains four fatty acids rather than two, and may help to make the inner membrane impermeable, and its disruption can lead to multiple clinical disorders including neurological disorders and cancer. Unlike the outer membrane, the inner membrane does not contain porins, and is highly impermeable to all molecules. Almost all ions and molecules require special membrane transporters to enter or exit the matrix. Proteins are ferried into the matrix via the translocase of the inner membrane (TIM) complex or via OXA1L. In addition, there is a membrane potential across the inner membrane, formed by the action of the enzymes of the electron transport chain. Inner membrane fusion is mediated by the inner membrane protein OPA1.
Cristae
The inner mitochondrial membrane is compartmentalized into numerous folds called cristae, which expand the surface area of the inner mitochondrial membrane, enhancing its ability to produce ATP. For typical liver mitochondria, the area of the inner membrane is about five times as large as that of the outer membrane. This ratio is variable and mitochondria from cells that have a greater demand for ATP, such as muscle cells, contain even more cristae. Mitochondria within the same cell can have substantially different crista-density, with the ones that are required to produce more energy having much more crista-membrane surface. These folds are studded with small round bodies known as F particles or oxysomes.
Matrix
The matrix is the space enclosed by the inner membrane. It contains about 2/3 of the total proteins in a mitochondrion. The matrix is important in the production of ATP with the aid of the ATP synthase contained in the inner membrane. The matrix contains a highly concentrated mixture of hundreds of enzymes, special mitochondrial ribosomes, tRNA, and several copies of the mitochondrial DNA genome. Of the enzymes, the major functions include oxidation of pyruvate and fatty acids, and the citric acid cycle. The DNA molecules are packaged into nucleoids by proteins, one of which is TFAM.
Function
The most prominent roles of mitochondria are to produce the energy currency of the cell, ATP (i.e., phosphorylation of ADP), through respiration and to regulate cellular metabolism. The central set of reactions involved in ATP production are collectively known as the citric acid cycle, or the Krebs cycle, and oxidative phosphorylation. However, the mitochondrion has many other functions in addition to the production of ATP.
Energy conversion
A dominant role for the mitochondria is the production of ATP, as reflected by the large number of proteins in the inner membrane for this task. This is done by oxidizing the major products of glucose: pyruvate, and NADH, which are produced in the cytosol. This type of cellular respiration, known as aerobic respiration, is dependent on the presence of oxygen. When oxygen is limited, the glycolytic products will be metabolized by anaerobic fermentation, a process that is independent of the mitochondria. The production of ATP from glucose and oxygen has an approximately 13-times higher yield during aerobic respiration compared to fermentation. Plant mitochondria can also produce a limited amount of ATP either by breaking the sugar produced during photosynthesis or without oxygen by using the alternate substrate nitrite. ATP crosses out through the inner membrane with the help of a specific protein, and across the outer membrane via porins. After conversion of ATP to ADP by dephosphorylation that releases energy, ADP returns via the same route.
Pyruvate and the citric acid cycle
Pyruvate molecules produced by glycolysis are actively transported across the inner mitochondrial membrane, and into the matrix where they can either be oxidized and combined with coenzyme A to form CO2, acetyl-CoA, and NADH, or they can be carboxylated (by pyruvate carboxylase) to form oxaloacetate. This latter reaction "fills up" the amount of oxaloacetate in the citric acid cycle and is therefore an anaplerotic reaction, increasing the cycle's capacity to metabolize acetyl-CoA when the tissue's energy needs (e.g., in muscle) are suddenly increased by activity.
In the citric acid cycle, all the intermediates (e.g. citrate, iso-citrate, alpha-ketoglutarate, succinate, fumarate, malate and oxaloacetate) are regenerated during each turn of the cycle. Adding more of any of these intermediates to the mitochondrion therefore means that the additional amount is retained within the cycle, increasing all the other intermediates as one is converted into the other. Hence, the addition of any one of them to the cycle has an anaplerotic effect, and its removal has a cataplerotic effect. These anaplerotic and cataplerotic reactions will, during the course of the cycle, increase or decrease the amount of oxaloacetate available to combine with acetyl-CoA to form citric acid. This in turn increases or decreases the rate of ATP production by the mitochondrion, and thus the availability of ATP to the cell.
Acetyl-CoA, on the other hand, derived from pyruvate oxidation, or from the beta-oxidation of fatty acids, is the only fuel to enter the citric acid cycle. With each turn of the cycle one molecule of acetyl-CoA is consumed for every molecule of oxaloacetate present in the mitochondrial matrix, and is never regenerated. It is the oxidation of the acetate portion of acetyl-CoA that produces CO2 and water, with the energy thus released captured in the form of ATP.
In the liver, the carboxylation of cytosolic pyruvate into intra-mitochondrial oxaloacetate is an early step in the gluconeogenic pathway, which converts lactate and de-aminated alanine into glucose, under the influence of high levels of glucagon and/or epinephrine in the blood. Here, the addition of oxaloacetate to the mitochondrion does not have a net anaplerotic effect, as another citric acid cycle intermediate (malate) is immediately removed from the mitochondrion to be converted to cytosolic oxaloacetate, and ultimately to glucose, in a process that is almost the reverse of glycolysis.
The enzymes of the citric acid cycle are located in the mitochondrial matrix, with the exception of succinate dehydrogenase, which is bound to the inner mitochondrial membrane as part of Complex II. The citric acid cycle oxidizes the acetyl-CoA to carbon dioxide, and, in the process, produces reduced cofactors (three molecules of NADH and one molecule of FADH2) that are a source of electrons for the electron transport chain, and a molecule of GTP (which is readily converted to an ATP).
O2 and NADH: energy-releasing reactions
The electrons from NADH and FADH2 are transferred to oxygen (O2) and hydrogen (protons) in several steps via an electron transport chain. NADH and FADH2 molecules are produced within the matrix via the citric acid cycle and in the cytoplasm by glycolysis. Reducing equivalents from the cytoplasm can be imported via the malate-aspartate shuttle system of antiporter proteins or fed into the electron transport chain using a glycerol phosphate shuttle.
The major energy-releasing reactions that make the mitochondrion the "powerhouse of the cell" occur at protein complexes I, III and IV in the inner mitochondrial membrane (NADH dehydrogenase (ubiquinone), cytochrome c reductase, and cytochrome c oxidase). At complex IV, O2 reacts with the reduced form of iron in cytochrome c:
O2 + 4 H+ + 4 Fe2+-cyt c → 2 H2O + 4 Fe3+-cyt c    ΔrG°' = −218 kJ/mol
releasing a lot of free energy from the reactants without breaking bonds of an organic fuel. The free energy put in to remove an electron from Fe2+ is released at complex III when Fe3+ of cytochrome c reacts to oxidize ubiquinol (QH2):
QH2 + 2 Fe3+-cyt c → Q + 2 Fe2+-cyt c + 2 H+    ΔrG°' = −30 kJ/mol
The ubiquinone (Q) generated reacts, in complex I, with NADH:
Q + NADH + H+ → QH2 + NAD+    ΔrG°' = −81 kJ/mol
While the reactions are controlled by an electron transport chain, free electrons are not amongst the reactants or products in the three reactions shown and therefore do not affect the free energy released, which is used to pump protons (H+) into the intermembrane space. This process is efficient, but a small percentage of electrons may prematurely reduce oxygen, forming reactive oxygen species such as superoxide. This can cause oxidative stress in the mitochondria and may contribute to the decline in mitochondrial function associated with aging.
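As a rough cross-check (not taken from this article), the quoted free energies are consistent with ΔrG°' = −nFΔE°', assuming commonly cited standard redox potentials of about +0.82 V for O2/H2O, +0.25 V for cytochrome c, +0.10 V for ubiquinone and −0.32 V for NAD+/NADH:

# Cross-check: DeltaG = -n * F * DeltaE, with approximate textbook potentials
# (these potentials are assumed here, not quoted in the article).
F = 96.485  # Faraday constant, kJ per (V * mol of electrons)
E = {"O2/H2O": 0.82, "cyt c": 0.25, "Q/QH2": 0.10, "NAD+/NADH": -0.32}

complex_IV  = -4 * F * (E["O2/H2O"] - E["cyt c"])     # 4 electrons per O2
complex_III = -2 * F * (E["cyt c"] - E["Q/QH2"])      # 2 electrons per QH2
complex_I   = -2 * F * (E["Q/QH2"] - E["NAD+/NADH"])  # 2 electrons per NADH

print(round(complex_IV), round(complex_III), round(complex_I))
# roughly -220, -29 and -81 kJ/mol, close to the -218, -30 and -81 kJ/mol quoted above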
As the proton concentration increases in the intermembrane space, a strong electrochemical gradient is established across the inner membrane. The protons can return to the matrix through the ATP synthase complex, and their potential energy is used to synthesize ATP from ADP and inorganic phosphate (Pi). This process is called chemiosmosis, and was first described by Peter Mitchell, who was awarded the 1978 Nobel Prize in Chemistry for his work. Later, part of the 1997 Nobel Prize in Chemistry was awarded to Paul D. Boyer and John E. Walker for their clarification of the working mechanism of ATP synthase.
Heat production
Under certain conditions, protons can re-enter the mitochondrial matrix without contributing to ATP synthesis. This process is known as proton leak or mitochondrial uncoupling and is due to the facilitated diffusion of protons into the matrix. The process results in the unharnessed potential energy of the proton electrochemical gradient being released as heat. The process is mediated by a proton channel called thermogenin, or UCP1. Thermogenin is primarily found in brown adipose tissue, or brown fat, and is responsible for non-shivering thermogenesis. Brown adipose tissue is found in mammals, and is at its highest levels in early life and in hibernating animals. In humans, brown adipose tissue is present at birth and decreases with age.
Mitochondrial fatty acid synthesis
Mitochondrial fatty acid synthesis (mtFAS) is essential for cellular respiration and mitochondrial biogenesis. In response to mitochondrial acetyl-CoA availability, mtFAS builds acyl chains on the 4'-phosphopantetheine group of the matrix-soluble scaffold protein ACP (holo-ACP), producing acyl-ACP species with varying chain lengths of at least eight carbons.
Among these, octanoyl-ACP (C8) serves as the precursor for lipoic acid biosynthesis. Since lipoic acid is a cofactor for key mitochondrial enzyme complexes, including the pyruvate dehydrogenase complex (PDC), α-ketoglutarate dehydrogenase complex (OGDC), 2-oxoadipate dehydrogenase complex (OADHC), branched-chain α-ketoacid dehydrogenase complex (BCKDC), and the glycine cleavage system (GCS), mtFAS significantly influences energy metabolism.
In contrast, longer-chain acyl-ACPs (C12–C18) allosterically activate the network of LYRM proteins, which comprises at least 12 members in humans and regulates mitochondrial translation, iron-sulfur cluster biogenesis, and the assembly of electron transport chain complexes. MtFAS and ACP thus coordinate the activation of mitochondrial respiration in response to substrate availability. This enables cells to increase their oxidative capacity when substrates are abundant and prevents the electron transport chain from running empty and inducing the formation of reactive oxygen species (ROS) under substrate-limited conditions.
MtFAS is also thought to play a role as a mediator in intracellular signaling due to its influence on the levels of bioactive lipids, such as lysophospholipids and sphingolipids.
Uptake, storage and release of calcium ions
The concentrations of free calcium in the cell can regulate an array of reactions and is important for signal transduction in the cell. Mitochondria can transiently store calcium, a contributing process for the cell's homeostasis of calcium.
Their ability to rapidly take in calcium for later release makes them good "cytosolic buffers" for calcium. The endoplasmic reticulum (ER) is the most significant storage site of calcium, and there is a significant interplay between the mitochondrion and ER with regard to calcium. The calcium is taken up into the matrix by the mitochondrial calcium uniporter on the inner mitochondrial membrane. It is primarily driven by the mitochondrial membrane potential. Release of this calcium back into the cell's interior can occur via a sodium-calcium exchange protein or via "calcium-induced-calcium-release" pathways. This can initiate calcium spikes or calcium waves with large changes in the membrane potential. These can activate a series of second messenger system proteins that can coordinate processes such as neurotransmitter release in nerve cells and release of hormones in endocrine cells.
Ca influx to the mitochondrial matrix has recently been implicated as a mechanism to regulate respiratory bioenergetics by allowing the electrochemical potential across the membrane to transiently "pulse" from ΔΨ-dominated to pH-dominated, facilitating a reduction of oxidative stress. In neurons, concomitant increases in cytosolic and mitochondrial calcium act to synchronize neuronal activity with mitochondrial energy metabolism. Mitochondrial matrix calcium levels can reach the tens of micromolar levels, which is necessary for the activation of isocitrate dehydrogenase, one of the key regulatory enzymes of the Krebs cycle.
Cellular proliferation regulation
The relationship between cellular proliferation and mitochondria has been investigated. Tumor cells require ample ATP to synthesize bioactive compounds such as lipids, proteins, and nucleotides for rapid proliferation. The majority of ATP in tumor cells is generated via the oxidative phosphorylation pathway (OxPhos). Interference with OxPhos cause cell cycle arrest suggesting that mitochondria play a role in cell proliferation. Mitochondrial ATP production is also vital for cell division and differentiation in infection in addition to basic functions in the cell including the regulation of cell volume, solute concentration, and cellular architecture. ATP levels differ at various stages of the cell cycle suggesting that there is a relationship between the abundance of ATP and the cell's ability to enter a new cell cycle. ATP's role in the basic functions of the cell make the cell cycle sensitive to changes in the availability of mitochondrial derived ATP. The variation in ATP levels at different stages of the cell cycle support the hypothesis that mitochondria play an important role in cell cycle regulation. Although the specific mechanisms between mitochondria and the cell cycle regulation is not well understood, studies have shown that low energy cell cycle checkpoints monitor the energy capability before committing to another round of cell division.
Programmed cell death and innate immunity
Programmed cell death (PCD) is crucial for various physiological functions, including organ development and cellular homeostasis. It serves as an intrinsic mechanism to prevent malignant transformation and plays a fundamental role in immunity by aiding in antiviral defense, pathogen elimination, inflammation, and immune cell recruitment.
Mitochondria have long been recognized for their central role in the intrinsic pathway of apoptosis, a form of PCD. In recent decades, they have also been identified as a signalling hub for much of the innate immune system. The endosymbiotic origin of mitochondria distinguishes them from other cellular components, and the exposure of mitochondrial elements to the cytosol can trigger the same pathways as infection markers. These pathways lead to apoptosis, autophagy, or the induction of proinflammatory genes.
Mitochondria contribute to apoptosis by releasing cytochrome c, which directly induces the formation of apoptosomes. Additionally, they are a source of various damage-associated molecular patterns (DAMPs). These DAMPs are often recognised by the same pattern-recognition receptors (PRRs) that respond to pathogen-associated molecular patterns (PAMPs) during infections. For example, mitochondrial mtDNA resembles bacterial DNA due to its lack of CpG methylation and can be detected by Toll-like receptor 9 and cGAS. Double-stranded RNA (dsRNA), produced due to bidirectional mitochondrial transcription, can activate viral sensing pathways through RIG-I-like receptors. Additionally, the N-formylation of mitochondrial proteins, similar to that of bacterial proteins, can be recognized by formyl peptide receptors.
Normally, these mitochondrial components are sequestered from the rest of the cell but are released following mitochondrial membrane permeabilization during apoptosis or passively after mitochondrial damage. However, mitochondria also play an active role in innate immunity, releasing mtDNA in response to metabolic cues. Mitochondria are also the localization site for immune and apoptosis regulatory proteins, such as BAX, MAVS (located on the outer membrane), and NLRX1 (found in the matrix). These proteins are modulated by the mitochondrial metabolic status and mitochondrial dynamics.
Donation
Some cells donate mitochondria to other cells. Such donations occur in multiple cell types, in organisms such as yeast, molluscs, and rodents. Mitochondrial donation was first observed in 2006; it has not yet been observed in humans in vivo. Donations may occur to help damaged cells, trigger tissue repair or the immune system, or to power distressed cells.
Researchers cultured human mitochondria-free lung cancer cells with stem cells. The stem cells ejected mitochondria, which were absorbed by the lung cells. The lung cells then recovered their ability to divide and metabolize glucose. Mitochondria were then detected moving among lung, heart, brain, fat, bone, and other cells. Research has not identified how a cell indicates that it needs mitochondrial assistance or how other cells read those indicators.
Various purposes have been observed to explain such donations. These include:
Restore function and extend the lifespans of damaged cells
Endothelial cell donation to cancer cells can increase chemoresistance or tumorigenic potential.
Following acute lung injury, stromal cells can donate mitochondria to lung cells, which in turn distribute ATP (fuel) to nearby cells that did not receive mitochondria.
Platelets can donate mitochondria to stem cells which then release molecules that aid in blood vessel formation, which accelerates wound healing. Bone cell donations had a similar effect.
Maintain the blood-brain barrier
Maintain macrophage function when their metabolism is disrupted
Reduce inflammatory response, particularly when donated to T cells. Stem cells cultured from rheumatoid arthritis patients donated fewer mitochondria to T cells than do those from others.
Extracellular mitochondria use multiple modes of transport:
tunnelling nanotubes that temporarily connect cells to transport various cargo
passengers on vesicles
free-floating (typically in blood)
cell contact/fusion
Additional functions
Mitochondria play a central role in many other metabolic tasks, such as:
Signaling through mitochondrial reactive oxygen species
Regulation of the membrane potential
Calcium signaling (including calcium-evoked apoptosis)
Regulation of cellular metabolism
Certain heme synthesis reactions (see also: Porphyrin)
Steroid synthesis
Hormonal signaling – mitochondria are sensitive and responsive to hormones, in part by the action of mitochondrial estrogen receptors (mtERs). These receptors have been found in various tissues and cell types, including brain and heart
Development and function of immune cells
Neuronal mitochondria also contribute to cellular quality control by reporting neuronal status to microglia through specialised somatic junctions.
Mitochondria of developing neurons contribute to intercellular signaling towards microglia, a communication that is indispensable for proper regulation of brain development.
Some mitochondrial functions are performed only in specific types of cells. For example, mitochondria in liver cells contain enzymes that allow them to detoxify ammonia, a waste product of protein metabolism. A mutation in the genes regulating any of these functions can result in mitochondrial diseases.
Mitochondrial proteins (proteins transcribed from mitochondrial DNA) vary depending on the tissue and the species. In humans, 615 distinct types of proteins have been identified from cardiac mitochondria, whereas in rats, 940 proteins have been reported. The mitochondrial proteome is thought to be dynamically regulated.
Organization and distribution
Mitochondria (or related structures) are found in all eukaryotes (except the Oxymonad Monocercomonoides). Although commonly depicted as bean-like structures they form a highly dynamic network in the majority of cells where they constantly undergo fission and fusion. The population of all the mitochondria of a given cell constitutes the chondriome. Mitochondria vary in number and location according to cell type. A single mitochondrion is often found in unicellular organisms, while human liver cells have about 1000–2000 mitochondria per cell, making up 1/5 of the cell volume. The mitochondrial content of otherwise similar cells can vary substantially in size and membrane potential, with differences arising from sources including uneven partitioning at cell division, leading to extrinsic differences in ATP levels and downstream cellular processes. The mitochondria can be found nestled between myofibrils of muscle or wrapped around the sperm flagellum. Often, they form a complex 3D branching network inside the cell with the cytoskeleton. The association with the cytoskeleton determines mitochondrial shape, which can affect the function as well: different structures of the mitochondrial network may afford the population a variety of physical, chemical, and signalling advantages or disadvantages. Mitochondria in cells are always distributed along microtubules and the distribution of these organelles is also correlated with the endoplasmic reticulum. Recent evidence suggests that vimentin, one of the components of the cytoskeleton, is also critical to the association with the cytoskeleton.
Mitochondria-associated ER membrane (MAM)
The mitochondria-associated ER membrane (MAM) is another structural element that is increasingly recognized for its critical role in cellular physiology and homeostasis. Once considered a technical snag in cell fractionation techniques, the alleged ER vesicle contaminants that invariably appeared in the mitochondrial fraction have been re-identified as membranous structures derived from the MAM—the interface between mitochondria and the ER. Physical coupling between these two organelles had previously been observed in electron micrographs and has more recently been probed with fluorescence microscopy. Such studies estimate that at the MAM, which may comprise up to 20% of the mitochondrial outer membrane, the ER and mitochondria are separated by a mere 10–25 nm and held together by protein tethering complexes.
Purified MAM from subcellular fractionation is enriched in enzymes involved in phospholipid exchange, in addition to channels associated with Ca signaling. These hints of a prominent role for the MAM in the regulation of cellular lipid stores and signal transduction have been borne out, with significant implications for mitochondrial-associated cellular phenomena, as discussed below. Not only has the MAM provided insight into the mechanistic basis underlying such physiological processes as intrinsic apoptosis and the propagation of calcium signaling, but it also favors a more refined view of the mitochondria. Though often seen as static, isolated 'powerhouses' hijacked for cellular metabolism through an ancient endosymbiotic event, the evolution of the MAM underscores the extent to which mitochondria have been integrated into overall cellular physiology, with intimate physical and functional coupling to the endomembrane system.
Phospholipid transfer
The MAM is enriched in enzymes involved in lipid biosynthesis, such as phosphatidylserine synthase on the ER face and phosphatidylserine decarboxylase on the mitochondrial face. Because mitochondria are dynamic organelles constantly undergoing fission and fusion events, they require a constant and well-regulated supply of phospholipids for membrane integrity. But mitochondria are not only a destination for the phospholipids they finish synthesis of; rather, this organelle also plays a role in inter-organelle trafficking of the intermediates and products of phospholipid biosynthetic pathways, ceramide and cholesterol metabolism, and glycosphingolipid anabolism.
Such trafficking capacity depends on the MAM, which has been shown to facilitate transfer of lipid intermediates between organelles. In contrast to the standard vesicular mechanism of lipid transfer, evidence indicates that the physical proximity of the ER and mitochondrial membranes at the MAM allows for lipid flipping between opposed bilayers. Despite this unusual and seemingly energetically unfavorable mechanism, such transport does not require ATP. Instead, in yeast, it has been shown to be dependent on a multiprotein tethering structure termed the ER-mitochondria encounter structure, or ERMES, although it remains unclear whether this structure directly mediates lipid transfer or is required to keep the membranes in sufficiently close proximity to lower the energy barrier for lipid flipping.
The MAM may also be part of the secretory pathway, in addition to its role in intracellular lipid trafficking. In particular, the MAM appears to be an intermediate destination between the rough ER and the Golgi in the pathway that leads to very-low-density lipoprotein, or VLDL, assembly and secretion. The MAM thus serves as a critical metabolic and trafficking hub in lipid metabolism.
Calcium signaling
A critical role for the ER in calcium signaling was acknowledged before such a role for the mitochondria was widely accepted, in part because the low affinity of Ca channels localized to the outer mitochondrial membrane seemed to contradict this organelle's purported responsiveness to changes in intracellular Ca flux. But the presence of the MAM resolves this apparent contradiction: the close physical association between the two organelles results in Ca microdomains at contact points that facilitate efficient Ca transmission from the ER to the mitochondria. Transmission occurs in response to so-called "Ca puffs" generated by spontaneous clustering and activation of IP3R, a canonical ER membrane Ca channel.
The fate of these puffs—in particular, whether they remain restricted to isolated locales or integrated into Ca waves for propagation throughout the cell—is determined in large part by MAM dynamics. Although reuptake of Ca by the ER (concomitant with its release) modulates the intensity of the puffs, thus insulating mitochondria to a certain degree from high Ca exposure, the MAM often serves as a firewall that essentially buffers Ca puffs by acting as a sink into which free ions released into the cytosol can be funneled. This Ca tunneling occurs through the low-affinity Ca receptor VDAC1, which recently has been shown to be physically tethered to the IP3R clusters on the ER membrane and enriched at the MAM. The ability of mitochondria to serve as a Ca sink is a result of the electrochemical gradient generated during oxidative phosphorylation, which makes tunneling of the cation an exergonic process. Normal, mild calcium influx from cytosol into the mitochondrial matrix causes transient depolarization that is corrected by pumping out protons.
But transmission of Ca is not unidirectional; rather, it is a two-way street. The properties of the Ca pump SERCA and the channel IP3R present on the ER membrane facilitate feedback regulation coordinated by MAM function. In particular, the clearance of Ca by the MAM allows for spatio-temporal patterning of Ca signaling because Ca alters IP3R activity in a biphasic manner. SERCA is likewise affected by mitochondrial feedback: uptake of Ca by the MAM stimulates ATP production, thus providing energy that enables SERCA to reload the ER with Ca for continued Ca efflux at the MAM. Thus, the MAM is not a passive buffer for Ca puffs; rather it helps modulate further Ca signaling through feedback loops that affect ER dynamics.
Regulating ER release of Ca at the MAM is especially critical because only a certain window of Ca uptake sustains the mitochondria, and consequently the cell, at homeostasis. Sufficient intraorganelle Ca signaling is required to stimulate metabolism by activating dehydrogenase enzymes critical to flux through the citric acid cycle. However, once Ca signaling in the mitochondria passes a certain threshold, it stimulates the intrinsic pathway of apoptosis in part by collapsing the mitochondrial membrane potential required for metabolism. Studies examining the role of pro- and anti-apoptotic factors support this model; for example, the anti-apoptotic factor Bcl-2 has been shown to interact with IP3Rs to reduce Ca filling of the ER, leading to reduced efflux at the MAM and preventing collapse of the mitochondrial membrane potential post-apoptotic stimuli. Given the need for such fine regulation of Ca signaling, it is perhaps unsurprising that dysregulated mitochondrial Ca has been implicated in several neurodegenerative diseases, while the catalogue of tumor suppressors includes a few that are enriched at the MAM.
Molecular basis for tethering
Recent advances in the identification of the tethers between the mitochondrial and ER membranes suggest that the scaffolding function of the molecular elements involved is secondary to other, non-structural functions. In yeast, ERMES, a multiprotein complex of interacting ER- and mitochondrial-resident membrane proteins, is required for lipid transfer at the MAM and exemplifies this principle. One of its components, for example, is also a constituent of the protein complex required for insertion of transmembrane beta-barrel proteins into the lipid bilayer. However, a homologue of the ERMES complex has not yet been identified in mammalian cells. Other proteins implicated in scaffolding likewise have functions independent of structural tethering at the MAM; for example, ER-resident and mitochondrial-resident mitofusins form heterocomplexes that regulate the number of inter-organelle contact sites, although mitofusins were first identified for their role in fission and fusion events between individual mitochondria. Glucose-related protein 75 (grp75) is another dual-function protein. In addition to the matrix pool of grp75, a portion serves as a chaperone that physically links the mitochondrial and ER Ca channels VDAC and IP3R for efficient Ca transmission at the MAM. Another potential tether is Sigma-1R, a non-opioid receptor whose stabilization of ER-resident IP3R may preserve communication at the MAM during the metabolic stress response.
Perspective
The MAM is a critical signaling, metabolic, and trafficking hub in the cell that allows for the integration of ER and mitochondrial physiology. Coupling between these organelles is not simply structural but functional as well and critical for overall cellular physiology and homeostasis. The MAM thus offers a perspective on mitochondria that diverges from the traditional view of this organelle as a static, isolated unit appropriated for its metabolic capacity by the cell. Instead, this mitochondrial-ER interface emphasizes the integration of the mitochondria, the product of an endosymbiotic event, into diverse cellular processes. Recently it has also been shown that mitochondria and MAMs in neurons are anchored to specialised intercellular communication sites (so-called somatic junctions). Microglial processes monitor and protect neuronal functions at these sites, and MAMs are thought to have an important role in this type of cellular quality control.
Origin and evolution
There are two hypotheses about the origin of mitochondria: endosymbiotic and autogenous. The endosymbiotic hypothesis suggests that mitochondria were originally prokaryotic cells, capable of implementing oxidative mechanisms that were not possible for eukaryotic cells; they became endosymbionts living inside the eukaryote. In the autogenous hypothesis, mitochondria were born by splitting off a portion of DNA from the nucleus of the eukaryotic cell at the time of divergence with the prokaryotes; this DNA portion would have been enclosed by membranes, which could not be crossed by proteins. Since mitochondria have many features in common with bacteria, the endosymbiotic hypothesis is the more widely accepted of the two accounts.
A mitochondrion contains DNA, which is organized as several copies of a single, usually circular chromosome. This mitochondrial chromosome contains genes for redox proteins, such as those of the respiratory chain. The CoRR hypothesis proposes that this co-location is required for redox regulation. The mitochondrial genome codes for some RNAs of ribosomes, and the 22 tRNAs necessary for the translation of mRNAs into protein. The circular structure is also found in prokaryotes. The proto-mitochondrion was probably closely related to the order Rickettsiales, which is in class Alphaproteobacteria of phylum Pseudomonadota. However, the exact relationship of the ancestor of mitochondria to the alphaproteobacteria, and whether the mitochondrion was formed at the same time as or after the nucleus, remains controversial. For example, it has been suggested that the SAR11 clade of bacteria shares a relatively recent common ancestor with the mitochondria, while phylogenomic analyses indicate that mitochondria evolved from a Pseudomonadota lineage that is closely related to or a member of alphaproteobacteria. Some papers describe mitochondria as the sister group to the alphaproteobacteria, with the two together forming the sister of the marineproteo1 group, which in turn forms the sister to Magnetococcidae.
The ribosomes coded for by the mitochondrial DNA are similar to those from bacteria in size and structure. They closely resemble the bacterial 70S ribosome and not the 80S cytoplasmic ribosomes, which are coded for by nuclear DNA.
The endosymbiotic relationship of mitochondria with their host cells was popularized by Lynn Margulis. The endosymbiotic theory suggests that mitochondria descended from aerobic bacteria that somehow survived endocytosis by another cell, and became incorporated into the cytoplasm. The ability of these bacteria to conduct respiration in host cells that had relied on glycolysis and fermentation would have provided a considerable evolutionary advantage. This symbiotic relationship probably developed 1.7 to 2 billion years ago.
A few groups of unicellular eukaryotes have only vestigial mitochondria or derived structures: the microsporidians, metamonads, and archamoebae. These groups appear as the most primitive eukaryotes on phylogenetic trees constructed using rRNA information, which once suggested that they appeared before the origin of mitochondria. However, this is now known to be an artifact of long-branch attraction: They are derived groups and retain genes or organelles derived from mitochondria (e.g., mitosomes and hydrogenosomes). Hydrogenosomes, mitosomes, and related organelles as found in some loricifera (e.g. Spinoloricus) and myxozoa (e.g. Henneguya zschokkei) are together classified as MROs, mitochondrion-related organelles.
Monocercomonoides and other oxymonads appear to have lost their mitochondria completely and at least some of the mitochondrial functions seem to be carried out by cytoplasmic proteins now.
Mitochondrial genetics
Mitochondria contain their own genome. The human mitochondrial genome is a circular double-stranded DNA molecule of about 16 kilobases. It encodes 37 genes: 13 for subunits of respiratory complexes I, III, IV and V, 22 for mitochondrial tRNA (for the 20 standard amino acids, plus an extra gene for leucine and serine), and 2 for rRNA (12S and 16S rRNA). One mitochondrion can contain two to ten copies of its DNA. One of the two mitochondrial DNA (mtDNA) strands has a disproportionately higher ratio of the heavier nucleotides adenine and guanine, and this is termed the heavy strand (or H strand), whereas the other strand is termed the light strand (or L strand). The weight difference allows the two strands to be separated by centrifugation. mtDNA has one long non-coding stretch known as the non-coding region (NCR), which contains the heavy strand promoter (HSP) and light strand promoter (LSP) for RNA transcription, the origin of replication for the H strand (OriH) localized on the L strand, three conserved sequence boxes (CSBs 1–3), and a termination-associated sequence (TAS). The origin of replication for the L strand (OriL) is localized on the H strand 11,000 bp downstream of OriH, located within a cluster of genes coding for tRNA.
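As a small sanity check on the numbers above (a minimal sketch only; the per-category counts are taken directly from the text, and the genome size of 16,569 bp is the commonly cited approximate value for "about 16 kilobases"), the gene content of the human mitochondrial genome can be summarised in a simple data structure:

```python
# Sketch: summarise the gene content of the human mitochondrial genome
# using the counts given above (13 protein-coding, 22 tRNA, 2 rRNA genes).

MTDNA_LENGTH_BP = 16_569  # approximate size of the circular human mtDNA molecule

GENE_CONTENT = {
    "protein-coding (subunits of respiratory complexes I, III, IV, V)": 13,
    "tRNA (20 standard amino acids, extra genes for Leu and Ser)": 22,
    "rRNA (12S and 16S)": 2,
}

total_genes = sum(GENE_CONTENT.values())
assert total_genes == 37, "counts should add up to the 37 genes cited above"

for category, n in GENE_CONTENT.items():
    print(f"{n:>2}  {category}")
print(f"{total_genes} genes in ~{MTDNA_LENGTH_BP:,} bp of circular mtDNA")
```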
As in prokaryotes, there is a very high proportion of coding DNA and an absence of repeats. Mitochondrial genes are transcribed as multigenic transcripts, which are cleaved and polyadenylated to yield mature mRNAs. Most proteins necessary for mitochondrial function are encoded by genes in the cell nucleus and the corresponding proteins are imported into the mitochondrion. The exact number of genes encoded by the nucleus and the mitochondrial genome differs between species. Most mitochondrial genomes are circular. In general, mitochondrial DNA lacks introns, as is the case in the human mitochondrial genome; however, introns have been observed in some eukaryotic mitochondrial DNA, such as that of yeast and protists, including Dictyostelium discoideum. Between protein-coding regions, tRNAs are present. Mitochondrial tRNA genes have different sequences from the nuclear tRNAs, but lookalikes of mitochondrial tRNAs have been found in the nuclear chromosomes with high sequence similarity.
In animals, the mitochondrial genome is typically a single circular chromosome that is approximately 16 kb long and has 37 genes. The genes, while highly conserved, may vary in location. Curiously, this pattern is not found in the human body louse (Pediculus humanus). Instead, this mitochondrial genome is arranged in 18 minicircular chromosomes, each of which is 3–4 kb long and has one to three genes. This pattern is also found in other sucking lice, but not in chewing lice. Recombination has been shown to occur between the minichromosomes.
Human population genetic studies
The near-absence of genetic recombination in mitochondrial DNA makes it a useful source of information for studying population genetics and evolutionary biology. Because all the mitochondrial DNA is inherited as a single unit, or haplotype, the relationships between mitochondrial DNA from different individuals can be represented as a gene tree. Patterns in these gene trees can be used to infer the evolutionary history of populations. The classic example of this is in human evolutionary genetics, where the molecular clock can be used to provide a recent date for mitochondrial Eve. This is often interpreted as strong support for a recent modern human expansion out of Africa. Another human example is the sequencing of mitochondrial DNA from Neanderthal bones. The relatively large evolutionary distance between the mitochondrial DNA sequences of Neanderthals and living humans has been interpreted as evidence for the lack of interbreeding between Neanderthals and modern humans.
However, mitochondrial DNA reflects only the history of the females in a population. This can be partially overcome by the use of paternal genetic sequences, such as the non-recombining region of the Y-chromosome.
Recent measurements of the molecular clock for mitochondrial DNA reported a value of 1 mutation every 7884 years dating back to the most recent common ancestor of humans and apes, which is consistent with estimates of mutation rates of autosomal DNA (about 10⁻⁸ per base per generation).
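As a back-of-the-envelope illustration of how such a clock is applied (a sketch only; the divergence times below are assumed, illustrative values and do not come from the text), the cited rate of one mutation every 7884 years translates directly into an expected number of mutations accumulated along a single lineage:

```python
# Sketch: expected mtDNA mutations accumulated along one lineage at the
# molecular-clock rate cited above (1 mutation per 7884 years).

MUTATIONS_PER_YEAR = 1 / 7884

def expected_mutations(years):
    """Expected number of mutations accumulated over `years` along one lineage."""
    return years * MUTATIONS_PER_YEAR

# Illustrative, assumed time spans (not figures from the article):
for label, years in [("100,000 years", 100_000), ("1 million years", 1_000_000)]:
    print(f"{label}: ~{expected_mutations(years):.1f} expected mutations")
```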
Alternative genetic code
Exceptions to the standard genetic code in mitochondria:

  Organism           Codon      Standard      Mitochondria
  Mammals            AGA, AGG   Arginine      Stop codon
  Invertebrates      AGA, AGG   Arginine      Serine
  Fungi              CUA        Leucine       Threonine
  All of the above   AUA        Isoleucine    Methionine
  All of the above   UGA        Stop codon    Tryptophan
While slight variations on the standard genetic code had been predicted earlier, none was discovered until 1979, when researchers studying human mitochondrial genes determined that they used an alternative code. Nonetheless, the mitochondria of many other eukaryotes, including most plants, use the standard code. Many slight variants have been discovered since, including various alternative mitochondrial codes. Further, the AUA, AUC, and AUU codons are all allowable start codons.
Some of these differences should be regarded as pseudo-changes in the genetic code due to the phenomenon of RNA editing, which is common in mitochondria. In higher plants, it was thought that CGG encoded for tryptophan and not arginine; however, the codon in the processed RNA was discovered to be the UGG codon, consistent with the standard genetic code for tryptophan. Of note, the arthropod mitochondrial genetic code has undergone parallel evolution within a phylum, with some organisms uniquely translating AGG to lysine.
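To make the reassignments in the table above concrete, the sketch below applies the mammalian-mitochondrial rows of that table as overrides on top of a standard-code lookup. This is a minimal illustration only: the standard table is deliberately truncated to the codons used in the demo, and the demo RNA string is an invented example, not a real mitochondrial sequence.

```python
# Sketch: codon reassignments in mammalian mitochondria (rows of the table
# above) applied as overrides on a deliberately truncated standard table.

# Partial standard genetic code -- only the codons used in this demo.
STANDARD_CODE = {
    "AUG": "Met", "AUA": "Ile", "UGA": "Stop",
    "AGA": "Arg", "AGG": "Arg", "CUA": "Leu", "UUU": "Phe",
}

# Reassignments in mammalian mitochondria, from the table above.
MAMMALIAN_MITO_OVERRIDES = {
    "AGA": "Stop", "AGG": "Stop",   # arginine -> stop codon
    "AUA": "Met",                   # isoleucine -> methionine
    "UGA": "Trp",                   # stop codon -> tryptophan
}

def translate(rna, overrides=None):
    """Translate an RNA string codon by codon, stopping at a stop codon."""
    code = dict(STANDARD_CODE, **(overrides or {}))
    peptide = []
    for i in range(0, len(rna) - 2, 3):
        aa = code[rna[i:i + 3]]
        if aa == "Stop":
            break
        peptide.append(aa)
    return peptide

demo = "AUGAUAUGAUUU"  # AUG AUA UGA UUU
print("standard code:      ", translate(demo))                           # ['Met', 'Ile']
print("mammalian mito code:", translate(demo, MAMMALIAN_MITO_OVERRIDES))  # ['Met', 'Met', 'Trp', 'Phe']
```

Under the standard code the internal UGA terminates translation, whereas under the mitochondrial overrides it is read as tryptophan and AUA as methionine, exactly as the table indicates.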
Replication and inheritance
Mitochondria divide by mitochondrial fission, a form of binary fission that is also done by bacteria although the process is tightly regulated by the host eukaryotic cell and involves communication between and contact with several other organelles. The regulation of this division differs between eukaryotes. In many single-celled eukaryotes, their growth and division are linked to the cell cycle. For example, a single mitochondrion may divide synchronously with the nucleus. This division and segregation process must be tightly controlled so that each daughter cell receives at least one mitochondrion. In other eukaryotes (in mammals for example), mitochondria may replicate their DNA and divide mainly in response to the energy needs of the cell, rather than in phase with the cell cycle. When the energy needs of a cell are high, mitochondria grow and divide. When energy use is low, mitochondria are destroyed or become inactive. In such examples mitochondria are apparently randomly distributed to the daughter cells during the division of the cytoplasm. Mitochondrial dynamics, the balance between mitochondrial fusion and fission, is an important factor in pathologies associated with several disease conditions.
The hypothesis of mitochondrial binary fission has relied on visualization by fluorescence microscopy and conventional transmission electron microscopy (TEM). The resolution of fluorescence microscopy (≈200 nm) is insufficient to distinguish structural details, such as the double mitochondrial membrane during mitochondrial division, or even to distinguish individual mitochondria when several are close together. Conventional TEM also has some technical limitations in verifying mitochondrial division. Cryo-electron tomography was recently used to visualize mitochondrial division in frozen hydrated intact cells; it revealed that mitochondria divide by budding.
An individual's mitochondrial genes are inherited only from the mother, with rare exceptions. In humans, when an egg cell is fertilized by a sperm, the mitochondria, and therefore the mitochondrial DNA, usually come from the egg only. The sperm's mitochondria enter the egg, but do not contribute genetic information to the embryo (Kimball, J.W. (2006) "Sexual Reproduction in Humans: Copulation and Fertilization", Kimball's Biology Pages, based on Biology, 6th ed., 1996). Instead, paternal mitochondria are marked with ubiquitin to select them for later destruction inside the embryo (discussed in Science News). The egg cell contains relatively few mitochondria, but these mitochondria divide to populate the cells of the adult organism. This mode is seen in most organisms, including the majority of animals. However, mitochondria in some species can sometimes be inherited paternally. This is the norm among certain coniferous plants, although not in pine trees and yews. For Mytilids, paternal inheritance only occurs within males of the species ("Male and Female Mitochondrial DNA Lineages in the Blue Mussel (Mytilus edulis) Species Group" by Donald T. Stewart, Carlos Saavedra, Rebecca R. Stanwood, Amy O. Ball, and Eleftherios Zouros). It has been suggested that it occurs at a very low level in humans.
Uniparental inheritance leads to little opportunity for genetic recombination between different lineages of mitochondria, although a single mitochondrion can contain 2–10 copies of its DNA. What recombination does take place maintains genetic integrity rather than maintaining diversity. However, there are studies showing evidence of recombination in mitochondrial DNA. It is clear that the enzymes necessary for recombination are present in mammalian cells. Further, evidence suggests that animal mitochondria can undergo recombination. The data are more controversial in humans, although indirect evidence of recombination exists.
Entities undergoing uniparental inheritance and with little to no recombination may be expected to be subject to Muller's ratchet, the accumulation of deleterious mutations until functionality is lost. Animal populations of mitochondria avoid this buildup through a developmental process known as the mtDNA bottleneck. The bottleneck exploits stochastic processes in the cell to increase the cell-to-cell variability in mutant load as an organism develops: a single egg cell with some proportion of mutant mtDNA thus produces an embryo where different cells have different mutant loads. Cell-level selection may then act to remove those cells with more mutant mtDNA, leading to a stabilization or reduction in mutant load between generations. The mechanism underlying the bottleneck is debated, with a recent mathematical and experimental metastudy providing evidence for a combination of random partitioning of mtDNAs at cell divisions and random turnover of mtDNA molecules within the cell.
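The random-partitioning component of the bottleneck described above can be illustrated with a toy simulation (a deliberately simplified sketch with assumed parameter values; real germline dynamics also involve mtDNA turnover and cell-level selection, which are omitted here). Each division allocates a fixed number of mtDNA copies to a daughter cell by random draws, which on its own leaves the mean mutant load unchanged while increasing cell-to-cell variance:

```python
# Toy sketch of the mtDNA bottleneck: random (approximately binomial)
# partitioning of mutant and wild-type mtDNA at each cell division increases
# cell-to-cell variance in mutant load without changing the mean.
# All parameter values are assumed, purely for illustration.
import random

random.seed(1)
N_MTDNA = 200            # mtDNA copies per cell after each division (assumed)
START_MUTANT_FRAC = 0.3  # mutant load of the founding cell (assumed)
DIVISIONS = 10
LINEAGES = 1_000

def simulate_lineage():
    """Follow one daughter-cell lineage through successive divisions."""
    frac = START_MUTANT_FRAC
    for _ in range(DIVISIONS):
        # Each inherited copy is mutant with probability equal to the current
        # mutant fraction (binomial approximation to random partitioning).
        mutant = sum(random.random() < frac for _ in range(N_MTDNA))
        frac = mutant / N_MTDNA
    return frac

loads = [simulate_lineage() for _ in range(LINEAGES)]
mean = sum(loads) / LINEAGES
var = sum((x - mean) ** 2 for x in loads) / LINEAGES
print(f"mean mutant load ~{mean:.2f} (unchanged on average), "
      f"variance across cells ~{var:.4f}")
```

Cells at the extremes of the resulting distribution are then the substrate on which cell-level selection, as described above, can act.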
DNA repair
Mitochondria can repair oxidative DNA damage by mechanisms analogous to those occurring in the cell nucleus. The proteins employed in mtDNA repair are encoded by nuclear genes, and are translocated to the mitochondria. The DNA repair pathways in mammalian mitochondria include base excision repair, double-strand break repair, direct reversal and mismatch repair. Alternatively, DNA damage may be bypassed, rather than repaired, by translesion synthesis.
Of the several DNA repair processes in mitochondria, the base excision repair pathway has been the most comprehensively studied. Base excision repair is carried out by a sequence of enzyme-catalyzed steps that include recognition and excision of a damaged DNA base, removal of the resulting abasic site, end processing, gap filling and ligation. A common type of damage in mtDNA that is repaired by base excision repair is 8-oxoguanine, produced by oxidation of guanine.
Double-strand breaks can be repaired by homologous recombinational repair in both mammalian mtDNA and plant mtDNA. Double-strand breaks in mtDNA can also be repaired by microhomology-mediated end joining. Although there is evidence for the repair processes of direct reversal and mismatch repair in mtDNA, these processes are not well characterized.
Lack of mitochondrial DNA
Some organisms have lost mitochondrial DNA altogether. In these cases, genes encoded by the mitochondrial DNA have been lost or transferred to the nucleus. Cryptosporidium have mitochondria that lack any DNA, presumably because all their genes have been lost or transferred. In Cryptosporidium, the mitochondria have an altered ATP generation system that renders the parasite resistant to many classical mitochondrial inhibitors such as cyanide, azide, and atovaquone. Mitochondria that lack their own DNA have been found in a marine parasitic dinoflagellate from the genus Amoebophrya. This microorganism, A. cerati, has functional mitochondria that lack a genome. In related species, the mitochondrial genome still has three genes, but in A. cerati only a single mitochondrial gene — the cytochrome c oxidase I gene (cox1) — is found, and it has migrated to the genome of the nucleus.
Dysfunction and disease
Mitochondrial diseases
Damage and subsequent dysfunction in mitochondria are an important factor in a range of human diseases because of their influence on cell metabolism. Mitochondrial disorders often present as neurological disorders, including autism. They can also manifest as myopathy, diabetes, multiple endocrinopathy, and a variety of other systemic disorders. Diseases caused by mutation in the mtDNA include Kearns–Sayre syndrome, MELAS syndrome and Leber's hereditary optic neuropathy. In the vast majority of cases, these diseases are transmitted by a female to her children, as the zygote derives its mitochondria and hence its mtDNA from the ovum. Diseases such as Kearns–Sayre syndrome, Pearson syndrome, and progressive external ophthalmoplegia are thought to be due to large-scale mtDNA rearrangements, whereas other diseases such as MELAS syndrome, Leber's hereditary optic neuropathy, MERRF syndrome, and others are due to point mutations in mtDNA.
It has also been reported that drug-tolerant cancer cells have an increased number and size of mitochondria, suggesting an increase in mitochondrial biogenesis. A 2022 study in Nature Nanotechnology reported that cancer cells can hijack the mitochondria from immune cells via physical tunneling nanotubes.
In other diseases, defects in nuclear genes lead to dysfunction of mitochondrial proteins. This is the case in Friedreich's ataxia, hereditary spastic paraplegia, and Wilson's disease. These diseases are inherited in a dominance relationship, as applies to most other genetic diseases. A variety of disorders can be caused by nuclear mutations of oxidative phosphorylation enzymes, such as coenzyme Q10 deficiency and Barth syndrome. Environmental influences may interact with hereditary predispositions and cause mitochondrial disease. For example, there may be a link between pesticide exposure and the later onset of Parkinson's disease. Other pathologies with etiology involving mitochondrial dysfunction include schizophrenia, bipolar disorder, dementia, Alzheimer's disease, Parkinson's disease, epilepsy, stroke, cardiovascular disease, myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS), retinitis pigmentosa, and diabetes mellitus.
Mitochondria-mediated oxidative stress plays a role in cardiomyopathy in type 2 diabetics. Increased fatty acid delivery to the heart increases fatty acid uptake by cardiomyocytes, resulting in increased fatty acid oxidation in these cells. This process increases the reducing equivalents available to the electron transport chain of the mitochondria, ultimately increasing reactive oxygen species (ROS) production. ROS increase uncoupling proteins (UCPs) and potentiate proton leakage through the adenine nucleotide translocator (ANT); the combination uncouples the mitochondria. Uncoupling then increases oxygen consumption by the mitochondria, compounding the increase in fatty acid oxidation. This creates a vicious cycle of uncoupling; furthermore, even though oxygen consumption increases, ATP synthesis does not increase proportionally because the mitochondria are uncoupled. Less ATP availability ultimately results in an energy deficit presenting as reduced cardiac efficiency and contractile dysfunction. To compound the problem, impaired sarcoplasmic reticulum calcium release and reduced mitochondrial reuptake limit peak cytosolic levels of this important signaling ion during muscle contraction. The resulting decrease in intra-mitochondrial calcium concentration reduces dehydrogenase activation and thus ATP synthesis. So in addition to lower ATP synthesis due to fatty acid oxidation, ATP synthesis is impaired by poor calcium signaling as well, causing cardiac problems for diabetics.
Mitochondria also modulate processes such as testicular somatic cell development, spermatogonial stem cell differentiation, luminal acidification, testosterone production in testes, and more. Thus, dysfunction of mitochondria in spermatozoa can be a cause for infertility.
In efforts to combat mitochondrial disease, mitochondrial replacement therapy (MRT) has been developed. This form of in vitro fertilization uses donor mitochondria, which avoids the transmission of diseases caused by mutations of mitochondrial DNA. However, this therapy is still being researched, can introduce genetic modification, and raises safety concerns. These diseases are rare but can be extremely debilitating and progressive, thus posing complex ethical questions for public policy.
Relationships to aging
There may be some leakage of the electrons transferred in the respiratory chain to form reactive oxygen species. This was thought to result in significant oxidative stress in the mitochondria with high mutation rates of mitochondrial DNA. Hypothesized links between aging and oxidative stress are not new; they were proposed in 1956 and later refined into the mitochondrial free radical theory of aging. A vicious cycle was thought to occur, as oxidative stress leads to mitochondrial DNA mutations, which can lead to enzymatic abnormalities and further oxidative stress.
A number of changes can occur to mitochondria during the aging process. Tissues from elderly humans show a decrease in enzymatic activity of the proteins of the respiratory chain. However, mutated mtDNA can only be found in about 0.2% of very old cells. Large deletions in the mitochondrial genome have been hypothesized to lead to high levels of oxidative stress and neuronal death in Parkinson's disease. Mitochondrial dysfunction has also been shown to occur in amyotrophic lateral sclerosis.
Since mitochondria play a pivotal role in ovarian function, by providing the ATP necessary for the development from germinal vesicle to mature oocyte, decreased mitochondrial function can lead to inflammation, resulting in premature ovarian failure and accelerated ovarian aging. The resulting dysfunction is reflected in quantitative damage (such as altered mtDNA copy number and mtDNA deletions), qualitative damage (such as mutations and strand breaks) and oxidative damage (such as dysfunctional mitochondria due to ROS). These forms of damage are not only relevant in ovarian aging, but also perturb oocyte-cumulus crosstalk in the ovary, are linked to genetic disorders (such as Fragile X) and can interfere with embryo selection.
History
The first observations of intracellular structures that probably represented mitochondria were published in 1857, by the physiologist Albert von Kölliker. On p. 316, Kölliker described mitochondria which he observed in fresh frog muscles: " ... sehr blasse rundliche Körnchen, welche in langen linienförmigen Zügen [...] wenn man einmal auf dieselben aufmerksam geworden ist." ( ... [they are] very faint round granules, which are embedded in the [muscle's] contractile substance in long linear trains. These granules are located in the whole thickness of the muscle fiber, on the surface as in the interior, and [they] are so numerous that they appear as a not unimportant element of the muscle fibers, once one has become alert to them.) Kölliker said (p. 321) that he had found mitochondria in the muscles of other animals. In Figure 3 of Table XIV, Kölliker depicted mitochondria in frog muscles. Richard Altmann, in 1890, established them as cell organelles and called them "bioblasts". From p. 125: "Da auch sonst mancherlei Umstände dafür sprechen, dass Mikroorganismen und Granula einander gleichwerthig sind und Elementarorganismen vorstellen, welche sich überall finden, wo lebendige Kräfte ausgelöst werden, so wollen wir sie mit dem gemeinschaftlichen Namen der Bioblasten bezeichnen." (Since otherwise some circumstances indicate that microorganisms and granula are equivalent to each other and suggest elementary organisms, which are to be found wherever living forces are unleashed, we will designate them with the collective name of "bioblasts".) In 1898, Carl Benda coined the term "mitochondria" from the Greek μίτος, mitos, "thread", and χονδρίον, chondrion, "granule". From p. 397: after Benda states that " ... ich bereits in vielen Zellarten aller möglichen Thierclassen gefunden habe, ... " ( ... I have already found [them (mitochondria)] in many types of cells of all possible classes of animals, ... ), he suggests: "Ich möchte vorläufig vorschlagen, ihnen als Mitochondria eine besondere Stellung vorzubehalten, die ich in weiteren Arbeiten begründen werde." (I would like to suggest provisionally reserving for them, as "mitochondria", a special status which I will justify in further work.)
Leonor Michaelis discovered in 1900 that Janus green can be used as a supravital stain for mitochondria. In 1904, Friedrich Meves made the first recorded observation of mitochondria in plants, in cells of the white waterlily, Nymphaea alba, and in 1908, along with Claudius Regaud, suggested that they contain proteins and lipids. Benjamin F. Kingsbury, in 1912, first related them to cell respiration, although almost exclusively on the basis of morphological observations. From p. 47: " ... the mitochondria are the structural expression thereof [i.e., of the chemical reducing processes in the cytoplasm], ... " In 1913, Otto Heinrich Warburg linked respiration to particles which he had obtained from extracts of guinea-pig liver and which he called "grana". Warburg and Heinrich Otto Wieland, who had also postulated a similar particle mechanism, disagreed on the chemical nature of the respiration. It was not until 1925, when David Keilin discovered cytochromes, that the respiratory chain was described.
In 1939, experiments using minced muscle cells demonstrated that cellular respiration using one oxygen molecule can form four adenosine triphosphate (ATP) molecules, and in 1941, the concept of the phosphate bonds of ATP being a form of energy in cellular metabolism was developed by Fritz Albert Lipmann. In the following years, the mechanism behind cellular respiration was further elaborated, although its link to the mitochondria was not known. The introduction of tissue fractionation by Albert Claude allowed mitochondria to be isolated from other cell fractions and biochemical analysis to be conducted on them alone. In 1946, he concluded that cytochrome oxidase and other enzymes responsible for the respiratory chain were isolated to the mitochondria. Eugene Kennedy and Albert Lehninger discovered in 1948 that mitochondria are the site of oxidative phosphorylation in eukaryotes. Over time, the fractionation method was further developed, improving the quality of the mitochondria isolated, and other elements of cell respiration were determined to occur in the mitochondria.
The first high-resolution electron micrographs appeared in 1952, replacing the Janus Green stains as the preferred way to visualize mitochondria. This led to a more detailed analysis of the structure of the mitochondria, including confirmation that they were surrounded by a membrane. It also showed a second membrane inside the mitochondria that folded up in ridges dividing up the inner chamber and that the size and shape of the mitochondria varied from cell to cell.
The popular term "powerhouse of the cell" was coined by Philip Siekevitz in 1957.
In 1967, it was discovered that mitochondria contained ribosomes. In 1968, methods were developed for mapping the mitochondrial genes, with the genetic and physical map of yeast mitochondrial DNA completed in 1976. In November 2024, researchers from the United States reported that mitochondria divide into two distinct forms when cells are starved, a finding that could help explain how cancers thrive in hostile conditions.
See also
Anti-mitochondrial antibodies
Mitochondrial metabolic rates
Mitochondrial permeability transition pore
Mitophagy
Nebenkern
Oncocyte
Oncocytoma
Paternal mtDNA transmission
Plastid
Submitochondrial particle
References
General
External links
Powering the Cell Mitochondria – XVIVO Scientific Animation
Mitodb.com – The mitochondrial disease database.
Mitochondria Atlas at University of Mainz
Mitochondria Research Portal at mitochondrial.net
Mitochondria: Architecture dictates function at cytochemistry.net
Mitochondria links at University of Alabama
MIP Mitochondrial Physiology Society
3D structures of proteins from inner mitochondrial membrane at University of Michigan
3D structures of proteins associated with outer mitochondrial membrane at University of Michigan
Mitochondrial Protein Partnership at University of Wisconsin
MitoMiner – A mitochondrial proteomics database at MRC Mitochondrial Biology Unit
Mitochondrion – Cell Centered Database
Mitochondrion Reconstructed by Electron Tomography at San Diego State University
Video Clip of Rat-liver Mitochondrion from Cryo-electron Tomography
Category:Cellular respiration
Category:Endosymbiotic events
Mutation
https://en.wikipedia.org/wiki/Mutation
In biology, a mutation is an alteration in the nucleic acid sequence of the genome of an organism, virus, or extrachromosomal DNA. Viral genomes contain either DNA or RNA. Mutations result from errors during DNA or viral replication, mitosis, or meiosis, or from other types of damage to DNA (such as pyrimidine dimers caused by exposure to ultraviolet radiation), which may then undergo error-prone repair (especially microhomology-mediated end joining), cause an error during other forms of repair, or cause an error during replication (translesion synthesis). Mutations may also result from substitution, insertion or deletion of segments of DNA due to mobile genetic elements.
Mutations may or may not produce detectable changes in the observable characteristics (phenotype) of an organism. Mutations play a part in both normal and abnormal biological processes including: evolution, cancer, and the development of the immune system, including junctional diversity. Mutation is the ultimate source of all genetic variation, providing the raw material on which evolutionary forces such as natural selection can act.
Mutation can result in many different types of change in sequences. Mutations in genes can have no effect, alter the product of a gene, or prevent the gene from functioning properly or completely. Mutations can also occur in non-genic regions. A 2007 study on genetic variations between different species of Drosophila suggested that, if a mutation changes a protein produced by a gene, the result is likely to be harmful, with an estimated 70% of amino acid polymorphisms having damaging effects and the remainder being either neutral or marginally beneficial.
Mutation and DNA damage are the two major types of errors that occur in DNA, but they are fundamentally different. DNA damage is a physical alteration in the DNA structure, such as a single or double strand break, a modified guanosine residue in DNA such as 8-hydroxydeoxyguanosine, or a polycyclic aromatic hydrocarbon adduct. DNA damage can be recognized by enzymes, and therefore can be correctly repaired using the complementary undamaged strand in DNA as a template, or an undamaged sequence in a homologous chromosome if one is available. If DNA damage remains in a cell, transcription of a gene may be prevented and thus translation into a protein may also be blocked. DNA replication may also be blocked and/or the cell may die. In contrast to DNA damage, a mutation is an alteration of the base sequence of the DNA. Ordinarily, a mutation cannot be recognized by enzymes once the base change is present in both DNA strands, and thus a mutation is not ordinarily repaired. At the cellular level, mutations can alter protein function and regulation. Unlike DNA damage, mutations are replicated when the cell replicates. At the level of cell populations, cells with mutations will increase or decrease in frequency according to the effects of the mutations on the ability of the cell to survive and reproduce. Although distinctly different from each other, DNA damage and mutation are related because DNA damage often causes errors of DNA synthesis during replication or repair, and these errors are a major source of mutation.
Overview
Mutations can involve the duplication of large sections of DNA, usually through genetic recombination. These duplications are a major source of raw material for evolving new genes, with tens to hundreds of genes duplicated in animal genomes every million years. Most genes belong to larger gene families of shared ancestry, detectable by their sequence homology. Novel genes are produced by several methods, commonly through the duplication and mutation of an ancestral gene, or by recombining parts of different genes to form new combinations with new functions.
Here, protein domains act as modules, each with a particular and independent function, that can be mixed together to produce genes encoding new proteins with novel properties. For example, the human eye uses four genes to make structures that sense light: three for cone cell or colour vision and one for rod cell or night vision; all four arose from a single ancestral gene. Another advantage of duplicating a gene (or even an entire genome) is that this increases engineering redundancy; this allows one gene in the pair to acquire a new function while the other copy performs the original function. Other types of mutation occasionally create new genes from previously noncoding DNA.
Changes in chromosome number may involve even larger mutations, where segments of the DNA within chromosomes break and then rearrange. For example, in the Homininae, two chromosomes fused to produce human chromosome 2; this fusion did not occur in the lineage of the other apes, and they retain these separate chromosomes. In evolution, the most important role of such chromosomal rearrangements may be to accelerate the divergence of a population into new species by making populations less likely to interbreed, thereby preserving genetic differences between these populations.
Sequences of DNA that can move about the genome, such as transposons, make up a major fraction of the genetic material of plants and animals, and may have been important in the evolution of genomes. For example, more than a million copies of the Alu sequence are present in the human genome, and these sequences have now been recruited to perform functions such as regulating gene expression. Another effect of these mobile DNA sequences is that when they move within a genome, they can mutate or delete existing genes and thereby produce genetic diversity.
Nonlethal mutations accumulate within the gene pool and increase the amount of genetic variation. The abundance of some genetic changes within the gene pool can be reduced by natural selection, while other "more favorable" mutations may accumulate and result in adaptive changes.
For example, a butterfly may produce offspring with new mutations. The majority of these mutations will have no effect; but one might change the colour of one of the butterfly's offspring, making it harder (or easier) for predators to see. If this color change is advantageous, the chances of this butterfly's surviving and producing its own offspring are a little better, and over time the number of butterflies with this mutation may form a larger percentage of the population.
Neutral mutations are defined as mutations whose effects do not influence the fitness of an individual. These can increase in frequency over time due to genetic drift. It is believed that the overwhelming majority of mutations have no significant effect on an organism's fitness. Also, DNA repair mechanisms are able to mend most changes before they become permanent mutations, and many organisms have mechanisms, such as apoptotic pathways, for eliminating otherwise-permanently mutated somatic cells.
Beneficial mutations can improve reproductive success.
Causes
Four classes of mutations are (1) spontaneous mutations (molecular decay), (2) mutations due to error-prone replication bypass of naturally occurring DNA damage (also called error-prone translesion synthesis), (3) errors introduced during DNA repair, and (4) induced mutations caused by mutagens. Scientists may sometimes deliberately introduce mutations into cells or research organisms for the sake of scientific experimentation.
One 2017 study claimed that 66% of cancer-causing mutations are random, 29% are due to the environment (the studied population spanned 69 countries), and 5% are inherited.
Humans pass on an average of 60 new mutations to their children, but fathers pass on more mutations depending on their age, with every additional year of paternal age adding two new mutations to a child.
Spontaneous mutation
Spontaneous mutations occur with non-zero probability even given a healthy, uncontaminated cell. Naturally occurring oxidative DNA damage is estimated to occur 10,000 times per cell per day in humans and 100,000 times per cell per day in rats. Spontaneous mutations can be characterized by the specific change:
Tautomerism – A base is changed by the repositioning of a hydrogen atom, altering the hydrogen bonding pattern of that base, resulting in incorrect base pairing during replication. Theoretical results suggest that proton tunneling is an important factor in the spontaneous creation of GC tautomers.
Depurination – Loss of a purine base (A or G) to form an apurinic site (AP site).
Deamination – Hydrolysis changes a normal base to an atypical base containing a keto group in place of the original amine group. Examples include C → U and A → HX (hypoxanthine), which can be corrected by DNA repair mechanisms; and 5MeC (5-methylcytosine) → T, which is less likely to be detected as a mutation because thymine is a normal DNA base.
Slipped strand mispairing – Denaturation of the new strand from the template during replication, followed by renaturation in a different spot ("slipping"). This can lead to insertions or deletions.
Error-prone replication bypass
There is increasing evidence that the majority of spontaneously arising mutations are due to error-prone replication (translesion synthesis) past DNA damage in the template strand. In mice, the majority of mutations are caused by translesion synthesis. Likewise, in yeast, Kunz et al. found that more than 60% of the spontaneous single base pair substitutions and deletions were caused by translesion synthesis.
Errors introduced during DNA repair
Although naturally occurring double-strand breaks occur at a relatively low frequency in DNA, their repair often causes mutation. Non-homologous end joining (NHEJ) is a major pathway for repairing double-strand breaks. NHEJ involves removal of a few nucleotides to allow somewhat inaccurate alignment of the two ends for rejoining followed by addition of nucleotides to fill in gaps. As a consequence, NHEJ often introduces mutations.
Induced mutation
Induced mutations are alterations in a gene after it has come into contact with mutagens or environmental causes.
Induced mutations on the molecular level can be caused by:
Chemicals
Hydroxylamine
Base analogues (e.g., Bromodeoxyuridine (BrdU))
Alkylating agents (e.g., N-ethyl-N-nitrosourea (ENU)). These agents can mutate both replicating and non-replicating DNA. In contrast, a base analogue can mutate the DNA only when the analogue is incorporated during DNA replication. Each of these classes of chemical mutagens has certain effects that then lead to transitions, transversions, or deletions.
Agents that form DNA adducts (e.g., ochratoxin A)
DNA intercalating agents (e.g., ethidium bromide)
DNA crosslinkers
Oxidative damage
Nitrous acid converts amine groups on A and C to diazo groups, altering their hydrogen bonding patterns, which leads to incorrect base pairing during replication.
Radiation
Ultraviolet light (UV) (including non-ionizing radiation). Two nucleotide bases in DNA—cytosine and thymine—are most vulnerable to radiation that can change their properties. UV light can induce adjacent pyrimidine bases in a DNA strand to become covalently joined as a pyrimidine dimer. UV radiation, in particular longer-wave UVA, can also cause oxidative damage to DNA.
Ionizing radiation. Exposure to ionizing radiation, such as gamma radiation, can result in mutation, possibly resulting in cancer or death.
Whereas mutations were formerly assumed to occur by chance or to be induced by mutagens, molecular mechanisms of mutation have been discovered in bacteria and across the tree of life. As S. Rosenberg states, "These mechanisms reveal a picture of highly regulated mutagenesis, up-regulated temporally by stress responses and activated when cells/organisms are maladapted to their environments—when stressed—potentially accelerating adaptation." Since they are self-induced mutagenic mechanisms that increase the adaptation rate of organisms, they have sometimes been termed adaptive mutagenesis mechanisms; they include the SOS response in bacteria, ectopic intrachromosomal recombination and other chromosomal events such as duplications.
Classification of types
By effect on structure
The sequence of a gene can be altered in a number of ways. Gene mutations have varying effects on health depending on where they occur and whether they alter the function of essential proteins.
Mutations in the structure of genes can be classified into several types.
Large-scale mutations
Large-scale mutations in chromosomal structure include:
Amplifications (or gene duplications) and repetitions of a chromosomal segment, or the presence of an extra piece of a chromosome: a broken piece of a chromosome may become attached to a homologous or non-homologous chromosome, so that some of the genes are present in more than two doses, leading to multiple copies of chromosomal regions and increasing the dosage of the genes located within them.
Polyploidy, duplication of entire sets of chromosomes, potentially resulting in a separate breeding population and speciation.
Deletions of large chromosomal regions, leading to loss of the genes within those regions.
Mutations whose effect is to juxtapose previously separate pieces of DNA, potentially bringing together separate genes to form functionally distinct fusion genes (e.g., bcr-abl).
Large-scale changes to the structure of chromosomes, called chromosomal rearrangements, can lead to a decrease in fitness but also to speciation in isolated, inbred populations. These include:
Chromosomal translocations: interchange of genetic parts from nonhomologous chromosomes.
Chromosomal inversions: reversing the orientation of a chromosomal segment.
Non-homologous chromosomal crossover.
Interstitial deletions: an intra-chromosomal deletion that removes a segment of DNA from a single chromosome, thereby apposing previously distant genes. For example, cells isolated from a human astrocytoma, a type of brain tumour, were found to have a chromosomal deletion removing sequences between the Fused in Glioblastoma (FIG) gene and the receptor tyrosine kinase (ROS), producing a fusion protein (FIG-ROS). The abnormal FIG-ROS fusion protein has constitutively active kinase activity that causes oncogenic transformation (a transformation from normal cells to cancer cells).
Loss of heterozygosity: loss of one allele, either by a deletion or a genetic recombination event, in an organism that previously had two different alleles.
Small-scale mutations
Small-scale mutations affect a gene in one or a few nucleotides. (If only a single nucleotide is affected, they are called point mutations.) Small-scale mutations include:
Insertions add one or more extra nucleotides into the DNA. They are usually caused by transposable elements, or errors during replication of repeating elements. Insertions in the coding region of a gene may alter splicing of the mRNA (splice site mutation), or cause a shift in the reading frame (frameshift), both of which can significantly alter the gene product. Insertions can be reversed by excision of the transposable element.
Deletions remove one or more nucleotides from the DNA. Like insertions, these mutations can alter the reading frame of the gene. In general, they are irreversible: Though exactly the same sequence might, in theory, be restored by an insertion, transposable elements able to revert a very short deletion (say 1–2 bases) in any location either are highly unlikely to exist or do not exist at all.
Substitution mutations, often caused by chemicals or malfunction of DNA replication, exchange a single nucleotide for another. These changes are classified as transitions or transversions. Most common is the transition that exchanges a purine for a purine (A ↔ G) or a pyrimidine for a pyrimidine, (C ↔ T). A transition can be caused by nitrous acid, base mispairing, or mutagenic base analogues such as BrdU. Less common is a transversion, which exchanges a purine for a pyrimidine or a pyrimidine for a purine (C/T ↔ A/G). An example of a transversion is the conversion of adenine (A) into a cytosine (C). Point mutations are modifications of single base pairs of DNA or other small base pairs within a gene. A point mutation can be reversed by another point mutation, in which the nucleotide is changed back to its original state (true reversion) or by second-site reversion (a complementary mutation elsewhere that results in regained gene functionality). As discussed below, point mutations that occur within the protein coding region of a gene may be classified as synonymous or nonsynonymous substitutions, the latter of which in turn can be divided into missense or nonsense mutations.
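The transition/transversion distinction just described can be made explicit with a small helper (a minimal sketch; it classifies single-base changes only and does not handle real sequence data):

```python
# Sketch: classify a single-nucleotide substitution as a transition
# (purine <-> purine or pyrimidine <-> pyrimidine) or a transversion
# (purine <-> pyrimidine), following the definitions above.

PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def classify_substitution(ref, alt):
    ref, alt = ref.upper(), alt.upper()
    if ref == alt:
        return "not a substitution"
    same_class = ({ref, alt} <= PURINES) or ({ref, alt} <= PYRIMIDINES)
    return "transition" if same_class else "transversion"

print(classify_substitution("A", "G"))  # transition  (purine -> purine)
print(classify_substitution("C", "T"))  # transition  (pyrimidine -> pyrimidine)
print(classify_substitution("A", "C"))  # transversion (the adenine -> cytosine example above)
```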
By impact on protein sequence
[Figure] The structure of a eukaryotic protein-coding gene, showing regulatory regions, introns, and coding regions across four stages: DNA, initial mRNA product, mature mRNA, and protein. A mutation in the protein coding region (red) can result in a change in the amino acid sequence. Mutations in other areas of the gene can have diverse effects. Changes within regulatory sequences (yellow and blue) can affect transcriptional and translational regulation of gene expression.
[Figure] Selection of disease-causing mutations, in a standard table of the genetic code of amino acids.
The effect of a mutation on protein sequence depends in part on where in the genome it occurs, especially whether it is in a coding or non-coding region. Mutations in the non-coding regulatory sequences of a gene, such as promoters, enhancers, and silencers, can alter levels of gene expression, but are less likely to alter the protein sequence. Mutations within introns and in regions with no known biological function (e.g. pseudogenes, retrotransposons) are generally neutral, having no effect on phenotype – though intron mutations could alter the protein product if they affect mRNA splicing.
Mutations that occur in coding regions of the genome are more likely to alter the protein product, and can be categorized by their effect on amino acid sequence:
A frameshift mutation is caused by insertion or deletion of a number of nucleotides that is not evenly divisible by three from a DNA sequence. Due to the triplet nature of gene expression by codons, the insertion or deletion can disrupt the reading frame, or the grouping of the codons, resulting in a completely different translation from the original. The earlier in the sequence the deletion or insertion occurs, the more altered the protein produced is. (For example, the code CCU GAC UAC CUA codes for the amino acids proline, aspartic acid, tyrosine, and leucine. If the U in CCU were deleted, the resulting sequence would be CCG ACU ACC UAx, which would instead code for proline, threonine, threonine, and part of another amino acid or perhaps a stop codon, where the x stands for the following nucleotide. A short code sketch of this example appears after this list.) By contrast, any insertion or deletion that is evenly divisible by three is termed an in-frame mutation.
A point substitution mutation results in a change in a single nucleotide and can be either synonymous or nonsynonymous.
A synonymous substitution replaces a codon with another codon that codes for the same amino acid, so that the produced amino acid sequence is not modified. Synonymous mutations occur due to the degenerate nature of the genetic code. If this mutation does not result in any phenotypic effects, then it is called silent, but not all synonymous substitutions are silent. (There can also be silent mutations in nucleotides outside of the coding regions, such as the introns, because the exact nucleotide sequence is not as crucial as it is in the coding regions, but these are not considered synonymous substitutions.)
A nonsynonymous substitution replaces a codon with another codon that codes for a different amino acid, so that the produced amino acid sequence is modified. Nonsynonymous substitutions can be classified as nonsense or missense mutations:
A missense mutation changes a nucleotide to cause substitution of a different amino acid. This in turn can render the resulting protein nonfunctional. Such mutations are responsible for diseases such as Epidermolysis bullosa, sickle-cell disease, and SOD1-mediated ALS. On the other hand, if a missense mutation occurs in an amino acid codon that results in the use of a different, but chemically similar, amino acid, then sometimes little or no change is rendered in the protein. For example, a change from AAA to AGA will encode arginine, a chemically similar molecule to the intended lysine. In this latter case the mutation will have little or no effect on phenotype and therefore be neutral.
A nonsense mutation is a point mutation in a sequence of DNA that results in a premature stop codon, or a nonsense codon in the transcribed mRNA, and possibly a truncated, and often nonfunctional protein product. This sort of mutation has been linked to different diseases, such as congenital adrenal hyperplasia. (See Stop codon.)
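The sketch below works through the frameshift example given in the list above and contrasts it with a missense and a nonsense substitution. It is an illustration only: the codon table is deliberately truncated to the codons used here, and the specific substitutions (GAC→GAA and UAC→UAA) are invented for the demo rather than taken from the text, although their amino acid assignments follow the standard genetic code.

```python
# Sketch: translate the example RNA "CCUGACUACCUA" with a partial codon
# table, then show how a 1-nt deletion (frameshift), a missense change,
# and a nonsense change alter the product.

CODE = {
    "CCU": "Pro", "CCG": "Pro", "GAC": "Asp", "UAC": "Tyr", "CUA": "Leu",
    "ACU": "Thr", "ACC": "Thr", "GAA": "Glu", "UAA": "Stop",
}

def translate(rna):
    out = []
    for i in range(0, len(rna) - len(rna) % 3, 3):
        aa = CODE.get(rna[i:i + 3], "?")   # "?" = codon not in this demo table
        if aa == "Stop":
            out.append("Stop")
            break
        out.append(aa)
    return "-".join(out)

original = "CCUGACUACCUA"
print("original:  ", translate(original))                     # Pro-Asp-Tyr-Leu
print("frameshift:", translate(original[:2] + original[3:]))  # delete the U of CCU -> Pro-Thr-Thr
print("missense:  ", translate("CCUGAAUACCUA"))               # GAC -> GAA: Asp replaced by Glu
print("nonsense:  ", translate("CCUGACUAACUA"))               # UAC -> UAA: premature stop codon
```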
By effect on function
A mutation affects function when it changes the way the mutated protein works with its direct interactors, which can be other proteins, molecules, nucleic acids, and so on. Many mutations fall into this category; depending on the specific nature of the change, they are classified into the types listed below.
Loss-of-function mutations, also called inactivating mutations, result in the gene product having less or no function (being partially or wholly inactivated). When the allele has a complete loss of function (null allele), it is often called an amorph or amorphic mutation in Muller's morphs schema. Phenotypes associated with such mutations are most often recessive. Exceptions are when the organism is haploid, or when the reduced dosage of a normal gene product is not enough for a normal phenotype (this is called haploinsufficiency). Examples of diseases caused by a loss-of-function mutation include Gitelman syndrome and cystic fibrosis.
Gain-of-function mutations, also called activating mutations, change the gene product such that its effect gets stronger (enhanced activation) or even is superseded by a different and abnormal function. When the new allele is created, a heterozygote containing the newly created allele as well as the original will express the new allele; genetically this defines the mutations as dominant phenotypes. Several of Muller's morphs correspond to gain of function, including hypermorph (increased gene expression) and neomorph (novel function).
Dominant negative mutations (also called anti-morphic mutations) have an altered gene product that acts antagonistically to the wild-type allele. These mutations usually result in an altered molecular function (often inactive) and are characterized by a dominant or semi-dominant phenotype. In humans, dominant negative mutations have been implicated in cancer (e.g., mutations in genes p53, ATM, CEBPA, and PPARgamma). Marfan syndrome is caused by mutations in the FBN1 gene, located on chromosome 15, which encodes fibrillin-1, a glycoprotein component of the extracellular matrix. Marfan syndrome is also an example of dominant negative mutation and haploinsufficiency.
Lethal mutations result in rapid organismal death when occurring during development and cause significant reductions of life expectancy for developed organisms. An example of a disease that is caused by a dominant lethal mutation is Huntington's disease.
Null mutations, also known as Amorphic mutations, are a form of loss-of-function mutations that completely prohibit the gene's function. The mutation leads to a complete loss of operation at the phenotypic level, also causing no gene product to be formed. Atopic eczema and dermatitis syndrome are common diseases caused by a null mutation of the gene that activates filaggrin.
Suppressor mutations are mutations that cause a double mutant to appear normal. In suppressor mutations the phenotypic activity of a different mutation is completely suppressed, thus causing the double mutant to look normal. There are two types of suppressor mutations: intragenic and extragenic. Intragenic mutations occur in the gene where the first mutation occurs, while extragenic mutations occur in a gene that interacts with the product of the first mutation. A common disease that results from this type of mutation is Alzheimer's disease.
Neomorphic mutations are a subset of gain-of-function mutations and are characterized by the synthesis of a new protein product. The newly synthesized product typically shows a novel pattern of gene expression or a novel molecular function. The result of a neomorphic mutation is that the gene where the mutation occurs undergoes a complete change in function.
A back mutation or reversion is a point mutation that restores the original sequence and hence the original phenotype.
By effect on fitness (harmful, beneficial, neutral mutations)
In genetics, it is sometimes useful to classify mutations as either harmful or beneficial (or neutral):
A harmful, or deleterious, mutation decreases the fitness of the organism. Many, but not all, mutations in essential genes are harmful (if a mutation does not change the amino acid sequence in an essential protein, it is harmless in most cases).
A beneficial, or advantageous mutation increases the fitness of the organism. Examples are mutations that lead to antibiotic resistance in bacteria (which are beneficial for bacteria but usually not for humans).
A neutral mutation has no harmful or beneficial effect on the organism. Such mutations occur at a steady rate, forming the basis for the molecular clock. In the neutral theory of molecular evolution, neutral mutations provide genetic drift as the basis for most variation at the molecular level. In animals or plants, most mutations are neutral, given that the vast majority of their genomes is either non-coding or consists of repetitive sequences that have no obvious function ("junk DNA").
Large-scale quantitative mutagenesis screens, in which thousands of millions of mutations are tested, invariably find that a large fraction of mutations have harmful effects but always return a number of beneficial mutations as well. For instance, in a screen of all gene deletions in E. coli, 80% of mutations were negative, but 20% were positive, even though many had a very small effect on growth (depending on condition). Gene deletions involve removal of whole genes, so point mutations almost always have a much smaller effect. In a similar screen in Streptococcus pneumoniae, this time with transposon insertions, 76% of insertion mutants were classified as neutral, 16% had a significantly reduced fitness, and 6% were advantageous.
This classification is obviously relative and somewhat artificial: a harmful mutation can quickly turn into a beneficial mutation when conditions change. Also, there is a gradient from harmful/beneficial to neutral, as many mutations may have small and mostly negligible effects that become relevant only under certain conditions. Also, many traits are determined by hundreds of genes (or loci), so that each locus has only a minor effect. For instance, human height is determined by hundreds of genetic variants ("mutations"), but each of them has a very minor effect on height, apart from the impact of nutrition. Height (or size) itself may be more or less beneficial, as the huge range of sizes in animal or plant groups shows.
Distribution of fitness effects (DFE)
Attempts have been made to infer the distribution of fitness effects (DFE) using mutagenesis experiments and theoretical models applied to molecular sequence data. DFE, as used to determine the relative abundance of different types of mutations (i.e., strongly deleterious, nearly neutral or advantageous), is relevant to many evolutionary questions, such as the maintenance of genetic variation, the rate of genomic decay, the maintenance of outcrossing sexual reproduction as opposed to inbreeding and the evolution of sex and genetic recombination. DFE can also be tracked by tracking the skewness of the distribution of mutations with putatively severe effects as compared to the distribution of mutations with putatively mild or absent effect. In summary, the DFE plays an important role in predicting evolutionary dynamics. A variety of approaches have been used to study the DFE, including theoretical, experimental and analytical methods.
Mutagenesis experiment: The direct method to investigate the DFE is to induce mutations and then measure the mutational fitness effects, which has already been done in viruses, bacteria, yeast, and Drosophila. For example, most studies of the DFE in viruses used site-directed mutagenesis to create point mutations and measure relative fitness of each mutant. In Escherichia coli, one study used transposon mutagenesis to directly measure the fitness of a random insertion of a derivative of Tn10. In yeast, a combined mutagenesis and deep sequencing approach has been developed to generate high-quality systematic mutant libraries and measure fitness in high throughput. However, given that many mutations have effects too small to be detected and that mutagenesis experiments can detect only mutations of moderately large effect, DNA sequence analysis can provide valuable information about these mutations.
One of the earliest theoretical studies of the distribution of fitness effects was done by Motoo Kimura, an influential theoretical population geneticist. His neutral theory of molecular evolution proposes that most novel mutations will be highly deleterious, with a small fraction being neutral. A later proposal by Hiroshi Akashi proposed a bimodal model for the DFE, with modes centered around highly deleterious and neutral mutations. Both theories agree that the vast majority of novel mutations are neutral or deleterious and that advantageous mutations are rare, which has been supported by experimental results. One example is a study done on the DFE of random mutations in vesicular stomatitis virus. Out of all mutations, 39.6% were lethal, 31.2% were non-lethal deleterious, and 27.1% were neutral. Another example comes from a high throughput mutagenesis experiment with yeast. In this experiment it was shown that the overall DFE is bimodal, with a cluster of neutral mutations, and a broad distribution of deleterious mutations.
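As a toy illustration of such a bimodal DFE, the simulation below draws selection coefficients from a mixture of a tight near-neutral cluster and a broad deleterious tail, then reports the resulting class fractions. The mixture weights and distribution parameters are illustrative assumptions, not values fitted to the yeast or vesicular stomatitis virus data described above.

```python
import random

# A minimal simulation sketch of a bimodal DFE: a cluster of (nearly) neutral
# mutations plus a broad tail of deleterious ones. All parameters are
# illustrative assumptions, not fitted values.
random.seed(1)

def sample_fitness_effect():
    if random.random() < 0.3:                 # ~30% nearly neutral mutations
        return random.gauss(0.0, 0.005)       # tight cluster around s = 0
    return -random.expovariate(1 / 0.2)       # broad deleterious tail (mean s = -0.2)

effects = [sample_fitness_effect() for _ in range(10_000)]
strongly_deleterious = sum(s <= -0.9 for s in effects) / len(effects)
near_neutral = sum(abs(s) < 0.01 for s in effects) / len(effects)
print(f"strongly deleterious: {strongly_deleterious:.1%}, near-neutral: {near_neutral:.1%}")
```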
Though relatively few mutations are advantageous, those that are play an important role in evolutionary changes. Like neutral mutations, weakly selected advantageous mutations can be lost due to random genetic drift, but strongly selected advantageous mutations are more likely to be fixed. Knowing the DFE of advantageous mutations may lead to increased ability to predict the evolutionary dynamics. Theoretical work on the DFE for advantageous mutations has been done by John H. Gillespie and H. Allen Orr. They proposed that the distribution for advantageous mutations should be exponential under a wide range of conditions, which, in general, has been supported by experimental studies, at least for strongly selected advantageous mutations.
In general, it is accepted that the majority of mutations are neutral or deleterious, with advantageous mutations being rare; however, the proportion of types of mutations varies between species. This indicates two important points: first, the proportion of effectively neutral mutations is likely to vary between species, resulting from dependence on effective population size; second, the average effect of deleterious mutations varies dramatically between species. In addition, the DFE also differs between coding regions and noncoding regions, with the DFE of noncoding DNA containing more weakly selected mutations.
By inheritance
In multicellular organisms with dedicated reproductive cells, mutations can be subdivided into germline mutations, which can be passed on to descendants through their reproductive cells, and somatic mutations (also called acquired mutations), which involve cells outside the dedicated reproductive group and which are not usually transmitted to descendants.
Diploid organisms (e.g., humans) contain two copies of each gene—a paternal and a maternal allele. Based on the occurrence of mutation on each chromosome, we may classify mutations into three types. A wild type or homozygous non-mutated organism is one in which neither allele is mutated.
A heterozygous mutation is a mutation of only one allele.
A homozygous mutation is an identical mutation of both the paternal and maternal alleles.
Compound heterozygous mutations or a genetic compound consists of two different mutations in the paternal and maternal alleles.
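A minimal sketch of how these zygosity classes follow from the two alleles is shown below; the allele labels and function name are illustrative, not a standard genetics API.

```python
# Classify zygosity from the mutation (if any) carried on each allele.
# None means the allele is wild type; the labels are hypothetical examples.
def zygosity(paternal, maternal):
    if paternal is None and maternal is None:
        return "wild type (homozygous non-mutated)"
    if paternal is None or maternal is None:
        return "heterozygous"
    if paternal == maternal:
        return "homozygous"
    return "compound heterozygous"

print(zygosity(None, "c.76A>T"))         # heterozygous
print(zygosity("c.76A>T", "c.76A>T"))    # homozygous
print(zygosity("c.76A>T", "c.100G>C"))   # compound heterozygous
```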
Germline mutation
A germline mutation in the reproductive cells of an individual gives rise to a constitutional mutation in the offspring, that is, a mutation that is present in every cell. A constitutional mutation can also occur very soon after fertilization, or continue from a previous constitutional mutation in a parent. A germline mutation can be passed down through subsequent generations of organisms.
The distinction between germline and somatic mutations is important in animals that have a dedicated germline to produce reproductive cells. However, it is of little value in understanding the effects of mutations in plants, which lack a dedicated germline. The distinction is also blurred in those animals that reproduce asexually through mechanisms such as budding, because the cells that give rise to the daughter organisms also give rise to that organism's germline.
A new germline mutation not inherited from either parent is called a de novo mutation.
Somatic mutation
A change in the genetic structure that is not inherited from a parent, and also not passed to offspring, is called a somatic mutation. Somatic mutations are not inherited by an organism's offspring because they do not affect the germline. However, they are passed down to all the progeny of a mutated cell within the same organism during mitosis. A major section of an organism therefore might carry the same mutation. These types of mutations are usually prompted by environmental causes, such as ultraviolet radiation or any exposure to certain harmful chemicals, and can cause diseases including cancer.
With plants, some somatic mutations can be propagated without the need for seed production, for example, by grafting and stem cuttings. These types of mutations have led to new types of fruits, such as the "Delicious" apple and the "Washington" navel orange.
Human and mouse somatic cells have a mutation rate more than ten times higher than the germline mutation rate for both species; mice have a higher rate of both somatic and germline mutations per cell division than humans. The disparity in mutation rate between the germline and somatic tissues likely reflects the greater importance of genome maintenance in the germline than in the soma.
Special classes
A conditional mutation is a mutation that has a wild-type (or less severe) phenotype under certain "permissive" environmental conditions and a mutant phenotype under certain "restrictive" conditions. For example, a temperature-sensitive mutation can cause cell death at high temperature (restrictive condition), but might have no deleterious consequences at a lower temperature (permissive condition). These mutations are non-autonomous, as their manifestation depends upon the presence of certain conditions, as opposed to other mutations, which appear autonomously. The permissive conditions may be temperature, certain chemicals, light or mutations in other parts of the genome. In vivo mechanisms like transcriptional switches can create conditional mutations. For instance, association of a steroid-binding domain can create a transcriptional switch that can change the expression of a gene based on the presence of a steroid ligand. Conditional mutations have applications in research, as they allow control over gene expression. This is especially useful for studying diseases in adults by allowing expression after a certain period of growth, thus eliminating the deleterious effects of gene expression seen during developmental stages in model organisms. DNA recombinase systems like Cre-Lox recombination used in association with promoters that are activated under certain conditions can generate conditional mutations. Dual recombinase technology can be used to induce multiple conditional mutations to study diseases that manifest as a result of simultaneous mutations in multiple genes. Certain inteins have been identified that splice only at certain permissive temperatures, leading to improper protein synthesis and thus loss-of-function mutations at other temperatures. Conditional mutations may also be used in genetic studies associated with ageing, as the expression can be changed after a certain time period in the organism's lifespan.
Replication timing quantitative trait loci affect DNA replication.
Nomenclature
In order to categorize a mutation as such, the "normal" sequence must be obtained from the DNA of a "normal" or "healthy" organism (as opposed to a "mutant" or "sick" one); it should be identified and reported, and ideally it should be made publicly available for a straightforward nucleotide-by-nucleotide comparison and agreed upon by the scientific community or by a group of expert geneticists and biologists, who have the responsibility of establishing the standard or so-called "consensus" sequence. This step requires a tremendous scientific effort. Once the consensus sequence is known, the mutations in a genome can be pinpointed, described, and classified. The committee of the Human Genome Variation Society (HGVS) has developed the standard human sequence variant nomenclature, which should be used by researchers and DNA diagnostic centers to generate unambiguous mutation descriptions. In principle, this nomenclature can also be used to describe mutations in other organisms. The nomenclature specifies the type of mutation and base or amino acid changes.
Nucleotide substitution (e.g., 76A>T) – The number is the position of the nucleotide from the 5' end; the first letter represents the wild-type nucleotide, and the second letter represents the nucleotide that replaced the wild type. In the given example, the adenine at the 76th position was replaced by a thymine.
If it becomes necessary to differentiate between mutations in genomic DNA, mitochondrial DNA, and RNA, a simple convention is used. For example, if the 100th base of a nucleotide sequence mutated from G to C, then it would be written as g.100G>C if the mutation occurred in genomic DNA, m.100G>C if the mutation occurred in mitochondrial DNA, or r.100g>c if the mutation occurred in RNA. Note that, for mutations in RNA, the nucleotide code is written in lower case.
Amino acid substitution (e.g., D111E) – The first letter is the one letter code of the wild-type amino acid, the number is the position of the amino acid from the N-terminus, and the second letter is the one letter code of the amino acid present in the mutation. Nonsense mutations are represented with an X for the second amino acid (e.g. D111X).
Amino acid deletion (e.g., ΔF508) – The Greek letter Δ (delta) indicates a deletion. The letter refers to the amino acid present in the wild type and the number is the position from the N terminus of the amino acid were it to be present as in the wild type.
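The sketch below parses the simple notations described above (nucleotide substitutions with optional g./m./r. prefixes, amino acid substitutions, and ΔF508-style deletions). It is a rough illustration only and does not implement the full HGVS standard; the regular expressions and function name are assumptions introduced here.

```python
import re

# Rough parser for the simple mutation notations described in this section.
# Not a full HGVS implementation; patterns cover only the examples above.
SUBSTITUTION = re.compile(r"^(?:(g|m|r)\.)?(\d+)([ACGTacgt])>([ACGTacgt])$")
AA_CHANGE = re.compile(r"^([A-Z])(\d+)([A-Z])$")            # e.g. D111E, D111X (nonsense)
AA_DELETION = re.compile(r"^(?:Δ|delta)([A-Z])(\d+)$", re.IGNORECASE)

def parse(notation):
    if m := SUBSTITUTION.match(notation):
        molecule = {"g": "genomic DNA", "m": "mitochondrial DNA",
                    "r": "RNA", None: "unspecified"}[m.group(1)]
        return ("nucleotide substitution", molecule,
                int(m.group(2)), m.group(3).upper(), m.group(4).upper())
    if m := AA_CHANGE.match(notation):
        kind = "nonsense mutation" if m.group(3) == "X" else "amino acid substitution"
        return (kind, int(m.group(2)), m.group(1), m.group(3))
    if m := AA_DELETION.match(notation):
        return ("amino acid deletion", int(m.group(2)), m.group(1))
    raise ValueError(f"unrecognised notation: {notation}")

print(parse("76A>T"))      # ('nucleotide substitution', 'unspecified', 76, 'A', 'T')
print(parse("m.100G>C"))   # ('nucleotide substitution', 'mitochondrial DNA', 100, 'G', 'C')
print(parse("D111E"))      # ('amino acid substitution', 111, 'D', 'E')
print(parse("ΔF508"))      # ('amino acid deletion', 508, 'F')
```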
Mutation rates
Mutation rates vary substantially across species, and the evolutionary forces that generally determine mutation are the subject of ongoing investigation.
In humans, the mutation rate is about 50–90 de novo mutations per genome per generation, that is, each human accumulates about 50–90 novel mutations that were not present in their parents. This number has been established by sequencing thousands of human trios, that is, two parents and at least one child.
The genomes of RNA viruses are based on RNA rather than DNA. The RNA viral genome can be double-stranded (as in DNA) or single-stranded. In some of these viruses (such as the single-stranded human immunodeficiency virus), replication occurs quickly, and there are no mechanisms to check the genome for accuracy. This error-prone process often results in mutations.
The rate of de novo mutations, whether germline or somatic, varies among organisms, and individuals within the same species can also exhibit different mutation rates. Overall, rates of de novo mutation are low compared to those of inherited variants, which makes them a rare form of genetic variation. Many observations have found that de novo mutation rates increase with paternal age. In sexually reproducing organisms, this is commonly attributed to the comparatively larger number of cell divisions in the paternal germline: errors during the DNA replication of gametogenesis, amplified by the rapid production of sperm cells, create more opportunities for de novo mutations to arise, while the short time between cell divisions limits the efficiency of the DNA repair machinery. Rates of de novo mutation affecting an organism during its development can also increase with certain environmental factors. For example, exposure to radioactive elements at certain intensities can damage an organism's genome, heightening rates of mutation. In humans, skin cancer arising during a person's lifetime is induced by overexposure to UV radiation, which causes mutations in the genome of skin cells.
Randomness of mutations
There is a widespread assumption that mutations are (entirely) "random" with respect to their consequences (in terms of probability). This was shown to be wrong as mutation frequency can vary across regions of the genome, with such DNA repair- and mutation-biases being associated with various factors. For instance, Monroe and colleagues demonstrated that—in the studied plant (Arabidopsis thaliana)—more important genes mutate less frequently than less important ones. They demonstrated that mutation is "non-random in a way that benefits the plant". Additionally, previous experiments typically used to demonstrate mutations being random with respect to fitness (such as the Fluctuation Test and Replica plating) have been shown to only support the weaker claim that those mutations are random with respect to external selective constraints, not fitness as a whole.
Disease causation
Changes in DNA caused by mutation in a coding region of DNA can cause errors in protein sequence that may result in partially or completely non-functional proteins. Each cell, in order to function correctly, depends on thousands of proteins to function in the right places at the right times. When a mutation alters a protein that plays a critical role in the body, a medical condition can result. One study on the comparison of genes between different species of Drosophila suggests that if a mutation does change a protein, the mutation will most likely be harmful, with an estimated 70 per cent of amino acid polymorphisms having damaging effects, and the remainder being either neutral or weakly beneficial. Some mutations alter a gene's DNA base sequence but do not change the protein made by the gene. Studies have shown that only 7% of point mutations in noncoding DNA of yeast are deleterious and 12% in coding DNA are deleterious. The rest of the mutations are either neutral or slightly beneficial.
Inherited disorders
If a mutation is present in a germ cell, it can give rise to offspring that carries the mutation in all of its cells. This is the case in hereditary diseases. In particular, if there is a mutation in a DNA repair gene within a germ cell, humans carrying such germline mutations may have an increased risk of cancer. A list of 34 such germline mutations is given in the article DNA repair-deficiency disorder. An example of one is albinism, a mutation that occurs in the OCA1 or OCA2 gene. Individuals with this disorder are more prone to many types of cancers, other disorders and have impaired vision.
DNA damage can cause an error when the DNA is replicated, and this replication error can cause a gene mutation that, in turn, could cause a genetic disorder. DNA damage is repaired by the DNA repair system of the cell. Each cell has a number of pathways through which enzymes recognize and repair damage in DNA. Because DNA can be damaged in many ways, the process of DNA repair is an important way in which the body protects itself from disease. Once DNA damage has given rise to a mutation, the mutation cannot be repaired.
Role in carcinogenesis
On the other hand, a mutation may occur in a somatic cell of an organism. Such mutations will be present in all descendants of this cell within the same organism. The accumulation of certain mutations over generations of somatic cells is part of the cause of malignant transformation, from normal cell to cancer cell.
Cells with heterozygous loss-of-function mutations (one functional copy of a gene and one mutated copy) may function normally with the unmutated copy until the functional copy has itself been spontaneously somatically mutated. This kind of mutation happens often in living organisms, but it is difficult to measure the rate. Measuring this rate is important in predicting the rate at which people may develop cancer.
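As a toy illustration of why this rate matters, the sketch below computes the chance that a cell lineage loses its remaining functional copy after a given number of divisions, assuming a fixed, hypothetical per-division inactivation probability and ignoring clonal expansion and selection.

```python
# Toy "second hit" sketch: probability that the remaining functional copy is
# lost after d cell divisions, given a per-division inactivation probability mu.
# The value of mu is a hypothetical assumption, not a measured rate.
def second_hit_probability(mu, divisions):
    return 1.0 - (1.0 - mu) ** divisions

mu = 1e-6   # assumed chance per division of inactivating the remaining good copy
for d in (10, 100, 1_000, 10_000):
    print(f"{d:>6} divisions: P(second hit) ≈ {second_hit_probability(mu, d):.2e}")
```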
Point mutations may arise from spontaneous mutations that occur during DNA replication. The rate of mutation may be increased by mutagens. Mutagens can be physical, such as radiation from UV rays, X-rays or extreme heat, or chemical (molecules that misplace base pairs or disrupt the helical shape of DNA). Mutagens associated with cancers are often studied to learn about cancer and its prevention.
Beneficial and conditional mutations
Although mutations that cause changes in protein sequences can be harmful to an organism, on occasion the effect may be positive in a given environment. In this case, the mutation may enable the mutant organism to withstand particular environmental stresses better than wild-type organisms, or to reproduce more quickly. In these cases a mutation will tend to become more common in a population through natural selection. That said, the same mutation can be beneficial in one condition and disadvantageous in another. Examples include the following:
HIV resistance: a specific 32 base pair deletion in human CCR5 (CCR5-Δ32) confers HIV resistance to homozygotes and delays AIDS onset in heterozygotes. One possible explanation of the etiology of the relatively high frequency of CCR5-Δ32 in the European population is that it conferred resistance to the bubonic plague in mid-14th century Europe. People with this mutation were more likely to survive infection; thus its frequency in the population increased. This theory could explain why this mutation is not found in Southern Africa, which remained untouched by bubonic plague. A newer theory suggests that the selective pressure on the CCR5-Δ32 mutation was caused by smallpox instead of the bubonic plague.
Malaria resistance: An example of a harmful mutation is sickle-cell disease, a blood disorder in which the body produces an abnormal type of the oxygen-carrying substance haemoglobin in the red blood cells. One-third of all indigenous inhabitants of Sub-Saharan Africa carry the allele, because, in areas where malaria is common, there is a survival value in carrying only a single sickle-cell allele (sickle cell trait). Those with only one of the two alleles of the sickle-cell disease are more resistant to malaria, since the infestation by the malaria parasite Plasmodium is halted by the sickling of the cells that it infests.
Antibiotic resistance: Practically all bacteria develop antibiotic resistance when exposed to antibiotics. In fact, bacterial populations already contain such mutations, which are then selected under antibiotic treatment. Obviously, such mutations are beneficial only for the bacteria, not for the infected host.
Lactase persistence: A mutation allowed humans to express the enzyme lactase after they are naturally weaned from breast milk, allowing adults to digest lactose; this is likely one of the most beneficial mutations in recent human evolution.
Role in evolution
By introducing novel genetic qualities to a population of organisms, de novo mutations play a critical role in the combined forces of evolutionary change. However, mutation on its own is often considered a comparatively "weak" evolutionary force. Although the random emergence of mutations provides the basis for genetic variation across all organic life, this force must be considered alongside all other evolutionary forces at play: whether spontaneous de novo mutations contribute to events such as speciation depends on factors introduced by natural selection, gene flow, and genetic drift. For example, smaller populations with heavy mutational input (high rates of mutation) are prone to increases in genetic variation, which can lead to speciation in future generations. In contrast, larger populations tend to see smaller effects of newly introduced mutated traits; in these conditions, selective forces diminish the frequency of mutated alleles, which are most often deleterious, over time.
Compensated pathogenic deviations
Compensated pathogenic deviations refer to amino acid residues in a protein sequence that are pathogenic in one species but are wild type residues in the functionally equivalent protein in another species. Although the amino acid residue is pathogenic in the first species, it is not so in the second species because its pathogenicity is compensated by one or more amino acid substitutions in the second species. The compensatory mutation can occur in the same protein or in another protein with which it interacts.
It is important to understand the effects of compensatory mutations in the context of fixed deleterious mutations, because the fixation of deleterious mutations decreases population fitness. Effective population size refers to the portion of a population that is reproducing. An increase in this population size has been correlated with a decreased rate of genetic drift. The position of a population relative to the critical effective population size determines the effect deleterious alleles will have on fitness: if the population is below the critical effective size, fitness will decrease drastically; if it is above the critical effective size, fitness can increase despite deleterious mutations, owing to compensatory alleles.
Compensatory mutations in RNA
As the function of an RNA molecule is dependent on its structure, the structure of RNA molecules is evolutionarily conserved. Therefore, any mutation that alters the stable structure of an RNA molecule must be compensated by other, compensatory mutations. In the context of RNA, the sequence of the RNA can be considered its 'genotype' and the structure of the RNA can be considered its 'phenotype'. Since RNAs have a simpler composition than proteins, the structure of RNA molecules can be computationally predicted with a high degree of accuracy. Because of this convenience, compensatory mutations have been studied in computational simulations using RNA folding algorithms.
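A minimal sketch of this genotype/phenotype analogy is given below: the "phenotype" is reduced to a list of paired positions in a toy hairpin, and a mutation counts as compensated if Watson–Crick (or G–U wobble) pairing is restored at every paired site. The hairpin sequence and pair list are illustrative and do not come from a real RNA folding algorithm.

```python
# Toy model of compensatory mutation in an RNA stem: the "phenotype" is a set
# of paired positions, and pairing must be Watson-Crick or G-U wobble.
# The hairpin and pair list are illustrative assumptions, not real folding output.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def stem_intact(seq, paired_positions):
    return all((seq[i], seq[j]) in PAIRS for i, j in paired_positions)

hairpin = list("GGGAAAUCCC")               # stem: positions 0-2 pair with 9-7
stem = [(0, 9), (1, 8), (2, 7)]

print(stem_intact(hairpin, stem))          # True: wild-type structure

mutant = hairpin.copy(); mutant[2] = "A"   # deleterious: G->A breaks a G-C pair
print(stem_intact(mutant, stem))           # False: structure (phenotype) disrupted

compensated = mutant.copy(); compensated[7] = "U"   # compensatory C->U restores an A-U pair
print(stem_intact(compensated, stem))      # True: structure restored
```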
Evolutionary mechanism of compensation
Compensatory mutations can be explained by the genetic phenomenon of epistasis, whereby the phenotypic effect of one mutation depends upon mutation(s) at other loci. While epistasis was originally conceived in the context of interactions between different genes, intragenic epistasis has also been studied recently. The existence of compensated pathogenic deviations can be explained by 'sign epistasis', in which the effect of a deleterious mutation can be compensated by the presence of an epistatic mutation at another locus. For a given protein, a deleterious mutation (D) and a compensatory mutation (C) can be considered, where C can be in the same protein as D or in a different interacting protein depending on the context. The fitness effect of C itself could be neutral or somewhat deleterious, such that it can still exist in the population, while the effect of D is deleterious to the extent that it cannot exist in the population. However, when C and D co-occur, the combined fitness effect becomes neutral or positive. Thus, compensatory mutations can bring novelty to proteins by forging new pathways of protein evolution: they allow individuals to travel from one fitness peak to another through valleys of lower fitness.
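The toy fitness table below illustrates this sign-epistasis pattern for the D and C mutations just described; the numerical fitness values are assumptions chosen for illustration, not measurements.

```python
# Toy sign-epistasis table: D alone is strongly deleterious, C alone is roughly
# neutral, but the D+C combination is neutral again. Values are assumptions.
fitness = {
    frozenset():           1.00,   # wild type
    frozenset({"C"}):      0.99,   # compensatory mutation alone: ~neutral
    frozenset({"D"}):      0.55,   # deleterious mutation alone: strongly harmful
    frozenset({"C", "D"}): 1.00,   # together: the effect of D is compensated
}

for genotype, w in fitness.items():
    label = "+".join(sorted(genotype)) or "wild type"
    print(f"{label:10s} relative fitness = {w:.2f}")
```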
DePristo et al. 2005 outlined two models to explain the dynamics of compensated pathogenic deviations (CPDs). In the first hypothesis, P is a pathogenic amino acid mutation and C is a neutral compensatory mutation; under these conditions, if the pathogenic mutation arises after the compensatory mutation, then P can become fixed in the population. The second model of CPDs states that P and C are both deleterious mutations, resulting in fitness valleys when the mutations occur simultaneously. Using publicly available data, Ferrer-Costa et al. 2007 obtained compensatory mutation and human pathogenic mutation datasets, which were characterized to determine what causes CPDs. The results indicate that structural constraints and the location in the protein structure determine whether compensated mutations will occur.
Experimental evidence of compensatory mutations
Experiment in bacteria
Lunzer et al. tested the outcome of swapping divergent amino acids between two orthologous isopropylmalate dehydrogenase (IMDH) proteins. They substituted 168 amino acids in Escherichia coli IMDH that are wild-type residues in the IMDH of Pseudomonas aeruginosa. They found that over one third of these substitutions compromised IMDH enzymatic activity in the Escherichia coli genetic background. This demonstrated that identical amino acid states can result in different phenotypic states depending on the genetic background. Corrigan et al. 2011 demonstrated how Staphylococcus aureus was able to grow normally in the absence of lipoteichoic acid owing to compensatory mutations. Whole genome sequencing revealed that when cyclic-di-AMP phosphodiesterase (GdpP) was disrupted in this bacterium, the disruption compensated for the disappearance of this cell wall polymer, resulting in normal cell growth.
Research has shown that bacteria can gain drug resistance through compensatory mutations that do not impede, or have only a small effect on, fitness. Previous research from Gagneux et al. 2006 found that laboratory-grown Mycobacterium tuberculosis strains with rifampicin resistance have reduced fitness, whereas drug-resistant clinical strains of this pathogenic bacterium do not. Comas et al. 2012 used whole genome comparisons between clinical strains and lab-derived mutants to determine the role and contribution of compensatory mutations in resistance to rifampicin. Genome analysis revealed that rifampicin-resistant strains carry mutations in rpoA and rpoC. A similar study investigated the bacterial fitness associated with compensatory mutations in rifampin-resistant Escherichia coli. The results obtained from this study demonstrate that drug resistance is linked to bacterial fitness, as higher fitness costs are linked to greater transcription errors.
Experiment in virus
Gong et al. collected genotype data of influenza nucleoprotein from different timelines and ordered them temporally according to their time of origin. Then they isolated 39 amino acid substitutions that occurred at different times and introduced them into a genetic background that approximated the ancestral genotype. They found that 3 of the 39 substitutions significantly reduced the fitness of the ancestral background. Compensatory mutations are new mutations that arise and have a positive or neutral impact on a population's fitness. Previous research has shown that populations can compensate for detrimental mutations. Burch and Chao tested Fisher's geometric model of adaptive evolution by testing whether bacteriophage φ6 evolves by small steps. Their results showed that bacteriophage φ6 fitness declined rapidly and recovered in small steps. Viral nucleoproteins have been shown to avoid cytotoxic T lymphocytes (CTLs) through arginine-to-glycine substitutions. These substitutions reduce the fitness of viral nucleoproteins, but compensatory co-mutations impede fitness declines and help the virus avoid recognition by CTLs. Mutations can thus have three different effects: some are deleterious, some increase fitness through compensation, and some are counterbalancing, resulting in compensatory neutral mutations.
Application in human evolution and disease
In the human genome, the frequency and characteristics of de novo mutations have been studied as important contextual factors in our evolution. Compared to the human reference genome, a typical human genome varies at approximately 4.1 to 5.0 million loci, and the majority of this genetic diversity is shared by nearly 0.5% of the population. The typical human genome also contains 40,000 to 200,000 rare variants observed in less than 0.5% of the population, each of which must have arisen from at least one de novo germline mutation in the history of human evolution. De novo mutations have also been researched as playing a crucial role in the persistence of genetic disease in humans. With recent advances in next-generation sequencing (NGS), all types of de novo mutations within the genome can be directly studied, and their detection provides substantial insight into the causes of both rare and common genetic disorders. Currently, the best estimate of the average human germline SNV mutation rate is 1.18 × 10⁻⁸ per nucleotide per generation, corresponding to approximately 78 novel mutations per generation. The ability to conduct whole genome sequencing of parents and offspring allows the comparison of mutation rates between generations, narrowing down the possible origins of certain genetic disorders.
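As a back-of-the-envelope check that these two figures are consistent, the sketch below multiplies the per-nucleotide rate by an assumed diploid genome size of roughly 6.6 billion base pairs (two copies of about 3.3 Gb each); the genome size is an approximation introduced here, not a figure from the text.

```python
# Back-of-the-envelope consistency check for the figures quoted above.
# The haploid genome size of ~3.3e9 bp is an approximate assumption.
per_base_rate = 1.18e-8                 # germline SNV mutations per bp per generation
haploid_genome_bp = 3.3e9
diploid_bp = 2 * haploid_genome_bp      # each child inherits two genome copies

expected_de_novo = per_base_rate * diploid_bp
print(f"expected de novo SNVs per generation ≈ {expected_de_novo:.0f}")  # ≈ 78
```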
Marie Curie
Maria Salomea Skłodowska-Curie (7 November 1867 – 4 July 1934), known as Marie Curie, was a Polish and naturalised-French physicist and chemist who conducted pioneering research on radioactivity.
She was the first woman to win a Nobel Prize, the first person to win a Nobel Prize twice, and the only person to win a Nobel Prize in two scientific fields. Her husband, Pierre Curie, was a co-winner of her first Nobel Prize, making them the first married couple to win the Nobel Prize and launching the Curie family legacy of five Nobel Prizes. She was, in 1906, the first woman to become a professor at the University of Paris.
She was born in Warsaw, in what was then the Kingdom of Poland, part of the Russian Empire. She studied at Warsaw's clandestine Flying University and began her practical scientific training in Warsaw. In 1891, aged 24, she followed her elder sister Bronisława to study in Paris, where she earned her higher degrees and conducted her subsequent scientific work. In 1895, she married the French physicist Pierre Curie, and she shared the 1903 Nobel Prize in Physics with him and with the physicist Henri Becquerel for their pioneering work developing the theory of "radioactivity"—a term she coined. In 1906, Pierre Curie died in a Paris street accident. Marie won the 1911 Nobel Prize in Chemistry for her discovery of the elements polonium and radium, using techniques she invented for isolating radioactive isotopes.
Under her direction, the world's first studies were conducted into the treatment of neoplasms by the use of radioactive isotopes. She founded the Curie Institute in Paris in 1920, and the Curie Institute in Warsaw in 1932; both remain major medical research centres. During World War I, she developed mobile radiography units to provide X-ray services to field hospitals.
While a French citizen, Marie Skłodowska Curie, who used both surnames, never lost her sense of Polish identity. She taught her daughters the Polish language and took them on visits to Poland. She named the first chemical element she discovered polonium, after her native country.
Marie Curie died in 1934, aged 66, at the Sancellemoz sanatorium in Passy (Haute-Savoie), France, of aplastic anaemia, likely from exposure to radiation in the course of her scientific research and her radiological work at field hospitals during World War I. In addition to her Nobel Prizes, she received numerous other honours and tributes; in 1995 she became the first woman to be entombed on her own merits in the Paris Panthéon, and Poland declared 2011 the Year of Marie Curie during the International Year of Chemistry. She is the subject of numerous biographies.
Life and career
Early years
Maria Salomea Skłodowska was born in Warsaw, in Congress Poland in the Russian Empire, on 7 November 1867, the fifth and youngest child of well-known teachers Bronisława, née Boguska, and Władysław Skłodowski. The elder siblings of Maria (nicknamed Mania) were Zofia (born 1862, nicknamed Zosia), Józef (born 1863, nicknamed Józio), Bronisława (born 1865, nicknamed Bronia) and Helena (born 1866, nicknamed Hela).
On both the paternal and maternal sides, the family had lost their property and fortunes through patriotic involvements in Polish national uprisings aimed at restoring Poland's independence (the most recent had been the January Uprising of 1863–1865). This condemned the subsequent generation, including Maria and her elder siblings, to a difficult struggle to get ahead in life. Maria's paternal grandfather, Józef Skłodowski, had been principal of the Lublin primary school attended by Bolesław Prus,Monika Piątkowska, Prus: Śledztwo biograficzne (Prus: A Biographical Investigation), Kraków, Wydawnictwo Znak, 2017, pp. 49–50. who became a leading figure in Polish literature.
Władysław Skłodowski taught mathematics and physics, subjects that Maria was to pursue, and was also director of two Warsaw gymnasia (secondary schools) for boys. After Russian authorities eliminated laboratory instruction from the Polish schools, he brought much of the laboratory equipment home and instructed his children in its use. He was eventually fired by his Russian supervisors for pro-Polish sentiments and forced to take lower-paying posts; the family also lost money on a bad investment and eventually chose to supplement their income by lodging boys in the house. Maria's mother Bronisława operated a prestigious Warsaw boarding school for girls; she resigned from the position after Maria was born. She died of tuberculosis in May 1878, when Maria was ten years old. Less than three years earlier, Maria's oldest sibling, Zofia, had died of typhus contracted from a boarder. Maria's father was an atheist, her mother a devout Catholic. The deaths of Maria's mother and sister caused her to give up Catholicism and become agnostic.
When she was ten years old, Maria began attending J. Sikorska's boarding school; next she attended a gymnasium (secondary school) for girls, from which she graduated on 12 June 1883 with a gold medal. After a collapse, possibly due to depression, she spent the following year in the countryside with relatives of her father, and the next year with her father in Warsaw, where she did some tutoring. Unable to enrol in a regular institution of higher education because she was a woman, she and her sister Bronisława became involved with the clandestine Flying University (sometimes translated as "Floating University"), a Polish patriotic institution of higher learning that admitted women students.
Maria made an agreement with her sister, Bronisława, that she would give her financial assistance during Bronisława's medical studies in Paris, in exchange for similar assistance two years later. In connection with this, Maria took a position first as a home tutor in Warsaw, then for two years as a governess in Szczuki with a landed family, the Żorawskis, who were relatives of her father. While working for the latter family, she fell in love with their son, Kazimierz Żorawski, a future eminent mathematician. His parents rejected the idea of his marrying the penniless relative, and Kazimierz was unable to oppose them. Maria's loss of the relationship with Żorawski was tragic for both. He soon earned a doctorate and pursued an academic career as a mathematician, becoming a professor and rector of Kraków University. Still, as an old man and a mathematics professor at the Warsaw Polytechnic, he would sit contemplatively before the statue of Maria Skłodowska that had been erected in 1935 before the Radium Institute, which she had founded in 1932.
At the beginning of 1890, Bronisława—who a few months earlier had married Kazimierz Dłuski, a Polish physician and social and political activist—invited Maria to join them in Paris. Maria declined because she could not afford the university tuition; it would take her a year and a half longer to gather the necessary funds. She was helped by her father, who was able to secure a more lucrative position again. All that time she continued to educate herself, reading books, exchanging letters, and being tutored herself. In early 1889 she returned home to her father in Warsaw. She continued working as a governess and remained there until late 1891. She tutored, studied at the Flying University, and began her practical scientific training (1890–1891) in a chemistry laboratory at the Museum of Industry and Agriculture at Krakowskie Przedmieście 66, near Warsaw's Old Town. The laboratory was run by her cousin Józef Boguski, who had been an assistant in Saint Petersburg to the Russian chemist Dmitri Mendeleyev.
Life in Paris
In late 1891, she left Poland for France. In Paris, Maria (or Marie, as she would be known in France) briefly found shelter with her sister and brother-in-law before renting a garret closer to the university, in the Latin Quarter, and proceeding with her studies of physics, chemistry, and mathematics at the University of Paris, where she enrolled in late 1891. She subsisted on her meagre resources, keeping herself warm during cold winters by wearing all the clothes she had. She focused so hard on her studies that she sometimes forgot to eat. Skłodowska studied during the day and tutored evenings, barely earning her keep. In 1893, she was awarded a degree in physics and began work in an industrial laboratory of Gabriel Lippmann. Meanwhile, she continued studying at the University of Paris and with the aid of a fellowship she was able to earn a second degree in 1894.
Skłodowska had begun her scientific career in Paris with an investigation of the magnetic properties of various steels, commissioned by the Society for the Encouragement of National Industry. That same year, Pierre Curie entered her life: it was their mutual interest in natural sciences that drew them together. Pierre Curie was an instructor at The City of Paris Industrial Physics and Chemistry Higher Educational Institution (ESPCI Paris). They were introduced by Polish physicist Józef Wierusz-Kowalski, who had learned that she was looking for a larger laboratory space, something that Wierusz-Kowalski thought Pierre could access. Though Curie did not have a large laboratory, he was able to find some space for Skłodowska where she was able to begin work.
Their mutual passion for science brought them increasingly closer, and they began to develop feelings for one another. Eventually, Pierre proposed marriage, but at first Skłodowska did not accept as she was still planning to go back to her native country. Curie, however, declared that he was ready to move with her to Poland, even if it meant being reduced to teaching French. Meanwhile, for the 1894 summer break, Skłodowska returned to Warsaw, where she visited her family. She was still labouring under the illusion that she would be able to work in her chosen field in Poland, but she was denied a place at Kraków University because of sexism in academia. A letter from Pierre convinced her to return to Paris to pursue a PhD. At Skłodowska's insistence, Curie had written up his research on magnetism and received his own doctorate in March 1895; he was also promoted to professor at the School. A contemporary quip would call Skłodowska "Pierre's biggest discovery".
On 26 July 1895, they were married in Sceaux; neither wanted a religious service. Marie's dark blue outfit, worn instead of a bridal gown, would serve her for many years as a laboratory outfit. They shared two pastimes: long bicycle trips and journeys abroad, which brought them even closer. In Pierre, Marie had found a new love, a partner, and a scientific collaborator on whom she could depend.
New elements
In 1895, Wilhelm Röntgen discovered the existence of X-rays, though the mechanism behind their production was not yet understood. In 1896, Henri Becquerel discovered that uranium salts emitted rays that resembled X-rays in their penetrating power. He demonstrated that this radiation, unlike phosphorescence, did not depend on an external source of energy but seemed to arise spontaneously from uranium itself. Influenced by these two important discoveries, Curie decided to look into uranium rays as a possible field of research for a thesis.
She used an innovative technique to investigate samples. Fifteen years earlier, her husband and his brother had developed a version of the electrometer, a sensitive device for measuring electric charge. Using her husband's electrometer, she discovered that uranium rays caused the air around a sample to conduct electricity. Using this technique, her first result was the finding that the activity of the uranium compounds depended only on the quantity of uranium present. She hypothesized that the radiation was not the outcome of some interaction of molecules but must come from the atom itself. This hypothesis was an important step in disproving the assumption that atoms were indivisible.
In 1897, her daughter Irène was born. To support her family, Curie began teaching at the . The Curies did not have a dedicated laboratory; most of their research was carried out in a converted shed next to ESPCI. The shed, formerly a medical school dissecting room, was poorly ventilated and not even waterproof. They were unaware of the deleterious effects of radiation exposure attendant on their continued unprotected work with radioactive substances. ESPCI did not sponsor her research, but she received subsidies from metallurgical and mining companies and from various organisations and governments.
Curie's systematic studies included two uranium minerals, pitchblende and torbernite (also known as chalcolite). Her electrometer showed that pitchblende was four times as active as uranium itself, and chalcolite twice as active. She concluded that, if her earlier results relating the quantity of uranium to its activity were correct, then these two minerals must contain small quantities of another substance that was far more active than uranium. She began a systematic search for additional substances that emit radiation, and by 1898 she discovered that the element thorium was also radioactive. Pierre Curie was increasingly intrigued by her work. By mid-1898 he was so invested in it that he decided to drop his work on crystals and to join her.
She was acutely aware of the importance of promptly publishing her discoveries and thus establishing her priority. Had not Becquerel, two years earlier, presented his discovery to the Académie des Sciences the day after he made it, credit for the discovery of radioactivity (and even a Nobel Prize), would instead have gone to Silvanus Thompson. Curie chose the same rapid means of publication. Since she was not a member of the Académie, her paper, giving a brief and simple account of her work, was presented for her to the Académie on 12 April 1898 by her former professor, Gabriel Lippmann. Even so, just as Thompson had been beaten by Becquerel, so Curie was beaten in the race to tell of her discovery that thorium gives off rays in the same way as uranium; two months earlier, Gerhard Carl Schmidt had published his own finding in Berlin.
At that time, no one else in the world of physics had noticed what Curie recorded in a sentence of her paper, describing how much greater were the activities of pitchblende and chalcolite than that of uranium itself: "The fact is very remarkable, and leads to the belief that these minerals may contain an element which is much more active than uranium." She later would recall how she felt "a passionate desire to verify this hypothesis as rapidly as possible". On 14 April 1898, the Curies optimistically weighed out a 100-gram sample of pitchblende and ground it with a pestle and mortar. They did not realise at the time that what they were searching for was present in such minute quantities that they would eventually have to process tonnes of the ore.
In July 1898, Curie and her husband published a joint paper announcing the existence of an element they named 'polonium', in honour of her native Poland, which would for another twenty years remain partitioned among three empires (Russia, Austria, and Prussia). On 26 December 1898, the Curies announced the existence of a second element, which they named 'radium', from the Latin word for 'ray'. In the course of their research, they also coined the word 'radioactivity'.
To prove their discoveries beyond any doubt, the Curies sought to isolate polonium and radium in pure form. Pitchblende is a complex mineral; the chemical separation of its constituents was an arduous task. The discovery of polonium had been relatively easy; chemically it resembles the element bismuth, and polonium was the only bismuth-like substance in the ore. Radium, however, was more elusive; it is closely related chemically to barium, and pitchblende contains both elements. By 1898 the Curies had obtained traces of radium, but appreciable quantities, uncontaminated with barium, were still beyond reach. The Curies undertook the arduous task of separating out radium salt by differential crystallisation. From a tonne of pitchblende, one-tenth of a gram of radium chloride was separated in 1902. In 1910, she isolated pure radium metal. She never succeeded in isolating polonium, which has a half-life of only 138 days.
Between 1898 and 1902, the Curies published, jointly or separately, a total of 32 scientific papers, including one that announced that, when exposed to radium, diseased, tumour-forming cells were destroyed faster than healthy cells."Marie Sklodowska Curie", Encyclopedia of World Biography, 2nd ed., vol. 4, Detroit, Gale, 2004, pp. 339–41. Gale Virtual Reference Library. Web. 3 June 2013.
In 1900, Curie became the first woman faculty member at the École normale supérieure de jeunes filles and her husband joined the faculty of the University of Paris. In 1902 she visited Poland on the occasion of her father's death.
In June 1903, supervised by Gabriel Lippmann, Curie was awarded her doctorate from the University of Paris. That month the couple were invited to the Royal Institution in London to give a speech on radioactivity; being a woman, she was prevented from speaking, and Pierre Curie alone was allowed to. Meanwhile, a new industry began developing, based on radium. The Curies did not patent their discovery and benefited little from this increasingly profitable business.
Nobel Prizes
In December 1903 the Royal Swedish Academy of Sciences awarded Pierre Curie, Marie Curie, and Henri Becquerel the Nobel Prize in Physics, "in recognition of the extraordinary services they have rendered by their joint researches on the radiation phenomena discovered by Professor Henri Becquerel." At first the committee had intended to honour only Pierre Curie and Henri Becquerel, but a committee member and advocate for women scientists, Swedish mathematician Magnus Gösta Mittag-Leffler, alerted Pierre to the situation, and after his complaint, Marie's name was added to the nomination. Marie Curie was the first woman to be awarded a Nobel Prize.
Curie and her husband declined to go to Stockholm to receive the prize in person; they were too busy with their work, and Pierre Curie, who disliked public ceremonies, was feeling increasingly ill. As Nobel laureates were required to deliver a lecture, the Curies finally undertook the trip in 1905. The award money allowed the Curies to hire their first laboratory assistant. Following the award of the Nobel Prize, and galvanised by an offer from the University of Geneva, which offered Pierre Curie a position, the University of Paris gave him a professorship and the chair of physics, although the Curies still did not have a proper laboratory. Upon Pierre Curie's complaint, the University of Paris relented and agreed to furnish a new laboratory, but it would not be ready until 1906.
In December 1904, Curie gave birth to their second daughter, Ève. She hired Polish governesses to teach her daughters her native language, and sent or took them on visits to Poland.
On 19 April 1906, Pierre Curie died in a road accident. Walking across the Rue Dauphine in heavy rain, he was struck by a horse-drawn vehicle and fell under its wheels, which fractured his skull and killed him instantly. Curie was devastated by her husband's death. On 13 May 1906 the physics department of the University of Paris decided to retain the chair that had been created for her late husband and offer it to Marie. She accepted it, hoping to create a world-class laboratory as a tribute to her husband Pierre. She was the first woman to become a professor at the University of Paris.
Curie's quest to create a new laboratory did not end with the University of Paris, however. In her later years, she headed the Radium Institute (now the Curie Institute), a radioactivity laboratory created for her by the Pasteur Institute and the University of Paris. The initiative for creating the Radium Institute had come in 1909 from Pierre Paul Émile Roux, director of the Pasteur Institute, who had been disappointed that the University of Paris was not giving Curie a proper laboratory and had suggested that she move to the Pasteur Institute. Only then, with the threat of Curie leaving, did the University of Paris relent, and eventually the Curie Pavilion became a joint initiative of the University of Paris and the Pasteur Institute.
In 1910 Curie succeeded in isolating radium; she also defined an international standard for radioactive emissions that was eventually named for her and Pierre: the curie. Nevertheless, in 1911 the French Academy of Sciences failed, by one or two votes, to elect her to membership in the academy. Elected instead was Édouard Branly, an inventor who had helped Guglielmo Marconi develop the wireless telegraph. It was only over half a century later, in 1962, that a doctoral student of Curie's, Marguerite Perey, became the first woman elected to membership in the academy.
Despite Curie's fame as a scientist working for France, the public's attitude tended toward xenophobia—the same that had led to the Dreyfus affair—which also fuelled false speculation that Curie was Jewish. During the French Academy of Sciences elections, she was vilified by the right-wing press as a foreigner and atheist. Her daughter later remarked on the French press's hypocrisy in portraying Curie as an unworthy foreigner when she was nominated for a French honour, but portraying her as a French heroine when she received foreign honours such as her Nobel Prizes.
In 1911, it was revealed that Curie was involved in a year-long affair with physicist Paul Langevin, a former student of Pierre Curie's, a married man who was estranged from his wife. This resulted in a press scandal that was exploited by her academic opponents. Curie (then in her mid-40s) was five years older than Langevin and was misrepresented in the tabloids as a foreign Jewish home-wrecker. When the scandal broke, she was away at a conference in Belgium; on her return, she found an angry mob in front of her house and had to seek refuge, with her daughters, in the home of her friend Camille Marbo.
International recognition for her work had been growing to new heights, and the Royal Swedish Academy of Sciences, overcoming opposition prompted by the Langevin scandal, honoured her a second time, with the 1911 Nobel Prize in Chemistry. This award was "in recognition of her services to the advancement of chemistry by the discovery of the elements radium and polonium, by the isolation of radium and the study of the nature and compounds of this remarkable element". Because of the negative publicity due to her affair with Langevin, the chair of the Nobel committee, Svante Arrhenius, attempted to prevent her attendance at the official ceremony for her Nobel Prize in Chemistry, citing her questionable moral standing. Curie replied that she would be present at the ceremony, because "the prize has been given to her for her discovery of polonium and radium" and that "there is no relation between her scientific work and the facts of her private life".
She was the first person to win or share two Nobel Prizes, and remains alone with Linus Pauling as Nobel laureates in two fields each. A delegation of celebrated Polish men of learning, headed by novelist Henryk Sienkiewicz, encouraged her to return to Poland and continue her research in her native country. Curie's second Nobel Prize enabled her to persuade the French government to support the Radium Institute, built in 1914, where research was conducted in chemistry, physics, and medicine. A month after accepting her 1911 Nobel Prize, she was hospitalised with depression and a kidney ailment. For most of 1912, she avoided public life but did spend time in England with her friend and fellow physicist Hertha Ayrton. She returned to her laboratory only in December, after a break of about 14 months.
In 1912 the Warsaw Scientific Society offered her the directorship of a new laboratory in Warsaw but she declined, focusing on the developing Radium Institute to be completed in August 1914, and on a new street named Rue Pierre-Curie (today rue Pierre-et-Marie-Curie). She was appointed director of the Curie Laboratory in the Radium Institute of the University of Paris, founded in 1914. She visited Poland in 1913 and was welcomed in Warsaw but the visit was mostly ignored by the Russian authorities. The institute's development was interrupted by the First World War, as most researchers were drafted into the French Army; it fully resumed its activities after the war, in 1919.
World War I
During World War I, Curie recognised that wounded soldiers were best served if operated upon as soon as possible. She saw a need for field radiological centres near the front lines to assist battlefield surgeons, including to obviate amputations when in fact limbs could be saved. After a quick study of radiology, anatomy, and automotive mechanics, she procured X-ray equipment, vehicles, and auxiliary generators, and she developed mobile radiography units, which came to be popularly known as petites Curies ("Little Curies"). She became the director of the Red Cross Radiology Service and set up France's first military radiology centre, operational by late 1914. Assisted at first by a military doctor and her 17-year-old daughter Irène, Curie directed the installation of 20 mobile radiological vehicles and another 200 radiological units at field hospitals in the first year of the war. Later, she began training other women as aides.
In 1915, Curie produced hollow needles containing "radium emanation", a colourless, radioactive gas given off by radium, later identified as radon, to be used for sterilising infected tissue. She provided the radium from her own one-gram supply. It is estimated that over a million wounded soldiers were treated with her X-ray units. Busy with this work, she carried out very little scientific research during that period. In spite of all her humanitarian contributions to the French war effort, Curie never received any formal recognition of it from the French government.
Also, promptly after the war started, she attempted to donate her gold Nobel Prize medals to the war effort, but the French National Bank refused to accept them. She did buy war bonds, using her Nobel Prize money.
She was also an active member in committees of Poles in France dedicated to the Polish cause. After the war, she summarised her wartime experiences in a book, Radiology in War (1919).
Postwar years
In 1920, for the 25th anniversary of the discovery of radium, the French government established a stipend for her; its previous recipient was Louis Pasteur, who had died in 1895. In 1921, Curie toured the United States to raise funds for research on radium. Marie Mattingly Meloney, after interviewing Curie, created a Marie Curie Radium Fund and helped publicise her trip.
In 1921 U.S. President Warren G. Harding received Curie at the White House to present her with the 1 gram of radium collected in the United States. Before the meeting, recognising her growing fame abroad, and embarrassed by the fact that she had no French official distinctions to wear in public, the French government had offered her a Legion of Honour award, but she refused it. In 1922 she became a fellow of the French Academy of Medicine. She also travelled to other countries, appearing publicly and giving lectures in Belgium, Brazil, Spain, and Czechoslovakia.
Led by Curie, the Institute produced four more Nobel Prize winners, including her daughter Irène Joliot-Curie and her son-in-law, Frédéric Joliot-Curie. Eventually, it became one of the world's four major radioactivity-research laboratories, the others being the Cavendish Laboratory, with Ernest Rutherford; the Institute for Radium Research, Vienna, with Stefan Meyer; and the Kaiser Wilhelm Institute for Chemistry, with Otto Hahn and Lise Meitner.
In August 1922, Curie became a member of the League of Nations' newly created International Committee on Intellectual Cooperation. She sat on the committee until 1934 and contributed to the League of Nations' scientific coordination with other prominent researchers such as Albert Einstein, Hendrik Lorentz, and Henri Bergson. In 1923 she wrote a biography of her late husband, titled Pierre Curie. In 1925 she visited Poland to participate in a ceremony laying the foundations for Warsaw's Radium Institute. Her second American tour, in 1929, succeeded in equipping the Warsaw Radium Institute with radium; the Institute opened in 1932, with her sister Bronisława as its director. These distractions from her scientific labours, and the attendant publicity, caused her much discomfort but provided resources for her work. In 1930, she was elected to the International Atomic Weights Committee, on which she served until her death. In 1931, Curie was awarded the Cameron Prize for Therapeutics of the University of Edinburgh.
Death
Curie visited Poland for the last time in early 1934. A few months later, on 4 July 1934, she died aged 66 at the Sancellemoz sanatorium in Passy, Haute-Savoie, from aplastic anaemia believed to have been contracted from her long-term exposure to radiation, causing damage to her bone marrow.Marie Curie profile , National Stem Cell Foundation. Accessed 16 July 2022.
The damaging effects of ionising radiation were not known at the time of her work, which had been carried out without the safety measures later developed. She had carried test tubes containing radioactive isotopes in her pocket, and she stored them in her desk drawer, remarking on the faint light that the substances gave off in the dark. Curie was also exposed to X-rays from unshielded equipment while serving as a radiologist in field hospitals during the First World War. When Curie's body was exhumed in 1995, the French Office de Protection contre les Rayonnements Ionisants (OPRI) "concluded that she could not have been exposed to lethal levels of radium while she was alive". They pointed out that radium poses a risk only if it is ingested, and speculated that her illness was more likely to have been due to her use of radiography during the First World War.
She was interred at the cemetery in Sceaux, alongside her husband Pierre. Over sixty years later, in 1995, in honour of their achievements, the remains of both were transferred to the Panthéon in Paris. Their remains were sealed in a lead lining because of the radioactivity. She became the second woman to be interred at the Panthéon (after Sophie Berthelot) and the first woman to be honoured with interment in the Panthéon on her own merits.
Because of their levels of radioactive contamination, her papers from the 1890s are considered too dangerous to handle. Even her cookbooks are highly radioactive. Her papers are kept in lead-lined boxes, and those who wish to consult them must wear protective clothing. In her last year, she worked on a book, Radioactivity, which was published posthumously in 1935.
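The persistence of this contamination follows from the decay physics involved: radium-226, the isotope the Curies isolated, has a half-life of roughly 1,600 years, so material deposited in the 1890s has barely begun to decay. The short Python sketch below is an illustration added for clarity (the roughly 130-year elapsed time is an assumed round figure), not part of the historical record.

```python
def remaining_fraction(elapsed_years: float, half_life_years: float) -> float:
    """Fraction of a radioactive isotope remaining after elapsed_years."""
    return 0.5 ** (elapsed_years / half_life_years)

# Radium-226 half-life: ~1,600 years. Contamination from the 1890s,
# roughly 130 years old, has therefore scarcely decayed.
print(remaining_fraction(130, 1600))  # ≈ 0.945, i.e. about 94–95% still present
```

On these figures, well over nine-tenths of the original radium remains active, which is why the notebooks are expected to stay hazardous for centuries.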
Legacy
The physical and societal aspects of the Curies' work contributed to shaping the world of the twentieth and twenty-first centuries. Cornell University professor L. Pearce Williams observes:
In addition to helping to overturn established ideas in physics and chemistry, Curie's work has had a profound effect in the societal sphere. To attain her scientific achievements, she had to overcome barriers, in both her native and her adoptive country, that were placed in her way because she was a woman.
She was known for her honesty and moderate lifestyle. Having received a small scholarship in 1893, she returned it in 1897 as soon as she began earning her keep. She gave much of her first Nobel Prize money to friends, family, students, and research associates. Curie intentionally refrained from patenting the radium-isolation process so that the scientific community could do research unhindered. She insisted that monetary gifts and awards be given to the scientific institutions she was affiliated with rather than to her. She and her husband often refused awards and medals. Albert Einstein reportedly remarked that she was probably the only person who could not be corrupted by fame.
Commemorations
As one of the most famous scientists in history, Marie Curie has become an icon in the scientific world and has received tributes from across the globe, even in the realm of pop culture. She also received many honorary degrees from universities across the world.
Marie Curie was the first woman to win a Nobel Prize, the first person to win two Nobel Prizes, the only woman to win in two fields, and the only person to win in multiple sciences. Awards and honours that she received include:
Nobel Prize in Physics (1903, with her husband Pierre Curie and Henri Becquerel)
Davy Medal (1903, with Pierre)
Matteucci Medal (1904, with Pierre)
Actonian Prize (1907)
Elliott Cresson Medal (1909)
Legion of Honour (1909, rejected)
Nobel Prize in Chemistry (1911)
Civil Order of Alfonso XII (1919)
Franklin Medal of the American Philosophical Society (1921)
Order of the White Eagle (2018, posthumously)
Entities that have been named after Marie Curie include:
The curie (symbol Ci), a unit of radioactivity, is named in honour of her and Pierre Curie (although the commission which agreed on the name never clearly stated whether the standard was named after Pierre, Marie, or both).
The element with atomic number 96 was named curium (symbol Cm).
Three radioactive minerals are also named after the Curies: curite, sklodowskite, and cuprosklodowskite.
The Marie Skłodowska-Curie Actions fellowship program of the European Union for young scientists wishing to work in a foreign country
In 2007 a Paris metro station (in Ivry) was renamed after the two Curies.
The Marie-Curie station, a planned underground Réseau express métropolitain (REM) station in the borough of Saint-Laurent in Montreal, is named in her honour. A nearby road, Avenue Marie Curie, is also named in her honour.
The Polish research nuclear reactor Maria
The 7000 Curie asteroid
Marie Curie charity, in the United Kingdom
The IEEE Marie Sklodowska-Curie Award, an international award presented for outstanding contributions to the field of nuclear and plasma sciences and engineering, was established by the Institute of Electrical and Electronics Engineers in 2008.
The Marie Curie Medal, an annual science award established in 1996 and conferred by the Polish Chemical Society
The Marie Curie–Sklodowska Medal and Prize, an annual award conferred by the London-based Institute of Physics for distinguished contributions to physics education
Maria Curie-Skłodowska University in Lublin, Poland
Pierre and Marie Curie University in Paris
Maria Skłodowska-Curie National Research Institute of Oncology in Poland
École élémentaire Marie-Curie in London, Ontario, Canada; Curie Metropolitan High School in Chicago, United States; Marie Curie High School in Ho Chi Minh City, Vietnam; Lycée français Marie Curie de Zurich, Switzerland; see Lycée Marie Curie for a list of other schools named after her.
Rue Madame Curie in Beirut, Lebanon
Beetle species – Psammodes sklodowskae Kamiński & Gearner
Numerous biographies are devoted to her, including:
Ève Curie (Marie Curie's daughter), Madame Curie, 1938.
Françoise Giroud, Marie Curie: A Life, 1987.
Susan Quinn, Marie Curie: A Life, 1996.
Barbara Goldsmith, Obsessive Genius: The Inner World of Marie Curie, 2005.
Lauren Redniss, Radioactive: Marie and Pierre Curie, a Tale of Love and Fallout, 2011, adapted into the 2019 British film.
Marie Curie has been the subject of a number of films:
1943: Madame Curie, a U.S. Oscar-nominated film by Mervyn LeRoy starring Greer Garson.
1997: Les Palmes de M. Schutz, a French film adapted from a play of the same title, and directed by Claude Pinoteau. Marie Curie is played by Isabelle Huppert.
2014: Marie Curie, une femme sur le front, a French-Belgian film, directed by and starring Dominique Reymond.
2016: Marie Curie: The Courage of Knowledge, a European co-production by Marie Noëlle starring Karolina Gruszka.
2016: Super Science Friends, an American Internet animated series created by Brett Jubinville with Hedy Gregor as Marie Curie.
2019: Radioactive, a British film by Marjane Satrapi starring Rosamund Pike.
Curie is the subject of the 2013 play False Assumptions by Lawrence Aronovitch, in which the ghosts of three other women scientists observe events in her life.Mixing Science With Theatre , Ottawa Sun, March 2013 Curie has also been portrayed by Susan Marie Frontczak in her play, Manya: The Living History of Marie Curie, a one-woman show which by 2014 had been performed in 30 U.S. states and nine countries. Lauren Gunderson's 2019 play The Half-Life of Marie Curie portrays Curie during the summer after her 1911 Nobel Prize victory, when she was grappling with depression and facing public scorn over the revelation of her affair with Paul Langevin.
The life of the scientist was also the subject of a 2018 Korean musical, titled Marie Curie. The show has since been translated into English (as Marie Curie a New Musical) and has been performed several times across Asia and Europe, receiving its official Off West End premiere at London's Charing Cross Theatre in summer 2024.
Curie has appeared on more than 600 postage stamps in many countries across the world.
Between 1989 and 1996, she was depicted on a 20,000-złoty banknote designed by Andrzej Heidrich. In 2011, a commemorative 20-złoty banknote depicting Curie was issued by the National Bank of Poland on the 100th anniversary of the scientist receiving the Nobel Prize in Chemistry.
In 1994, the Bank of France issued a 500-franc banknote depicting Marie and Pierre Curie. Since 2024, Curie has been depicted on French 50 euro cent coins to commemorate her importance in French history.
In 2025, the European Central Bank announced that Curie had been selected to appear on the obverse of twenty euro banknotes in a future redesign, were the theme "European culture" to be selected over "Rivers and birds".
Marie Curie was immortalised in at least one colour Autochrome Lumière photograph during her lifetime; it is preserved at the Musée Curie in Paris.History of Photography by David E. Wolf 12.09.2017
See also
Charlotte Hoffman Kellogg, who sponsored Marie Curie's visit to the US
Eusapia Palladino: Spiritualist medium whose Paris séances were attended by an intrigued Pierre Curie and a sceptical Marie Curie
List of female Nobel laureates
List of female nominees for the Nobel Prize
List of Poles in Chemistry
List of Poles in Physics
List of Polish Nobel laureates
Skłodowski family
Timeline of women in science
Treatise on Radioactivity, by Marie Curie
Women in chemistry
Women in physics
Notes
References
Further reading
Nonfiction
Sobel, Dava (2024). The Elements of Marie Curie: How the Glow of Radium Lit a Path for Women in Science.
Fiction
A 2004 novel by Per Olov Enquist featuring Maria Skłodowska-Curie, neurologist Jean-Martin Charcot, and his Salpêtrière patient "Blanche" (Marie Wittman). The English translation was published in 2006.
Nikola Tesla
https://en.wikipedia.org/wiki/Nikola_Tesla
Nikola Tesla (10 July 1856 – 7 January 1943) was a Serbian-American engineer, futurist, and inventor. He is known for his contributions to the design of the modern alternating current (AC) electricity supply system.
Born and raised in the Austrian Empire, Tesla first studied engineering and physics in the 1870s without receiving a degree. He then gained practical experience in the early 1880s working in telephony and at Continental Edison in the new electric power industry. In 1884, he immigrated to the United States, where he became a naturalized citizen. He worked for a short time at the Edison Machine Works in New York City before he struck out on his own. With the help of partners to finance and market his ideas, Tesla set up laboratories and companies in New York to develop a range of electrical and mechanical devices. His AC induction motor and related polyphase AC patents, licensed by Westinghouse Electric in 1888, earned him a considerable amount of money and became the cornerstone of the polyphase system, which that company eventually marketed.
Attempting to develop inventions he could patent and market, Tesla conducted a range of experiments with mechanical oscillators/generators, electrical discharge tubes, and early X-ray imaging. He also built a wirelessly controlled boat, one of the first ever exhibited. Tesla became well known as an inventor and demonstrated his achievements to celebrities and wealthy patrons at his lab, and was noted for his showmanship at public lectures. Throughout the 1890s, Tesla pursued his ideas for wireless lighting and worldwide wireless electric power distribution in his high-voltage, high-frequency power experiments in New York and Colorado Springs. In 1893, he made pronouncements on the possibility of wireless communication with his devices. Tesla tried to put these ideas to practical use in his unfinished Wardenclyffe Tower project, an intercontinental wireless communication and power transmitter, but ran out of funding before he could complete it.
After Wardenclyffe, Tesla experimented with a series of inventions in the 1910s and 1920s with varying degrees of success. Having spent most of his money, Tesla lived in a series of New York hotels, leaving behind unpaid bills. He died in New York City in January 1943. Tesla's work fell into relative obscurity following his death, until 1960, when the General Conference on Weights and Measures named the International System of Units (SI) measurement of magnetic flux density the tesla in his honor. There has been a resurgence in popular interest in Tesla since the 1990s. In 2013, Time named Tesla one of the 100 most significant figures of all time.
Early years
Childhood
Nikola Tesla was born on 10 July 1856 in the village of Smiljan, in the Military Frontier of the Austrian Empire (present-day Croatia) into an ethnic Serb family. His father, Milutin Tesla (1819–1879), was a priest of the Eastern Orthodox Church. His father's brother Josif was a lecturer at a military academy who wrote several textbooks on mathematics.
Tesla's mother, Georgina "Đuka" Mandić (1822–1892), whose father was also an Eastern Orthodox priest, had a talent for making home craft tools and mechanical appliances and the ability to memorize Serbian epic poems. Đuka had never received a formal education. Tesla credited his eidetic memory and creative abilities to his mother's genetics and influence.
Tesla was the fourth of five children. In 1861, Tesla attended primary school in Smiljan where he studied German, arithmetic, and religion. In 1862, the Tesla family moved to the nearby town of Gospić, where Tesla's father worked as parish priest. Nikola completed primary school, followed by middle school. Later in his patent applications, before he obtained American citizenship, Tesla would identify himself as "of Smiljan, Lika, border country of Austria-Hungary".
Education
In 1870, Tesla moved to Karlovac to attend high school at the Higher Real Gymnasium where the classes were held in German, as it was usual throughout schools within the Austro-Hungarian Military Frontier. Tesla later wrote that he became interested in his physics professor's demonstrations of electricity. The "mysterious phenomena" made him want "to know more of this wonderful force". He was able to perform integral calculus in his head, prompting his teachers to believe that he was cheating. He finished a four-year term in three years, graduating in 1873.
After graduating Tesla returned to Smiljan but soon contracted cholera, was bedridden for nine months and was near death several times. In a moment of despair, Tesla's father (who had originally wanted him to enter the priesthood), promised to send him to the best engineering school if he recovered from the illness. Tesla later said that he had read Mark Twain's earlier works while recovering from his illness.
The next year Tesla evaded conscription into the Austro-Hungarian Army in Smiljan by running away southeast of Lika to Tomingaj, near Gračac. There he explored the mountains wearing hunter's garb. Tesla said that this contact with nature made him stronger, both physically and mentally. He enrolled at the Imperial-Royal Technical College in Graz in 1875 on a Military Frontier scholarship. Tesla passed nine exams (nearly twice as many as required) and received a letter of commendation from the dean of the technical faculty to his father, which stated, "Your son is a star of first rank." At Graz, Tesla was fascinated by the lectures on electricity presented by professor Jakob Pöschl. But by his third year he was failing in school and never graduated, leaving Graz in December 1878. One biographer suggests Tesla was not studying and may have been expelled for gambling and womanizing.
Tesla's family did not hear from him after he left school. There was a rumor among his classmates that he had drowned in the nearby river Mur but in January one of them ran into Tesla in the town of Maribor and reported that encounter to Tesla's family. It turned out Tesla had been working there as a draftsman for 60 florins per month. In March 1879, Milutin finally located his son and tried to convince him to return home and take up his education in Prague. Tesla returned to Gospić later that month when he was deported for not having a residence permit. Tesla's father died the next month, on 17 April 1879, at the age of 60 after an unspecified illness.
In January 1880, two of Tesla's uncles paid for him to leave Gospić for Prague, where he was to study. He arrived too late to enroll at Charles-Ferdinand University; he had never studied Greek, a required subject; and he was illiterate in Czech, another required subject. He attended lectures in philosophy at the university as an auditor, but he did not receive grades for the courses.
Budapest Telephone Exchange
Tesla moved to Budapest, Hungary, in 1881 to work under Tivadar Puskás at a telegraph company, the Budapest Telephone Exchange. Upon arrival, Tesla realized that the company, then under construction, was not functional, so he worked as a draftsman in the Central Telegraph Office instead. Within a few months, the Budapest Telephone Exchange became functional, and Tesla was allocated the chief electrician position. Tesla later described how he made many improvements to the Central Station equipment including an improved telephone repeater or amplifier.
Working at Edison
In 1882, Tivadar Puskás got Tesla another job in Paris with the Continental Edison Company. Tesla began working in what was then a brand new industry, installing indoor incandescent lighting citywide in the form of a large-scale electric power utility. The company had several subdivisions and Tesla worked at the Société Electrique Edison, the division in the Ivry-sur-Seine suburb of Paris in charge of installing the lighting system. There he gained a great deal of practical experience in electrical engineering. Management took notice of his advanced knowledge in engineering and physics and soon had him designing and building improved versions of generating dynamos and motors.
Moving to the United States
In 1884, Edison manager Charles Batchelor, who had been overseeing the Paris installation, was brought back to the United States to manage the Edison Machine Works, a manufacturing division situated in New York City, and asked that Tesla be brought to the United States as well. In June 1884, Tesla emigrated and began working almost immediately at the Machine Works on Manhattan's Lower East Side, an overcrowded shop with a workforce of several hundred machinists, laborers, managing staff, and 20 "field engineers" struggling with the task of building the large electric utility in that city. As in Paris, Tesla was working on troubleshooting installations and improving generators.
Historian W. Bernard Carlson notes Tesla may have met company founder Thomas Edison only a couple of times. One of those times was noted in Tesla's autobiography where, after staying up all night repairing the damaged dynamos on the ocean liner SS Oregon, he ran into Batchelor and Edison, who made a quip about their "Parisian" being out all night. After Tesla told them he had been up all night fixing the Oregon, Edison commented to Batchelor that "this is a damned good man". One of the projects given to Tesla was to develop an arc lamp–based street lighting system.Radmilo Ivanković, Dragan Petrović, review of the reprinted "Nikola Tesla: Notebook from the Edison Machine Works 1884–1885", teslauniverse.com Arc lighting was the most popular type of street lighting but it required high voltages and was incompatible with the Edison low-voltage incandescent system, causing the company to lose contracts in some cities. Tesla's designs were never put into production, possibly because of technical improvements in incandescent street lighting or because of an installation deal that Edison made with an arc lighting company.
Tesla had been working at the Machine Works for a total of six months when he quit. What event precipitated his leaving is unclear. It may have been over a bonus he did not receive, either for redesigning generators or for the arc lighting system that was shelved. Tesla had previous run-ins with the Edison company over unpaid bonuses he believed he had earned. In his autobiography, Tesla stated the manager of the Edison Machine Works offered a $50,000 bonus to design "twenty-four different types of standard machines" "but it turned out to be a practical joke".My Inventions: The Autobiography of Nikola Tesla, 1919, p. 19. Accessed 23 January 2017. Later versions of this story have Thomas Edison himself offering and then reneging on the deal, quipping: "Tesla, you don't understand our American humor". The size of the bonus in either story has been noted as odd, since Machine Works manager Batchelor was stingy with pay, and the company did not have that amount of cash on hand. Tesla's diary contains just one comment on what happened at the end of his employment, a note he scrawled across the two pages covering 7 December 1884, to 4 January 1885, saying "Good By to the Edison Machine Works".
Tesla Electric Light and Manufacturing
Soon after leaving the Edison company, Tesla was working on patenting an arc lighting system, possibly the same one he had developed at Edison. In March 1885, he met with patent attorney Lemuel W. Serrell, the same attorney used by Edison, to obtain help with submitting the patents. Serrell introduced Tesla to two businessmen, Robert Lane and Benjamin Vail, who agreed to finance an arc lighting manufacturing and utility company in Tesla's name, the Tesla Electric Light and Manufacturing Company. Tesla worked for the rest of the year obtaining the patents that included an improved DC generator, the first patents issued to Tesla in the US, and building and installing the system in Rahway, New Jersey.
The investors showed little interest in Tesla's ideas for new types of alternating current motors and electrical transmission equipment. After the utility was up and running in 1886, they decided that the manufacturing side of the business was too competitive and opted to simply run an electric utility. They formed a new utility company, abandoning Tesla's company and leaving the inventor penniless. Tesla even lost control of the patents he had generated, since he had assigned them to the company in exchange for stock. He had to work at various electrical repair jobs and as a ditch digger for $2 per day. Later in life, Tesla recounted that part of 1886 as a time of hardship, writing "My high education in various branches of science, mechanics and literature seemed to me like a mockery".
AC and the induction motor
In late 1886, Tesla met Alfred S. Brown, a Western Union superintendent, and New York attorney Charles Fletcher Peck.Charles Fletcher Peck of Englewood, New Jersey. The two men were experienced in setting up companies and promoting inventions and patents for financial gain. Based on Tesla's new ideas for electrical equipment, including a thermo-magnetic motor idea, they agreed to back the inventor financially and handle his patents. Together they formed the Tesla Electric Company in April 1887, with an agreement that profits from generated patents would go to Tesla, to Peck and Brown, and to fund development. They set up a laboratory for Tesla at 89 Liberty Street in Manhattan, where he worked on improving and developing new types of electric motors, generators, and other devices.
In 1887, Tesla developed an induction motor that ran on alternating current (AC), a power system format that was rapidly expanding in Europe and the United States because of its advantages in long-distance, high-voltage transmission. The motor used polyphase current, which generated a rotating magnetic field to turn the motor (a principle that Tesla claimed to have conceived in 1882).Thomas Parke Hughes, Networks of Power: Electrification in Western Society, 1880–1930, pp. 115–118 This innovative electric motor, patented in May 1888, was a simple self-starting design that did not need a commutator, thus avoiding sparking and the high maintenance of constantly servicing and replacing mechanical brushes.Henry G. Prout, A Life of George Westinghouse, p. 129
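The rotating-field principle described above can be illustrated numerically. The minimal sketch below is illustrative only (the two-coil, two-phase arrangement and the 60 Hz frequency are assumptions chosen for clarity, not details of Tesla's patent): two perpendicular coils carrying equal currents 90° apart in phase produce a resultant field whose magnitude stays constant while its direction rotates once per supply cycle, which is what drags the rotor around without any commutator or brushes.

```python
import math

def field_vector(t: float, freq_hz: float = 60.0, amplitude: float = 1.0):
    """Resultant field of two perpendicular coils fed with two-phase currents
    (equal amplitude, 90 degrees apart in phase) at time t in seconds."""
    w = 2 * math.pi * freq_hz
    bx = amplitude * math.cos(w * t)        # coil aligned with the x-axis
    by = amplitude * math.sin(w * t)        # coil aligned with the y-axis, a quarter cycle behind
    magnitude = math.hypot(bx, by)          # constant magnitude ...
    angle_deg = math.degrees(math.atan2(by, bx)) % 360  # ... rotating direction
    return magnitude, angle_deg

for k in range(4):
    t = k / (4 * 60.0)                      # quarter-cycle steps of a 60 Hz supply
    print(field_vector(t))                  # magnitude ~1.0; angle 0, 90, 180, 270 degrees
```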
Along with getting the motor patented, Peck and Brown arranged to get the motor publicized, starting with independent testing to verify it was a functional improvement, followed by press releases sent to technical publications for articles to run concurrently with the issue of the patent. Physicist William Arnold Anthony (who tested the motor) and Electrical World magazine editor Thomas Commerford Martin arranged for Tesla to demonstrate his AC motor on 16 May 1888 at the American Institute of Electrical Engineers. Engineers working for the Westinghouse Electric & Manufacturing Company reported to George Westinghouse that Tesla had a viable AC motor and related power system—something Westinghouse needed for the alternating current system he was already marketing. Westinghouse looked into getting a patent on a similar commutator-less, rotating magnetic field-based induction motor developed in 1885 and presented in a paper in March 1888 by Italian physicist Galileo Ferraris, but decided that Tesla's patent would probably control the market.
In July 1888, Brown and Peck negotiated a licensing deal with George Westinghouse for Tesla's polyphase induction motor and transformer designs for $60,000 in cash and stock and a royalty of $2.50 per AC horsepower produced by each motor. Westinghouse also hired Tesla for one year for the large fee of $2,000 per month to be a consultant at the Westinghouse Electric & Manufacturing Company's Pittsburgh labs.
During that year, Tesla worked in Pittsburgh, helping to create an alternating current system to power the city's streetcars. He found it a frustrating period because of conflicts with the other Westinghouse engineers over how best to implement AC power. Between them, they settled on a 60-cycle AC system that Tesla proposed (to match the working frequency of Tesla's motor), but they soon found that it would not work for streetcars, since Tesla's induction motor could run only at a constant speed. They ended up using a DC traction motor instead.
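The constant-speed behaviour follows from the standard relation between supply frequency and pole count: synchronous speed in rpm is 120·f/P. The sketch below uses generic values to show why a motor tied to a fixed 60-cycle supply cannot vary its speed smoothly (the pole counts are illustrative, not figures from the Westinghouse streetcar work).

```python
def synchronous_speed_rpm(freq_hz: float, poles: int) -> float:
    """Synchronous speed of an AC machine: N_s = 120 * f / P, in rpm."""
    return 120.0 * freq_hz / poles

# On a fixed 60 Hz supply, speed is set almost entirely by the pole count,
# which can only change in discrete steps.
for poles in (2, 4, 6):
    print(poles, synchronous_speed_rpm(60, poles))  # 3600.0, 1800.0, 1200.0 rpm
```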
Market turmoil
Tesla's demonstration of his induction motor and Westinghouse's subsequent licensing of the patent, both in 1888, came at the time of extreme competition between electric companies.Robert L. Bradley, Jr. (2011). Edison to Enron: Energy Markets and Political Strategies, John Wiley & Sons, pp. 55–58 The three big firms, Westinghouse, Edison, and Thomson-Houston Electric Company, were trying to grow in a capital-intensive business while financially undercutting each other. There was even a "war of currents" propaganda campaign going on, with Edison Electric claiming their direct current system was better and safer than the Westinghouse alternating current system and Thomson-Houston sometimes siding with Edison. Competing in this market meant Westinghouse would not have the cash or engineering resources to develop Tesla's motor and the related polyphase system right away.
Two years after signing the Tesla contract, Westinghouse Electric was in trouble. The near collapse of Barings Bank in London triggered the financial panic of 1890, causing investors to call in their loans to Westinghouse Electric. The sudden cash shortage forced the company to refinance its debts. The new lenders demanded that Westinghouse cut back on what looked like excessive spending on acquisition of other companies, research, and patents, including the per motor royalty in the Tesla contract. At that point, the Tesla induction motor had been unsuccessful and was stuck in development. Westinghouse was paying a $15,000-a-year guaranteed royaltyThomas Parke Hughes, Networks of Power: Electrification in Western Society, 1880–1930 (1983), p. 119 even though operating examples of the motor were rare and polyphase power systems needed to run it were even rarer.
In early 1891, George Westinghouse explained his financial difficulties to Tesla in stark terms, saying that, if he did not meet the demands of his lenders, he would no longer be in control of Westinghouse Electric and Tesla would have to "deal with the bankers" to try to collect future royalties. The advantages of having Westinghouse continue to champion the motor probably seemed obvious to Tesla and he agreed to release the company from the royalty payment clause in the contract. Six years later Westinghouse purchased Tesla's patent for a lump sum payment of $216,000 as part of a patent-sharing agreement signed with General Electric (a company created from the 1892 merger of Edison and Thomson-Houston).Christopher Cooper, The Truth about Tesla: The Myth of the Lone Genius in the History of Innovation, Race Point Publishing. 2015, p. 109Electricity, a Popular Electrical Journal, Volume 13, No. 4, 4 August 1897, Electricity Newspaper Company, pp. 50 Google Books
New York laboratories
The money Tesla made from licensing his AC patents made him independently wealthy and gave him the time and funds to pursue his own interests. In 1889, Tesla moved out of the Liberty Street shop Peck and Brown had rented and for the next dozen years worked out of a series of workshop/laboratory spaces in Manhattan. These included a lab at 175 Grand Street (1889–1892), the fourth floor of 33–35 South Fifth Avenue (1892–1895), and sixth and seventh floors of 46 & 48 East Houston Street (1895–1902).Carlson, W. Bernard (2013). Tesla: Inventor of the Electrical Age, Princeton University Press, p. 218
Tesla coil
In the summer of 1889, Tesla traveled to the 1889 Exposition Universelle in Paris and learned of Heinrich Hertz's 1886–1888 experiments that proved the existence of electromagnetic radiation, including radio waves. In repeating and then expanding on these experiments Tesla tried powering a Ruhmkorff coil with a high speed alternator he had been developing as part of an improved arc lighting system but found that the high-frequency current overheated the iron core and melted the insulation between the primary and secondary windings in the coil. To fix this problem Tesla came up with his "oscillating transformer", with an air gap instead of insulating material between the primary and secondary windings and an iron core that could be moved to different positions in or out of the coil. Later called the Tesla coil, it would be used to produce high-voltage, low-current, high frequency alternating-current electricity. He would use this resonant transformer circuit in his later wireless power work.
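The behaviour of such a resonant transformer is governed by the LC resonance of each winding circuit, f = 1/(2π√(LC)). The sketch below uses purely illustrative component values (assumed for the example, not measurements of Tesla's coils) to show the order of frequency involved.

```python
import math

def resonant_frequency_hz(inductance_h: float, capacitance_f: float) -> float:
    """Resonant frequency of an LC circuit: f = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Assumed example values: a 30 mH secondary with ~20 pF of stray and top-load
# capacitance resonates at roughly 200 kHz.
print(resonant_frequency_hz(30e-3, 20e-12))  # ≈ 2.05e5 Hz
```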
Wireless lighting
After 1890, Tesla experimented with transmitting power by inductive and capacitive coupling using high AC voltages generated with his Tesla coil (lecture delivered before the American Institute of Electrical Engineers, Columbia College, New York). He attempted to develop a wireless lighting system based on near-field inductive and capacitive coupling and conducted a series of public demonstrations where he lit Geissler tubes and even incandescent light bulbs from across a stage. He spent most of the decade working on variations of this new form of lighting with the help of various investors but none of the ventures succeeded in making a commercial product out of his findings.Christopher Cooper (2015). The Truth About Tesla: The Myth of the Lone Genius in the History of Innovation, Race Point Publishing, pp. 143–144
In 1893, in lectures at St. Louis, Missouri, at the Franklin Institute in Philadelphia, Pennsylvania, and before the National Electric Light Association, Tesla told onlookers that he was sure a system like his could eventually conduct "intelligible signals or perhaps even power to any distance without the use of wires" by conducting it through the Earth.
On 30 July 1891, aged 35, Tesla became a naturalized citizen of the United States., Naturalization Index, NYC Courts, referenced in Carlson (2013), Tesla: Inventor of the Electrical Age, p. H-41 In the same year, he patented his Tesla coil.
He served as a vice-president of the American Institute of Electrical Engineers from 1892 to 1894, the forerunner of the modern-day Institute of Electrical and Electronics Engineers (IEEE) (along with the Institute of Radio Engineers).
Polyphase system and the Columbian Exposition
By the beginning of 1893, Westinghouse engineer Charles F. Scott and then Benjamin G. Lamme had made progress on an efficient version of Tesla's induction motor. Lamme found a way to make the polyphase system it would need compatible with older single-phase AC and DC systems by developing a rotary converter. Westinghouse Electric now had a way to provide electricity to all potential customers and started branding their polyphase AC system as the "Tesla Polyphase System". They believed that Tesla's patents gave them patent priority over other polyphase AC systems.
Westinghouse Electric asked Tesla to participate in the 1893 World's Columbian Exposition in Chicago where the company had a large space in the "Electricity Building" devoted to electrical exhibits. Westinghouse Electric won the bid to light the Exposition with alternating current and it was a key event in the history of AC power, as the company demonstrated to the American public the safety, reliability, and efficiency of an alternating current system that was polyphase and could also supply the other AC and DC exhibits at the fair.
A special exhibit space was set up to display various forms and models of Tesla's induction motor. The rotating magnetic field that drove them was explained through a series of demonstrations including an Egg of Columbus that used the two-phase coil found in an induction motor to spin a copper egg making it stand on end.Hugo Gernsback, "Tesla's Egg of Columbus, How Tesla Performed the Feat of Columbus Without Cracking the Egg" Electrical Experimenter, 19 March 1919, p. 774
Tesla visited the fair for a week during its six-month run to attend the International Electrical Congress and put on a series of demonstrations at the Westinghouse exhibit.Thomas Commerford Martin, The Inventions, Researches and Writings of Nikola Tesla: With Special Reference to His Work in Polyphase Currents and High Potential Lighting, Electrical Engineer – 1894, Chapter XLII, page 485 A specially darkened room had been set up where Tesla showed his wireless lighting system, using demonstrations he had previously performed throughout America and Europe; these included using high-voltage, high-frequency alternating current to light wireless gas-discharge lamps.
Steam-powered oscillating generator
During his presentation at the International Electrical Congress in the Columbian Exposition Agriculture Hall, Tesla introduced his steam-powered reciprocating electricity generator that he patented that year, something he thought was a better way to generate alternating current. Steam was forced into the oscillator and rushed out through a series of ports, pushing a piston up and down that was attached to an armature. The magnetic armature vibrated up and down at high speed, producing an alternating magnetic field. This induced an alternating electric current in the adjacent wire coils. It did away with the complicated parts of a steam engine/generator, but never caught on as a feasible engineering solution to generate electricity.Reciprocating Engine, 6 February 1894.
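The induction step rests on Faraday's law: a magnetic flux varying sinusoidally at frequency f induces a peak voltage of N·2πf·Φ_peak in a coil of N turns. The numbers in the sketch below are assumptions chosen only to illustrate the relationship; they are not specifications of Tesla's oscillator.

```python
import math

def peak_emf_volts(turns: int, flux_peak_wb: float, freq_hz: float) -> float:
    """Peak EMF induced by a sinusoidal flux: e_peak = N * 2 * pi * f * phi_peak."""
    return turns * 2.0 * math.pi * freq_hz * flux_peak_wb

# Assumed example: 200 turns, 2 mWb peak flux, armature vibrating at 80 Hz.
print(peak_emf_volts(200, 2e-3, 80))  # ≈ 201 V peak
```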
Consulting on Niagara
In 1893, Edward Dean Adams, who headed the Niagara Falls Cataract Construction Company, sought Tesla's opinion on what system would be best to transmit power generated at the falls. Over several years, there had been a series of proposals and open competitions on how best to do it. Among the systems proposed by several US and European companies were two-phase and three-phase AC, high-voltage DC, and compressed air. Adams asked Tesla for information about the current state of all the competing systems. Tesla advised Adams that a two-phase system would be the most reliable and that there was a Westinghouse system to light incandescent bulbs using two-phase alternating current. The company awarded a contract to Westinghouse Electric for building a two-phase AC generating system at Niagara Falls, based on Tesla's advice and Westinghouse's demonstration at the Columbian Exposition. At the same time, a further contract was awarded to General Electric to build the AC distribution system.
The Nikola Tesla Company
In 1895, Edward Dean Adams, impressed with what he saw when he toured Tesla's lab, agreed to help found the Nikola Tesla Company, set up to fund, develop, and market a variety of previous Tesla patents and inventions as well as new ones. Alfred Brown signed on, bringing along patents developed under Peck and Brown. The board was filled out with William Birch Rankine and Charles F. Coaney.
On 13 March 1895, the South Fifth Avenue building that housed Tesla's lab caught fire. It started in the basement of the building and was so intense that Tesla's fourth-floor lab burned and collapsed into the second floor. The fire set back Tesla's ongoing projects, and destroyed a collection of early notes and research material, models, and demonstration pieces, including many that had been exhibited at the 1893 World's Columbian Exposition. Tesla told The New York Times "I am in too much grief to talk. What can I say?".Mr. Tesla's Great Loss, All of the Electrician's Valuable Instruments Burned, Work of Half a Lifetime Gone, New York Times, 14 March 1895 (archived at teslauniverse.com )
X-ray experimentation
Starting in 1894, Tesla began investigating what he referred to as radiant energy of "invisible" kinds after he had noticed damaged film in his laboratory in previous experiments (later identified as "Roentgen rays" or "X-rays"). His early experiments were with Crookes tubes, a cold cathode electrical discharge tube. Tesla may have inadvertently captured an X-ray image—predating, by a few weeks, Wilhelm Röntgen's December 1895 announcement of the discovery of X-rays—when he tried to photograph Mark Twain illuminated by a Geissler tube, an earlier type of gas discharge tube. The only thing captured in the image was the metal locking screw on the camera lens.
In March 1896, Tesla conducted experiments in X-ray imaging, developing a high-energy single-terminal vacuum tube that had no target electrode and that worked from the output of the Tesla coil (the modern term for the phenomenon produced by this device is bremsstrahlung or braking radiation). In his research, Tesla devised several experimental setups to produce X-rays. Tesla held that, with his circuits, the "instrument will ... enable one to generate Roentgen rays of much greater power than obtainable with ordinary apparatus".
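In modern terms, the hardness of such bremsstrahlung X-rays is tied to the accelerating voltage by the Duane–Hunt limit: the shortest emitted wavelength is hc/(eV), so higher tube voltages yield more penetrating rays. The sketch below is a generic illustration of that limit; the 100 kV potential is an assumed example, not a value reported for Tesla's apparatus.

```python
PLANCK_H = 6.626e-34     # Planck constant, J*s
LIGHT_C = 2.998e8        # speed of light, m/s
ELECTRON_E = 1.602e-19   # elementary charge, C

def min_xray_wavelength_m(tube_voltage_volts: float) -> float:
    """Duane-Hunt limit: shortest bremsstrahlung wavelength = h*c / (e*V)."""
    return (PLANCK_H * LIGHT_C) / (ELECTRON_E * tube_voltage_volts)

# Assumed accelerating potential of 100 kV:
print(min_xray_wavelength_m(100e3))  # ≈ 1.24e-11 m, about 0.012 nm
```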
Tesla noted the hazards of working with his circuit and single-node X-ray-producing devices. In his many notes on the early investigation of this phenomenon, he attributed the skin damage to various causes. He believed early on that damage to the skin was not caused by the Roentgen rays, but by the ozone generated in contact with the skin, and to a lesser extent, by nitrous acid. Tesla incorrectly believed that X-rays were longitudinal waves, such as the longitudinal waves that propagate in plasmas. These plasma waves can occur in force-free magnetic fields.Griffiths, David J. Introduction to Electrodynamics, and Jackson, John D. Classical Electrodynamics.
Radio remote control
In 1898, Tesla demonstrated a boat that used a coherer-based radio control—which he dubbed "telautomaton"—to the public during an electrical exhibition at Madison Square Garden. Tesla tried to sell his idea to the US military as a type of radio-controlled torpedo, but they showed little interest. Tesla took the opportunity to further demonstrate "Teleautomatics" in an address to a meeting of the Commercial Club in Chicago, while he was traveling to Colorado Springs, on 13 May 1899.
Wireless power
From the 1890s through 1906, Tesla spent a great deal of his time and fortune on a series of projects trying to develop the transmission of electrical power without wires. At the time, there was no feasible way to wirelessly transmit communication signals over long distances, let alone large amounts of power. Tesla had studied radio waves early on, and came to the conclusion that part of the existing study on them, by Hertz, was incorrect. Tesla noted that, even if theories on radio waves were true, they were worthless for his intended purposes, since this form of "invisible light" would diminish over a distance just like any other radiation and would travel in straight lines out into space, becoming "hopelessly lost". He worked on the idea that he might be able to conduct electricity long distance through the Earth or the atmosphere, and began working on experiments to test this idea including setting up a large resonance transformer magnifying transmitter in his East Houston Street lab."Tesla on Electricity Without Wires," Electrical Engineer – N.Y., 8 January 1896, p. 52. (Refers to letter by Tesla in the New York Herald, 31 December 1895.)Mining & Scientific Press, "Electrical Progress" Nikola Tesla Is Credited With Statement", 11 April 1896
Colorado Springs
To further study the conductive nature of low-pressure air, Tesla set up an experimental station at high altitude in Colorado Springs during 1899.Nikola Tesla on his Work with Alternating Currents and their Application to Wireless Telegraphy, Telephony, and Transmission of Power, Leland I. Anderson, 21st Century Books, 2002, p. 109. There he could safely operate much larger coils than in his New York lab, and the El Paso Electric Light Company supplied alternating current free of charge. To fund his experiments, he convinced John Jacob Astor IV to invest $100,000 to become a majority shareholder in the Nikola Tesla Company. Upon his arrival, he told reporters that he planned to conduct wireless telegraphy experiments, transmitting signals from Pikes Peak to Paris.
There, he experimented with a large coil operating in the megavolts range, producing artificial lightning (and thunder) consisting of millions of volts and discharges of up to in length,Gillispie, Charles Coulston, "Dictionary of Scientific Biography;" Tesla, Nikola. Charles Scribner's Sons, New York. and, at one point, inadvertently burned out the El Paso Electric Company's generator, causing a power outage. The observations he made of the electronic noise of lightning strikes led him to (incorrectly) conclude that he could use the entire globe of the Earth to conduct electrical energy.
During his time at his laboratory, Tesla observed unusual signals from his receiver which he speculated to be communications from another planet. He mentioned them in a letter to a reporter in December 1899Daniel Blair Stewart (1999). Tesla: The Modern Sorcerer, Frog Book. p. 372 and to the Red Cross Society in December 1900. Reporters treated it as a sensational story and jumped to the conclusion Tesla was hearing signals from Mars. He expanded on the signals he heard in a 9 February 1901 Collier's Weekly article entitled "Talking With Planets", where he said it had not been immediately apparent to him that he was hearing "intelligently controlled signals" and that the signals could have come from Mars, Venus, or other planets.
Tesla had an agreement with the editor of The Century Magazine to produce an article on his findings. The magazine sent a photographer to Colorado to photograph the work being done there. The article, titled "The Problem of Increasing Human Energy", appeared in the June 1900 edition of the magazine. He explained the superiority of the wireless system he envisioned but the article was more of a lengthy philosophical treatise than an understandable scientific description of his work.
Wardenclyffe
Tesla made the rounds in New York trying to find investors for what he thought would be a viable system of wireless transmission, wining and dining them at the Waldorf-Astoria's Palm Garden (the hotel where he was living at the time), The Players Club, and Delmonico's. In March 1901, he obtained $150,000 from J. P. Morgan in return for a 51% share of any generated wireless patents, and began planning the Wardenclyffe Tower facility to be built in Shoreham, New York, east of the city on the North Shore of Long Island.
By July 1901, Tesla had expanded his plans to build a more powerful transmitter to leap ahead of Marconi's radio-based system, which Tesla thought was a copy of his own. In December 1901, Marconi transmitted the letter S from England to Newfoundland, defeating Tesla in the race to be first to complete such a transmission. In June 1902, Tesla moved his lab operations from Houston Street to Wardenclyffe.
Investors on Wall Street put money into Marconi's system, and some in the press began turning against Tesla's project, claiming it was a hoax.Malanowski, Gregory, The Race for Wireless, AuthorHouse, p. 35 The project came to a halt in 1905, perhaps contributing to what biographer Marc J. Seifer suspects was a nervous breakdown on Tesla's part in 1906. Tesla mortgaged the Wardenclyffe property to cover his debts at the Waldorf-Astoria, which eventually amounted to $20,000.
Later years
After Wardenclyffe closed, Tesla continued to write to Morgan; after "the great man" died, Tesla wrote to Morgan's son Jack, trying to get further funding for the project. In 1906, Tesla opened offices at 165 Broadway in Manhattan, trying to raise further funds by developing and marketing his patents. He went on to have offices at the Metropolitan Life Tower from 1910 to 1914; rented for a few months at the Woolworth Building, moving out because he could not afford the rent; and then to office space at 8 West 40th Street from 1915 to 1925. After moving to 8 West 40th Street, he was effectively bankrupt. Most of his patents had run out and he was having trouble with the new inventions he was trying to develop.
Bladeless turbine
On his 50th birthday, in 1906, Tesla demonstrated a 16,000 rpm bladeless turbine. During 1910–1911, at the Waterside Power Station in New York, several of his bladeless turbine engines were tested at 100–5,000 hp. Tesla worked with several companies on the turbine, including Allis-Chalmers in Milwaukee from 1919 to 1922. Tesla licensed the idea to a precision instrument company, and it found use in the form of luxury car speedometers and other instruments.
Wireless lawsuits
When World War I broke out, the British cut the transatlantic telegraph cable linking the US to Germany in order to control the flow of information between the two countries. They also tried to shut off German wireless communication to and from the US by having the US Marconi Company sue the German radio company Telefunken for patent infringement. Telefunken brought in the physicists Jonathan Zenneck and Karl Ferdinand Braun for their defense, and hired Tesla as a witness for two years for $1,000 a month. The case stalled and then went moot when the US entered the war against Germany in 1917.
In 1915, Tesla attempted to sue the Marconi Company for infringement of his wireless tuning patents. Marconi's initial radio patent had been awarded in the US in 1897, but his 1900 patent submission covering improvements to radio transmission had been rejected several times on the grounds that it infringed on other existing patents, including two 1897 Tesla wireless power tuning patents, before it was finally approved in 1904.Howard B. Rockman, Intellectual Property Law for Engineers and Scientists, John Wiley & Sons – 2004, p. 198. Tesla's 1915 case went nowhere, but in a related case, where the Marconi Company tried to sue the US government over WWI patent infringements, a 1943 decision of the Supreme Court of the United States restored the prior patents of Oliver Lodge, John Stone, and Tesla. The court declared that its decision had no bearing on Marconi's claim as the first to achieve radio transmission, just that since Marconi's claims to certain patented improvements were questionable, the company could not claim infringement on those same patents.
Other ideas
(Image caption: Second banquet meeting of the Institute of Radio Engineers, 23 April 1915; Tesla is seen standing in the center.)
Tesla attempted to market several devices based on the production of ozone. These included his 1900 Tesla Ozone Company selling an 1896 patented device based on his Tesla coil, used to bubble ozone through different types of oils to make a therapeutic gel.Anand Kumar Sethi (2016). The European Edisons: Volta, Tesla, and Tigerstedt, Springer. pp. 53–54 He tried to develop a variation of this a few years later as a room sanitizer for hospitals.
He theorized that the application of electricity to the brain enhanced intelligence. In 1912, he crafted "a plan to make dull students bright by saturating them unconsciously with electricity," wiring the walls of a schoolroom and, "saturating [the schoolroom] with infinitesimal electric waves vibrating at high frequency. The whole room will thus, Mr. Tesla claims, be converted into a health-giving and stimulating electromagnetic field or 'bath'." The plan was, at least provisionally, approved by the then superintendent of New York City schools, William H. Maxwell.
In the August 1917 edition of the magazine Electrical Experimenter, Tesla postulated that electricity could be used to locate submarines by using the reflection of an "electric ray" of "tremendous frequency," with the signal being viewed on a fluorescent screen (a system that has been noted to have a superficial resemblance to modern radar).Margaret Cheney, Robert Uth, Jim Glenn, Tesla, Master of Lightning, pp. 128–129 Tesla was incorrect in his assumption that high-frequency radio waves would penetrate water. Émile Girardeau, who helped develop France's first radar system in the 1930s, noted in 1953 that Tesla's general speculation that a very strong high-frequency signal would be needed was correct. Girardeau said, "(Tesla) was prophesying or dreaming, since he had at his disposal no means of carrying them out, but one must add that if he was dreaming, at least he was dreaming correctly".
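Why that assumption fails can be seen from the skin depth of a conducting medium, δ = √(2/(ωμσ)): seawater conducts well enough that high-frequency waves are absorbed within centimetres. The sketch below uses the good-conductor approximation with textbook-level assumed values, purely as an illustration.

```python
import math

MU_0 = 4 * math.pi * 1e-7   # permeability of free space, H/m

def skin_depth_m(freq_hz: float, conductivity_s_per_m: float) -> float:
    """Good-conductor skin depth: delta = sqrt(2 / (omega * mu * sigma))."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2.0 / (omega * MU_0 * conductivity_s_per_m))

# Seawater conductivity is assumed to be ~4 S/m; at 100 MHz a radio wave
# falls to 1/e of its amplitude within a few centimetres of the surface.
print(skin_depth_m(100e6, 4.0))  # ≈ 0.025 m
```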
In 1928, Tesla received a patent for a biplane design capable of vertical take-off and landing (VTOL), which "gradually tilted through manipulation of the elevator devices" in flight until it was flying like a conventional plane. This impractical design was something Tesla thought would sell for less than $1,000.
Living circumstances
Tesla lived at the Waldorf Astoria Hotel in New York City from 1900 and ran up a large bill. He moved to the St. Regis Hotel in 1922 and followed a pattern from then on of moving to a different hotel every few years and leaving unpaid bills behind.
Tesla walked to the park every day to feed the pigeons. He began feeding them at the window of his hotel room and nursed injured birds back to health. He said that he had been visited by a certain injured white pigeon daily. He spent over $2,000 to care for the bird, including a device he built to support her comfortably while her broken wing and leg healed. Tesla's unpaid bills, as well as complaints about the mess made by pigeons, led to his eviction from the St. Regis in 1923. He was forced to leave the Hotel Pennsylvania in 1930 and the Hotel Governor Clinton in 1934. At one point he took rooms at the Hotel Marguery.
Tesla moved to the Hotel New Yorker in 1934. At this time Westinghouse Electric & Manufacturing Company began paying him $125 per month in addition to paying his rent. Accounts of how this came about vary. Several sources claim that Westinghouse was concerned, or possibly warned, about potential bad publicity arising from the impoverished conditions in which their former star inventor was living. The payment has been described as being couched as a "consulting fee" to get around Tesla's aversion to accepting charity. Tesla biographer Marc Seifer described the Westinghouse payments as a type of "unspecified settlement".
Birthday press conferences
In 1931, a young journalist whom Tesla befriended, Kenneth M. Swezey, organized a celebration for the inventor's 75th birthday. Tesla received congratulations from figures in science and engineering such as Albert Einstein, and he was also featured on the cover of Time magazine. The cover caption "All the world's his power house" noted his contribution to electrical power generation. The party went so well that Tesla made it an annual event, an occasion where he would put out a large spread of food and drink—featuring dishes of his own creation. He invited the press in order to see his inventions and hear stories about his past exploits, views on current events, and sometimes baffling claims.
At the 1932 party, Tesla claimed he had invented a motor that would run on cosmic rays.
In 1933, at age 77, Tesla told reporters at the event that, after 35 years of work, he was on the verge of producing proof of a new form of energy. He claimed it was a theory of energy that was "violently opposed" to Einsteinian physics and could be tapped with an apparatus that would be cheap to run and last 500 years. He also told reporters he was working on a way to transmit individualized private radio wavelengths, working on breakthroughs in metallurgy, and developing a way to photograph the retina to record thought.Tesla Predicts New Source of Power in Year, New York Herald Tribune, 9 July 1933
At the 1934 occasion, Tesla told reporters he had designed a superweapon he claimed would end all war. He called it "teleforce", but it was usually referred to as his death ray.Cheney, Margaret & Uth, Robert (2001). Tesla: Master of Lightning. Barnes & Noble Books. p. 158 In 1940, the New York Times gave a range for the ray, with an expected development cost of US$2 million. Tesla described it as a defensive weapon that would be put up along the border of a country and be used against attacking ground-based infantry or aircraft. Tesla never revealed detailed plans of how the weapon worked during his lifetime but, in 1984, they surfaced at the Nikola Tesla Museum archive in Belgrade. The treatise, The New Art of Projecting Concentrated Non-dispersive Energy through the Natural Media, described an open-ended vacuum tube with a gas jet seal that allows particles to exit, a method of charging slugs of tungsten or mercury to millions of volts, and directing them in streams (through electrostatic repulsion). Tesla tried to attract the interest of the US War Department,"Aerial Defense 'Death-Beam' Offered to U.S. By Tesla" 12 July 1940 the United Kingdom, the Soviet Union, and Yugoslavia in the device.
In 1935, at his 79th birthday party, Tesla covered many topics. He claimed to have discovered the cosmic ray in 1896 and invented a way to produce direct current by induction, and made many claims about his mechanical oscillator.Earl Sparling, Nikola Tesla, at 79, Uses Earth to Transmit Signals: Expects to Have $100,000,000 within Two Years, New York World-Telegram, 11 July 1935 Describing the device (which he expected would earn him $100 million within two years) he told reporters that a version of his oscillator had caused an earthquake in his 46 East Houston Street lab and neighboring streets in Lower Manhattan in 1898. He went on to tell reporters his oscillator could destroy the Empire State Building with of air pressure. He also proposed using his oscillators to transmit vibrations into the ground. He claimed it would work over any distance and could be used for communication or locating underground mineral deposits, a technique he called "telegeodynamics".
In 1937, at his event in the Grand Ballroom of the Hotel New Yorker, Tesla received the Order of the White Lion from the Czechoslovak ambassador and a medal from the Yugoslav ambassador. On questions concerning the death ray, Tesla stated: "But it is not an experiment ... I have built, demonstrated and used it. Only a little time will pass before I can give it to the world."
Awards
Tesla won numerous medals and awards. They include:
Elliott Cresson Medal (Franklin Institute, US, 1894)
Grand Cross of the Order of Prince Danilo I (Montenegro, 1895)
Member of the American Philosophical Society (US, 1896)
AIEE Edison Medal (Institute of Electrical and Electronics Engineers, US, 1916)
Grand Cross of the Order of St. Sava (Yugoslavia, 1926)
John Scott Medal (Franklin Institute & Philadelphia City Council, US, 1934)
Order of the White Eagle (Yugoslavia, 1936)
Grand Cross of the Order of the White Lion (Czechoslovakia, 1937)
Death
In the fall of 1937 at the age of 81, after midnight one night, Tesla left the Hotel New Yorker to make his regular commute to St. Patrick's Cathedral and the Public Library to feed the pigeons. While crossing a street a couple of blocks from the hotel, Tesla was struck by a moving taxicab and was thrown to the ground. His back was severely wrenched and three of his ribs were broken in the accident. The full extent of his injuries was never known; Tesla refused to consult a doctor, an almost lifelong custom, and never fully recovered. On the night of 7 January 1943, at the age of 86, Tesla died alone in his hotel room. His body was found by a maid the next day when she entered his room, ignoring the "do not disturb" sign that had been placed on his door three days earlier. An assistant medical examiner examined the body, estimated the time of death as 10:30 p.m. and ruled that the cause of death had been coronary thrombosis.
Two days later the Federal Bureau of Investigation ordered the Alien Property Custodian to seize Tesla's belongings. John G. Trump, a professor at M.I.T. and a well-known electrical engineer serving as a technical aide to the National Defense Research Committee, was called in to analyze the Tesla items. After a three-day investigation, Trump's report concluded that there was nothing which would constitute a hazard in unfriendly hands. In a box purported to contain a part of Tesla's "death ray", Trump found a 45-year-old multidecade resistance box. On 10 January 1943, New York City mayor Fiorello La Guardia read a eulogy for Tesla at his funeral at the Cathedral of St. John the Divine.
Personal life and character
Tesla was a lifelong bachelor, who had once explained that his chastity was very helpful to his scientific abilities. In an interview with the Galveston Daily News on 10 August 1924 he stated, "Now the soft-voiced gentlewoman of my reverent worship has all but vanished. In her place has come the woman who thinks that her chief success in life lies in making herself as much as possible like man—in dress, voice and actions..." He told a reporter in later years that he sometimes felt that by not marrying, he had made too great a sacrifice to his work.
Tesla was a good friend of Francis Marion Crawford, Robert Underwood Johnson, Stanford White, Fritz Lowenstein, George Scherff, and Kenneth Swezey. In middle age, Tesla became a close friend of Mark Twain; they spent a lot of time together in his lab and elsewhere. Twain notably described Tesla's induction motor invention as "the most valuable patent since the telephone". At a party thrown by actress Sarah Bernhardt in 1896, Tesla met Indian Hindu monk Swami Vivekananda. Vivekananda later wrote that Tesla said he could demonstrate mathematically the relationship between matter and energy, something Vivekananda hoped would give a scientific foundation to Vedantic cosmology.Kak, S. (2017) Tesla, wireless energy transmission and Vivekananda. Current Science, vol. 113, 2207–2210. The meeting with Swami Vivekananda stimulated Tesla's interest in Eastern Science, which led to Tesla studying Hindu and Vedic philosophy for a number of years. Tesla later wrote an article titled "Man's Greatest Achievement" using Sanskrit terms akasha and prana to describe the relationship between matter and energy. In the late 1920s, Tesla befriended George Sylvester Viereck, a poet, writer, mystic, and later a Nazi propagandist. Tesla occasionally attended dinner parties held by Viereck and his wife.
Tesla could be harsh at times and openly expressed disgust for overweight people, such as when he fired a secretary because of her weight. He was quick to criticize clothing; on several occasions, Tesla directed a subordinate to go home and change her dress. When Thomas Edison died in 1931, Tesla contributed the only negative opinion to The New York Times. He became a vegetarian in his later years, living on only milk, bread, honey, and vegetable juices.
Views and beliefs
On experimental and theoretical physics
Tesla disagreed with the theory that atoms were composed of smaller subatomic particles, stating there was no such thing as an electron creating an electric charge. He believed that if electrons existed at all, they were some fourth state of matter or "sub-atom" that could exist only in an experimental vacuum, and that they had nothing to do with electricity. Tesla believed that atoms are immutable—they could not change state or be split in any way. He was a believer in the 19th-century concept of an all-pervasive ether that transmitted electrical energy.
Tesla opposed the equivalence of matter and energy. He was critical of Einstein's theory of relativity, saying "I hold that space cannot be curved, for the simple reason that it can have no properties. It might as well be said that God has properties."New York Herald Tribune, 11 September 1932 In 1935 he described relativity as "a beggar wrapped in purple whom ignorant people take for a king" and said his own experiments had measured the speed of cosmic rays from Antares as fifty times the speed of light. Tesla claimed to have developed his own physical principle regarding matter and energy that he started working on in 1892, and in 1937, at age 81, claimed in a letter to have completed a "dynamic theory of gravity" that "[would] put an end to idle speculations and false conceptions, as that of curved space". He stated that the theory was "worked out in all details" and that he hoped to soon give it to the world.Prepared Statement by Nikola Tesla downloadable from http://www.tesla.hu Further elucidation of his theory was never found in his writings.
On society
Tesla is widely considered by his biographers to have been a humanist in philosophical outlook. He expressed the belief that human "pity" had come to interfere with the natural "ruthless workings of nature". Though his argumentation did not depend on a concept of a "master race" or the inherent superiority of one person over another, he advocated for eugenics. In 1926, Tesla commented on the ills of the social subservience of women and the struggle of women for gender equality. He indicated that humanity's future would be run by "Queen Bees". He believed that women would become the dominant sex in the future.Kennedy, John B., "When woman is boss , An interview with Nikola Tesla." Colliers, 30 January 1926. He made predictions about the relevant issues of a post-World War I environment in an article entitled "Science and Discovery are the great Forces which will lead to the Consummation of the War" (20 December 1914).
On religion
Tesla was raised in the faith of the Eastern Orthodox Church. Later in life he did not consider himself to be a "believer in the orthodox sense", said he opposed religious fanaticism, and said "Buddhism and Christianity are the greatest religions both in number of disciples and in importance." He also said "To me, the universe is simply a great machine which never came into being and never will end" and "what we call 'soul' or 'spirit,' is nothing more than the sum of the functionings of the body. When this functioning ceases, the 'soul' or the 'spirit' ceases likewise."
Literary works
Tesla wrote a number of books and articles for magazines and journals. Among his books are My Inventions: The Autobiography of Nikola Tesla, compiled and edited by Ben Johnston in 1983 from a series of 1919 magazine articles by Tesla which were republished in 1977; The Fantastic Inventions of Nikola Tesla (1993), compiled and edited by David Hatcher Childress; and The Tesla Papers. Many of his writings are freely available online, including the article "The Problem of Increasing Human Energy", published in The Century Magazine in 1900, and the article "Experiments with Alternate Currents of High Potential and High Frequency", published in his book Inventions, Researches and Writings of Nikola Tesla.
Legacy
In 1952, following pressure from Tesla's nephew, Yugoslav politician Sava Kosanović, Tesla's entire estate was shipped to Belgrade in 80 trunks marked "N.T.". In 1957, Kosanović's secretary Charlotte Muzar transported Tesla's ashes from the United States to Belgrade. They are displayed in a gold-plated sphere on a marble pedestal in the Nikola Tesla Museum. His archive consists of over 160,000 documents and is included in the UNESCO Memory of the World Programme.
Tesla obtained around 300 patents worldwide for his inventions. Some of Tesla's patents are not accounted for, and some that have lain hidden in patent archives have been rediscovered. There are at least 278 known patents issued to Tesla in 26 countries. Many were in the United States, Britain, and Canada, but many others were approved in countries around the globe.
See also
Notes
Footnotes
Citations
References
(see also Prodigal Genius: The Life of Nikola Tesla; also ; reprinted 2007 by Book Tree, )
Further reading
Books
Tesla, Nikola, My Inventions, Parts I through V published in the Electrical Experimenter monthly magazine from February through June 1919. Part VI published October 1919. Reprint edition with introductory notes by Ben Johnston, New York: Barnes and Noble, 1982; also online at Lucid Cafe, et cetera as My Inventions: The Autobiography of Nikola Tesla, 1919.
Glenn, Jim (1994). The Complete Patents of Nikola Tesla.
Lomas, Robert (1999). The Man Who Invented the Twentieth Century: Nikola Tesla, forgotten genius of electricity. London: Headline.
Martin, Thomas C. (editor) (1894, 1996 reprint, copyright expired), The Inventions, Researches, and Writings of Nikola Tesla, includes some lectures, Montana: Kessinger.
Trinkaus, George (2002). Tesla: The Lost Inventions, High Voltage Press.
Valone, Thomas (2002). Harnessing the Wheelwork of Nature: Tesla's Science of Energy.
Publications
A New System of Alternating Current Motors and Transformers, American Institute of Electrical Engineers, May 1888.
Selected Tesla Writings, Scientific papers and articles written by Tesla and others, spanning the years 1888–1940.
Light Without Heat , The Manufacturer and Builder, January 1892, Vol. 24
Biography: Nikola Tesla , The Century Magazine, November 1893, Vol. 47
Tesla's Oscillator and Other Inventions , The Century Magazine, November 1894, Vol. 49
The New Telegraphy. Recent Experiments in Telegraphy with Sparks , The Century Magazine, November 1897, Vol. 55
Journals
Carlson, W. Bernard, "Inventor of dreams". Scientific American, March 2005 Vol. 292 Issue 3 p. 78(7).
Lawren, B., "Rediscovering Tesla". Omni, March 1988, Vol. 10 Issue 6.
Thibault, Ghislain, "The Automatization of Nikola Tesla: Thinking Invention in the Late Nineteenth Century". Configurations , Volume 21, Number 1, Winter 2013, pp. 27–52.
Martin, Thomas Commerford, "The Inventions, Researches, and Writings of Nikola Tesla", New York: The Electrical Engineer, 1894 (3rd Ed.); reprinted by Barnes & Noble, 1995
Anil K. Rajvanshi, "Nikola Tesla – The Creator of Electric Age", Resonance, March 2007.
Roguin, Ariel, "Historical Note: Nikola Tesla: The man behind the magnetic field unit". J. Magn. Reson. Imaging 2004;19:369–374. 2004 Wiley-Liss, Inc.
Sellon, J. L., "The impact of Nikola Tesla on the cement industry". Behrent Eng. Co., Wheat Ridge, Colorado. Cement Industry Technical Conference. 1997. XXXIX Conference Record., 1997 IEEE/PC. Page(s) 125–133.
Valentinuzzi, M.E., "Nikola Tesla: why was he so much resisted and forgotten?" Inst. de Bioingenieria, Univ. Nacional de Tucuman; Engineering in Medicine and Biology Magazine, IEEE. July/August 1998, 17:4, pp. 74–75.
Secor, H. Winfield, "Tesla's views on Electricity and the War", Electrical Experimenter, Volume 5, Number 4 August 1917.
Florey, Glen, "Tesla and the Military". Engineering 24, 5 December 2000.
Corum, K. L., J. F. Corum, Nikola Tesla, Lightning Observations, and Stationary Waves. 1994.
Corum, K. L., J. F. Corum, and A. H. Aidinejad, Atmospheric Fields, Tesla's Receivers and Regenerative Detectors. 1994.
Meyl, Konstantin, H. Weidner, E. Zentgraf, T. Senkel, T. Junker, and P. Winkels, Experiments to proof the evidence of scalar waves Tests with a Tesla reproduction. Institut für Gravitationsforschung (IGF), Am Heerbach 5, D-63857 Waldaschaff.
Anderson, L. I., "John Stone Stone on Nikola Tesla's Priority in Radio and Continuous Wave Radiofrequency Apparatus". The AWA Review, Vol. 1, 1986, pp. 18–41.
Anderson, L. I., "Priority in Invention of Radio, Tesla v. Marconi". Antique Wireless Association monograph, March 1980.
Marincic, A., and D. Budimir, "Tesla's contribution to radiowave propagation". Dept. of Electron. Eng., Belgrade Univ. (5th International Conference on Telecommunications in Modern Satellite, Cable and Broadcasting Service, 2001. TELSIKS 2001. pp. 327–331 vol.1)
External links
Tesla memorial society by his grand-nephew William H. Terbo
Tesla Science Center at Wardenclyffe
"Tesla's pigeon" – Amanda Gefter
Category:1856 births
Category:1943 deaths
Category:19th-century American engineers
Category:19th-century Serbian engineers
Category:20th-century American engineers
Category:20th-century Serbian engineers
Category:American electrical engineers
Category:American eugenicists
Category:American futurologists
Category:American humanists
Category:American inventors
Category:American mechanical engineers
Category:Deaths from coronary thrombosis
Category:Emigrants from Austria-Hungary to the United States
Category:Engineers from Austria-Hungary
Category:Fellows of the American Association for the Advancement of Science
Category:Fellows of the IEEE
Category:Grand Crosses of the Order of St. Sava
Category:Grand Crosses of the Order of the White Lion
Category:Graz University of Technology alumni
Category:Great Officers of the Order of St. Sava
Category:IEEE Edison Medal recipients
Category:Inventors from Austria-Hungary
Category:Members of the American Philosophical Society
Category:Members of The Lambs Club
Category:Members of the Serbian Academy of Sciences and Arts
Category:Mental calculators
Category:Naturalized citizens of the United States
Category:People associated with electricity
Category:People from Colorado Springs, Colorado
Category:People from Gospić
Category:People from Karlovac
Category:People from Manhattan
Category:People from the Military Frontier
Category:Radio pioneers
Category:Radiophysicists
Category:Recipients of the Order of the Yugoslav Crown
Category:Serbian inventors
Category:Serbs in Austria-Hungary
Category:Serbs in the Habsburg monarchy
Category:Serbs of Croatia
Category:Wireless energy transfer
Neural network (machine learning)
https://en.wikipedia.org/wiki/Neural_network_(machine_learning)
In machine learning, a neural network or neural net (NN), also called artificial neural network (ANN), is a computational model inspired by the structure and functions of biological neural networks.
A neural network consists of connected units or nodes called artificial neurons, which loosely model the neurons in the brain. Artificial neuron models that mimic biological neurons more closely have also been recently investigated and shown to significantly improve performance. These are connected by edges, which model the synapses in the brain. Each artificial neuron receives signals from connected neurons, then processes them and sends a signal to other connected neurons. The "signal" is a real number, and the output of each neuron is computed by some non-linear function of the totality of its inputs, called the activation function. The strength of the signal at each connection is determined by a weight, which adjusts during the learning process.
Typically, neurons are aggregated into layers. Different layers may perform different transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly passing through multiple intermediate layers (hidden layers). A network is typically called a deep neural network if it has at least two hidden layers.
Artificial neural networks are used for various tasks, including predictive modeling, adaptive control, and solving problems in artificial intelligence. They can learn from experience, and can derive conclusions from a complex and seemingly unrelated set of information.
Training
Neural networks are typically trained through empirical risk minimization, which is based on the idea of optimizing the network's parameters to minimize the difference, or empirical risk, between the predicted output and the actual target values in a given dataset. Gradient-based methods such as backpropagation are usually used to estimate the parameters of the network. During the training phase, ANNs learn from labeled training data by iteratively updating their parameters to minimize a defined loss function. This method allows the network to generalize to unseen data.
History
Early work
Today's deep neural networks are based on early work in statistics over 200 years ago. The simplest kind of feedforward neural network (FNN) is a linear network, which consists of a single layer of output nodes with linear activation functions; the inputs are fed directly to the outputs via a series of weights. The sum of the products of the weights and the inputs is calculated at each node. The mean squared errors between these calculated outputs and the given target values are minimized by creating an adjustment to the weights. This technique has been known for over two centuries as the method of least squares or linear regression. It was used as a means of finding a good rough linear fit to a set of points by Legendre (1805) and Gauss (1795) for the prediction of planetary movement.Mansfield Merriman, "A List of Writings Relating to the Method of Least Squares"
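The following sketch illustrates this simplest case in modern terms: a single linear output node fitted by least squares. The toy data and the use of numpy's solver are illustrative assumptions, not part of the historical method.

import numpy as np

# Toy inputs with a bias column, and noisy targets of roughly y = 1 + 2x (illustrative values).
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.1, 2.9, 5.2, 6.8])

# Solve for the weights that minimize the mean squared error, i.e. linear regression.
weights, *_ = np.linalg.lstsq(X, y, rcond=None)
print(weights)      # approximately [1.09, 1.94]: intercept and slope
print(X @ weights)  # the linear network's outputs for the training inputs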
Historically, digital computers such as the von Neumann model operate via the execution of explicit instructions with access to memory by a number of processors. Some neural networks, on the other hand, originated from efforts to model information processing in biological systems through the framework of connectionism. Unlike the von Neumann model, connectionist computing does not separate memory and processing.
Warren McCulloch and Walter Pitts (1943) considered a non-learning computational model for neural networks. This model paved the way for research to split into two approaches. One approach focused on biological processes while the other focused on the application of neural networks to artificial intelligence.
In the late 1940s, D. O. Hebb proposed a learning hypothesis based on the mechanism of neural plasticity that became known as Hebbian learning. It was used in many early neural networks, such as Rosenblatt's perceptron and the Hopfield network. Farley and Clark (1954) used computational machines to simulate a Hebbian network. Other neural network computational machines were created by Rochester, Holland, Habit and Duda (1956).
In 1958, psychologist Frank Rosenblatt described the perceptron, one of the first implemented artificial neural networks,Haykin (2008) Neural Networks and Learning Machines, 3rd edition funded by the United States Office of Naval Research.
R. D. Joseph (1960) mentions an even earlier perceptron-like device by Farley and Clark: "Farley and Clark of MIT Lincoln Laboratory actually preceded Rosenblatt in the development of a perceptron-like device." However, "they dropped the subject."
The perceptron raised public excitement for research in Artificial Neural Networks, causing the US government to drastically increase funding. This contributed to "the Golden Age of AI" fueled by the optimistic claims made by computer scientists regarding the ability of perceptrons to emulate human intelligence.
The first perceptrons did not have adaptive hidden units. However, Joseph (1960) also discussed multilayer perceptrons with an adaptive hidden layer. Rosenblatt (1962) cited and adopted these ideas, also crediting work by H. D. Block and B. W. Knight. Unfortunately, these early efforts did not lead to a working learning algorithm for hidden units, i.e., deep learning.
Deep learning breakthroughs in the 1960s and 1970s
Fundamental research was conducted on ANNs in the 1960s and 1970s. The first working deep learning algorithm was the Group method of data handling, a method to train arbitrarily deep neural networks, published by Alexey Ivakhnenko and Lapa in the Soviet Union (1965). They regarded it as a form of polynomial regression, or a generalization of Rosenblatt's perceptron. A 1971 paper described a deep network with eight layers trained by this method, which is based on layer by layer training through regression analysis. Superfluous hidden units are pruned using a separate validation set. Since the activation functions of the nodes are Kolmogorov-Gabor polynomials, these were also the first deep networks with multiplicative units or "gates."
The first deep learning multilayer perceptron trained by stochastic gradient descent was published in 1967 by Shun'ichi Amari. In computer experiments conducted by Amari's student Saito, a five-layer MLP with two modifiable layers learned internal representations to classify non-linearly separable pattern classes. Subsequent developments in hardware and hyperparameter tunings have made end-to-end stochastic gradient descent the currently dominant training technique.
In 1969, Kunihiko Fukushima introduced the ReLU (rectified linear unit) activation function. The rectifier has become the most popular activation function for deep learning.
Nevertheless, research stagnated in the United States following the work of Minsky and Papert (1969), who emphasized that basic perceptrons were incapable of processing the exclusive-or circuit. This insight was irrelevant for the deep networks of Ivakhnenko (1965) and Amari (1967).
In 1976, transfer learning was introduced in neural network learning.Bozinovski S. and Fulgosi A. (1976). "The influence of pattern similarity and transfer learning on the base perceptron training" (original in Croatian) Proceedings of Symposium Informatica 3-121-5, Bled.Bozinovski S.(2020) "Reminder of the first paper on transfer learning in neural networks, 1976". Informatica 44: 291–302.
Deep learning architectures for convolutional neural networks (CNNs) with convolutional layers and downsampling layers and weight replication began with the neocognitron introduced by Kunihiko Fukushima in 1979, though not trained by backpropagation.
Backpropagation
Backpropagation is an efficient application of the chain rule derived by Gottfried Wilhelm Leibniz in 1673 to networks of differentiable nodes. The terminology "back-propagating errors" was actually introduced in 1962 by Rosenblatt, but he did not know how to implement this, although Henry J. Kelley had a continuous precursor of backpropagation in 1960 in the context of control theory. In 1970, Seppo Linnainmaa published the modern form of backpropagation in his Master's thesis (1970). G.M. Ostrovski et al. republished it in 1971.Ostrovski, G.M., Volin,Y.M., and Boris, W.W. (1971). On the computation of derivatives. Wiss. Z. Tech. Hochschule for Chemistry, 13:382–384. Paul Werbos applied backpropagation to neural networks in 1982 (his 1974 PhD thesis, reprinted in a 1994 book, did not yet describe the algorithm). In 1986, David E. Rumelhart et al. popularised backpropagation but did not cite the original work.
Convolutional neural networks
Kunihiko Fukushima's convolutional neural network (CNN) architecture of 1979 also introduced max pooling, a popular downsampling procedure for CNNs. CNNs have become an essential tool for computer vision.
The time delay neural network (TDNN) was introduced in 1987 by Alex Waibel to apply CNN to phoneme recognition. It used convolutions, weight sharing, and backpropagation.Alexander Waibel et al., Phoneme Recognition Using Time-Delay Neural Networks IEEE Transactions on Acoustics, Speech, and Signal Processing, Volume 37, No. 3, pp. 328. – 339 March 1989. In 1988, Wei Zhang applied a backpropagation-trained CNN to alphabet recognition.
In 1989, Yann LeCun et al. created a CNN called LeNet for recognizing handwritten ZIP codes on mail. Training required 3 days.LeCun et al., "Backpropagation Applied to Handwritten Zip Code Recognition", Neural Computation, 1, pp. 541–551, 1989. In 1990, Wei Zhang implemented a CNN on optical computing hardware. In 1991, a CNN was applied to medical image object segmentation and breast cancer detection in mammograms. LeNet-5 (1998), a 7-level CNN by Yann LeCun et al., that classifies digits, was applied by several banks to recognize hand-written numbers on checks digitized in 32×32 pixel images.
From 1988 onward,Qian, Ning, and Terrence J. Sejnowski. "Predicting the secondary structure of globular proteins using neural network models." Journal of molecular biology 202, no. 4 (1988): 865–884.Bohr, Henrik, Jakob Bohr, Søren Brunak, Rodney MJ Cotterill, Benny Lautrup, Leif Nørskov, Ole H. Olsen, and Steffen B. Petersen. "Protein secondary structure and homology by neural networks The α-helices in rhodopsin." FEBS letters 241, (1988): 223–228 the use of neural networks transformed the field of protein structure prediction, in particular when the first cascading networks were trained on profiles (matrices) produced by multiple sequence alignments.Rost, Burkhard, and Chris Sander. "Prediction of protein secondary structure at better than 70% accuracy." Journal of molecular biology 232, no. 2 (1993): 584–599.
Recurrent neural networks
One origin of RNN was statistical mechanics. In 1972, Shun'ichi Amari proposed to modify the weights of an Ising model by Hebbian learning rule as a model of associative memory, adding in the component of learning. This was popularized as the Hopfield network by John Hopfield (1982). Another origin of RNN was neuroscience. The word "recurrent" is used to describe loop-like structures in anatomy. In 1901, Cajal observed "recurrent semicircles" in the cerebellar cortex. Hebb considered "reverberating circuit" as an explanation for short-term memory. The McCulloch and Pitts paper (1943) considered neural networks that contain cycles, and noted that the current activity of such networks can be affected by activity indefinitely far in the past.
In 1982 a recurrent neural network with an array architecture (rather than a multilayer perceptron architecture), namely a Crossbar Adaptive Array, Bozinovski, S. (1982). "A self-learning system using secondary reinforcement". In Trappl, Robert (ed.). Cybernetics and Systems Research: Proceedings of the Sixth European Meeting on Cybernetics and Systems Research. North-Holland. pp. 397–402. ISBN 978-0-444-86488-8Bozinovski S. (1995) "Neuro genetic agents and structural theory of self-reinforcement learning systems". CMPSCI Technical Report 95-107, University of Massachusetts at Amherst used direct recurrent connections from the output to the supervisor (teaching) inputs. In addition to computing actions (decisions), it computed internal state evaluations (emotions) of the consequence situations. Eliminating the external supervisor, it introduced the self-learning method in neural networks.
In cognitive psychology, the journal American Psychologist carried out a debate in the early 1980s on the relation between cognition and emotion. Zajonc in 1980 stated that emotion is computed first and is independent from cognition, while Lazarus in 1982 stated that cognition is computed first and is inseparable from emotion.Lazarus R. (1982) "Thoughts on the relations between emotion and cognition" American Psychologist 37 (9): 1019-1024 In 1982 the Crossbar Adaptive Array gave a neural network model of the cognition-emotion relation.Bozinovski, S. (2014) "Modeling mechanisms of cognition-emotion interaction in artificial neural networks, since 1981" Procedia Computer Science p. 255-263 (https://core.ac.uk/download/pdf/81973924.pdf ) It was an example of a debate in which an AI system, a recurrent neural network, contributed to an issue that was simultaneously being addressed by cognitive psychology.
Two early influential works were the Jordan network (1986) and the Elman network (1990), which applied RNN to study cognitive psychology.
In the 1980s, backpropagation did not work well for deep RNNs. To overcome this problem, in 1991, Jürgen Schmidhuber proposed the "neural sequence chunker" or "neural history compressor" which introduced the important concepts of self-supervised pre-training (the "P" in ChatGPT) and neural knowledge distillation. In 1993, a neural history compressor system solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time. Page 150 ff demonstrates credit assignment across the equivalent of 1,200 layers in an unfolded RNN.
In 1991, Sepp Hochreiter's diploma thesisS. Hochreiter., "Untersuchungen zu dynamischen neuronalen Netzen", , Diploma thesis. Institut f. Informatik, Technische Univ. Munich. Advisor: J. Schmidhuber, 1991. identified and analyzed the vanishing gradient problem and proposed recurrent residual connections to solve it. He and Schmidhuber introduced long short-term memory (LSTM), which set accuracy records in multiple application domains. This was not yet the modern version of LSTM, which required the forget gate, introduced in 1999. It became the default choice for RNN architecture.
During 1985–1995, inspired by statistical mechanics, several architectures and methods were developed by Terry Sejnowski, Peter Dayan, Geoffrey Hinton, etc., including the Boltzmann machine, restricted Boltzmann machine, Helmholtz machine, and the wake-sleep algorithm. These were designed for unsupervised learning of deep generative models.
Deep learning
Between 2009 and 2012, ANNs began winning prizes in image recognition contests, approaching human level performance on various tasks, initially in pattern recognition and handwriting recognition.2012 Kurzweil AI Interview with Juergen Schmidhuber on the eight competitions won by his Deep Learning team 2009–2012 In 2011, a CNN named DanNet by Dan Ciresan, Ueli Meier, Jonathan Masci, Luca Maria Gambardella, and Jürgen Schmidhuber achieved for the first time superhuman performance in a visual pattern recognition contest, outperforming traditional methods by a factor of 3. It then won more contests. They also showed how max-pooling CNNs on GPU improved performance significantly.
In October 2012, AlexNet by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton won the large-scale ImageNet competition by a significant margin over shallow machine learning methods. Further incremental improvements included the VGG-16 network by Karen Simonyan and Andrew Zisserman and Google's Inceptionv3.
In 2012, Ng and Dean created a network that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images. Unsupervised pre-training and increased computing power from GPUs and distributed computing allowed the use of larger networks, particularly in image and visual recognition problems, which became known as "deep learning".
Radial basis function and wavelet networks were introduced in 2013. These can be shown to offer best approximation properties and have been applied in nonlinear system identification and classification applications.
Generative adversarial network (GAN) (Ian Goodfellow et al., 2014) became state of the art in generative modeling during the 2014–2018 period. The GAN principle was originally published in 1991 by Jürgen Schmidhuber, who called it "artificial curiosity": two neural networks contest with each other in the form of a zero-sum game, where one network's gain is the other network's loss. The first network is a generative model that models a probability distribution over output patterns. The second network learns by gradient descent to predict the reactions of the environment to these patterns. Excellent image quality is achieved by Nvidia's StyleGAN (2018), based on the Progressive GAN by Tero Karras et al. Here, the GAN generator is grown from small to large scale in a pyramidal fashion. Image generation by GAN reached popular success and provoked discussions concerning deepfakes. Diffusion models (2015) have since eclipsed GANs in generative modeling, with systems such as DALL·E 2 (2022) and Stable Diffusion (2022).
In 2014, the state of the art was training "very deep neural network" with 20 to 30 layers. Stacking too many layers led to a steep reduction in training accuracy, known as the "degradation" problem. In 2015, two techniques were developed to train very deep networks: the highway network was published in May 2015, and the residual neural network (ResNet) in December 2015. ResNet behaves like an open-gated Highway Net.
During the 2010s, the seq2seq model was developed, and attention mechanisms were added. It led to the modern Transformer architecture in 2017 in Attention Is All You Need.
The Transformer requires computation time that is quadratic in the size of the context window. Jürgen Schmidhuber's fast weight controller (1992) scales linearly and was later shown to be equivalent to the unnormalized linear Transformer.
Transformers have increasingly become the model of choice for natural language processing. Many modern large language models such as ChatGPT, GPT-4, and BERT use this architecture.
Models
ANNs began as an attempt to exploit the architecture of the human brain to perform tasks that conventional algorithms had little success with. They soon reoriented towards improving empirical results, abandoning attempts to remain true to their biological precursors. ANNs have the ability to learn and model non-linearities and complex relationships. This is achieved by neurons being connected in various patterns, allowing the output of some neurons to become the input of others. The network forms a directed, weighted graph.
An artificial neural network consists of simulated neurons. Each neuron is connected to other nodes via links like a biological axon-synapse-dendrite connection. All the nodes connected by links take in some data and use it to perform specific operations and tasks on the data. Each link has a weight that determines the strength of one node's influence on another, allowing the network to strengthen or weaken the signals passed between neurons.
Artificial neurons
ANNs are composed of artificial neurons which are conceptually derived from biological neurons. Each artificial neuron has inputs and produces a single output which can be sent to multiple other neurons. The inputs can be the feature values of a sample of external data, such as images or documents, or they can be the outputs of other neurons. The outputs of the final output neurons of the neural net accomplish the task, such as recognizing an object in an image.
To find the output of the neuron we take the weighted sum of all the inputs, weighted by the weights of the connections from the inputs to the neuron. We add a bias term to this sum. This weighted sum is sometimes called the activation. This weighted sum is then passed through a (usually nonlinear) activation function to produce the output. The initial inputs are external data, such as images and documents. The ultimate outputs accomplish the task, such as recognizing an object in an image.
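A minimal sketch of this computation for a single artificial neuron follows; the sigmoid activation and the numeric values are illustrative assumptions.

import numpy as np

def sigmoid(z):
    # A common nonlinear activation function (illustrative choice).
    return 1.0 / (1.0 + np.exp(-z))

inputs = np.array([0.5, -1.2, 3.0])    # feature values or outputs of other neurons
weights = np.array([0.4, 0.1, -0.6])   # connection strengths, adjusted during learning
bias = 0.2

activation = np.dot(weights, inputs) + bias  # the weighted sum plus bias (the "activation")
output = sigmoid(activation)                 # the neuron's output, sent to connected neurons
print(output)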
Organization
The neurons are typically organized into multiple layers, especially in deep learning. Neurons of one layer connect only to neurons of the immediately preceding and immediately following layers. The layer that receives external data is the input layer. The layer that produces the ultimate result is the output layer. In between them are zero or more hidden layers. Single layer and unlayered networks are also used. Between two layers, multiple connection patterns are possible. They can be 'fully connected', with every neuron in one layer connecting to every neuron in the next layer. They can be pooling, where a group of neurons in one layer connects to a single neuron in the next layer, thereby reducing the number of neurons in that layer. Neurons with only such connections form a directed acyclic graph and are known as feedforward networks. Alternatively, networks that allow connections between neurons in the same or previous layers are known as recurrent networks.
Hyperparameter
A hyperparameter is a constant parameter defining any configurable part of the learning process, whose value is set prior to training. Examples of hyperparameters include learning rate, batch size and regularization parameters. The performance of a neural network is strongly influenced by the choice of hyperparameter values, and thus the hyperparameters are often optimized as part of the training process, a process called hyperparameter tuning or hyperparameter optimization.
Learning
Learning is the adaptation of the network to better handle a task by considering sample observations. Learning involves adjusting the weights (and optional thresholds) of the network to improve the accuracy of the result. This is done by minimizing the observed errors. Learning is complete when examining additional observations does not usefully reduce the error rate. Even after learning, the error rate typically does not reach 0. If after learning, the error rate is too high, the network typically must be redesigned. Practically this is done by defining a cost function that is evaluated periodically during learning. As long as its output continues to decline, learning continues. The cost is frequently defined as a statistic whose value can only be approximated. The outputs are actually numbers, so when the error is low, the difference between the output (almost certainly a cat) and the correct answer (cat) is small. Learning attempts to reduce the total of the differences across the observations. Most learning models can be viewed as a straightforward application of optimization theory and statistical estimation.
Learning rate
The learning rate defines the size of the corrective steps that the model takes to adjust for errors in each observation. A high learning rate shortens the training time, but with lower ultimate accuracy, while a lower learning rate takes longer, but with the potential for greater accuracy. Optimizations such as Quickprop are primarily aimed at speeding up error minimization, while other improvements mainly try to increase reliability. In order to avoid oscillation inside the network such as alternating connection weights, and to improve the rate of convergence, refinements use an adaptive learning rate that increases or decreases as appropriate. The concept of momentum allows the balance between the gradient and the previous change to be weighted such that the weight adjustment depends to some degree on the previous change. A momentum close to 0 emphasizes the gradient, while a value close to 1 emphasizes the last change.
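A minimal sketch of a gradient step with a momentum term follows; the function name, values, and hyperparameters are illustrative assumptions rather than a specific library API.

import numpy as np

def momentum_step(w, grad, prev_update, learning_rate=0.1, momentum=0.9):
    # Blend the current gradient with the previous weight change, as described above.
    update = momentum * prev_update - learning_rate * grad
    return w + update, update

w = np.array([0.5, -0.3])
prev_update = np.zeros_like(w)
grad = np.array([0.2, -0.1])                     # gradient of the cost w.r.t. the weights
w, prev_update = momentum_step(w, grad, prev_update)
print(w)  # a momentum near 0 emphasizes the gradient; near 1, the previous change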
Cost function
While it is possible to define a cost function ad hoc, frequently the choice is determined by the function's desirable properties (such as convexity) because it arises from the model (e.g. in a probabilistic model, the model's posterior probability can be used as an inverse cost).
Backpropagation
Backpropagation is a method used to adjust the connection weights to compensate for each error found during learning. The error amount is effectively divided among the connections. Technically, backpropagation calculates the gradient (the derivative) of the cost function associated with a given state with respect to the weights. The weight updates can be done via stochastic gradient descent or other methods, such as extreme learning machines, "no-prop" networks, training without backtracking, "weightless" networks,ESANN. 2009. and non-connectionist neural networks.
Learning paradigms
Machine learning is commonly separated into three main learning paradigms, supervised learning, unsupervised learning and reinforcement learning. Each corresponds to a particular learning task.
Supervised learning
Supervised learning uses a set of paired inputs and desired outputs. The learning task is to produce the desired output for each input. In this case, the cost function is related to eliminating incorrect deductions. A commonly used cost is the mean-squared error, which tries to minimize the average squared error between the network's output and the desired output. Tasks suited for supervised learning are pattern recognition (also known as classification) and regression (also known as function approximation). Supervised learning is also applicable to sequential data (e.g., for handwriting, speech and gesture recognition). This can be thought of as learning with a "teacher", in the form of a function that provides continuous feedback on the quality of solutions obtained thus far.
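As a minimal sketch of the mean-squared-error cost (with made-up outputs and targets):

import numpy as np

network_output = np.array([0.9, 0.2, 0.7])   # illustrative predictions
desired_output = np.array([1.0, 0.0, 1.0])   # illustrative paired targets

mse = np.mean((network_output - desired_output) ** 2)
print(mse)  # the quantity that supervised training tries to minimize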
Unsupervised learning
In unsupervised learning, input data is given along with the cost function, some function of the data and the network's output. The cost function is dependent on the task (the model domain) and any a priori assumptions (the implicit properties of the model, its parameters and the observed variables). As a trivial example, consider the model f(x) = a where a is a constant and the cost C = E[(x − f(x))^2]. Minimizing this cost produces a value of a that is equal to the mean of the data. The cost function can be much more complicated. Its form depends on the application: for example, in compression it could be related to the mutual information between x and f(x), whereas in statistical modeling, it could be related to the posterior probability of the model given the data (note that in both of those examples, those quantities would be maximized rather than minimized). Tasks that fall within the paradigm of unsupervised learning are in general estimation problems; the applications include clustering, the estimation of statistical distributions, compression and filtering.
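The trivial example above can be checked numerically; in the following sketch (data and step size are illustrative assumptions), gradient descent on the constant a converges to the mean of the data.

import numpy as np

x = np.array([2.0, 4.0, 9.0])     # observed data
a = 0.0                           # the model f(x) = a
for _ in range(500):
    grad = np.mean(2 * (a - x))   # derivative of E[(x - a)^2] with respect to a
    a -= 0.1 * grad               # gradient descent step
print(a, x.mean())                # both are approximately 5.0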
Reinforcement learning
In applications such as playing video games, an actor takes a string of actions, receiving a generally unpredictable response from the environment after each one. The goal is to win the game, i.e., generate the most positive (lowest cost) responses. In reinforcement learning, the aim is to weight the network (devise a policy) to perform actions that minimize long-term (expected cumulative) cost. At each point in time the agent performs an action and the environment generates an observation and an instantaneous cost, according to some (usually unknown) rules. The rules and the long-term cost usually only can be estimated. At any juncture, the agent decides whether to explore new actions to uncover their costs or to exploit prior learning to proceed more quickly.
Formally, the environment is modeled as a Markov decision process (MDP) with states s_1, ..., s_n ∈ S and actions a_1, ..., a_m ∈ A. Because the state transitions are not known, probability distributions are used instead: the instantaneous cost distribution P(c_t | s_t), the observation distribution P(x_t | s_t) and the transition distribution P(s_{t+1} | s_t, a_t), while a policy is defined as the conditional distribution over actions given the observations. Taken together, the two define a Markov chain (MC). The aim is to discover the lowest-cost MC.
ANNs serve as the learning component in such applications. Dynamic programming coupled with ANNs (giving neurodynamic programming) has been applied to problems such as those involved in vehicle routing, video games, natural resource management and medicine because of ANNs ability to mitigate losses of accuracy even when reducing the discretization grid density for numerically approximating the solution of control problems. Tasks that fall within the paradigm of reinforcement learning are control problems, games and other sequential decision making tasks.
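The text above does not prescribe a particular algorithm; as one common illustration of learning a low-cost policy when the transition rules are unknown, the following sketch uses tabular Q-learning on a tiny made-up environment (the states, actions, costs, and hyperparameters are all assumptions). In practice an ANN would replace the table when the state space is large.

import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 3, 2
Q = np.zeros((n_states, n_actions))                      # estimated long-term costs
cost = np.array([[1.0, 0.2], [0.5, 0.5], [0.0, 1.0]])    # instantaneous cost table (hidden from the agent)

def step(s, a):
    # Toy environment dynamics, unknown to the learner.
    return (s + a + 1) % n_states, cost[s, a]

s = 0
for _ in range(2000):
    # Explore occasionally, otherwise exploit the current cost estimates.
    a = int(rng.integers(n_actions)) if rng.random() < 0.1 else int(Q[s].argmin())
    s_next, c = step(s, a)
    Q[s, a] += 0.1 * (c + 0.9 * Q[s_next].min() - Q[s, a])  # cost-minimizing update
    s = s_next
print(Q.argmin(axis=1))  # the learned action (policy) for each state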
Self-learning
Self-learning in neural networks was introduced in 1982 along with a neural network capable of self-learning named crossbar adaptive array (CAA).Bozinovski, S. (1982). "A self-learning system using secondary reinforcement". In R. Trappl (ed.) Cybernetics and Systems Research: Proceedings of the Sixth European Meeting on Cybernetics and Systems Research. North Holland. pp. 397–402. . It is a system with only one input, situation s, and only one output, action (or behavior) a. It has neither external advice input nor external reinforcement input from the environment. The CAA computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about encountered situations. The system is driven by the interaction between cognition and emotion.Bozinovski, S. (2014) "Modeling mechanisms of cognition-emotion interaction in artificial neural networks, since 1981 ." Procedia Computer Science p. 255-263 Given the memory matrix, W =||w(a,s)||, the crossbar self-learning algorithm in each iteration performs the following computation:
In situation s perform action a;
Receive consequence situation s';
Compute emotion of being in consequence situation v(s');
Update crossbar memory w'(a,s) = w(a,s) + v(s').
The backpropagated value (secondary reinforcement) is the emotion toward the consequence situation. The CAA exists in two environments, one is behavioral environment where it behaves, and the other is genetic environment, where from it receives initial emotions (only once) about to be encountered situations in the behavioral environment. Having received the genome vector (species vector) from the genetic environment, the CAA will learn a goal-seeking behavior, in the behavioral environment that contains both desirable and undesirable situations.
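A minimal sketch of this crossbar self-learning loop follows; the toy behavioral environment, the genome vector, and the sizes are illustrative assumptions, not taken from the original CAA publications.

import numpy as np

n_actions, n_situations = 2, 3
W = np.zeros((n_actions, n_situations))   # crossbar memory w(a, s)
genome = np.array([0.0, -1.0, 1.0])       # initial emotions toward situations (from the genetic environment)

def behavioral_environment(s, a):
    # Toy rule giving the consequence situation s' of performing action a in situation s.
    return (s + a + 1) % n_situations

s = 0
for _ in range(100):
    a = int(W[:, s].argmax())              # in situation s perform action a
    s_next = behavioral_environment(s, a)  # receive consequence situation s'
    v = genome[s_next]                     # emotion of being in the consequence situation
    W[a, s] += v                           # update crossbar memory: w'(a,s) = w(a,s) + v(s')
    s = s_next
print(W.argmax(axis=0))                    # preferred action per situation after learning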
Neuroevolution
Neuroevolution can create neural network topologies and weights using evolutionary computation. It is competitive with sophisticated gradient descent approaches. One advantage of neuroevolution is that it may be less prone to get caught in "dead ends".
Stochastic neural network
Stochastic neural networks originating from Sherrington–Kirkpatrick models are a type of artificial neural network built by introducing random variations into the network, either by giving the network's artificial neurons stochastic transfer functions, or by giving them stochastic weights. This makes them useful tools for optimization problems, since the random fluctuations help the network escape from local minima. Stochastic neural networks trained using a Bayesian approach are known as Bayesian neural networks.
Topological deep learning
Topological deep learning, first introduced in 2017, is an emerging approach in machine learning that integrates topology with deep neural networks to address highly intricate and high-order data. Initially rooted in algebraic topology, TDL has since evolved into a versatile framework incorporating tools from other mathematical disciplines, such as differential topology and geometric topology. As a successful example of mathematical deep learning, TDL continues to inspire advancements in mathematical artificial intelligence, fostering a mutually beneficial relationship between AI and mathematics.
Other
In a Bayesian framework, a distribution over the set of allowed models is chosen to minimize the cost. Evolutionary methods, gene expression programming, simulated annealing, expectation–maximization, non-parametric methods and particle swarm optimization are other learning algorithms. Convergent recursion is a learning algorithm for cerebellar model articulation controller (CMAC) neural networks.
Modes
Two modes of learning are available: stochastic and batch. In stochastic learning, each input creates a weight adjustment. In batch learning, weights are adjusted based on a batch of inputs, accumulating errors over the batch. Stochastic learning introduces "noise" into the process, using the local gradient calculated from one data point; this reduces the chance of the network getting stuck in local minima. However, batch learning typically yields a faster, more stable descent to a local minimum, since each update is performed in the direction of the batch's average error. A common compromise is to use "mini-batches", small batches with samples in each batch selected stochastically from the entire data set.
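A minimal sketch of the mini-batch compromise follows, using a linear model and synthetic data as illustrative assumptions; each update averages the gradient over a small, stochastically selected batch.

import numpy as np

rng = np.random.default_rng(2)
X, y = rng.normal(size=(100, 3)), rng.normal(size=100)   # synthetic data (illustrative)
w = np.zeros(3)

def gradient(w, X_part, y_part):
    # Mean-squared-error gradient for a linear model (illustrative choice).
    return 2 * X_part.T @ (X_part @ w - y_part) / len(y_part)

batch_size, learning_rate = 10, 0.05
for epoch in range(20):
    order = rng.permutation(len(y))                      # stochastic selection from the whole data set
    for start in range(0, len(y), batch_size):
        idx = order[start:start + batch_size]            # one mini-batch
        w -= learning_rate * gradient(w, X[idx], y[idx])
print(w)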
Types
ANNs have evolved into a broad family of techniques that have advanced the state of the art across multiple domains. The simplest types have one or more static components, including number of units, number of layers, unit weights and topology. Dynamic types allow one or more of these to evolve via learning. The latter is much more complicated but can shorten learning periods and produce better results. Some types allow/require learning to be "supervised" by the operator, while others operate independently. Some types operate purely in hardware, while others are purely software and run on general purpose computers.
Some of the main breakthroughs include:
Convolutional neural networks, which have proven particularly successful in processing visual and other two-dimensional dataYann LeCun (2016). Slides on Deep Learning Online and have also been applied to fraud detection.
Long short-term memory networks, which avoid the vanishing gradient problem and can handle signals that mix low- and high-frequency components, aiding large-vocabulary speech recognition, text-to-speech synthesis, and photo-real talking heads.
Competitive networks such as generative adversarial networks in which multiple networks (of varying structure) compete with each other, on tasks such as winning a game or on deceiving the opponent about the authenticity of an input.
Network design
Using artificial neural networks requires an understanding of their characteristics.
Choice of model: This depends on the data representation and the application. Model parameters include the number, type, and connectedness of network layers, as well as the size of each and the connection type (full, pooling, etc.). Overly complex models learn slowly.
Learning algorithm: Numerous trade-offs exist between learning algorithms. Almost any algorithm will work well with the correct hyperparameters for training on a particular data set. However, selecting and tuning an algorithm for training on unseen data requires significant experimentation.
Robustness: If the model, cost function and learning algorithm are selected appropriately, the resulting ANN can become robust.
Neural architecture search (NAS) uses machine learning to automate ANN design. Various approaches to NAS have designed networks that compare well with hand-designed systems. The basic search algorithm is to propose a candidate model, evaluate it against a dataset, and use the results as feedback to teach the NAS network. Available systems include AutoML and AutoKeras. The scikit-learn library provides simple multilayer perceptron implementations, while deeper networks are typically implemented with frameworks such as TensorFlow or Keras.
Hyperparameters must also be defined as part of the design (they are not learned), governing matters such as how many neurons are in each layer, learning rate, step, stride, depth, receptive field and padding (for CNNs), etc.
# A small NumPy example: a one-hidden-layer network with a linear output, trained by backpropagation.
import numpy as np

def sigmoid(z):
    """Logistic activation function."""
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_derivative(z):
    """Derivative of the sigmoid, used in backpropagation."""
    s = sigmoid(z)
    return s * (1.0 - s)

def train(X, y, n_hidden, learning_rate, n_iter):
    """Train a one-hidden-layer network by backpropagation.

    Args:
        X: Input matrix of shape (m, n_input).
        y: Target values of shape (m, 1).
        n_hidden: The number of hidden layer units.
        learning_rate: The learning rate.
        n_iter: The number of iterations.

    Returns:
        dict: The learned weights and biases.
    """
    m, n_input = X.shape
    # 1. randomly initialize weights and biases
    w1 = np.random.randn(n_input, n_hidden)
    b1 = np.zeros((1, n_hidden))
    w2 = np.random.randn(n_hidden, 1)
    b2 = np.zeros((1, 1))
    # 2. in each iteration, feed all layers with the latest weights and biases
    for i in range(n_iter + 1):
        z2 = np.dot(X, w1) + b1          # hidden layer pre-activation
        a2 = sigmoid(z2)                 # hidden layer output
        z3 = np.dot(a2, w2) + b2         # output layer pre-activation
        a3 = z3                          # linear output
        # backpropagate the error through the network
        dz3 = a3 - y
        dw2 = np.dot(a2.T, dz3)
        db2 = np.sum(dz3, axis=0, keepdims=True)
        dz2 = np.dot(dz3, w2.T) * sigmoid_derivative(z2)
        dw1 = np.dot(X.T, dz2)
        db1 = np.sum(dz2, axis=0, keepdims=True)
        # 3. update weights and biases with the averaged gradients
        w1 -= learning_rate * dw1 / m
        w2 -= learning_rate * dw2 / m
        b1 -= learning_rate * db1 / m
        b2 -= learning_rate * db2 / m
        if i % 1000 == 0:
            print("Epoch", i, "loss:", np.mean(np.square(dz3)))
    model = {"w1": w1, "b1": b1, "w2": w2, "b2": b2}
    return model
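A possible usage of the sketch above on synthetic data (the data, network size, and hyperparameters are illustrative assumptions):

rng = np.random.default_rng(0)
X_demo = rng.normal(size=(200, 2))
y_demo = (0.7 * X_demo[:, :1] - 0.3 * X_demo[:, 1:]) ** 2       # a nonlinear target of shape (200, 1)
model = train(X_demo, y_demo, n_hidden=8, learning_rate=0.1, n_iter=5000)
hidden = sigmoid(np.dot(X_demo, model["w1"]) + model["b1"])     # forward pass with the learned parameters
predictions = np.dot(hidden, model["w2"]) + model["b2"]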
Monitoring and concept drift detection of ANNs
When neural networks are deployed in real-world applications, the statistical properties of the input data may change over time, a phenomenon known as concept drift or non-stationarity. Drift can reduce predictive accuracy and lead to unreliable or biased decisions if it is not detected and corrected. In practice, this means that the model's accuracy in deployment may differ substantially from the levels observed during training or cross-validation.
Several strategies have been developed to monitor neural networks for drift and degradation:
Error-based monitoring: comparing current predictions against ground-truth labels when they become available. This approach directly quantifies predictive performance but may be impractical when labels are delayed or costly to obtain.
Data distribution monitoring: detecting changes in the input data distribution using statistical tests, divergence measures, or density-ratio estimation (a minimal sketch follows this list).
Representation monitoring: tracking the distribution of internal embeddings or hidden-layer features. Shifts in the latent representation can indicate nonstationarity even when labels are unavailable. Statistical methods such as statistical process control charts have been adapted for this purpose.
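As a minimal sketch of data distribution monitoring, the example below applies a two-sample Kolmogorov–Smirnov test to each input feature, comparing a reference window (for example, the training data) against recent production inputs; the window sizes and the significance level are illustrative assumptions, and divergence measures or control charts could be substituted.

import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference, current, alpha=0.01):
    # Flag features whose marginal distribution appears to have shifted.
    # reference, current: arrays of shape (n_samples, n_features).
    drifted = []
    for j in range(reference.shape[1]):
        statistic, p_value = ks_2samp(reference[:, j], current[:, j])
        if p_value < alpha:
            drifted.append((j, statistic, p_value))
    return drifted

# Simulated example: the second feature of the live window has drifted.
rng = np.random.default_rng(1)
train_window = rng.normal(size=(1000, 3))
live_window = rng.normal(size=(500, 3))
live_window[:, 1] += 1.5
print(detect_drift(train_window, live_window))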
Applications
Because of their ability to model and reproduce nonlinear processes, artificial neural networks have found applications in many disciplines. These include:
Function approximation, or regression analysis, (including time series prediction, fitness approximation, and modeling)
Data processing (including filtering, clustering, blind source separation, and compression)
Nonlinear system identification and control (including vehicle control, trajectory prediction, adaptive control, process control, and natural resource management)
Pattern recognition (including radar systems, face identification, signal classification, novelty detection, 3D reconstruction (Choy, Christopher B., et al. "3D-R2N2: A unified approach for single and multi-view 3D object reconstruction." European Conference on Computer Vision. Springer, Cham, 2016), object recognition, and sequential decision making)
Sequence recognition (including gesture, speech, and handwritten and printed text recognition)
Sensor data analysis (including image analysis)
Robotics (including directing manipulators and prostheses)
Data mining (including knowledge discovery in databases)
Finance (such as ex-ante models for specific financial long-run forecasts and artificial financial markets)
Quantum chemistry
General game playing
Generative AI
Data visualization
Machine translation
Social network filtering
E-mail spam filtering
Medical diagnosis
ANNs have been used to diagnose several types of cancers and to distinguish highly invasive cancer cell lines from less invasive lines using only cell shape information.
ANNs have been used to accelerate reliability analysis of infrastructures subject to natural disasters and to predict foundation settlements. They are also useful for flood mitigation through rainfall-runoff modelling. ANNs have also been used for building black-box models in geoscience: hydrology, ocean modelling and coastal engineering, and geomorphology. ANNs have been employed in cybersecurity, with the objective of discriminating between legitimate and malicious activities. For example, machine learning has been used for classifying Android malware, for identifying domains belonging to threat actors and for detecting URLs posing a security risk. Research is underway on ANN systems designed for penetration testing, and for detecting botnets, credit card fraud and network intrusions.
ANNs have been proposed as a tool to solve partial differential equations in physics and to simulate the properties of many-body open quantum systems. In brain research, ANNs have been used to study the short-term behavior of individual neurons, how the dynamics of neural circuitry arise from interactions between individual neurons, and how behavior can arise from abstract neural modules that represent complete subsystems. Studies have considered the long- and short-term plasticity of neural systems and their relation to learning and memory, from the individual neuron to the system level.
It is possible to create a profile of a user's interests from pictures, using artificial neural networks trained for object recognition.
Beyond their traditional applications, artificial neural networks are increasingly being utilized in interdisciplinary research, such as materials science. For instance, graph neural networks (GNNs) have demonstrated their capability in scaling deep learning for the discovery of new stable materials by efficiently predicting the total energy of crystals. This application underscores the adaptability and potential of ANNs in tackling complex problems beyond the realms of predictive modeling and artificial intelligence, opening new pathways for scientific discovery and innovation.
Theoretical properties
Computational power
The multilayer perceptron is a universal function approximator, as proven by the universal approximation theorem. However, the proof is not constructive regarding the number of neurons required, the network topology, the weights and the learning parameters.
A specific recurrent architecture with rational-valued weights (as opposed to full precision real number-valued weights) has the power of a universal Turing machine, using a finite number of neurons and standard linear connections. Further, the use of irrational values for weights results in a machine with super-Turing power.
Capacity
A model's "capacity" property corresponds to its ability to model any given function. It is related to the amount of information that can be stored in the network and to the notion of complexity.
Two notions of capacity are recognized: the information capacity and the VC dimension. The information capacity of a perceptron is intensively discussed in Sir David MacKay's book, which summarizes work by Thomas Cover. The capacity of a network of standard neurons (not convolutional) can be derived by four rules that follow from understanding a neuron as an electrical element. The information capacity captures the functions modelable by the network given any data as input. The second notion is the VC dimension, which uses the principles of measure theory to find the maximum capacity under the best possible circumstances, that is, with input data in a specific form. As has been noted, the VC dimension for arbitrary inputs is half the information capacity of a perceptron. The VC dimension for arbitrary points is sometimes referred to as memory capacity.
Convergence
Models may not consistently converge on a single solution, firstly because local minima may exist, depending on the cost function and the model. Secondly, the optimization method used might not be guaranteed to converge when it begins far from any local minimum. Thirdly, for sufficiently large data or parameters, some methods become impractical.
Another issue worth mentioning is that training may pass through a saddle point, which can steer convergence in the wrong direction.
The convergence behavior of certain types of ANN architectures is better understood than that of others. When the width of the network approaches infinity, the ANN is well described by its first-order Taylor expansion throughout training, and so inherits the convergence behavior of affine models. Another example: when parameters are small, ANNs are often observed to fit target functions from low to high frequencies. This behavior is referred to as the spectral bias, or frequency principle, of neural networks. This phenomenon is the opposite of the behavior of some well-studied iterative numerical schemes such as the Jacobi method. Deeper neural networks have been observed to be more biased towards low-frequency functions.
Generalization and statistics
Applications whose goal is to create a system that generalizes well to unseen examples face the possibility of over-training. This arises in convoluted or over-specified systems when the network capacity significantly exceeds the needed free parameters.
Two approaches address over-training. The first is to use cross-validation and similar techniques to check for the presence of over-training and to select hyperparameters that minimize the generalization error. The second is to use some form of regularization. This concept emerges in a probabilistic (Bayesian) framework, where regularization can be performed by assigning a larger prior probability to simpler models, but also in statistical learning theory, where the goal is to minimize over two quantities: the 'empirical risk' and the 'structural risk', which roughly correspond to the error over the training set and the predicted error on unseen data due to overfitting.
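As an illustration of these approaches, the sketch below adds an L2 weight penalty (a structural-risk term) and early stopping based on a validation split, using Keras; the penalty strength, patience, architecture and synthetic data are illustrative assumptions rather than recommended settings.

import numpy as np
import tensorflow as tf

# Synthetic binary classification data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10)).astype("float32")
y = (X[:, :3].sum(axis=1) > 0).astype("int32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10,)),
    tf.keras.layers.Dense(64, activation="relu",
                          kernel_regularizer=tf.keras.regularizers.l2(1e-4)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Early stopping monitors the generalization error on the validation split.
stopper = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                           restore_best_weights=True)
model.fit(X, y, validation_split=0.2, epochs=100, batch_size=32,
          callbacks=[stopper], verbose=0)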
Supervised neural networks that use a mean squared error (MSE) cost function can use formal statistical methods to determine the confidence of the trained model. The MSE on a validation set can be used as an estimate for variance. This value can then be used to calculate the confidence interval of network output, assuming a normal distribution. A confidence analysis made this way is statistically valid as long as the output probability distribution stays the same and the network is not modified.
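A minimal sketch of that calculation, assuming the validation MSE has already been computed and that the errors are approximately normal with constant variance:

import numpy as np

def prediction_interval(y_pred, validation_mse, confidence=0.95):
    # Treat the validation MSE as an estimate of the error variance and
    # build a symmetric interval around each prediction.
    z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}[confidence]
    half_width = z * np.sqrt(validation_mse)
    return y_pred - half_width, y_pred + half_width

lower, upper = prediction_interval(np.array([2.3, 4.1]), validation_mse=0.25)
print(lower, upper)  # 95% interval of roughly +/- 0.98 around each prediction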
By assigning a softmax activation function, a generalization of the logistic function, on the output layer of the neural network (or a softmax component in a component-based network) for categorical target variables, the outputs can be interpreted as posterior probabilities. This is useful in classification as it gives a certainty measure on classifications.
The softmax activation function is:
$y_i = \frac{e^{x_i}}{\sum_{j=1}^{C} e^{x_j}}$
where $x_i$ is the net input to output unit $i$, $C$ is the number of classes, and the outputs $y_i$ are non-negative and sum to one.
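A numerically stable NumPy version of this function, subtracting the row maximum before exponentiation (a standard trick that leaves the result unchanged):

import numpy as np

def softmax(logits):
    # logits: array of shape (n_samples, n_classes); returns row-wise probabilities.
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum(axis=1, keepdims=True)

print(softmax(np.array([[2.0, 1.0, 0.1]])))  # each row sums to 1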
Criticism
Training
A common criticism of neural networks, particularly in robotics, is that they require too many training samples for real-world operation.
Any learning machine needs sufficient representative examples in order to capture the underlying structure that allows it to generalize to new cases. Potential solutions include randomly shuffling training examples, using a numerical optimization algorithm that does not take too-large steps when changing the network connections following an example, grouping examples in so-called mini-batches, and/or introducing a recursive least squares algorithm for CMAC.
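The shuffling and mini-batch grouping mentioned above can be sketched in a few lines of NumPy; the batch size is a placeholder, and the gradient update itself is left abstract since it depends on the network and optimizer in use.

import numpy as np

def minibatches(X, y, batch_size=32, rng=None):
    # Shuffle the examples, then yield fixed-size mini-batches.
    rng = rng or np.random.default_rng()
    order = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        yield X[idx], y[idx]

# Each epoch, the update step sees the data in a fresh random order.
X = np.arange(20, dtype=float).reshape(10, 2)
y = np.ones((10, 1))
for epoch in range(3):
    for X_batch, y_batch in minibatches(X, y, batch_size=4):
        pass  # apply one gradient update per mini-batch here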
Dean Pomerleau used a neural network to train a robotic vehicle to drive on multiple types of roads (single lane, multi-lane, dirt, etc.), and a large amount of his research was devoted to extrapolating multiple training scenarios from a single training experience and preserving past training diversity so that the system does not become overtrained (if, for example, it is presented with a series of right turns, it should not learn to always turn right) (Dean Pomerleau, "Knowledge-based Training of Artificial Neural Networks for Autonomous Robot Driving").
Theory
A central claim of ANNs is that they embody new and powerful general principles for processing information. These principles are ill-defined, which allows simple statistical association (the basic function of artificial neural networks) to be described as learning or recognition. In 1997, Alexander Dewdney, a former Scientific American columnist, criticized artificial neural networks on these grounds.
One response to Dewdney is that neural networks have been successfully used to handle many complex and diverse tasks, ranging from autonomously flying aircraft (NASA – Dryden Flight Research Center – News Room: News Releases: "NASA Neural Network Project Passes Milestone", Nasa.gov, retrieved 20 November 2013) to detecting credit card fraud to mastering the game of Go.
Technology writer Roger Bridgman also commented on Dewdney's criticism, defending neural networks.
Although it is true that analyzing what has been learned by an artificial neural network is difficult, it is much easier to do so than to analyze what has been learned by a biological neural network. Moreover, recent emphasis on the explainability of AI has contributed towards the development of methods, notably those based on attention mechanisms, for visualizing and explaining learned neural networks. Furthermore, researchers involved in exploring learning algorithms for neural networks are gradually uncovering generic principles that allow a learning machine to be successful. For example, Bengio and LeCun (2007) wrote an article regarding local vs non-local learning, as well as shallow vs deep architecture.
Biological brains use both shallow and deep circuits as reported by brain anatomy (D. J. Felleman and D. C. Van Essen, "Distributed hierarchical processing in the primate cerebral cortex," Cerebral Cortex, 1, pp. 1–47, 1991), displaying a wide variety of invariance. Weng (J. Weng, "Natural and Artificial Intelligence: Introduction to Computational Brain-Mind," BMI Press, 2012) argued that the brain self-wires largely according to signal statistics and that, therefore, a serial cascade cannot catch all major statistical dependencies.
Hardware
Large and effective neural networks require considerable computing resources. While the brain has hardware tailored to the task of processing signals through a graph of neurons, simulating even a simplified neuron on von Neumann architecture may consume vast amounts of memory and storage. Furthermore, the designer often needs to transmit signals through many of these connections and their associated neurons which require enormous CPU power and time.
Some argue that the resurgence of neural networks in the twenty-first century is largely attributable to advances in hardware: from 1991 to 2015, computing power, especially as delivered by GPGPUs (on GPUs), has increased around a million-fold, making the standard backpropagation algorithm feasible for training networks that are several layers deeper than before. The use of accelerators such as FPGAs and GPUs can reduce training times from months to days.
Neuromorphic engineering or a physical neural network addresses the hardware difficulty directly, by constructing non-von-Neumann chips to directly implement neural networks in circuitry. Another type of chip optimized for neural network processing is called a Tensor Processing Unit, or TPU.
Hybrid approaches
Advocates of hybrid models (combining neural networks and symbolic approaches) say that such a mixture can better capture the mechanisms of the human mind.Sun and Bookman, 1990
Dataset bias
Neural networks depend on the quality of the data they are trained on; low-quality data with imbalanced representation can lead to the model learning and perpetuating societal biases. These inherited biases become especially critical when the ANNs are integrated into real-world scenarios where the training data may be imbalanced due to the scarcity of data for a specific race, gender or other attribute. This imbalance can result in the model having inadequate representation and understanding of underrepresented groups, leading to discriminatory outcomes that exacerbate societal inequalities, especially in applications like facial recognition, hiring processes, and law enforcement. For example, in 2018, Amazon had to scrap a recruiting tool because the model favored men over women for jobs in software engineering due to the higher number of male workers in the field. The program would penalize any resume with the word "woman" or the name of any women's college. However, the use of synthetic data can help reduce dataset bias and increase representation in datasets.
Recent advancements and future directions
Artificial neural networks (ANNs) have undergone significant advancements, particularly in their ability to model complex systems, handle large data sets, and adapt to various types of applications. Their evolution over the past few decades has been marked by a broad range of applications in fields such as image processing, speech recognition, natural language processing, finance, and medicine.
Image processing
In the realm of image processing, ANNs are employed in tasks such as image classification, object recognition, and image segmentation. For instance, deep convolutional neural networks (CNNs) have been important in handwritten digit recognition, achieving state-of-the-art performance. This demonstrates the ability of ANNs to effectively process and interpret complex visual information, leading to advancements in fields ranging from automated surveillance to medical imaging.
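As an illustration, a small convolutional network for handwritten digit classification can be written in a few lines of Keras; the layer sizes, single training epoch and other settings below are a common but arbitrary starting point, not the state-of-the-art configurations referred to above.

import tensorflow as tf

# Load the MNIST handwritten-digit dataset bundled with Keras.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=128, validation_split=0.1)
print("Test accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])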
Speech recognition
By modeling speech signals, ANNs are used for tasks like speaker identification and speech-to-text conversion. Deep neural network architectures have introduced significant improvements in large vocabulary continuous speech recognition, outperforming traditional techniques. These advancements have enabled the development of more accurate and efficient voice-activated systems, enhancing user interfaces in technology products.
Natural language processing
In natural language processing, ANNs are used for tasks such as text classification, sentiment analysis, and machine translation. They have enabled the development of models that can accurately translate between languages, understand the context and sentiment in textual data, and categorize text based on content. This has implications for automated customer service, content moderation, and language understanding technologies.
Control systems
In the domain of control systems, ANNs are used to model dynamic systems for tasks such as system identification, control design, and optimization. For instance, deep feedforward neural networks are important in system identification and control applications.
Finance
ANNs are used for stock market prediction and credit scoring:
In investing, ANNs can process vast amounts of financial data, recognize complex patterns, and forecast stock market trends, aiding investors and risk managers in making informed decisions.
In credit scoring, ANNs offer data-driven, personalized assessments of creditworthiness, improving the accuracy of default predictions and automating the lending process.
ANNs require high-quality data and careful tuning, and their "black-box" nature can pose challenges in interpretation. Nevertheless, ongoing advancements suggest that ANNs continue to play a role in finance, offering valuable insights and enhancing risk management strategies.
Medicine
ANNs are able to process and analyze vast medical datasets. They enhance diagnostic accuracy, especially by interpreting complex medical imaging for early disease detection, and by predicting patient outcomes for personalized treatment planning. In drug discovery, ANNs speed up the identification of potential drug candidates and predict their efficacy and safety, significantly reducing development time and costs. Additionally, their application in personalized medicine and healthcare data analysis allows tailored therapies and efficient patient care management. Ongoing research is aimed at addressing remaining challenges such as data privacy and model interpretability, as well as expanding the scope of ANN applications in medicine.
Content creation
ANNs such as generative adversarial networks (GAN) and transformers are used for content creation across numerous industries. This is because deep learning models are able to learn the style of an artist or musician from huge datasets and generate completely new artworks and music compositions. For instance, DALL-E is a deep neural network trained on 650 million pairs of images and texts across the internet that can create artworks based on text entered by the user. In the field of music, transformers are used to create original music for commercials and documentaries through companies such as AIVA and Jukedeck. In the marketing industry, generative models are used to create personalized advertisements for consumers. Additionally, major film companies are partnering with technology companies to analyze the financial success of a film, such as the partnership between Warner Bros and technology company Cinelytic established in 2020. Furthermore, neural networks have found uses in video game creation, where non-player characters (NPCs) can make decisions based on all the characters currently in the game.
See also
ADALINE
Autoencoder
Bio-inspired computing
Blue Brain Project
Catastrophic interference
Cognitive architecture
Connectionist expert system
Connectomics
Deep image prior
Digital morphogenesis
Efficiently updatable neural network
Evolutionary algorithm
Family of curves
Genetic algorithm
Hyperdimensional computing
In situ adaptive tabulation
Large width limits of neural networks
List of machine learning concepts
Memristor
Mind uploading
Neural gas
Neural network software
Optical neural network
Parallel distributed processing
Philosophy of artificial intelligence
Predictive analytics
Quantum neural network
Support vector machine
Spiking neural network
Stochastic parrot
Tensor product network
Topological deep learning
References
External links
A Brief Introduction to Neural Networks (D. Kriesel) – Illustrated, bilingual manuscript about artificial neural networks; Topics so far: Perceptrons, Backpropagation, Radial Basis Functions, Recurrent Neural Networks, Self Organizing Maps, Hopfield Networks.
Review of Neural Networks in Materials Science
Artificial Neural Networks Tutorial in three languages (Univ. Politécnica de Madrid)
Another introduction to ANN
Next Generation of Neural Networks – Google Tech Talks
Performance of Neural Networks
Neural Networks and Information
Polar bear
https://en.wikipedia.org/wiki/Polar_bear
The polar bear (Ursus maritimus) is a large bear native to the Arctic and nearby areas. It is closely related to the brown bear, and the two species can interbreed. The polar bear is the largest extant species of bear and land carnivore by body mass, with adult males weighing . The species is sexually dimorphic, as adult females are much smaller. The polar bear is white- or yellowish-furred with black skin and a thick layer of fat. It is more slender than the brown bear, with a narrower skull, longer neck and lower shoulder hump. Its teeth are sharper and more adapted to cutting meat. The paws are large and allow the bear to walk on ice and paddle in the water.
Polar bears are both terrestrial and pagophilic (ice-living) and are considered marine mammals because of their dependence on marine ecosystems. They prefer the annual sea ice but live on land when the ice melts in the summer. They are mostly carnivorous and specialized for preying on seals, particularly ringed seals. Such prey is typically taken by ambush; the bear may stalk its prey on the ice or in the water, but also will stay at a breathing hole or ice edge to wait for prey to swim by. The bear primarily feeds on the seal's energy-rich blubber. Other prey include walruses, beluga whales and some terrestrial animals. Polar bears are usually solitary but can be found in groups when on land. During the breeding season, male bears guard females and defend them from rivals. Mothers give birth to cubs in maternity dens during the winter. Young stay with their mother for up to two and a half years.
The polar bear is considered a vulnerable species by the International Union for Conservation of Nature (IUCN) with an estimated total population of 22,000 to 31,000 individuals. Its biggest threats are climate change, pollution and energy development. Climate change has caused a decline in sea ice, giving the polar bear less access to its favoured prey and increasing the risk of malnutrition and starvation. Less sea ice also means that the bears must spend more time on land, increasing conflicts with humans. Polar bears have been hunted, both by native and non-native peoples, for their coats, meat and other items. They have been kept in captivity in zoos and circuses and are prevalent in art, folklore, religion and modern culture.
Naming
The polar bear was given its common name by Thomas Pennant in A Synopsis of Quadrupeds (1771). It was known as the "white bear" in Europe between the 13th and 18th centuries, as well as "ice bear", "sea bear" and "Greenland bear". The Norse referred to it as and . The bear is called by the Inuit. The Netsilik cultures additionally have different names for bears based on certain factors, such as sex and age: these include adult males (), single adult females (), gestating females (), newborns (), large adolescents () and dormant bears (). The scientific name is Latin for .
Taxonomy
Carl Linnaeus classified the polar bear as a type of brown bear (Ursus arctos), labelling it as Ursus maritimus albus-major, arcticus ('mostly-white sea bear, arctic') in the 1758 edition of his work Systema Naturae. Constantine John Phipps formally described the polar bear as a distinct species, Ursus maritimus in 1774, following his 1773 voyage towards the North Pole. Because of its adaptations to a marine environment, some taxonomists, such as Theodore Knottnerus-Meyer, have placed the polar bear in its own genus, Thalarctos. However Ursus is widely considered to be the valid genus for the species on the basis of the fossil record and the fact that it can breed with the brown bear.
Different subspecies have been proposed including Ursus maritimus maritimus and U. m. marinus. However, these are not supported, and the polar bear is considered to be monotypic. One possible fossil subspecies, U. m. tyrannus, was posited in 1964 by Björn Kurtén, who reconstructed the subspecies from a single fragment of an ulna which was approximately 20 percent larger than expected for a polar bear. However, re-evaluation in the 21st century has indicated that the fragment likely comes from a giant brown bear.
Evolution
The polar bear is one of eight extant species in the bear family, Ursidae, and of six extant species in the subfamily Ursinae.
Fossils of polar bears are uncommon. The oldest known fossil is a 130,000- to 110,000-year-old jaw bone, found on Prince Charles Foreland, Norway, in 2004. Scientists in the 20th century surmised that polar bears directly descended from a population of brown bears, possibly in eastern Siberia or Alaska. Mitochondrial DNA studies in the 1990s and 2000s supported the status of the polar bear as a derivative of the brown bear, finding that some brown bear populations were more closely related to polar bears than to other brown bears, particularly the ABC Islands bears of Southeast Alaska. A 2010 study estimated that the polar bear lineage split from other brown bears around 150,000 years ago.
More extensive genetic studies have refuted the idea that polar bears are directly descended from brown bears and found that the two species are separate sister lineages. The genetic similarities between polar bears and some brown bears were found to be the result of interbreeding. A 2012 study estimated the split between polar and brown bears as occurring around 600,000 years ago. A 2022 study estimated the divergence as occurring even earlier at over one million years ago. Glaciation events over hundreds of thousands of years led to both the origin of polar bears and their subsequent interactions and hybridizations with brown bears.
Studies in 2011 and 2012 concluded that gene flow went from brown bears to polar bears during hybridization. In particular, a 2011 study concluded that living polar bear populations derived their maternal lines from now-extinct Irish brown bears. Later studies have clarified that gene flow went from polar to brown bears rather than the reverse. Up to 9 percent of the genome of ABC bears was transferred from polar bears, while Irish bears had up to 21.5 percent polar bear origin. Mass hybridization between the two species appears to have stopped around 200,000 years ago. Modern hybrids are relatively rare in the wild.
Analysis of the number of variations of gene copies in polar bears compared with brown bears and American black bears shows distinct adaptions. Polar bears have a less diverse array of olfactory receptor genes, a result of there being fewer odours in their Arctic habitat. With its carnivorous, high-fat diet the species has fewer copies of the gene involved in making amylase, an enzyme that breaks down starch, and more selection for genes for fatty acid breakdown and a more efficient circulatory system. The polar bear's thicker coat is the result of more copies of genes involved in keratin-creating proteins.
Characteristics
The polar bear is the largest living species of bear and land carnivore, though some brown bear subspecies like the Kodiak bear can rival it in size. Males are generally long with a weight of . Females are smaller at with a weight of . Sexual dimorphism in the species is particularly high compared with most other mammals. Male polar bears also have proportionally larger heads than females. The weight of polar bears fluctuates during the year, as they can bulk up on fat and increase their mass by 50 percent. A fattened, pregnant female can weigh as much as . Adults may stand tall at the shoulder. The tail is long. The largest polar bear on record, reportedly weighing , was a male shot at Kotzebue Sound in northwestern Alaska in 1960.
Compared with the brown bear, this species has a more slender build, with a narrower, flatter and smaller skull, a longer neck, and a lower shoulder hump. The snout profile is curved, resembling a "Roman nose". They have 34–42 teeth including 12 incisors, 4 canines, 8–16 premolars and 10 molars. The teeth are adapted for a more carnivorous diet than that of the brown bear, having longer, sharper and more spaced out canines, and smaller, more pointed cheek teeth (premolars and molars). The species has a large space or diastema between the canines and cheek teeth, which may allow it to better bite into prey. Since it normally preys on animals much smaller than it, the polar bear does not have a particularly strong bite. Polar bears have large paws, with the front paws being broader than the back. The feet are hairier than in other bear species, providing warmth and friction when stepping on snow and sea ice. The claws are small but sharp and hooked and are used both to snatch prey and climb onto ice.
The coat consists of dense underfur around long and guard hairs around long. Males have long hairs on their forelegs, which is thought to signal their fitness to females. The outer surface of the hairs has a scaly appearance, and the guard hairs are hollow, which allows the animals to trap heat and float in the water. The transparent guard hairs forward scatter ultraviolet light between the underfur and the skin, leading to a cycle of absorption and re-emission, keeping them warm. The fur appears white because of the backscatter of incident light and the absence of pigment. Polar bears gain a yellowish colouration as they are exposed more to the sun. This is reversed after they moult. It can also be grayish or brownish. Their light fur provides camouflage in their snowy environment. After emerging from the water, the bear can easily shake itself dry since the hairs are resistant to tangling when wet. Oil secretions prevent the hair from freezing. The skin, including the nose and lips, is black and absorbs heat. Polar bears have a thick layer of fat underneath the skin, which provides both warmth and energy. Polar bears maintain their core body temperature at about . Overheating is countered by a layer of highly vascularized striated muscle tissue and finely controlled blood vessels. Bears also cool off by entering the water.
The eyes of a polar bear are close to the top of the head, which may allow them to stay out of the water when the animal is swimming at the surface. They are relatively small, which may be an adaption against blowing snow and snow blindness. Polar bears are dichromats, and lack the cone cells for seeing medium, mainly green, wavelengths. They have many rod cells, which allow them to see at night. The ears are small, allowing them to better retain heat and not get frostbitten. They can hear best at frequencies of 11.2–22.5 kHz, a wider frequency range than expected given that their prey mostly makes low-frequency sounds. The nasal concha creates a large surface area, so more warm air can move through the nasal passages. Their olfactory system is also large and adapted for smelling prey over vast distances. The animal has reniculate kidneys which filter out the salt in their food.
Distribution and habitat
Polar bears inhabit the Arctic and adjacent areas. Their range includes Greenland, Canada, Alaska, Russia and the Svalbard Archipelago of Norway. Polar bears have been recorded as close as from the North Pole. The southern limits of their range include James Bay and Newfoundland and Labrador in Canada and St. Matthew Island and the Pribilof Islands of Alaska. They are not permanent residents of Iceland but have been recorded visiting there if they can reach it via sea ice. As there has been minimal human encroachment on the bears' remote habitat, they can still be found in much of their original range, more of it than any other large land carnivore.
Polar bears have been divided into at least 18 subpopulations labelled East Greenland (ES), Barents Sea (BS), Kara Sea (KS), Laptev Sea (LVS), Chukchi Sea (CS), northern and southern Beaufort Sea (SBS and NBS), Viscount Melville (VM), M'Clintock Channel (MC), Gulf of Boothia (GB), Lancaster Sound (LS), Norwegian Bay (NB), Kane Basin (KB), Baffin Bay (BB), Davis Strait (DS), Foxe Basin (FB) and the western and southern Hudson Bay (WHB and SHB) populations. Bears in and around the Queen Elizabeth Islands have been proposed as a subpopulation but this is not universally accepted. A 2022 study has suggested that the bears in southeast Greenland should be considered a different subpopulation based on their geographic isolation and genetics. Polar bear populations can also be divided into four gene clusters: Southern Canadian, Canadian Archipelago, Western Basin (northwestern Canada west to the Russian Far East) and Eastern Basin (Greenland east to Siberia).
The polar bear is dependent enough on the ocean to be considered a marine mammal. It is pagophilic and mainly inhabits annual sea ice covering continental shelves and between islands of archipelagos. These areas, known as the "Arctic Ring of Life", have high biological productivity. The species tends to frequent areas where sea ice meets water, such as polynyas and leads, to hunt the seals that make up most of its diet. Polar bears travel in response to changes in ice cover throughout the year. They are forced onto land in summer when the sea ice disappears. Terrestrial habitats used by polar bears include forests, mountains, rocky areas, lakeshores and creeks. In the Chukchi and Beaufort seas, where the sea ice breaks off and floats north during the summer, polar bears generally stay on the ice, though a large portion of the population (15–40%) has been observed spending all summer on land since the 1980s. Some areas have thick multiyear ice that does not completely melt and the bears can stay on all year, though this type of ice has fewer seals and allows for less productivity in the water.
Behaviour and ecology
Polar bears may travel areas as small as to as large as in a year, while drifting ice allows them to move further. Depending on ice conditions, a bear can travel an average of per day. These movements are powered by their energy-rich diet. Polar bears move by walking and galloping and do not trot. Walking bears tilt their front paws towards each other. They can run at estimated speeds of up to but typically move at around . Polar bears are also capable swimmers and can swim at up to . One study found they can swim for an average of 3.4 days at a time and travel an average of . They can dive for as long as three minutes. When swimming, the broad front paws do the paddling, while the hind legs play a role in steering and diving.
Most polar bears are active year-round. Hibernation occurs only among pregnant females. Non-hibernating bears typically have a normal 24-hour cycle even during days of all darkness or all sunlight, though cycles less than a day are more common during the former. The species is generally diurnal, being most active early in the day. Polar bears sleep close to eight hours a day on average. They will sleep in various positions, including curled up, sitting up, lying on one side, on the back with limbs spread, or on the belly with the rump elevated. On sea ice, polar bears snooze at pressure ridges where they dig on the sheltered side and lie down. After a snowstorm, a bear may rest under the snow for hours or days. On land, the bears may dig a resting spot on gravel or sand beaches. They will also sleep on rocky outcrops. In mountainous areas on the coast, mothers and subadults will sleep on slopes where they can better spot another bear coming. Adult males are less at risk from other bears and can sleep nearly anywhere.
Social life
Polar bears are typically solitary, aside from mothers with cubs and mating pairs. On land, they are found closer together and gather around food resources. Adult males, in particular, are more tolerant of each other in land environments and outside the breeding season. They have been recorded forming stable "alliances", travelling, resting and playing together. A dominance hierarchy exists among polar bears with the largest mature males ranking at the top. Adult females outrank subadults and adolescents and younger males outrank females of the same age. In addition, cubs with their mothers outrank those on their own. Females with dependent offspring tend to stay away from males, but are sometimes associated with other female–offspring units, creating "composite families".
Polar bears are generally quiet but can produce various sounds. Chuffing, a soft pulsing call, is made by mother bears presumably to keep in contact with their young. During the breeding season, adult males will chuff at potential mates. Unlike other animals where chuffing is passed through the nostrils, in polar bears it is emitted through a partially open mouth. Cubs will cry for attention and produce humming noises while nursing. Teeth chops, jaw pops, blows, huffs, moans, growls and roars are heard in more hostile encounters. A polar bear visually communicates with its eyes, ears, nose and lips. Chemical communication can also be important: bears secrete their scent from their foot pads into their tracks, allowing individuals to keep track of one another.
Diet and hunting
The polar bear is a hypercarnivore, and the most carnivorous species of bear. It is an apex predator of the Arctic, preying on ice-living seals and consuming their energy-rich blubber. The most commonly taken species is the ringed seal, but they also prey on bearded seals and harp seals. Ringed seals are ideal prey as they are abundant and small enough to be overpowered by even small bears. Bearded seal adults are larger and are more likely to break free from an attacking bear, hence adult male bears are more successful in hunting them. Less common prey are hooded seals, spotted seals, ribbon seals and the more temperate-living harbour seals. Polar bears, mostly adult males, will occasionally hunt walruses both on land and ice. They mainly target young walruses, as adults, with their thick skin and long tusks, are too large and formidable.
Besides seals, bears will prey on cetacean species such as beluga whales and narwhals, as well as reindeer, birds and their eggs, fish and marine invertebrates. They rarely eat plant material as their digestive system is too specialized for animal matter, though they have been recorded eating berries, moss, grass and seaweed. In their southern range, especially near Hudson Bay and James Bay, polar bears endure all summer without sea ice to hunt from and must subsist more on terrestrial foods. Fat reserves allow polar bears to survive for months without eating. Cannibalism is known to occur in the species.
Polar bears hunt their prey in several different ways. When a bear spots a seal hauling out on the sea ice, it slowly stalks it with the head and neck lowered, possibly to make its dark nose and eyes less noticeable. As it gets closer, the bear crouches more and eventually charges at a high speed, attempting to catch the seal before it can escape into its ice hole. Some stalking bears need to move through water; traversing through water cavities in the ice when approaching the seal or swimming towards a seal on an ice floe. The polar bear can stay underwater with its nose exposed. When it gets close enough, the animal lunges from the water to attack.
During a limited time in spring, polar bears will search for ringed seal pups in their birth lairs underneath the ice. Once a bear catches the scent of a hiding pup and pinpoints its location, it approaches the den quietly to not alert it. It uses its front feet to smash through the ice and then pokes its head in to catch the pup before it can escape. A ringed seal's lair can be more than below the surface of the ice and thus more massive bears are better equipped for breaking in. Some bears may simply stay still near a breathing hole or other spot near the water and wait for prey to come by. This can last hours and when a seal surfaces the bear will try to pull it out with its paws and claws. This tactic is the primary hunting method from winter to early spring.
Bears hunt walrus groups by provoking them into stampeding and then look for young that have been crushed or separated from their mothers during the turmoil. There are reports of bears trying to kill or injure walruses by throwing rocks and pieces of ice on them. Belugas and narwhals are vulnerable to bear attacks when they are stranded in shallow water or stuck in isolated breathing holes in the ice. When stalking reindeer, polar bears will hide in vegetation before an ambush. On some occasions, bears may try to catch prey in open water, swimming underneath a seal or aquatic bird. Seals in particular, however, are more agile than bears in the water. Polar bears rely on raw power when trying to kill their prey, and will employ bites and paw swipes. They have the strength to pull a mid-sized seal out of the water or haul a beluga carcass for quite some distance. Polar bears only occasionally store food for later—burying it under snow—and only in the short term.
Arctic foxes routinely follow polar bears and scavenge scraps from their kills. The bears usually tolerate them but will charge a fox that gets too close when they are feeding. Polar bears themselves will scavenge. Subadult bears will eat remains left behind by others. Females with cubs often abandon a carcass when they see an adult male approaching, though are less likely to if they have not eaten in a long time. Whale carcasses are a valuable food source, particularly on land and after the sea ice melts, and attract several bears. In one area in northeastern Alaska, polar bears have been recorded competing with grizzly bears for whale carcasses. Despite their smaller size, grizzlies are more aggressive and polar bears are likely to yield to them in confrontations. Polar bears will also scavenge at garbage dumps during ice-free periods.
Reproduction and development
Polar bear mating takes place on the sea ice and during spring, mostly between March and May. Males search for females in estrus and often travel in twisting paths which reduces the chances of them encountering other males while still allowing them to find females. The movements of females remain linear and they travel more widely. The mating system can be labelled as female-defence polygyny, serial monogamy or promiscuity.
Upon finding a female, a male will try to isolate and guard her. Courtship can be somewhat aggressive, and a male will pursue a female if she tries to run away. It can take days for the male to mate with the female which induces ovulation. After their first copulation, the couple bond. Undisturbed polar bear pairings typically last around two weeks during which they will sleep together and mate multiple times. Competition for mates can be intense and this has led to sexual selection for bigger males. Polar bear males often have scars from fighting. A male and female that have already bonded will flee together when another male arrives. A female mates with multiple males in a season and a single litter can have more than one father.
When the mating season ends, the female will build up more fat reserves to sustain both herself and her young. Sometime between August and October, the female constructs and enters a maternity den for winter. Depending on the area, maternity dens can be found in sea ice just off the coastline or further inland and may be dug underneath snow, earth or a combination of both. The inside of these shelters can be around wide with a ceiling height of while the entrance may be long and wide. The temperature of a den can be much higher than the outside. Females hibernate and give birth to their cubs in the dens. Hibernating bears fast and internally recycle bodily waste. Polar bears experience delayed implantation and the fertilized embryo does not start development until the fall, between mid-September and mid-October. With delayed implantation, gestation in the species lasts seven to nine months but actual pregnancy is only two months.
Mother polar bears typically give birth to two cubs per litter. As with other bear species, newborn polar bears are tiny and altricial. The newborns have woolly hair and pink skin, with a weight of around . Their eyes remain closed for a month. The mother's fatty milk fuels their growth, and the cubs are kept warm both by the mother's body heat and the den. The mother emerges from the den between late February and early April, and her cubs are well-developed and capable of walking with her. At this time they weigh . A polar bear family stays near the den for roughly two weeks; during this time the cubs will move and play around while the mother mostly rests. They eventually head out on the sea ice.
Cubs under a year old stay close to their mother. When she hunts, they stay still and watch until she calls them back. Observing and imitating the mother helps the cubs hone their hunting skills. After their first year they become more independent and explore. At around two years old, they are capable of hunting on their own. The young suckle their mother as she is lying on her side or sitting on her rump. A lactating female cannot conceive and give birth, and cubs are weaned between two and two-and-a-half years. She may simply leave her weaned young or they may be chased away by a courting male. Polar bears reach sexual maturity at around four years for females and six years for males. Females reach their adult size at 4 or 5 years of age while males are fully grown at twice that age.
Mortality
Polar bears can live up to 30 years. The bear's long lifespan and ability to consistently produce young offsets cub deaths in a population. Some cubs die in the dens or the womb if the female is not in good condition. Nevertheless, the female has a chance to produce a surviving litter the next spring if she can eat better in the coming year. Cubs will eventually starve if their mothers cannot kill enough prey. Cubs also face threats from wolves and adult male bears. Males kill cubs to bring their mother back into estrus but also kill young outside the breeding season for food. A female and her cubs can flee from the slower male. If the male can get close to a cub, the mother may try to fight him off, sometimes at the cost of her life.
Subadult bears, who are independent but not quite mature, have a particularly rough time as they are not as successful hunters as adults. Even when they do succeed, their kill will likely be stolen by a larger bear. Hence subadults have to scavenge and are often underweight and at risk of starvation. At adulthood, polar bears have a high survival rate, though adult males suffer injuries from fights over mates. Polar bears are especially susceptible to Trichinella, a parasitic roundworm they contract through cannibalism.
Conservation status
In 2015, the IUCN Red List categorized the polar bear as vulnerable because of a "decline in area of occupancy, extent of occurrence and/or quality of habitat". It estimated the total population to be between 22,000 and 31,000, and the current population trend is unknown. Threats to polar bear populations include climate change, pollution and energy development.
In 2021, the IUCN/SSC Polar Bear Specialist Group labelled four subpopulations (Barents and Chukchi Sea, Foxe Basin and Gulf of Boothia) as "likely stable", two (Kane Basin and M'Clintock Channel) as "likely increased" and three (Southern Beaufort Sea, Southern and Western Hudson Bay) as "likely decreased" over specific periods between the 1980s and 2010s. The remaining ten did not have enough data. A 2008 study predicted two-thirds of the world's polar bears may disappear by 2050, based on the reduction of sea ice, and only one population would likely survive in 50 years. A 2016 study projected a likely decline in polar bear numbers of more than 30 percent over three generations. The study concluded that declines of more than 50 percent are much less likely. A 2012 review suggested that polar bears may become regionally extinct in southern areas by 2050 if trends continue, leaving the Canadian Archipelago and northern Greenland as strongholds. A 2020 study concluded that a worst-case scenario pathway would lead to the majority of subpopulations disappearing by 2100, while an intermediate pathway would still see the extirpation of some subpopulations within the same time period. However, a 2025 study cautioned that "previously published approach is too sensitive to modeling assumptions and choice of decision rules to accurately evaluate the impacts of GHG [greenhouse gas] emissions on polar bear demographic rates".
The key danger from climate change is malnutrition or starvation due to habitat loss. Polar bears hunt seals on the sea ice, and rising temperatures cause the ice to melt earlier in the year, driving the bears to shore before they have built sufficient fat reserves to survive the period of scarce food in the late summer and early fall. Thinner sea ice tends to break more easily, which makes it more difficult for polar bears to access seals. Insufficient nourishment leads to lower reproductive rates in adult females and lower survival rates in cubs and juvenile bears. Lack of access to seals also causes bears to find food on land which increases the risk of conflict with humans. A 2024 study concluded that greater consumption of terrestrial foods during the longer warm periods are unlikely to provide enough nourishment, increasing the risk of starvation during ice-free periods. Subadult bears would be particularly vulnerable.
Reduction in sea ice cover also forces bears to swim longer distances, which further depletes their energy stores and occasionally leads to drowning. Increased ice mobility may result in less stable sites for dens or longer distances for mothers travelling to and from dens on land. Thawing of permafrost would lead to more fire-prone roofs for bears denning underground. Less snow may affect insulation while more rain could cause more cave-ins. The maximum corticosteroid-binding capacity of corticosteroid-binding globulin in polar bear serum correlates with stress in polar bears, and this has increased with climate warming. Disease-causing bacteria and parasites would flourish more readily in a warmer climate.
Oil and gas development also affects polar bear habitat. The Chukchi Sea Planning Area of northwestern Alaska, which has had many drilling leases, was found to be an important site for non-denning female bears. Oil spills are also a risk. A 2018 study found that ten percent or less of prime bear habitat in the Chukchi Sea is vulnerable to a potential spill, but a spill at full reach could impact nearly 40 percent of the polar bear population. Polar bears accumulate high levels of persistent organic pollutants such as polychlorinated biphenyl (PCBs) and chlorinated pesticides, because of their position at the top of the ecological pyramid. Many of these chemicals have been internationally banned as a result of the recognition of their harm to the environment. Traces of them have slowly dwindled in polar bears but persist and have even increased in some populations.
Polar bears receive some legal protection in all the countries they inhabit. The species has been labelled as threatened under the US Endangered Species Act since 2008, while the Committee on the Status of Endangered Wildlife in Canada listed it as of 'Special concern' since 1991. In 1973, the Agreement on the Conservation of Polar Bears was signed by all five nations with polar bear populations, Canada, Denmark (of which Greenland is an autonomous territory), Russia (then USSR), Norway and the US. This banned most harvesting of polar bears, allowed indigenous hunting using traditional methods, and promoted the preservation of bear habitat. The Convention on International Trade in Endangered Species of Wild Fauna lists the species under Appendix II, which allows regulated trade.
Relationship with humans
Polar bears have coexisted and interacted with circumpolar peoples for millennia. "White bears" are mentioned as commercial items in the Japanese book Nihon Shoki in the seventh century. It is not clear if these were polar bears or white-coloured brown bears. During the Middle Ages, Europeans considered white bears to be a novelty and were more familiar with brown- and black-coloured bears. The first known written account of the polar bear in its natural environment is found in the 13th-century anonymous Norwegian text Konungs skuggsjá, which mentions that "the white bear of Greenland wanders most of the time on the ice of the sea, hunting seals and whales and feeding on them" and says the bear is "as skillful a swimmer as any seal or whale".
Over the next centuries, several European explorers would mention polar bears and describe their habits. Such accounts became more accurate after the Enlightenment, and both living and dead specimens were brought back. Nevertheless, some fanciful reports continued, including the idea that polar bears cover their noses during hunts. A relatively accurate drawing of a polar bear is found in Henry Ellis's work A Voyage to Hudson's Bay (1748). Polar bears were formally classified as a species by Constantine Phipps after his 1773 voyage to the Arctic. Accompanying him was a young Horatio Nelson, who was said to have wanted to get a polar bear coat for his father but failed in his hunt. In his 1785 edition of Histoire Naturelle, Comte de Buffon mentions and depicts a "sea bear", clearly a polar bear, and "land bears", likely brown and black bears. This helped promote ideas about speciation. Buffon also mentioned a "white bear of the forest", possibly a Kermode bear.
Exploitation
Polar bears were hunted as early as 8,000 years ago, as indicated by archaeological remains at Zhokhov Island in the East Siberian Sea. The oldest graphic depiction of a polar bear shows it being hunted by a man with three dogs. This rock art was among several petroglyphs found at Pegtymel in Siberia and dates from the fifth to eighth centuries. Before access to firearms, native people used lances, bows and arrows and hunted in groups accompanied by dogs. Though hunting typically took place on foot, some people killed swimming bears from boats with a harpoon. Polar bears were sometimes killed in their dens. Killing a polar bear was considered a rite of passage for boys in some cultures. Native people respected the animal and hunts were subject to strict rituals. Bears were harvested for the fur, meat, fat, tendons, bones and teeth. The fur was worn and slept on, while the bones and teeth were made into tools. For the Netsilik, the individual who finally killed the bear had the right to its fur while the meat was passed to all in the party. Some people kept the cubs of slain bears.
Norsemen in Greenland traded polar bear furs in the Middle Ages. Russia traded polar bear products as early as 1556, with Novaya Zemlya and Franz Josef Land being important commercial centres. Large-scale hunting of bears at Svalbard occurred since at least the 18th century, when no less than 150 bears were killed each year by Russian explorers. In the next century, more Norwegians were harvesting the bears on the island. From the 1870s to the 1970s, around 22,000 of the animals were hunted in total. Over 150,000 polar bears in total were either killed or captured in Russia and Svalbard, from the 18th to the 20th century. In the Canadian Arctic, bears were harvested by commercial whalers especially if they could not get enough whales. The Hudson's Bay Company is estimated to have sold 15,000 polar bear coats between the late 19th century and early 20th century. In the mid-20th century, countries began to regulate polar bear harvesting, culminating in the 1973 agreement.
Polar bear meat was commonly eaten as rations by explorers and sailors in the Arctic, to widely varying appraisal. Some have called it too coarse and strong-smelling to eat, while others have praised it as a "royal dish". The liver was known to be toxic, owing to the accumulation of vitamin A from the bears' prey. Polar bear fat was also used in lamps when other fuel was unavailable. Polar bear rugs were almost ubiquitous on the floors of Norwegian churches by the 13th and 14th centuries. In more modern times, classical Hollywood actors, notably Marilyn Monroe, posed on bearskin rugs. Such images often had sexual connotations.
Conflicts
When the sea ice melts, polar bears, particularly subadults, come into conflict with humans over resources on land. They are attracted to the smell of human-made foods, particularly at garbage dumps, and may be shot when they encroach on private property. In Churchill, Manitoba, local authorities maintain a "polar bear jail" where nuisance bears are held until the sea ice freezes again. Climate change has increased conflicts between polar bears and humans. Over 50 polar bears swarmed a town in Novaya Zemlya in February 2019, leading local authorities to declare a state of emergency.
From 1870 to 2014, there were an estimated 73 polar bear attacks on humans, which led to 20 deaths. The majority of attacks were by hungry males, typically subadults, while attacks by females were usually in defence of their young. Compared with brown and American black bears, polar bear attacks occurred more often near and around human settlements. This may be because the bears become desperate for food and are thus more likely to seek out human settlements. As with the other two bear species, polar bears are unlikely to target more than two people at once. Though popularly thought of as the most dangerous bear, the polar bear is no more aggressive toward humans than other species.
Captivity
The polar bear was long a particularly sought-after species for exotic animal collectors, since it was relatively rare, lived in remote regions, and had a reputation as a ferocious beast. It is one of the few marine mammals that reproduce well in captivity. They were originally kept only by royals and elites. The Tower of London got a polar bear as early as 1252 under King Henry III. In 1609, James VI and I of Scotland, England and Ireland was given two polar bear cubs by the sailor Jonas Poole, who got them during a trip to Svalbard. At the end of the 17th century, Frederick I of Prussia housed polar bears in menageries with other wild animals. He had their claws and canines removed so that they could perform mock fights safely. Around 1726, Catherine I of Russia gifted two polar bears to Augustus II the Strong of Poland, who desired them for his animal collection. Later, polar bears were displayed to the public in zoos and circuses. In the early 19th century, the species was exhibited at the Exeter Exchange in London, as well as in menageries in Vienna and Paris. The first zoo in North America to exhibit a polar bear was the Philadelphia Zoo in 1859.
Modern polar bear exhibits were pioneered by Carl Hagenbeck, who replaced cages and pits with settings that mimicked the animal's natural environment. In 1907, he revealed a complex panoramic structure at the Tierpark Hagenbeck Zoo in Hamburg consisting of exhibits made of artificial snow and ice separated by moats. Different polar animals were displayed on each platform, giving the illusion of them living together. Starting in 1975, Hellabrunn Zoo in Munich housed its polar bears in an exhibit which consisted of a glass barrier, a house, concrete platforms mimicking ice floes and a large pool. Inside the house were maternity dens and rooms for the staff to prepare and store the food. The exhibit was connected to an outdoor yard for extra room. Similar naturalistic and "immersive" exhibits were opened in the early 21st century, such as the "Arctic Ring of Life" at the Detroit Zoo and Ontario's Cochrane Polar Bear Habitat. Many zoos in Europe and North America have stopped keeping polar bears because of the size and costs of their complex exhibits. In North America, the population of polar bears in zoos reached its zenith in 1975 with 229 animals and has declined in the 21st century.
Polar bears have been trained to perform in circuses. Bears in general, being large, powerful, easy to train and human-like in form, were widespread in circuses, and the white coat of polar bears made them particularly attractive. Circuses helped change the polar bear's image from a fearsome monster to something more comical. Performing polar bears were used in 1888 by Circus Krone in Germany and later in 1904 by the Bostock and Wombwell Menagerie in England. Circus director Wilhelm Hagenbeck trained up to 75 polar bears to slide into a large tank through a chute. He began performing with them in 1908, and they had a particularly well-received show at the Hippodrome in London. Other circus tricks performed by polar bears involved tightropes, balls, roller skates and motorcycles. One of the most famous polar bear trainers in the second half of the twentieth century was the East German Ursula Böttcher, whose small stature contrasted with that of the large bears. Starting in the late 20th century, most polar bear acts were retired, and the use of polar bears in circuses is now prohibited in the US.
Several captive polar bears gained celebrity status in the late 20th and early 21st century, notably Knut of the Berlin Zoological Garden, who was rejected by his mother and had to be hand-reared by zookeepers. Another bear, Binky of the Alaska Zoo in Anchorage, became famous for attacking two visitors who got too close. Captive polar bears may pace back and forth, a stereotypical behaviour. In one study, they were recorded to have spent 14 percent of their days pacing. Gus of the Central Park Zoo was prescribed Prozac by a therapist for constantly swimming in his pool. To reduce stereotypical behaviours, zookeepers provide the bears with enrichment items to trigger their play behaviour. In sufficiently warm conditions, algae concentrated in the medulla of their fur's guard hairs may cause zoo polar bears to appear green.
Cultural significance
Polar bears have prominent roles in Inuit culture and religion. The deity Torngarsuk is sometimes imagined as a giant polar bear. He resides underneath the sea floor in an underworld of the dead and has power over sea creatures. Kalaallit shamans would worship him through singing and dancing and were expected to be taken by him to the sea and consumed if he considered them worthy. Polar bears were also associated with the goddess Nuliajuk who was responsible for their creation, along with other sea creatures. It is believed that shamans could reach the Moon or the bottom of the ocean by riding on a guardian spirit in the form of a polar bear. Some folklore involves people turning into or disguising themselves as polar bears by donning their skins or the reverse, with polar bears removing their skins. In Inuit astronomy, the Pleiades star cluster is conceived of as a polar bear trapped by dogs while Orion's Belt, the Hyades and Aldebaran represent hunters, dogs and a wounded bear respectively.
Nordic folklore and literature have also featured polar bears. In The Tale of Auðun of the West Fjords, written around 1275, a poor man named Auðun spends all his money on a polar bear in Greenland, but ends up wealthy after giving the bear to the king of Denmark. In the 14th-century manuscript Hauksbók, a man named Odd kills and eats a polar bear that killed his father and brother. In the story of The Grimsey Man and the Bear, a mother bear nurses and rescues a farmer stuck on an ice floe and is repaid with sheep meat. 18th-century Icelandic writings mention the legend of a "polar bear king", a beast depicted as a polar bear with "ruddy cheeks" and a unicorn-like horn that glows in the dark. The king could understand human speech and was considered to be very astute. Two Norwegian fairy tales, "East of the Sun and West of the Moon" and "White-Bear-King-Valemon", involve white bears turning into men and seducing women.
Drawings of polar bears have been featured on maps of the northern regions. Possibly the earliest depiction of a polar bear on a map is in the Swedish Carta marina of 1539, which shows a white bear on Iceland or "Islandia". A 1544 map of North America includes two polar bears near Quebec. Notable paintings featuring polar bears include François-Auguste Biard's Fighting Polar Bears (1839) and Edwin Landseer's Man Proposes, God Disposes (1864). Polar bears have also been filmed for cinema. An Inuit polar bear hunt was shot for the 1932 documentary Igloo, while the 1974 film The White Dawn filmed a simulated stabbing of a trained bear for a scene. In the film The Big Show (1961), two characters are killed by a circus polar bear. The scenes were shot using animal trainers instead of the actors. In modern literature, polar bears have been characters in both children's fiction, like Hans Beer's Little Polar Bear and the Whales and Sakiasi Qaunaq's The Orphan and the Polar Bear, and fantasy novels, like Philip Pullman's His Dark Materials series. In radio, Mel Blanc provided the vocals for Jack Benny's pet polar bear Carmichael on The Jack Benny Program. The polar bear is featured on flags and coats of arms, like the coat of arms of Greenland, and in many advertisements, notably for Coca-Cola since 1922.
As charismatic megafauna, polar bears have been used to raise awareness of the dangers of climate change. Aurora the polar bear is a giant marionette created by Greenpeace for climate protests. The World Wide Fund for Nature has sold plush polar bears as part of its "Arctic Home" campaign. Photographs of polar bears have been featured in National Geographic and Time magazines, including ones of them standing on ice floes, while the climate change documentary and advocacy film An Inconvenient Truth (2006) includes an animated bear swimming. Automobile manufacturer Nissan used a polar bear in one of its commercials, hugging a man for using an electric car. To make a statement about global warming, in 2009 a Copenhagen ice statue of a polar bear with a bronze skeleton was purposely left to melt in the sun.
See also
2011 Svalbard polar bear attack
International Polar Bear Day
List of individual bears – includes individual captive polar bears
Polar Bears International – conservation organization
Polar Bear Shores – an exhibit featuring polar bears at Sea World in Australia
Notes
References
Bibliography
External links
Polar Bears International website
ARKive—images and movies of the polar bear (Ursus maritimus)
Photosynthesis
https://en.wikipedia.org/wiki/Photosynthesis
Composite image showing the global distribution of photosynthesis, including both oceanic phytoplankton and terrestrial vegetation. Dark red and blue-green indicate regions of high photosynthetic activity in the ocean and on land, respectively.
Photosynthesis is a system of biological processes by which photopigment-bearing autotrophic organisms, such as most plants, algae and cyanobacteria, convert light energy — typically from sunlight — into the chemical energy necessary to fuel their metabolism. The term photosynthesis usually refers to oxygenic photosynthesis, a process that releases oxygen as a byproduct of water splitting. Photosynthetic organisms store the converted chemical energy within the bonds of intracellular organic compounds (complex compounds containing carbon), typically carbohydrates like sugars (mainly glucose, fructose and sucrose), starches, phytoglycogen and cellulose. When needing to use this stored energy, an organism's cells then metabolize the organic compounds through cellular respiration. Photosynthesis plays a critical role in producing and maintaining the oxygen content of the Earth's atmosphere, and it supplies most of the biological energy necessary for complex life on Earth.
Some organisms also perform anoxygenic photosynthesis, which does not produce oxygen. Some bacteria (e.g. purple bacteria) use bacteriochlorophyll and split hydrogen sulfide as a reductant instead of water, releasing sulfur instead of oxygen; this was a dominant form of photosynthesis in the euxinic Canfield oceans during the Boring Billion. Archaea such as Halobacterium also perform a type of non-carbon-fixing anoxygenic photosynthesis, where the simpler photopigment retinal and its microbial rhodopsin derivatives are used to absorb green light and produce a proton (hydron) gradient across the cell membrane, and the subsequent ion movement powers transmembrane proton pumps to directly synthesize adenosine triphosphate (ATP), the "energy currency" of cells. Such archaeal photosynthesis might have been the earliest form of photosynthesis that evolved on Earth, as far back as the Paleoarchean, preceding that of cyanobacteria (see Purple Earth hypothesis).
While the details may differ between species, the process always begins when light energy is absorbed by the reaction centers, proteins that contain photosynthetic pigments or chromophores. In plants, these pigments are chlorophylls (a porphyrin derivative that absorbs the red and blue spectra of light, thus reflecting green) held inside chloroplasts, abundant in leaf cells. In cyanobacteria, they are embedded in the plasma membrane. In these light-dependent reactions, some energy is used to strip electrons from suitable substances, such as water, producing oxygen gas. The hydrogen freed by the splitting of water is used in the creation of two important molecules that participate in energetic processes: reduced nicotinamide adenine dinucleotide phosphate (NADPH) and ATP.
In plants, algae, and cyanobacteria, sugars are synthesized by a subsequent sequence of reactions called the Calvin cycle. In this process, atmospheric carbon dioxide is incorporated into already existing organic compounds, such as ribulose bisphosphate (RuBP). Using the ATP and NADPH produced by the light-dependent reactions, the resulting compounds are then reduced and removed to form further carbohydrates, such as glucose. In other bacteria, different mechanisms like the reverse Krebs cycle are used to achieve the same end.
The first photosynthetic organisms probably evolved early in the evolutionary history of life using reducing agents such as hydrogen or hydrogen sulfide, rather than water, as sources of electrons. Cyanobacteria appeared later; the excess oxygen they produced contributed directly to the oxygenation of the Earth, which rendered the evolution of complex life possible. The average rate of energy captured by global photosynthesis is approximately 130 terawatts, which is about eight times the total power consumption of human civilization. Photosynthetic organisms also convert around 100–115 billion tons (91–104 petagrams, i.e. billions of metric tons) of carbon into biomass per year. Photosynthesis was discovered in 1779 by Jan Ingenhousz, who showed that plants need light, not just soil and water.
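As a rough sanity check on the "about eight times" figure (the value of roughly 18 TW for humanity's total primary power consumption is an assumed ballpark, not a figure from this article):

\[
\frac{130\ \text{TW}}{\approx 18\ \text{TW}} \approx 7\text{–}8
\]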
Overview
Most photosynthetic organisms are photoautotrophs, which means that they are able to synthesize food directly from carbon dioxide and water using energy from light. However, not all organisms use carbon dioxide as a source of carbon atoms to carry out photosynthesis; photoheterotrophs use organic compounds, rather than carbon dioxide, as a source of carbon.
In plants, algae, and cyanobacteria, photosynthesis releases oxygen. This oxygenic photosynthesis is by far the most common type of photosynthesis used by living organisms. Some shade-loving plants (sciophytes) produce such low levels of oxygen during photosynthesis that they use all of it themselves instead of releasing it to the atmosphere.
Although there are some differences between oxygenic photosynthesis in plants, algae, and cyanobacteria, the overall process is quite similar in these organisms. There are also many varieties of anoxygenic photosynthesis, used mostly by bacteria, which consume carbon dioxide but do not release oxygen or which produce elemental sulfur instead of molecular oxygen.
Carbon dioxide is converted into sugars in a process called carbon fixation; photosynthesis captures energy from sunlight to convert carbon dioxide into carbohydrates. Carbon fixation is an endothermic redox reaction. In general outline, photosynthesis is the opposite of cellular respiration: while photosynthesis is a process of reduction of carbon dioxide to carbohydrates, cellular respiration is the oxidation of carbohydrates or other nutrients to carbon dioxide. Nutrients used in cellular respiration include carbohydrates, amino acids and fatty acids. These nutrients are oxidized to produce carbon dioxide and water, and to release chemical energy to drive the organism's metabolism.
Photosynthesis and cellular respiration are distinct processes, as they take place through different sequences of chemical reactions and in different cellular compartments (cellular respiration in mitochondria).
The general equation for photosynthesis as first proposed by Cornelis van Niel is:
CO2 + 2 H2A + photons → [CH2O] + 2 A + H2O
(carbon dioxide + electron donor + light energy → carbohydrate + oxidized electron donor + water)
Since water is used as the electron donor in oxygenic photosynthesis, the equation for this process is:
CO2 + 2 H2O + photons → [CH2O] + O2 + H2O
(carbon dioxide + water + light energy → carbohydrate + oxygen + water)
This equation emphasizes that water is both a reactant in the light-dependent reaction and a product of the light-independent reaction, but canceling n water molecules from each side gives the net equation:
CO2 + H2O + photons → [CH2O] + O2
(carbon dioxide + water + light energy → carbohydrate + oxygen)
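Multiplying the net equation by six and taking the carbohydrate unit [CH2O] to be part of glucose gives the familiar textbook summary (shown here for concreteness):

\[
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} + \text{light} \longrightarrow \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
\]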
Other processes substitute other compounds (such as arsenite) for water in the electron-supply role; for example, some microbes use sunlight to oxidize arsenite to arsenate (Anaerobic Photosynthesis, Chemical & Engineering News, 86 (33), August 18, 2008, p. 36). The equation for this reaction is:
CO2 + (AsO3)3− + photons → (AsO4)3− + CO (the carbon monoxide is used to build other compounds in subsequent reactions)
Photosynthesis occurs in two stages. In the first stage, light-dependent reactions or light reactions capture the energy of light and use it to make the hydrogen carrier NADPH and the energy-storage molecule ATP. During the second stage, the light-independent reactions use these products to capture and reduce carbon dioxide.
Most organisms that use oxygenic photosynthesis use visible light for the light-dependent reactions, although at least three use shortwave infrared or, more specifically, far-red radiation.
Some organisms employ even more radical variants of photosynthesis. Some archaea use a simpler method that employs a pigment similar to those used for vision in animals. The bacteriorhodopsin changes its configuration in response to sunlight, acting as a proton pump. This produces a proton gradient more directly, which is then converted to chemical energy. The process does not involve carbon dioxide fixation and does not release oxygen, and seems to have evolved separately from the more common types of photosynthesis.
Photosynthetic membranes and organelles
In photosynthetic bacteria, the proteins that gather light for photosynthesis are embedded in cell membranes. In its simplest form, this involves the membrane surrounding the cell itself. However, the membrane may be tightly folded into cylindrical sheets called thylakoids, or bunched up into round vesicles called intracytoplasmic membranes. These structures can fill most of the interior of a cell, giving the membrane a very large surface area and therefore increasing the amount of light that the bacteria can absorb.
In plants and algae, photosynthesis takes place in organelles called chloroplasts. A typical plant cell contains about 10 to 100 chloroplasts. The chloroplast is enclosed by a membrane. This membrane is composed of a phospholipid inner membrane, a phospholipid outer membrane, and an intermembrane space. Enclosed by the membrane is an aqueous fluid called the stroma. Embedded within the stroma are stacks of thylakoids (grana), which are the site of photosynthesis. The thylakoids appear as flattened disks. The thylakoid itself is enclosed by the thylakoid membrane, and within the enclosed volume is a lumen or thylakoid space. Embedded in the thylakoid membrane are integral and peripheral membrane protein complexes of the photosynthetic system.
Plants absorb light primarily using the pigment chlorophyll. The green part of the light spectrum is not absorbed but is reflected, which is the reason that most plants have a green color. Besides chlorophyll, plants also use pigments such as carotenes and xanthophylls. Algae also use chlorophyll, but various other pigments are present, such as phycocyanin, carotenes, and xanthophylls in green algae, phycoerythrin in red algae (rhodophytes) and fucoxanthin in brown algae and diatoms resulting in a wide variety of colors.
These pigments are embedded in plants and algae in complexes called antenna proteins. In such proteins, the pigments are arranged to work together. Such a combination of proteins is also called a light-harvesting complex.
Although all cells in the green parts of a plant have chloroplasts, the majority of those are found in specially adapted structures called leaves. Certain species adapted to conditions of strong sunlight and aridity, such as many Euphorbia and cactus species, have their main photosynthetic organs in their stems. The cells in the interior tissues of a leaf, called the mesophyll, can contain between 450,000 and 800,000 chloroplasts for every square millimeter of leaf. The surface of the leaf is coated with a water-resistant waxy cuticle that protects the leaf from excessive evaporation of water and decreases the absorption of ultraviolet or blue light to minimize heating. The transparent epidermis layer allows light to pass through to the palisade mesophyll cells where most of the photosynthesis takes place.
Light-dependent reactions
In the light-dependent reactions, one molecule of the pigment chlorophyll absorbs one photon and loses one electron. This electron is taken up by a modified form of chlorophyll called pheophytin, which passes the electron to a quinone molecule, starting the flow of electrons down an electron transport chain that leads to the ultimate reduction of NADP to NADPH. In addition, this creates a proton gradient (energy gradient) across the chloroplast membrane, which is used by ATP synthase in the synthesis of ATP. The chlorophyll molecule ultimately regains the electron it lost when a water molecule is split in a process called photolysis, which releases oxygen.
The overall equation for the light-dependent reactions under the conditions of non-cyclic electron flow in green plants is:
2 H2O + 2 NADP+ + 3 ADP + 3 Pi + light → 2 NADPH + 2 H+ + 3 ATP + O2
Not all wavelengths of light can support photosynthesis. The photosynthetic action spectrum depends on the type of accessory pigments present. For example, in green plants, the action spectrum resembles the absorption spectrum for chlorophylls and carotenoids with absorption peaks in violet-blue and red light. In red algae, the action spectrum is blue-green light, which allows these algae to use the blue end of the spectrum to grow in the deeper waters that filter out the longer wavelengths (red light) used by above-ground green plants. The non-absorbed part of the light spectrum is what gives photosynthetic organisms their color (e.g., green plants, red algae, purple bacteria) and is the least effective for photosynthesis in the respective organisms.
Z scheme
In plants, light-dependent reactions occur in the thylakoid membranes of the chloroplasts where they drive the synthesis of ATP and NADPH. The light-dependent reactions are of two forms: cyclic and non-cyclic.
In the non-cyclic reaction, the photons are captured in the light-harvesting antenna complexes of photosystem II by chlorophyll and other accessory pigments (see diagram "Z-scheme"). The absorption of a photon by the antenna complex loosens an electron by a process called photoinduced charge separation. The antenna system is at the core of the chlorophyll molecule of the photosystem II reaction center. That loosened electron is taken up by the primary electron-acceptor molecule, pheophytin. As the electrons are shuttled through an electron transport chain (the so-called Z-scheme shown in the diagram), a chemiosmotic potential is generated by pumping proton cations (H+) across the membrane and into the thylakoid space. An ATP synthase enzyme uses that chemiosmotic potential to make ATP during photophosphorylation, whereas NADPH is a product of the terminal redox reaction in the Z-scheme. The electron enters a chlorophyll molecule in Photosystem I. There it is further excited by the light absorbed by that photosystem. The electron is then passed along a chain of electron acceptors to which it transfers some of its energy. The energy delivered to the electron acceptors is used to move hydrogen ions across the thylakoid membrane into the lumen. The electron is eventually used to reduce the coenzyme NADP with an H+ to NADPH (which has functions in the light-independent reaction); at that point, the path of that electron ends.
The cyclic reaction is similar to that of the non-cyclic but differs in that it generates only ATP, and no reduced NADP (NADPH) is created. The cyclic reaction takes place only at photosystem I. Once the electron is displaced from the photosystem, the electron is passed down the electron acceptor molecules and returns to photosystem I, from where it was emitted, hence the name cyclic reaction.
Water photolysis
Linear electron transport through a photosystem will leave the reaction center of that photosystem oxidized. Elevating another electron will first require re-reduction of the reaction center. The excited electrons lost from the reaction center (P700) of photosystem I are replaced by transfer from plastocyanin, whose electrons come from electron transport through photosystem II. Photosystem II, as the first step of the Z-scheme, requires an external source of electrons to reduce its oxidized chlorophyll a reaction center. The source of electrons for photosynthesis in green plants and cyanobacteria is water. Two water molecules are oxidized by the energy of four successive charge-separation reactions of photosystem II to yield a molecule of diatomic oxygen and four hydrogen ions. The electrons yielded are transferred to a redox-active tyrosine residue that is oxidized by the energy of P680. This resets the ability of P680 to absorb another photon and release another photo-dissociated electron. The oxidation of water is catalyzed in photosystem II by a redox-active structure that contains four manganese ions and a calcium ion; this oxygen-evolving complex binds two water molecules and contains the four oxidizing equivalents that are used to drive the water-oxidizing reaction (Kok's S-state diagrams). The hydrogen ions are released in the thylakoid lumen and therefore contribute to the transmembrane chemiosmotic potential that leads to ATP synthesis. Oxygen is a waste product of light-dependent reactions, but the majority of organisms on Earth use oxygen and its energy for cellular respiration, including photosynthetic organisms.
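The underlying water-oxidation half-reaction (standard chemistry, included here to make the stoichiometry explicit) shows why four successive charge separations are needed for each molecule of oxygen released:

\[
2\,\mathrm{H_2O} \longrightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^-
\]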
Light-independent reactions
Calvin cycle
In the light-independent (or "dark") reactions, the enzyme RuBisCO captures CO2 from the atmosphere and, in a process called the Calvin cycle, uses the newly formed NADPH and releases three-carbon sugars, which are later combined to form sucrose and starch. The overall equation for the light-independent reactions in green plants is:
3 CO2 + 9 ATP + 6 NADPH + 6 H+ → C3H6O3-phosphate + 9 ADP + 8 Pi + 6 NADP+ + 3 H2O
Carbon fixation produces the three-carbon sugar intermediate, which is then converted into the final carbohydrate products. The simple carbon sugars photosynthesis produces are then used to form other organic compounds, such as the building material cellulose, the precursors for lipid and amino acid biosynthesis, or as a fuel in cellular respiration. The latter occurs not only in plants but also in animals when the carbon and energy from plants is passed through a food chain.
The fixation or reduction of carbon dioxide is a process in which carbon dioxide combines with a five-carbon sugar, ribulose 1,5-bisphosphate, to yield two molecules of a three-carbon compound, glycerate 3-phosphate, also known as 3-phosphoglycerate. Glycerate 3-phosphate, in the presence of ATP and NADPH produced during the light-dependent stages, is reduced to glyceraldehyde 3-phosphate. This product is also referred to as 3-phosphoglyceraldehyde (PGAL) or, more generically, as triose phosphate. Most (five out of six molecules) of the glyceraldehyde 3-phosphate produced are used to regenerate ribulose 1,5-bisphosphate so the process can continue. The triose phosphates not thus "recycled" often condense to form hexose phosphates, which ultimately yield sucrose, starch, and cellulose, as well as glucose and fructose. The sugars produced during carbon metabolism yield carbon skeletons that can be used for other metabolic reactions like the production of amino acids and lipids.
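A simple carbon count makes the "five out of six" regeneration ratio explicit (this is just stoichiometric bookkeeping of the cycle described above):

\[
3\,\mathrm{RuBP}\ (15\,\mathrm{C}) + 3\,\mathrm{CO_2}\ (3\,\mathrm{C}) \longrightarrow 6\,\mathrm{G3P}\ (18\,\mathrm{C}),\qquad
5\,\mathrm{G3P}\ (15\,\mathrm{C}) \rightarrow 3\,\mathrm{RuBP},\qquad
1\,\mathrm{G3P}\ (3\,\mathrm{C})\ \text{exported as net product}
\]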
Carbon concentrating mechanisms
On land
In hot and dry conditions, plants close their stomata to prevent water loss. Under these conditions, CO2 will decrease and oxygen gas, produced by the light reactions of photosynthesis, will increase, causing an increase of photorespiration by the oxygenase activity of ribulose-1,5-bisphosphate carboxylase/oxygenase (RuBisCO) and a decrease in carbon fixation. Some plants have evolved mechanisms to increase the CO2 concentration in the leaves under these conditions.
Plants that use the C4 carbon fixation process chemically fix carbon dioxide in the cells of the mesophyll by adding it to the three-carbon molecule phosphoenolpyruvate (PEP), a reaction catalyzed by an enzyme called PEP carboxylase, creating the four-carbon organic acid oxaloacetic acid. Oxaloacetic acid or malate synthesized by this process is then translocated to specialized bundle sheath cells where the enzyme RuBisCO and other Calvin cycle enzymes are located, and where CO2 released by decarboxylation of the four-carbon acids is then fixed by RuBisCO activity to the three-carbon 3-phosphoglyceric acids. The physical separation of RuBisCO from the oxygen-generating light reactions reduces photorespiration and increases CO2 fixation and, thus, the photosynthetic capacity of the leaf. C4 plants can produce more sugar than C3 plants in conditions of high light and temperature. Many important crop plants are C4 plants, including maize, sorghum, sugarcane, and millet. Plants that do not use PEP-carboxylase in carbon fixation are called C3 plants because the primary carboxylation reaction, catalyzed by RuBisCO, produces the three-carbon 3-phosphoglyceric acids directly in the Calvin-Benson cycle. Over 90% of plants use C3 carbon fixation, compared to 3% that use C4 carbon fixation; however, the evolution of C4 in over sixty plant lineages makes it a striking example of convergent evolution. C2 photosynthesis, which involves carbon concentration by selective breakdown of photorespiratory glycine, is both an evolutionary precursor to C4 and a useful carbon-concentrating mechanism in its own right.
Xerophytes, such as cacti and most succulents, also use PEP carboxylase to capture carbon dioxide in a process called Crassulacean acid metabolism (CAM). In contrast to C4 metabolism, which spatially separates the CO2 fixation to PEP from the Calvin cycle, CAM temporally separates these two processes. CAM plants have a different leaf anatomy from C3 plants, and fix the CO2 at night, when their stomata are open. CAM plants store the CO2 mostly in the form of malic acid via carboxylation of phosphoenolpyruvate to oxaloacetate, which is then reduced to malate. Decarboxylation of malate during the day releases CO2 inside the leaves, thus allowing carbon fixation to 3-phosphoglycerate by RuBisCO. CAM is used by 16,000 species of plants.
Calcium-oxalate-accumulating plants, such as Amaranthus hybridus and Colobanthus quitensis, show a variation of photosynthesis where calcium oxalate crystals function as dynamic carbon pools, supplying carbon dioxide (CO2) to photosynthetic cells when stomata are partially or totally closed. This process was named alarm photosynthesis. Under stress conditions (e.g., water deficit), oxalate released from calcium oxalate crystals is converted to CO2 by an oxalate oxidase enzyme, and the produced CO2 can support the Calvin cycle reactions. Reactive hydrogen peroxide (H2O2), the byproduct of oxalate oxidase reaction, can be neutralized by catalase. Alarm photosynthesis represents a photosynthetic variant to be added to the well-known C4 and CAM pathways. However, alarm photosynthesis, in contrast to these pathways, operates as a biochemical pump that collects carbon from the organ interior (or from the soil) and not from the atmosphere.
In water
Cyanobacteria possess carboxysomes, which increase the concentration of CO2 around RuBisCO to increase the rate of photosynthesis. An enzyme, carbonic anhydrase, located within the carboxysome, releases CO2 from dissolved hydrocarbonate (bicarbonate) ions (HCO3−). Before the CO2 can diffuse out, RuBisCO concentrated within the carboxysome quickly sponges it up. HCO3− ions are made from CO2 outside the cell by another carbonic anhydrase and are actively pumped into the cell by a membrane protein. They cannot cross the membrane as they are charged, and within the cytosol they turn back into CO2 very slowly without the help of carbonic anhydrase. This causes the HCO3− ions to accumulate within the cell, from where they diffuse into the carboxysomes. Pyrenoids in algae and hornworts also act to concentrate CO2 around RuBisCO.
Order and kinetics
The overall process of photosynthesis takes place in four stages:
Stage 1: Energy transfer in antenna chlorophyll (thylakoid membranes in the chloroplasts), femtosecond to picosecond.
Stage 2: Transfer of electrons in photochemical reactions (thylakoid membranes), picosecond to nanosecond.
Stage 3: Electron transport chain and ATP synthesis (thylakoid membranes), microsecond to millisecond.
Stage 4: Carbon fixation and export of stable products (stroma of the chloroplasts and the cell cytosol), millisecond to second.
Efficiency
Plants usually convert light into chemical energy with a photosynthetic efficiency of 3–6%.
Absorbed light that is unconverted is dissipated primarily as heat, with a small fraction (1–2%) reemitted as chlorophyll fluorescence at longer (redder) wavelengths. This fact allows measurement of the light reaction of photosynthesis by using chlorophyll fluorometers.
Actual plants' photosynthetic efficiency varies with the frequency of the light being converted, light intensity, temperature, and proportion of carbon dioxide in the atmosphere, and can vary from 0.1% to 8%. By comparison, solar panels convert light into electric energy at an efficiency of approximately 6–20% for mass-produced panels, and above 40% in laboratory devices.
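To make this comparison concrete, a minimal sketch in Python, assuming a typical insolation of about 5 kWh per square metre per day (that figure and the 15% panel efficiency are illustrative assumptions, not values from the article):

```python
# Rough comparison of the efficiency figures quoted above.
# DAILY_INSOLATION_KWH_PER_M2 is an assumed typical value for solar input.

DAILY_INSOLATION_KWH_PER_M2 = 5.0

def energy_captured_kwh(efficiency: float) -> float:
    """Energy stored (plant) or generated (panel) per m^2 per day, in kWh."""
    return DAILY_INSOLATION_KWH_PER_M2 * efficiency

for label, eff in [("plant, low end", 0.03),
                   ("plant, high end", 0.06),
                   ("mass-produced solar panel", 0.15)]:
    print(f"{label:>26}: {energy_captured_kwh(eff):.2f} kWh per m^2 per day")
```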
Scientists are studying photosynthesis in hopes of developing plants with increased yield.
The efficiency of both light and dark reactions can be measured, but the relationship between the two can be complex. For example, the light reaction creates ATP and NADPH energy molecules, which C3 plants can use for carbon fixation or photorespiration. Electrons may also flow to other electron sinks. For this reason, it is not uncommon for authors to differentiate between work done under non-photorespiratory conditions and under photorespiratory conditions.
Chlorophyll fluorescence of photosystem II can measure the light reaction, and infrared gas analyzers can measure the dark reaction. An integrated chlorophyll fluorometer and gas exchange system can investigate both light and dark reactions when researchers use the two separate systems together. Infrared gas analyzers and some moisture sensors are sensitive enough to measure the photosynthetic assimilation of CO2 and of ΔH2O using reliable methods. CO2 is commonly measured in μmol/(m2·s), parts per million, or volume per million; and H2O is commonly measured in mmol/(m2·s) or in mbar. By measuring CO2 assimilation, ΔH2O, leaf temperature, barometric pressure, leaf area, and photosynthetically active radiation (PAR), it becomes possible to estimate "A" or carbon assimilation, "E" or transpiration, "gs" or stomatal conductance, and "Ci" or intracellular CO2. However, it is more common to use chlorophyll fluorescence for plant stress measurement, where appropriate, because the most commonly used parameters FV/FM and Y(II) or F/FM' can be measured in a few seconds, allowing the investigation of larger plant populations.
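The core of such a gas-exchange calculation is a simple mass balance over the leaf chamber. The sketch below is a simplified illustration that ignores the water-vapour dilution and boundary-layer corrections applied by commercial instruments; the flow rate, concentrations and leaf area are made-up illustrative numbers.

```python
# Simplified open-system gas-exchange mass balance (illustrative only;
# real instruments apply dilution and boundary-layer corrections).

def assimilation(flow_mol_s, co2_in_umol_mol, co2_out_umol_mol, leaf_area_m2):
    """Net CO2 assimilation A in umol CO2 m^-2 s^-1."""
    return flow_mol_s * (co2_in_umol_mol - co2_out_umol_mol) / leaf_area_m2

def transpiration(flow_mol_s, h2o_out_mmol_mol, h2o_in_mmol_mol, leaf_area_m2):
    """Transpiration E in mmol H2O m^-2 s^-1."""
    return flow_mol_s * (h2o_out_mmol_mol - h2o_in_mmol_mol) / leaf_area_m2

# Assumed example: 0.005 mol air/s through a chamber enclosing 25 cm^2 of leaf
A = assimilation(0.005, 400.0, 390.0, 2.5e-3)   # -> 20.0 umol CO2 m^-2 s^-1
E = transpiration(0.005, 20.0, 18.0, 2.5e-3)    # -> 4.0 mmol H2O m^-2 s^-1
print(f"A = {A:.1f} umol CO2 m^-2 s^-1, E = {E:.1f} mmol H2O m^-2 s^-1")
```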
Gas exchange systems that offer control of CO2 levels, above and below ambient, allow the common practice of measurement of A/Ci curves, at different CO2 levels, to characterize a plant's photosynthetic response.
Integrated chlorophyll fluorometer – gas exchange systems allow a more precise measure of photosynthetic response and mechanisms. While standard gas exchange photosynthesis systems can measure Ci, or substomatal CO2 levels, the addition of integrated chlorophyll fluorescence measurements allows a more precise measurement of CC, the estimation of CO2 concentration at the site of carboxylation in the chloroplast, to replace Ci. CO2 concentration in the chloroplast becomes possible to estimate with the measurement of mesophyll conductance or gm using an integrated system.
Photosynthesis measurement systems are not designed to directly measure the amount of light the leaf absorbs, but analysis of chlorophyll fluorescence, P700- and P515-absorbance, and gas exchange measurements reveal detailed information about, e.g., the photosystems, quantum efficiency and the CO2 assimilation rates. With some instruments, even wavelength dependency of the photosynthetic efficiency can be analyzed.
A phenomenon known as quantum walk increases the efficiency of the energy transport of light significantly. In the photosynthetic cell of an alga, bacterium, or plant, there are light-sensitive molecules called chromophores arranged in an antenna-shaped structure called a photocomplex. When a photon is absorbed by a chromophore, it is converted into a quasiparticle referred to as an exciton, which jumps from chromophore to chromophore towards the reaction center of the photocomplex, a collection of molecules that traps its energy in a chemical form accessible to the cell's metabolism. The exciton's wave properties enable it to cover a wider area and try out several possible paths simultaneously, allowing it to instantaneously "choose" the most efficient route, where it will have the highest probability of arriving at its destination in the minimum possible time.
Because that quantum walking takes place at temperatures far higher than quantum phenomena usually occur, it is only possible over very short distances. Obstacles in the form of destructive interference cause the particle to lose its wave properties for an instant before it regains them once again after it is freed from its locked position through a classic "hop". The movement of the electron towards the photo center is therefore covered in a series of conventional hops and quantum walks.
Evolution
Fossils of what are thought to be filamentous photosynthetic organisms have been dated at 3.4 billion years old. More recent studies also suggest that photosynthesis may have begun about 3.4 billion years ago, though the first direct evidence of photosynthesis comes from thylakoid membranes preserved in 1.75-billion-year-old cherts.
Oxygenic photosynthesis is the main source of oxygen in the Earth's atmosphere, and its earliest appearance is sometimes referred to as the oxygen catastrophe. Geological evidence suggests that oxygenic photosynthesis, such as that in cyanobacteria, became important during the Paleoproterozoic era around two billion years ago. Modern photosynthesis in plants and most photosynthetic prokaryotes is oxygenic, using water as an electron donor, which is oxidized to molecular oxygen in the photosynthetic reaction center.
Symbiosis and the origin of chloroplasts
Several groups of animals have formed symbiotic relationships with photosynthetic algae. These are most common in corals, sponges, and sea anemones. Scientists presume that this is due to the particularly simple body plans and large surface areas of these animals compared to their volumes. In addition, a few marine mollusks, such as Elysia viridis and Elysia chlorotica, also maintain a symbiotic relationship with chloroplasts they capture from the algae in their diet and then store in their bodies (see Kleptoplasty). This allows the mollusks to survive solely by photosynthesis for several months at a time. Some of the genes from the plant cell nucleus have even been transferred to the slugs, so that the chloroplasts can be supplied with proteins they need to survive.
An even closer form of symbiosis may explain the origin of chloroplasts. Chloroplasts have many similarities with photosynthetic bacteria, including a circular chromosome, prokaryotic-type ribosome, and similar proteins in the photosynthetic reaction center. The endosymbiotic theory suggests that photosynthetic bacteria were acquired (by endocytosis) by early eukaryotic cells to form the first plant cells. Therefore, chloroplasts may be photosynthetic bacteria that adapted to life inside plant cells. Like mitochondria, chloroplasts possess their own DNA, separate from the nuclear DNA of their plant host cells and the genes in this chloroplast DNA resemble those found in cyanobacteria. DNA in chloroplasts codes for redox proteins such as those found in the photosynthetic reaction centers. The CoRR Hypothesis proposes that this co-location of genes with their gene products is required for redox regulation of gene expression, and accounts for the persistence of DNA in bioenergetic organelles.
Photosynthetic eukaryotic lineages
Symbiotic and kleptoplastic organisms excluded:
The glaucophytes and the red and green algae—clade Archaeplastida (uni- and multicellular)
The cryptophytes—clade Cryptista (unicellular)
The haptophytes—clade Haptista (unicellular)
The dinoflagellates and chromerids in the superphylum Myzozoa, and Pseudoblepharisma in the phylum Ciliophora—clade Alveolata (unicellular)
The ochrophytes—clade Stramenopila (uni- and multicellular)
The chlorarachniophytes and three species of Paulinella in the phylum Cercozoa—clade Rhizaria (unicellular)
The euglenids—clade Excavata (unicellular)
Except for the euglenids, which are found within the Excavata, all of these belong to the Diaphoretickes. Archaeplastida and the photosynthetic Paulinella got their plastids, which are surrounded by two membranes, through primary endosymbiosis in two separate events, by engulfing a cyanobacterium. The plastids in all the other groups have either a red or green algal origin, and are referred to as the "red lineages" and the "green lineages". The only known exception is the ciliate Pseudoblepharisma tenue, which in addition to its plastids that originated from green algae also has a purple sulfur bacterium as symbiont. In dinoflagellates and euglenids the plastids are surrounded by three membranes, and in the remaining lines by four. A nucleomorph, remnants of the original algal nucleus located between the inner and outer membranes of the plastid, is present in the cryptophytes (from a red alga) and chlorarachniophytes (from a green alga).
Some dinoflagellates that lost their photosynthetic ability later regained it again through new endosymbiotic events with different algae.
While able to perform photosynthesis, many of these eukaryotic groups are mixotrophs and practice heterotrophy to various degrees.
Photosynthetic prokaryotic lineages
Early photosynthetic systems, such as those in green and purple sulfur and green and purple nonsulfur bacteria, are thought to have been anoxygenic, and used various other molecules than water as electron donors. Green and purple sulfur bacteria are thought to have used hydrogen and sulfur as electron donors. Green nonsulfur bacteria used various amino and other organic acids as electron donors. Purple nonsulfur bacteria used a variety of nonspecific organic molecules. The use of these molecules is consistent with the geological evidence that Earth's early atmosphere was highly reducing at that time.
With a possible exception of Heimdallarchaeota, photosynthesis is not found in archaea. Haloarchaea are photoheterotrophic; they can absorb energy from the sun, but do not harvest carbon from the atmosphere and are therefore not photosynthetic. Instead of chlorophyll they use rhodopsins, which convert light-energy to ion gradients but cannot mediate electron transfer reactions.
In bacteria eight photosynthetic lineages are currently known:
Cyanobacteria, the only prokaryotes performing oxygenic photosynthesis and the only prokaryotes that contain two types of photosystems (type I (RCI), also known as Fe-S type, and type II (RCII), also known as quinone type). The seven remaining prokaryotes have anoxygenic photosynthesis and use versions of either type I or type II.
Chlorobi (green sulfur bacteria) Type I
Heliobacteria Type I
Chloracidobacterium Type I
Proteobacteria (purple sulfur bacteria and purple non-sulfur bacteria) Type II (see: Purple bacteria)
Chloroflexota (green non-sulfur bacteria) Type II
Gemmatimonadota Type II
Eremiobacterota Type II
Cyanobacteria and the evolution of photosynthesis
The biochemical capacity to use water as the source for electrons in photosynthesis evolved once, in a common ancestor of extant cyanobacteria (formerly called blue-green algae). The geological record indicates that this transforming event took place early in Earth's history, at least 2450–2320 million years ago (Ma), and, it is speculated, much earlier. Because the Earth's atmosphere contained almost no oxygen during the estimated development of photosynthesis, it is believed that the first photosynthetic cyanobacteria did not generate oxygen. Available evidence from geobiological studies of Archean (>2500 Ma) sedimentary rocks indicates that life existed 3500 Ma, but the question of when oxygenic photosynthesis evolved is still unanswered. A clear paleontological window on cyanobacterial evolution opened about 2000 Ma, revealing an already-diverse biota of cyanobacteria. Cyanobacteria remained the principal primary producers of oxygen throughout the Proterozoic Eon (2500–543 Ma), in part because the redox structure of the oceans favored photoautotrophs capable of nitrogen fixation. Green algae joined cyanobacteria as the major primary producers of oxygen on continental shelves near the end of the Proterozoic, but only with the Mesozoic (251–66 Ma) radiations of dinoflagellates, coccolithophorids, and diatoms did the primary production of oxygen in marine shelf waters take modern form. Cyanobacteria remain critical to marine ecosystems as primary producers of oxygen in oceanic gyres, as agents of biological nitrogen fixation, and, in modified form, as the plastids of marine algae.
Experimental history
Discovery
Although some of the steps in photosynthesis are still not completely understood, the overall photosynthetic equation has been known since the 19th century.
Jan van Helmont began the research of the process in the mid-17th century when he carefully measured the mass of the soil a plant was using and the mass of the plant as it grew. After noticing that the soil mass changed very little, he hypothesized that the mass of the growing plant must come from the water, the only substance he added to the potted plant. His hypothesis was partially accurate – much of the gained mass comes from carbon dioxide as well as water. However, this was an early step toward the idea that the bulk of a plant's biomass comes from the inputs of photosynthesis, not the soil itself.
Joseph Priestley, a chemist and minister, discovered that when he isolated a volume of air under an inverted jar and burned a candle in it (which gave off CO2), the candle would burn out very quickly, much before it ran out of wax. He further discovered that a mouse could similarly "injure" air. He then showed that a plant could restore the air the candle and the mouse had "injured".
In 1779, Jan Ingenhousz repeated Priestley's experiments. He discovered that it was the influence of sunlight on the plant that could cause it to revive a mouse in a matter of hours.
In 1796, Jean Senebier, a Swiss pastor, botanist, and naturalist, demonstrated that green plants consume carbon dioxide and release oxygen under the influence of light. Soon afterward, Nicolas-Théodore de Saussure showed that the increase in mass of the plant as it grows could not be due only to uptake of CO2 but also to the incorporation of water. Thus, the basic reaction by which organisms use photosynthesis to produce food (such as glucose) was outlined.
Refinements
Cornelis Van Niel made key discoveries explaining the chemistry of photosynthesis. By studying purple sulfur bacteria and green bacteria, he was the first to demonstrate that photosynthesis is a light-dependent redox reaction in which hydrogen reduces (donates its atoms as electrons and protons to) carbon dioxide.
Robert Emerson discovered two light reactions by testing plant productivity using different wavelengths of light. With red light alone, the light reactions were suppressed. When blue and red were combined, the output was much more substantial. Thus, there were two photosystems, one absorbing wavelengths of up to 600 nm, the other up to 700 nm. The former is known as PSII, the latter is PSI. PSI contains only chlorophyll "a"; PSII contains primarily chlorophyll "a" with most of the available chlorophyll "b", among other pigments. These include phycobilins, which are the red and blue pigments of red and blue algae, respectively, and fucoxanthol for brown algae and diatoms. The process is most productive when the absorption of quanta is equal in both PSII and PSI, assuring that input energy from the antenna complex is divided between the PSI and PSII systems, which in turn powers the photochemistry.
Robert Hill thought that a complex of reactions consisted of an intermediate to cytochrome b6 (now a plastoquinone), and that another was from cytochrome f to a step in the carbohydrate-generating mechanisms. These are linked by plastoquinone, which does require energy to reduce cytochrome f. Further experiments to prove that the oxygen developed during the photosynthesis of green plants came from water were performed by Hill in 1937 and 1939. He showed that isolated chloroplasts give off oxygen in the presence of unnatural reducing agents like iron oxalate, ferricyanide or benzoquinone after exposure to light. In the Hill reaction:
2 H2O + 2 A + (light, chloroplasts) → 2 AH2 + O2
A is the electron acceptor. Therefore, in light, the electron acceptor is reduced and oxygen is evolved. Samuel Ruben and Martin Kamen used radioactive isotopes to determine that the oxygen liberated in photosynthesis came from the water.
Melvin Calvin and Andrew Benson, along with James Bassham, elucidated the path of carbon assimilation (the photosynthetic carbon reduction cycle) in plants. The carbon reduction cycle is known as the Calvin cycle, but many scientists refer to it as the Calvin-Benson, Benson-Calvin, or even Calvin-Benson-Bassham (or CBB) Cycle.
Nobel Prize–winning scientist Rudolph A. Marcus was later able to discover the function and significance of the electron transport chain.
Otto Heinrich Warburg and Dean Burk discovered the I-quantum photosynthesis reaction that splits CO2, activated by respiration (Otto Warburg – Biography, Nobelprize.org, 1970-08-01; retrieved 2011-11-03).
In 1950, first experimental evidence for the existence of photophosphorylation in vivo was presented by Otto Kandler using intact Chlorella cells and interpreting his findings as light-dependent ATP formation.
In 1954, Daniel I. Arnon et al. discovered photophosphorylation in vitro in isolated chloroplasts with the help of P32.
Louis N. M. Duysens and Jan Amesz discovered that chlorophyll "a" will absorb one light, oxidize cytochrome f, while chlorophyll "a" (and other pigments) will absorb another light but will reduce this same oxidized cytochrome, stating the two light reactions are in series.
Development of the concept
In 1893, the American botanist Charles Reid Barnes proposed two terms, photosyntax and photosynthesis, for the biological process of synthesis of complex carbon compounds out of carbonic acid, in the presence of chlorophyll, under the influence of light. The term photosynthesis is derived from the Greek phōs (φῶς, gleam) and sýnthesis (σύνθεσις, arranging together), while another word that he designated was photosyntax, from sýntaxis (σύνταξις, configuration). Over time, the term photosynthesis came into common usage. Later discovery of anoxygenic photosynthetic bacteria and photophosphorylation necessitated redefinition of the term.
C3 : C4 photosynthesis research
In the late 1940s at the University of California, Berkeley, the details of photosynthetic carbon metabolism were sorted out by the chemists Melvin Calvin, Andrew Benson, James Bassham and a score of students and researchers utilizing the carbon-14 isotope and paper chromatography techniques. Tracing the pathway of CO2 fixation by the alga Chlorella over fractions of a second in light showed that the first product was a three-carbon molecule called phosphoglyceric acid (PGA). For that original and ground-breaking work, a Nobel Prize in Chemistry was awarded to Melvin Calvin in 1961. In parallel, plant physiologists studied leaf gas exchanges using the new method of infrared gas analysis and a leaf chamber, finding net photosynthetic rates ranging from 10 to 13 μmol CO2·m−2·s−1 and concluding that all terrestrial plants have the same photosynthetic capacities, which are light-saturated at less than 50% of full sunlight.
Later, in 1958–1963 at Cornell University, field-grown maize was reported to have much greater leaf photosynthetic rates of 40 μmol CO2·m−2·s−1 and not to be saturated at near full sunlight. This higher rate in maize was almost double those observed in other species such as wheat and soybean, indicating that large differences in photosynthesis exist among higher plants. At the University of Arizona, detailed gas exchange research on more than 15 species of monocots and dicots uncovered for the first time that differences in leaf anatomy are crucial factors in differentiating photosynthetic capacities among species. In tropical grasses, including maize, sorghum, sugarcane, Bermuda grass, and in the dicot amaranthus, leaf photosynthetic rates were around 38−40 μmol CO2·m−2·s−1, and the leaves have two types of green cells, i.e. an outer layer of mesophyll cells surrounding tightly packed chlorophyllous vascular bundle sheath cells. This type of anatomy was termed Kranz anatomy in the 19th century by the botanist Gottlieb Haberlandt while studying the leaf anatomy of sugarcane. Plant species with the greatest photosynthetic rates and Kranz anatomy showed no apparent photorespiration, a very low CO2 compensation point, a high optimum temperature, high stomatal resistances and lower mesophyll resistances for gas diffusion, and rates that never saturated at full sunlight. The research at Arizona was designated a Citation Classic in 1986. These species were later termed C4 plants, as the first stable compound of CO2 fixation in light has four carbons (malate and aspartate). Other species that lack Kranz anatomy were termed C3 type, such as cotton and sunflower, as the first stable carbon compound is the three-carbon PGA. At 1000 ppm CO2 in the measuring air, both the C3 and C4 plants had similar leaf photosynthetic rates around 60 μmol CO2·m−2·s−1, indicating the suppression of photorespiration in C3 plants.
Factors
There are four main factors influencing photosynthesis and several corollary factors. The four main are:
Light irradiance and wavelength
Water absorption
Carbon dioxide concentration
Temperature.
Total photosynthesis is limited by a range of environmental factors. These include the amount of light available, the amount of leaf area a plant has to capture light (shading by other plants is a major limitation of photosynthesis), the rate at which carbon dioxide can be supplied to the chloroplasts to support photosynthesis, the availability of water, and the availability of suitable temperatures for carrying out photosynthesis.
Light intensity (irradiance), wavelength and temperature
The process of photosynthesis provides the main input of free energy into the biosphere, and is one of four main ways in which radiation is important for plant life.
The radiation climate within plant communities is extremely variable, in both time and space.
In the early 20th century, Frederick Blackman and Gabrielle Matthaei investigated the effects of light intensity (irradiance) and temperature on the rate of carbon assimilation.
At constant temperature, the rate of carbon assimilation varies with irradiance, increasing as the irradiance increases, but reaching a plateau at higher irradiance.
At low irradiance, increasing the temperature has little influence on the rate of carbon assimilation. At constant high irradiance, the rate of carbon assimilation increases as the temperature is increased.
These two experiments illustrate several important points. First, it is known that, in general, photochemical reactions are not affected by temperature. However, these experiments clearly show that temperature affects the rate of carbon assimilation, so there must be two sets of reactions in the full process of carbon assimilation: the light-dependent, temperature-independent 'photochemical' stage, and the light-independent, temperature-dependent stage. Second, Blackman's experiments illustrate the concept of limiting factors. Another limiting factor is the wavelength of light. Cyanobacteria, which can reside several meters underwater, cannot receive the wavelengths required to cause photoinduced charge separation in conventional photosynthetic pigments. To combat this problem, cyanobacteria have a light-harvesting complex called a phycobilisome, made up of a series of proteins with different pigments that surround the reaction center.
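Blackman's two-stage interpretation can be expressed as a minimal "slowest step wins" calculation. The sketch below is a conceptual illustration with invented parameter values (the linear light response, the Q10 of 2 and the function name are assumptions), not a physiological model.

```python
def assimilation_rate(ppfd, temperature_c):
    """Minimal limiting-factor sketch of Blackman's observations (illustrative only).

    The photochemical stage scales with irradiance and is essentially
    temperature-independent; the enzymatic 'dark' stage roughly doubles
    per 10 C rise (Q10 ~ 2). The slower stage sets the overall rate.
    """
    light_stage = 0.02 * ppfd                                   # light-limited rate
    dark_stage = 5.0 * 2.0 ** ((temperature_c - 15.0) / 10.0)   # enzyme-limited rate
    return min(light_stage, dark_stage)

for temp in (15, 25):
    for ppfd in (100, 2000):
        print(f"{temp} C, PPFD {ppfd:4d}: {assimilation_rate(ppfd, temp):4.1f}")
```

At low irradiance the light stage is limiting, so warming changes little; at high irradiance the enzymatic stage is limiting, so warming raises the rate, which is the pattern Blackman and Matthaei observed.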
Carbon dioxide levels and photorespiration
As carbon dioxide concentrations rise, the rate at which sugars are made by the light-independent reactions increases until limited by other factors. RuBisCO, the enzyme that captures carbon dioxide in the light-independent reactions, has a binding affinity for both carbon dioxide and oxygen. When the concentration of carbon dioxide is high, RuBisCO will fix carbon dioxide. However, if the carbon dioxide concentration is low, RuBisCO will bind oxygen instead of carbon dioxide. This process, called photorespiration, uses energy, but does not produce sugars.
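The competition between carboxylation and oxygenation is often summarized by the relative specificity of RuBisCO for CO2 over O2. The sketch below uses this standard ratio with rough, assumed concentrations and an assumed specificity factor; the numbers are illustrative rather than measured values for any particular plant.

```python
def carboxylation_per_oxygenation(co2_um, o2_um, specificity=90.0):
    """Ratio of carboxylation to oxygenation under competitive RuBisCO kinetics:
    v_c / v_o = S * [CO2] / [O2], with S the CO2/O2 specificity factor
    (roughly 80-100 for C3 plants near 25 C; 90 is an assumed value)."""
    return specificity * co2_um / o2_um

# Rough, assumed dissolved concentrations in an illuminated C3 chloroplast:
ratio = carboxylation_per_oxygenation(co2_um=8.0, o2_um=250.0)
print(f"about {ratio:.1f} carboxylations per oxygenation")  # roughly 3:1
```

Under these assumptions roughly one in four RuBisCO reactions is an oxygenation, which is why photorespiration is a significant drain for C3 plants in normal air and why raising CO2, as C4 plants do internally, suppresses it.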
RuBisCO oxygenase activity is disadvantageous to plants for several reasons:
One product of oxygenase activity is phosphoglycolate (2 carbons) instead of 3-phosphoglycerate (3 carbons). Phosphoglycolate cannot be metabolized by the Calvin-Benson cycle and represents carbon lost from the cycle. A high oxygenase activity therefore drains the sugars that are required to recycle ribulose 1,5-bisphosphate and to continue the Calvin-Benson cycle.
Phosphoglycolate is quickly metabolized to glycolate, which is toxic to a plant at high concentration; it inhibits photosynthesis.
Salvaging glycolate is an energetically expensive process that uses the glycolate pathway, and only 75% of the carbon is returned to the Calvin-Benson cycle as 3-phosphoglycerate. The reactions also produce ammonia (NH3), which is able to diffuse out of the plant, leading to a loss of nitrogen.
A highly simplified summary is:
2 glycolate + ATP → 3-phosphoglycerate + carbon dioxide + ADP + NH3
The salvaging pathway for the products of RuBisCO oxygenase activity is more commonly known as photorespiration, since it is characterized by light-dependent oxygen consumption and the release of carbon dioxide.
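The 75% figure quoted above follows directly from counting carbons in the simplified summary reaction; the short check below makes that bookkeeping explicit.

```python
# Carbon bookkeeping for the simplified photorespiratory salvage reaction:
# 2 glycolate + ATP -> 3-phosphoglycerate + CO2 + ADP + NH3
carbons_in = 2 * 2        # two 2-carbon glycolate molecules enter the pathway
carbons_returned = 3      # one 3-carbon 3-phosphoglycerate re-enters the Calvin-Benson cycle
carbons_lost = 1          # one carbon is released as CO2
assert carbons_in == carbons_returned + carbons_lost
print(carbons_returned / carbons_in)  # 0.75 -> 75% of the carbon is recovered
```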
See also
Jan Anderson (scientist)
Artificial photosynthesis
Calvin-Benson cycle
Carbon fixation
Cellular respiration
Chemosynthesis
Daily light integral
Hill reaction
Integrated fluorometer
Light-dependent reaction
Organic reaction
Photobiology
Photoinhibition
Photosynthetic reaction center
Photosynthetically active radiation
Photosystem
Photosystem I
Photosystem II
Quantasome
Quantum biology
Radiosynthesis
Red edge
Vitamin D
References
Further reading
Books
Papers
External links
A collection of photosynthesis pages for all levels from a renowned expert (Govindjee)
In depth, advanced treatment of photosynthesis, also from Govindjee
Science Aid: Photosynthesis Article appropriate for high school science
Metabolism, Cellular Respiration and Photosynthesis – The Virtual Library of Biochemistry and Cell Biology
Overall examination of Photosynthesis at an intermediate level
Overall Energetics of Photosynthesis
The source of oxygen produced by photosynthesis Interactive animation, a textbook tutorial
Photosynthesis – Light Dependent & Light Independent Stages
Khan Academy, video introduction
Richard Feynman
https://en.wikipedia.org/wiki/Richard_Feynman
Richard Phillips Feynman (; May 11, 1918 – February 15, 1988) was an American theoretical physicist. He is best known for his work in the path integral formulation of quantum mechanics, the theory of quantum electrodynamics, the physics of the superfluidity of supercooled liquid helium, and in particle physics, for which he proposed the parton model. For his contributions to the development of quantum electrodynamics, Feynman received the Nobel Prize in Physics in 1965 jointly with Julian Schwinger and Shin'ichirō Tomonaga.
Feynman developed a pictorial representation scheme for the mathematical expressions describing the behavior of subatomic particles, which later became known as Feynman diagrams and is widely used. During his lifetime, Feynman became one of the best-known scientists in the world. In a 1999 poll of 130 leading physicists worldwide by the British journal Physics World, he was ranked the seventh-greatest physicist of all time.
He assisted in the development of the atomic bomb during World War II and became known to the wider public in the 1980s as a member of the Rogers Commission, the panel that investigated the Space Shuttle Challenger disaster. Along with his work in theoretical physics, Feynman has been credited with having pioneered the field of quantum computing and introducing the concept of nanotechnology. He held the Richard C. Tolman professorship in theoretical physics at the California Institute of Technology.
Feynman was a keen popularizer of physics through both books and lectures, including a talk on top-down nanotechnology, "There's Plenty of Room at the Bottom" (1959), and the three volumes of his undergraduate lectures, The Feynman Lectures on Physics (1961–1964). He delivered lectures for lay audiences, recorded in The Character of Physical Law (1965) and QED: The Strange Theory of Light and Matter (1985). Feynman also became known through his autobiographical books Surely You're Joking, Mr. Feynman! (1985) and What Do You Care What Other People Think? (1988), and books written about him such as Tuva or Bust! by Ralph Leighton and the biography Genius: The Life and Science of Richard Feynman by James Gleick.
Early life
Feynman was born on May 11, 1918, in New York City, to Lucille, a homemaker, and Melville Arthur Feynman, a sales manager. Feynman's father was born into a Jewish family in Minsk, Russian Empire, and immigrated with his parents to the United States at the age of five. Feynman's mother was born in the United States into a Jewish family. Lucille's father had emigrated from Poland, and her mother also came from a family of Polish immigrants. She trained as a primary school teacher but married Melville in 1917, before taking up a profession. Richard was a late talker and did not speak until after his third birthday. As an adult, he spoke with a New York accent strong enough to be perceived as an affectation or exaggeration, so much so that his friends Wolfgang Pauli and Hans Bethe once commented that Feynman spoke like a "bum".
The young Feynman was heavily influenced by his father, who encouraged him to ask questions to challenge orthodox thinking, and who was always ready to teach Feynman something new. From his mother, he gained the sense of humor that he had throughout his life. As a child, he had a talent for engineering, maintained an experimental laboratory in his home, and delighted in repairing radios. Radio repair was probably his first job, and during this time he showed early signs of an aptitude for his later career in theoretical physics, analyzing problems theoretically and arriving at solutions. When he was in grade school, he created a home burglar alarm system while his parents were out for the day running errands.
When Richard was five, his mother gave birth to a younger brother, Henry Phillips, who died at age four weeks. Four years later, Richard's sister Joan was born and the family moved to Far Rockaway, Queens. Though separated by nine years, Joan and Richard were close, and they both shared a curiosity about the world. Though their mother thought women lacked the capacity to understand such things, Richard encouraged Joan's interest in astronomy, taking her to see the aurora borealis in Far Rockaway. As an astrophysicist, Joan would help to explain what caused the northern lights.
Religion
Feynman's parents were both from Jewish families, and his family went to the synagogue every Friday. However, by his youth, Feynman described himself as an "avowed atheist". Many years later, in a letter to Tina Levitan, declining a request for information for her book on Jewish Nobel Prize winners, he stated, "To select, for approbation the peculiar elements that come from some supposedly Jewish heredity is to open the door to all kinds of nonsense on racial theory", adding, "at thirteen I was not only converted to other religious views, but I also stopped believing that the Jewish people are in any way 'the chosen people'".
Later in life, during a visit to the Jewish Theological Seminary, Feynman encountered the Talmud for the first time. He saw that it contained the original text in a little square on each page, and surrounding it were commentaries written over time by different people. In this way, the Talmud had evolved, and everything that was discussed was carefully recorded. Despite being impressed, Feynman was disappointed with the lack of interest in nature and the outside world expressed by the rabbis, who cared about only those questions which arise from the Talmud.
Education
Feynman attended Far Rockaway High School, which was also attended by fellow Nobel laureates Burton Richter and Baruch Samuel Blumberg. Upon starting high school, Feynman was quickly promoted to a higher math class. An IQ test administered in high school estimated his IQ at 125—high but "merely respectable", according to biographer James Gleick. His sister Joan, who scored one point higher, later jokingly claimed to an interviewer that she was smarter. Years later he declined to join Mensa International, saying that his IQ was too low.
When Feynman was 15, he taught himself trigonometry, advanced algebra, infinite series, analytic geometry, and both differential and integral calculus. Before entering college, he was experimenting with mathematical topics such as the half-derivative using his own notation. He created special symbols for the logarithm, sine, cosine and tangent functions so they did not look like three variables multiplied together, and for the derivative, to remove the temptation of canceling out the d's in dy/dx. A member of the Arista Honor Society, in his last year in high school he won the New York University Math Championship. His habit of direct characterization sometimes rattled more conventional thinkers; for example, one of his questions, when learning feline anatomy, was "Do you have a map of the cat?" (referring to an anatomical chart).
Feynman applied to Columbia University but was not accepted because of its quota for the number of Jews admitted. Instead, he attended the Massachusetts Institute of Technology, where he joined the Pi Lambda Phi fraternity. Although he originally majored in mathematics, he later switched to electrical engineering, as he considered mathematics to be too abstract. Noticing that he "had gone too far", he then switched to physics, which he claimed was "somewhere in between". As an undergraduate, he published two papers in the Physical Review. One of these, which was co-written with Manuel Vallarta, was titled "The Scattering of Cosmic Rays by the Stars of a Galaxy".
The other was his senior thesis, on "Forces in Molecules", based on a topic assigned by John C. Slater, who was sufficiently impressed by the paper to have it published. Its main result is known as the Hellmann–Feynman theorem.
In 1939, Feynman received a bachelor's degree and was named a Putnam Fellow. He attained a perfect score on the graduate school entrance exams to Princeton University in physics—an unprecedented feat—and an outstanding score in mathematics, but did poorly on the history and English portions. The head of the physics department there, Henry D. Smyth, had another concern, writing to Philip M. Morse to ask: "Is Feynman Jewish? We have no definite rule against Jews but have to keep their proportion in our department reasonably small because of the difficulty of placing them." Morse conceded that Feynman was indeed Jewish, but reassured Smyth that Feynman's "physiognomy and manner, however, show no trace of this characteristic".
Attendees at Feynman's first seminar, which was on the classical version of the Wheeler–Feynman absorber theory, included Albert Einstein, Wolfgang Pauli, and John von Neumann. Pauli made the prescient comment that the theory would be extremely difficult to quantize, and Einstein said that one might try to apply this method to gravity in general relativity, which Sir Fred Hoyle and Jayant Narlikar did much later as the Hoyle–Narlikar theory of gravity. Feynman received a PhD from Princeton in 1942; his thesis advisor was John Archibald Wheeler. In his doctoral thesis, titled "The Principle of Least Action in Quantum Mechanics", Feynman applied the principle of stationary action to problems of quantum mechanics, inspired by a desire to quantize the Wheeler–Feynman absorber theory of electrodynamics, and laid the groundwork for the path integral formulation and Feynman diagrams. A key insight was that positrons behaved like electrons moving backwards in time.
One of the conditions of Feynman's scholarship to Princeton was that he could not be married; nevertheless, he continued to see his high school sweetheart, Arline Greenbaum, and was determined to marry her once he had been awarded his PhD despite the knowledge that she was seriously ill with tuberculosis. This was an incurable disease at the time, and she was not expected to live more than two years. On June 29, 1942, they took the ferry to Staten Island, where they were married in the city office. The ceremony was attended by neither family nor friends and was witnessed by a pair of strangers. Feynman could kiss Arline only on the cheek. After the ceremony he took her to Deborah Hospital, where he visited her on weekends.
Manhattan Project
In 1941, with World War II occurring in Europe but the United States not yet at war, Feynman spent the summer working on ballistics problems at the Frankford Arsenal in Pennsylvania. After the attack on Pearl Harbor brought the United States into the war, Feynman was recruited by Robert R. Wilson, who was working on means to produce enriched uranium for use in an atomic bomb, as part of what would become the Manhattan Project. At the time, Feynman had not earned a graduate degree. (In "Los Alamos From Below", a talk given at UCSB in 1975 and posted to YouTube on July 12, 2016, he recalled, "I did not even have my degree when I started to work on stuff associated with the Manhattan Project"; later in the same talk, at 5m34s, he explains that he took a six-week vacation to finish his thesis and so received his PhD prior to his arrival at Los Alamos.) Wilson's team at Princeton was working on a device called an isotron, intended to electromagnetically separate uranium-235 from uranium-238. This was done in a quite different manner from that used by the calutron that was under development by a team under Wilson's former mentor, Ernest O. Lawrence, at the Radiation Laboratory of the University of California. On paper, the isotron was many times more efficient than the calutron, but Feynman and Paul Olum struggled to determine whether it was practical. Ultimately, on Lawrence's recommendation, the isotron project was abandoned.
At this juncture, in early 1943, Robert Oppenheimer was establishing the Los Alamos Laboratory, a secret laboratory on a mesa in New Mexico where atomic bombs would be designed and built. An offer was made to the Princeton team to be redeployed there. "Like a bunch of professional soldiers," Wilson later recalled, "we signed up, en masse, to go to Los Alamos." Oppenheimer recruited many young physicists, including Feynman, whom he telephoned long-distance from Chicago to inform that he had found a Presbyterian sanatorium in Albuquerque, New Mexico, for Arline. They were among the first to depart for New Mexico, leaving on a train on March 28, 1943. The railroad supplied Arline with a wheelchair, and Feynman paid extra for a private room for her. There they spent their wedding anniversary.
At Los Alamos, Feynman was assigned to Hans Bethe's Theoretical (T) Division, and impressed Bethe enough to be made a group leader. He and Bethe developed the Bethe–Feynman formula for calculating the yield of a fission bomb, which built upon previous work by Robert Serber. As a junior physicist, he was not central to the project. He administered the computation group of human computers in the theoretical division. With Stanley Frankel and Nicholas Metropolis, he assisted in establishing a system for using IBM punched cards for computation. He invented a new method of computing logarithms that he later used on the Connection Machine. An avid drummer, Feynman figured out how to get the machine to click in musical rhythms. Other work at Los Alamos included calculating neutron equations for the Los Alamos "Water Boiler", a small nuclear reactor, to measure how close an assembly of fissile material was to criticality.
On completing this work, Feynman was sent to the Clinton Engineer Works in Oak Ridge, Tennessee, where the Manhattan Project had its uranium enrichment facilities. He aided the engineers there in devising safety procedures for material storage so that criticality accidents could be avoided, especially when enriched uranium came into contact with water, which acted as a neutron moderator. He insisted on giving the rank and file a lecture on nuclear physics so that they would realize the dangers. He explained that while any amount of unenriched uranium could be safely stored, the enriched uranium had to be carefully handled. He developed a series of safety recommendations for the various grades of enrichments. He was told that if the people at Oak Ridge gave him any difficulty with his proposals, he was to inform them that Los Alamos "could not be responsible for their safety otherwise".
Returning to Los Alamos, Feynman was put in charge of the group responsible for the theoretical work and calculations on the proposed uranium hydride bomb, which ultimately proved to be infeasible. He was sought out by physicist Niels Bohr for one-on-one discussions. He later discovered the reason: most of the other physicists were too much in awe of Bohr to argue with him. Feynman had no such inhibitions, vigorously pointing out anything he considered to be flawed in Bohr's thinking. He said he felt as much respect for Bohr as anyone else, but once anyone got him talking about physics, he would become so focused he forgot about social niceties. Perhaps because of this, Bohr never warmed to Feynman. Feynman impressed Oppenheimer, who wrote in a letter to the University of California's physics department chairman, Raymond T. Birge, in November 1943 that Feynman was "by all odds the most brilliant young physicist here, and everyone knows this."
At Los Alamos, which was isolated for security, Feynman amused himself by investigating the combination locks on the cabinets and desks of physicists. He often found that they left the lock combinations on the factory settings, wrote the combinations down, or used easily guessable combinations like dates. He found one cabinet's combination by trying numbers he thought a physicist might use (it proved to be 27–18–28 after the base of natural logarithms, e = 2.71828 ...), and found that the three filing cabinets where a colleague kept research notes all had the same combination. He left notes in the cabinets as a prank, spooking his colleague, Frederic de Hoffmann, into thinking a spy had gained access to them.
Feynman's $380 monthly salary was about half the amount needed for his modest living expenses and Arline's medical bills, and they were forced to dip into her $3,300 in savings. On weekends he borrowed a car from his friend Klaus Fuchs to drive to Albuquerque to see Arline. Asked who at Los Alamos was most likely to be a spy, Fuchs mentioned Feynman's safe-cracking and frequent trips to Albuquerque; Fuchs himself later confessed to spying for the Soviet Union. The FBI would compile a bulky file on Feynman, particularly in view of Feynman's Q clearance.
Image: Feynman (center) with Robert Oppenheimer (immediately right of Feynman) at a Los Alamos Laboratory social function during the Manhattan Project
Informed that Arline was dying, Feynman drove to Albuquerque and sat with her for hours until she died on June 16, 1945. He then immersed himself in work on the project and was present at the Trinity nuclear test. Feynman claimed to be the only person to see the explosion without the very dark glasses or welder's lenses provided, reasoning that it was safe to look through a truck windshield, as it would screen out the harmful ultraviolet radiation. The immense brightness of the explosion made him duck to the truck's floor, where he saw a temporary "purple splotch" afterimage.
Cornell (1945–1949)
Feynman nominally held an appointment at the University of Wisconsin–Madison as an assistant professor of physics, but was on unpaid leave during his involvement in the Manhattan Project. In 1945, he received a letter from Dean Mark Ingraham of the College of Letters and Science requesting his return to the university to teach in the coming academic year. His appointment was not extended when he did not commit to returning. In a talk given there several years later, Feynman quipped, "It's great to be back at the only university that ever had the good sense to fire me."
As early as October 30, 1943, Bethe had written to the chairman of the physics department of his university, Cornell, to recommend that Feynman be hired. On February 28, 1944, this was endorsed by Robert Bacher, also from Cornell, and one of the most senior scientists at Los Alamos. This led to an offer being made in August 1944, which Feynman accepted. Oppenheimer had hoped to recruit Feynman to the University of California, but Birge was reluctant. Birge made Feynman an offer in May 1945, but Feynman turned it down. Cornell matched its salary offer of $3,900 per annum. Feynman became one of the first of the Los Alamos Laboratory's group leaders to depart, leaving for Ithaca, New York, in October 1945.
Because Feynman was no longer working at the Los Alamos Laboratory, he was no longer exempt from the draft. At his induction physical, Army psychiatrists diagnosed Feynman as suffering from a mental illness and the Army gave him a 4-F exemption on mental grounds. His father died suddenly on October 8, 1946, and Feynman suffered from depression. On October 17, 1946, he wrote a letter to Arline, expressing his deep love and heartbreak. The letter was sealed and only opened after his death. "Please excuse my not mailing this," the letter concluded, "but I don't know your new address." Unable to focus on research problems, Feynman began tackling physics problems, not for utility, but for self-satisfaction. One of these involved analyzing the physics of a twirling, nutating disk as it is moving through the air, inspired by an incident in the cafeteria at Cornell when someone tossed a dinner plate in the air. He read the work of Sir William Rowan Hamilton on quaternions, and tried unsuccessfully to use them to formulate a relativistic theory of electrons. His work during this period, which used equations of rotation to express various spinning speeds, ultimately proved important to his Nobel Prize–winning work, yet because he felt burned out and had turned his attention to less immediately practical problems, he was surprised by the offers of professorships from other renowned universities, including the Institute for Advanced Study, the University of California, Los Angeles, and the University of California, Berkeley.
Feynman was not the only frustrated theoretical physicist in the early post-war years. Quantum electrodynamics suffered from infinite integrals in perturbation theory. These were clear mathematical flaws in the theory, which Feynman and Wheeler had tried, unsuccessfully, to work around. "Theoreticians", noted Murray Gell-Mann, "were in disgrace". In June 1947, leading American physicists met at the Shelter Island Conference. For Feynman, it was his "first big conference with big men ... I had never gone to one like this one in peacetime." The problems plaguing quantum electrodynamics were discussed, but the theoreticians were completely overshadowed by the achievements of the experimentalists, who reported the discovery of the Lamb shift, the measurement of the magnetic moment of the electron, and Robert Marshak's two-meson hypothesis.
Bethe took the lead from the work of Hans Kramers, and derived a renormalized non-relativistic quantum equation for the Lamb shift. The next step was to create a relativistic version. Feynman thought that he could do this, but when he went back to Bethe with his solution, it did not converge. Feynman carefully worked through the problem again, applying the path integral formulation that he had used in his thesis. Like Bethe, he made the integral finite by applying a cut-off term. The result corresponded to Bethe's version. Feynman presented his work to his peers at the Pocono Conference in 1948. It did not go well. Julian Schwinger gave a long presentation of his work in quantum electrodynamics, and Feynman then offered his version, entitled "Alternative Formulation of Quantum Electrodynamics". The unfamiliar Feynman diagrams, used for the first time, puzzled the audience. Feynman failed to get his point across, and Paul Dirac, Edward Teller and Niels Bohr all raised objections.
To Freeman Dyson, one thing at least was clear: Shin'ichirō Tomonaga, Schwinger and Feynman understood what they were talking about even if no one else did, but had not published anything. He was convinced that Feynman's formulation was easier to understand, and ultimately managed to convince Oppenheimer that this was the case. Dyson published a paper in 1949, which added new rules to Feynman's that told how to implement renormalization. Feynman was prompted to publish his ideas in the Physical Review in a series of papers over three years. His 1948 papers on "A Relativistic Cut-Off for Classical Electrodynamics" attempted to explain what he had been unable to get across at Pocono. His 1949 paper on "The Theory of Positrons" addressed the Schrödinger equation and Dirac equation, and introduced what is now called the Feynman propagator. Finally, in papers on the "Mathematical Formulation of the Quantum Theory of Electromagnetic Interaction" in 1950 and "An Operator Calculus Having Applications in Quantum Electrodynamics" in 1951, he developed the mathematical basis of his ideas, derived familiar formulae and advanced new ones.
While papers by others initially cited Schwinger, papers citing Feynman and employing Feynman diagrams appeared in 1950, and soon became prevalent. Students learned and used the powerful new tool that Feynman had created. Computer programs were later written to evaluate Feynman diagrams, enabling physicists to use quantum field theory to make high-precision predictions. Marc Kac adapted Feynman's technique of summing over possible histories of a particle to the study of parabolic partial differential equations, yielding what is now known as the Feynman–Kac formula, the use of which extends beyond physics to many applications of stochastic processes. To Schwinger, however, the Feynman diagram was "pedagogy, not physics".
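As a concrete illustration of the Feynman–Kac connection mentioned above, the solution of a parabolic equation can be estimated by averaging a function over random Brownian end points. The sketch below solves the one-dimensional heat equation u_t = ½·u_xx by Monte Carlo; the function names and the Gaussian test case are choices made for this example, not anything taken from Kac's or Feynman's papers.

```python
import numpy as np

def heat_solution_mc(f, x, t, n_paths=200_000, seed=0):
    """Estimate u(x, t) for u_t = 0.5 * u_xx with u(x, 0) = f(x) via the
    Feynman-Kac representation u(x, t) = E[f(x + W_t)], W_t ~ N(0, t)."""
    rng = np.random.default_rng(seed)
    endpoints = x + np.sqrt(t) * rng.standard_normal(n_paths)
    return f(endpoints).mean()

# Gaussian initial data, for which the exact solution has a closed form.
f = lambda y: np.exp(-y ** 2)
x, t = 0.5, 1.0
estimate = heat_solution_mc(f, x, t)
exact = np.exp(-x ** 2 / (1 + 2 * t)) / np.sqrt(1 + 2 * t)
print(estimate, exact)   # the two values agree to a few decimal places
```

For equations with a potential term, the same recipe applies with an extra exponential weight accumulated along each path, which is the form in which the formula is typically used in stochastic-process applications.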
Looking back on this period, Feynman would reflect fondly on his time at the Telluride House, where he resided for a large period of his Cornell career. In an interview, he described the House as "a group of boys that have been specially selected because of their scholarship, because of their cleverness or whatever it is, to be given free board and lodging and so on, because of their brains". He enjoyed the house's convenience and said that "it's there that I did the fundamental work" for which he won the Nobel Prize.
However, Feynman was also reported to have been quite restless during his time at Cornell. By 1949, as the period was coming to a close, he had never settled into a particular house or apartment, moving instead between guest houses or student residences. While he did spend some time living with various married friends, these situations were reported to frequently end because the "arrangements became sexually volatile". The renowned 31-year-old was known to frequently pursue his married female friends, undergraduate girls and women, and to hire sex workers, which would sour many of his friendships. Additionally, Feynman was not fond of Ithaca's cold winter weather or of feeling as though he lived in the shadow of Hans Bethe while at Cornell.
Brazil (1949–1952)
Feynman spent several weeks in Rio de Janeiro in July 1949. That year, the Soviet Union detonated its first atomic bomb, generating concerns about espionage. Fuchs was arrested as a Soviet spy in 1950 and the FBI questioned Bethe about Feynman's loyalty. Physicist David Bohm was arrested on December 4, 1950, and emigrated to Brazil in October 1951. Because of the fears of a nuclear war, a girlfriend told Feynman that he should also consider moving to South America. He had a sabbatical coming for 1951–1952, and elected to spend it in Brazil, where he gave courses at the Centro Brasileiro de Pesquisas Físicas.
Image: Feynman with drums
In Brazil, Feynman was impressed with samba music, and learned to play the frigideira, a metal percussion instrument based on a frying pan. He was an enthusiastic amateur player of bongo and conga drums and often played them in the pit orchestra in musicals. He spent time in Rio with his friend Bohm, but Bohm could not convince Feynman to investigate Bohm's ideas on physics.
Caltech and later years (1952–1978)
Personal and political life
Feynman did not return to Cornell. Bacher, who had been instrumental in bringing Feynman to Cornell, had lured him to the California Institute of Technology (Caltech). Part of the deal was that he could spend his first year on sabbatical in Brazil. He had become smitten by Mary Louise Bell from Neodesha, Kansas. They had met in a cafeteria in Cornell, where she had studied the history of Mexican art and textiles. She later followed him to Caltech, where he gave a lecture. While he was in Brazil, she taught classes on the history of furniture and interiors at Michigan State University. He proposed to her by mail from Rio de Janeiro, and they married in Boise, Idaho, on June 28, 1952, shortly after he returned. They frequently quarreled and she was frightened by what she described as "a violent temper". Their politics were different; although he registered and voted as a Republican, she was more conservative, and her opinion on the 1954 Oppenheimer security hearing ("Where there's smoke there's fire") offended him. They separated on May 20, 1956. An interlocutory decree of divorce was entered on June 19, 1956, on the grounds of "extreme cruelty". The divorce became final on May 5, 1958.
In the wake of the 1957 Sputnik crisis, the U.S. government's interest in science rose for a time. Feynman was considered for a seat on the President's Science Advisory Committee, but was not appointed. At this time, the FBI interviewed a woman close to Feynman, possibly his ex-wife Bell, who sent a written statement to J. Edgar Hoover on August 8, 1958.
The U.S. government nevertheless sent Feynman to Geneva for the September 1958 Atoms for Peace Conference. On the beach at Lake Geneva, he met Gweneth Howarth, who was from Ripponden, West Yorkshire, and working in Switzerland as an au pair. Feynman's love life had been turbulent since his divorce; his previous girlfriend had walked off with his Albert Einstein Award medal and, on the advice of an earlier girlfriend, had feigned pregnancy and extorted him into paying for an abortion, then used the money to buy furniture. When Feynman found that Howarth was being paid only $25 a month, he offered her $20 (equivalent to $202 in 2022) a week to be his live-in maid. Feynman knew that this sort of behavior was illegal under the Mann Act, so he had a friend, Matthew Sands, act as her sponsor. Howarth pointed out that she already had two boyfriends, but decided to take Feynman up on his offer, and arrived in Altadena, California, in June 1959. She made a point of dating other men, but Feynman proposed in early 1960. They were married on September 24, 1960, at the Huntington Hotel in Pasadena. They had a son, Carl, in 1962, and adopted a daughter, Michelle, in 1968. Besides their home in Altadena, they had a beach house in Baja California, purchased with the money from Feynman's Nobel Prize.
Allegations of sexism
There were protests over his alleged sexism at Caltech in 1968, and again in 1972. Protesters "objected to his use of sexist stories about 'lady drivers' and clueless women in his lectures." Feynman recalled protesters entering a hall and picketing a lecture he was about to give in San Francisco, calling him a "sexist pig". He later reflected that the incident prompted him to address the protesters, saying that "women do indeed suffer prejudice and discrimination in physics, and your presence here today serves to remind us of these difficulties and the need to remedy them".
In his 1985 memoir, Surely You're Joking, Mr. Feynman!, he recalled holding meetings in strip clubs, drawing naked portraits of his female students while lecturing at Caltech, and pretending to be an undergraduate to deceive younger women into sleeping with him.
Feynman diagram van
In 1975, in Long Beach, California, Feynman bought a Dodge Tradesman Maxivan with a bronze-khaki exterior and yellow-green interior, decorated with custom Feynman-diagram exterior murals. After Feynman's death, Gweneth sold the van for $1 to one of Feynman's friends, film producer Ralph Leighton, who later put it into storage, where it began to rust. In 2012, video game designer Seamus Blackley, a father of the Xbox, bought the van. Its license plate read "QANTUM".
Physics
At Caltech, Feynman investigated the physics of the superfluidity of supercooled liquid helium, where helium seems to display a complete lack of viscosity when flowing. Feynman provided a quantum-mechanical explanation for the Soviet physicist Lev Landau's theory of superfluidity. Applying the Schrödinger equation to the question showed that the superfluid was displaying quantum mechanical behavior observable on a macroscopic scale. This helped with the problem of superconductivity, but the solution eluded Feynman. It was solved with the BCS theory of superconductivity, proposed by John Bardeen, Leon Neil Cooper, and John Robert Schrieffer in 1957.
Image: Feynman at the Robert Treat Paine Estate in Waltham, Massachusetts, in 1984
Feynman, inspired by a desire to quantize the Wheeler–Feynman absorber theory of electrodynamics, laid the groundwork for the path integral formulation and Feynman diagrams.
With Murray Gell-Mann, Feynman developed a model of weak decay, which showed that the current coupling in the process is a combination of vector and axial currents (an example of weak decay is the decay of a neutron into an electron, a proton, and an antineutrino). Although E. C. George Sudarshan and Robert Marshak developed the theory nearly simultaneously, Feynman's collaboration with Gell-Mann was seen as seminal because the weak interaction was neatly described by the vector and axial currents. It thus combined the 1933 beta decay theory of Enrico Fermi with an explanation of parity violation.
Feynman attempted an explanation, called the parton model, of the strong interactions governing nucleon scattering. The parton model emerged as a complement to the quark model developed by Gell-Mann. The relationship between the two models was murky; Gell-Mann referred to Feynman's partons derisively as "put-ons". In the mid-1960s, physicists believed that quarks were just a bookkeeping device for symmetry numbers, not real particles; the statistics of the omega-minus particle, if it were interpreted as three identical strange quarks bound together, seemed impossible if quarks were real.
The SLAC National Accelerator Laboratory deep inelastic scattering experiments of the late 1960s showed that nucleons (protons and neutrons) contained point-like particles that scattered electrons. It was natural to identify these with quarks, but Feynman's parton model attempted to interpret the experimental data in a way that did not introduce additional hypotheses. For example, the data showed that some 45% of the energy momentum was carried by electrically neutral particles in the nucleon. These electrically neutral particles are now seen to be the gluons that carry the forces between the quarks, and their three-valued color quantum number solves the omega-minus problem. Feynman did not dispute the quark model; for example, when the fifth quark was discovered in 1977, Feynman immediately pointed out to his students that the discovery implied the existence of a sixth quark, which was discovered in the decade after his death.
After the success of quantum electrodynamics, Feynman turned to quantum gravity. By analogy with the photon, which has spin 1, he investigated the consequences of a free massless spin 2 field and derived the Einstein field equation of general relativity, but little more. The computational device that Feynman discovered then for gravity, "ghosts", which are "particles" in the interior of his diagrams that have the "wrong" connection between spin and statistics, have proved invaluable in explaining the quantum particle behavior of the Yang–Mills theories, for example, quantum chromodynamics and the electro-weak theory. He did work on all four of the fundamental interactions of nature: electromagnetic, the weak force, the strong force and gravity. John and Mary Gribbin state in their book on Feynman that "Nobody else has made such influential contributions to the investigation of all four of the interactions".
Partly as a way to bring publicity to progress in physics, Feynman offered $1,000 prizes for two of his challenges in nanotechnology; one was claimed by William McLellan and the other by Tom Newman.
Feynman was also interested in the relationship between physics and computation. He was one of the first scientists to conceive of the possibility of quantum computers. In the 1980s he began to spend his summers working at Thinking Machines Corporation, helping to build some of the first parallel supercomputers and considering the construction of quantum computers.
Between 1984 and 1986, he developed a variational method for the approximate calculation of path integrals, which has led to a powerful method of converting divergent perturbation expansions into convergent strong-coupling expansions (variational perturbation theory) and, as a consequence, to the most accurate determination of critical exponents measured in satellite experiments. At Caltech, he once chalked "What I cannot create I do not understand" on his blackboard.
Machine technology
Feynman had studied the ideas of John von Neumann while researching quantum field theory. His most famous lecture on the subject was delivered in 1959 at the California Institute of Technology and published under the title "There's Plenty of Room at the Bottom" a year later. In this lecture he theorized on future opportunities for designing miniaturized machines that could build smaller reproductions of themselves. This lecture is frequently cited in the technical literature on microtechnology and nanotechnology.
Feynman also suggested that it should be possible, in principle, to make nanoscale machines that "arrange the atoms the way we want" and do chemical synthesis by mechanical manipulation (Feynman, Richard P. (1959), "There's Plenty of Room at the Bottom", zyvex.com).
He also presented the possibility of "swallowing the doctor", an idea that he credited in the essay to his friend and graduate student Albert Hibbs. This concept involved building a tiny, swallowable surgical robot.
Pedagogy
Image: Feynman during a lecture
In the early 1960s, Feynman acceded to a request to "spruce up" the teaching of undergraduates at the California Institute of Technology, also called Caltech. After three years devoted to the task, he produced a series of lectures that later became The Feynman Lectures on Physics. Accounts vary about how successful the original lectures were. Feynman's own preface, written just after an exam on which the students did poorly, was somewhat pessimistic. His colleagues David L. Goodstein and Gerry Neugebauer said later that the intended audience of first-year students found the material intimidating while older students and faculty found it inspirational, so the lecture hall remained full even as the first-year students dropped away. In contrast, physicist Matthew Sands recalled the student attendance as being typical for a large lecture course.
Converting the lectures into books occupied Matthew Sands and Robert B. Leighton as part-time co-authors for several years. Feynman suggested that the book cover should have a picture of a drum with mathematical diagrams about vibrations drawn upon it, in order to illustrate the application of mathematics to understanding the world. Instead, the publishers gave the books plain red covers, though they included a picture of Feynman playing drums in the foreword. Even though the books were not adopted by universities as textbooks, they continue to sell well because they provide a deep understanding of physics.
Many of Feynman's lectures and miscellaneous talks were turned into other books, including The Character of Physical Law, QED: The Strange Theory of Light and Matter, Statistical Mechanics, Lectures on Gravitation, and the Feynman Lectures on Computation.
Feynman wrote about his experiences teaching physics undergraduates in Brazil. The students' studying habits and the Portuguese language textbooks were so devoid of any context or applications for their information that, in Feynman's opinion, the students were not learning physics at all. At the end of the year, Feynman was invited to give a lecture on his teaching experiences, and he agreed to do so, provided he could speak frankly, which he did.
Feynman opposed rote learning, or unthinking memorization, as well as other teaching methods that emphasized form over function. In his mind, clear thinking and clear presentation were fundamental prerequisites for his attention. It could be perilous even to approach him unprepared, and he did not forget fools and pretenders.
In 1964, he served on the California State Curriculum Commission, which was responsible for approving textbooks to be used by schools in California. He was not impressed with what he found. Many of the mathematics texts covered subjects of use only to pure mathematicians as part of the "New Math", and elementary students were taught about sets.
In April 1966, Feynman delivered an address to the National Science Teachers Association, in which he suggested how students could be made to think like scientists, be open-minded, curious, and especially, to doubt. In the course of the lecture, he gave a definition of science, which he said came about by several stages. The evolution of intelligent life on planet Earth—creatures such as cats that play and learn from experience. The evolution of humans, who came to use language to pass knowledge from one individual to the next, so that the knowledge was not lost when an individual died. Unfortunately, incorrect knowledge could be passed down as well as correct knowledge, so another step was needed. Galileo and others started doubting the truth of what was passed down and to investigate ab initio, from experience, what the true situation was—this was science.
In 1974, Feynman delivered the Caltech commencement address on the topic of cargo cult science, which has the semblance of science, but is only pseudoscience due to a lack of "a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty" on the part of the scientist. He instructed the graduating class that "The first principle is that you must not fool yourself—and you are the easiest person to fool. So you have to be very careful about that. After you've not fooled yourself, it's easy not to fool other scientists. You just have to be honest in a conventional way after that."
Feynman served as doctoral advisor to 30 students.
Case before the Equal Employment Opportunity Commission
In 1977, Feynman supported his English literature colleague Jenijoy La Belle, who had been hired as Caltech's first female professor in 1969, and filed suit with the Equal Employment Opportunity Commission after she was refused tenure in 1974. The EEOC ruled against Caltech in 1977, adding that La Belle had been paid less than male colleagues. La Belle finally received tenure in 1979. Many of Feynman's colleagues were surprised that he took her side, but he had gotten to know La Belle and liked and admired her.
Surely You're Joking, Mr. Feynman!
In the 1960s, Feynman began thinking of writing an autobiography, and he began granting interviews to historians. In the 1980s, working with Ralph Leighton (Robert Leighton's son), he recorded chapters on audio tape that Ralph transcribed. The book was published in 1985 as Surely You're Joking, Mr. Feynman! and became a best-seller.
Gell-Mann was upset by Feynman's account in the book of the weak interaction work, and threatened to sue, resulting in a correction being inserted in later editions. This incident was just the latest provocation in decades of bad feeling between the two scientists. Gell-Mann often expressed frustration at the attention Feynman received; he remarked: "[Feynman] was a great scientist, but he spent a great deal of his effort generating anecdotes about himself."
Feynman has been criticized for a chapter in the book entitled "You Just Ask Them?", where he describes how he learned to seduce women at a bar he went to in the summer of 1946. A mentor taught him to ask a woman if she would sleep with him before buying her anything. He describes seeing women at the bar as "bitches" in his thoughts, and tells a story of how he told a woman named Ann that she was "worse than a whore" after Ann persuaded him to buy her sandwiches by telling him he could eat them at her place, but then, after he bought them, saying they actually could not eat together because another man was coming over. Later that same evening, Ann returned to the bar to take Feynman to her place. Feynman states at the end of the chapter that this behavior was not typical of him: "So it worked even with an ordinary girl! But no matter how effective the lesson was, I never really used it after that. I didn't enjoy doing it that way. But it was interesting to know that things worked much differently from how I was brought up."
Challenger disaster
Image: The 1986 Space Shuttle Challenger disaster
Feynman played an important role on the Presidential Rogers Commission, which investigated the 1986 Space Shuttle Challenger disaster. He had been reluctant to participate, but was persuaded by advice from his wife. Feynman clashed several times with commission chairman William P. Rogers. During a break in one hearing, Rogers told commission member Neil Armstrong, "Feynman is becoming a pain in the ass."
During a televised hearing, Feynman demonstrated that the material used in the shuttle's O-rings became less resilient in cold weather by compressing a sample of the material in a clamp and immersing it in ice-cold water. The commission ultimately determined that the disaster was caused by the primary O-ring not properly sealing in unusually cold weather at Cape Canaveral.
Feynman devoted the latter half of his 1988 book What Do You Care What Other People Think? to his experience on the Rogers Commission, straying from his usual convention of brief, light-hearted anecdotes to deliver an extended and sober narrative. Feynman's account reveals a disconnect between NASA's engineers and executives that was far more striking than he expected. His interviews of NASA's high-ranking managers revealed startling misunderstandings of elementary concepts. For instance, NASA managers claimed that there was a 1 in 100,000 probability of a catastrophic failure aboard the Shuttle, but Feynman discovered that NASA's own engineers estimated the probability of a catastrophe at closer to 1 in 200. He concluded that NASA management's estimate of the reliability of the Space Shuttle was unrealistic, and he was particularly angered that NASA used it to recruit Christa McAuliffe into the Teacher-in-Space program. He warned in his appendix to the commission's report (which was included only after he threatened not to sign the report), "For a successful technology, reality must take precedence over public relations, for nature cannot be fooled."
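To see why Feynman considered the two estimates irreconcilable, it helps to translate per-flight risk into the chance of at least one failure over a program of flights. The short calculation below is an illustrative aside, assuming independent flights and a round figure of 25 missions (roughly the number flown before Challenger); it is not taken from Feynman's appendix.

```python
def p_any_failure(per_flight_risk, n_flights):
    """Probability of at least one catastrophic failure in n independent flights."""
    return 1.0 - (1.0 - per_flight_risk) ** n_flights

n = 25  # assumed round figure for the number of Shuttle flights up to Challenger
print(p_any_failure(1 / 100_000, n))  # ~0.00025 under the management estimate
print(p_any_failure(1 / 200, n))      # ~0.12 under the engineers' estimate
```

Under the management figure a loss early in the program would be essentially impossible; under the engineers' figure it is an outcome to be expected within a few hundred flights.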
Recognition and awards
The first public recognition of Feynman's work came in 1954, when Lewis Strauss, the chairman of the Atomic Energy Commission (AEC) notified him that he had won the Albert Einstein Award, which was worth $15,000 and came with a gold medal. Because of Strauss's actions in stripping Oppenheimer of his security clearance, Feynman was reluctant to accept the award, but Isidor Isaac Rabi cautioned him: "You should never turn a man's generosity as a sword against him. Any virtue that a man has, even if he has many vices, should not be used as a tool against him." It was followed by the AEC's Ernest Orlando Lawrence Award in 1962. Schwinger, Tomonaga and Feynman shared the 1965 Nobel Prize in Physics "for their fundamental work in quantum electrodynamics, with deep-ploughing consequences for the physics of elementary particles". He was elected a Foreign Member of the Royal Society in 1965, received the Oersted Medal in 1972, and the National Medal of Science in 1979. He was elected a Member of the National Academy of Sciences, but ultimately resigned and is no longer listed by them. Schwinger called him "an honest man, the outstanding intuitionist of our age, and a prime example of what may lie in store for anyone who dares follow the beat of a different drum."
Death
In 1978, Feynman sought medical treatment for abdominal pains and was diagnosed with liposarcoma, a rare form of cancer. Surgeons removed a "very large" tumor that had crushed one kidney and his spleen. In 1986, doctors discovered another cancer, Waldenström macroglobulinemia (John Simmons, Lynda Simmons, The Scientific 100, p. 250). Further operations were performed in October 1986 and October 1987. He was again hospitalized at the UCLA Medical Center on February 3, 1988. A ruptured duodenal ulcer caused kidney failure, and he declined to undergo the dialysis that might have prolonged his life for a few months. Feynman's wife Gweneth, sister Joan, and cousin Frances Lewine watched over him during the final days of his life until he died on February 15, 1988.
When Feynman was nearing death, he asked his friend and colleague Danny Hillis why Hillis appeared so sad. Hillis replied that he thought Feynman was going to die soon.
Near the end of his life, Feynman attempted to visit the Tuvan Autonomous Soviet Socialist Republic (ASSR) in the Soviet Union, a dream thwarted by Cold War bureaucratic issues. The letter from the Soviet government authorizing the trip was not received until the day after he died. His daughter Michelle later made the journey. Ralph Leighton chronicled the attempt in Tuva or Bust!, published in 1991.
His burial was at Mountain View Cemetery and Mausoleum in Altadena, California. His last words were "This dying is boring" in reference to the extended coma that preceded his death.
Popular legacy
Image: Bust of Feynman on NTHU campus, Taiwan
Aspects of Feynman's life have been portrayed in various media. Feynman was portrayed by Matthew Broderick in the 1996 biopic Infinity. Actor Alan Alda commissioned playwright Peter Parnell to write a two-character play about a fictional day in the life of Feynman set two years before Feynman's death. The play, QED, premiered at the Mark Taper Forum in Los Angeles in 2001 and was later presented at the Vivian Beaumont Theater on Broadway, with both productions starring Alda as Richard Feynman. Real Time Opera premiered its opera Feynman at the Norfolk (Connecticut) Chamber Music Festival in June 2005. In 2011, Feynman was the subject of a biographical graphic novel entitled simply Feynman, written by Jim Ottaviani and illustrated by Leland Myrick. In 2013, Feynman's role on the Rogers Commission was dramatized by the BBC in The Challenger (US title: The Challenger Disaster), with William Hurt playing Feynman. In 2016, Oscar Isaac performed a public reading of Feynman's 1946 love letter to the late Arline. In the 2023 American film Oppenheimer, directed by Christopher Nolan and based on American Prometheus, Feynman is portrayed by actor Jack Quaid.
On May 4, 2005, the United States Postal Service issued the "American Scientists" commemorative set of four 37-cent stamps in several configurations. The scientists depicted were Richard Feynman, John von Neumann, Barbara McClintock, and Josiah Willard Gibbs. Feynman's stamp features a photograph of Feynman in his thirties and eight small Feynman diagrams. The stamps were designed by Victor Stabin under the artistic direction of Carl T. Herrman. The main building for the Computing Division at Fermilab is named the "Feynman Computing Center" in his honor, as is the Richard P. Feynman Center for Innovation at the Los Alamos National Laboratory. Two photographs of Feynman were used in Apple Computer's "Think Different" advertising campaign, which launched in 1997. Sheldon Cooper, a fictional theoretical physicist from the television series The Big Bang Theory, was depicted as a Feynman fan, even emulating him by playing the bongo drums. On January 27, 2016, co-founder of Microsoft Bill Gates wrote an article describing Feynman's talents as a teacher ("The Best Teacher I Never Had"), which inspired Gates to create Project Tuva to place the videos of Feynman's Messenger Lectures, The Character of Physical Law, on a website for public viewing. In 2015, Gates made a video in response to Caltech's request for thoughts on Feynman for the 50th anniversary of Feynman's 1965 Nobel Prize, on why he thought Feynman was special.
Works
Selected scientific works
Lecture presented at the fifteenth annual meeting of the National Science Teachers Association, 1966 in New York City.
Proceedings of the International Workshop at Wangerooge Island, Germany; Sept 1–4, 1987.
Textbooks and lecture notes
The Feynman Lectures on Physics is perhaps his most accessible work for anyone with an interest in physics, compiled from lectures to Caltech undergraduates in 1961–1964. As news of the lectures' lucidity grew, professional physicists and graduate students began to drop in to listen. Co-authors Robert B. Leighton and Matthew Sands, colleagues of Feynman, edited and illustrated them into book form. The work has endured and is useful to this day. They were edited and supplemented in 2005 with Feynman's Tips on Physics: A Problem-Solving Supplement to the Feynman Lectures on Physics by Michael Gottlieb and Ralph Leighton (Robert Leighton's son), with support from Kip Thorne and other physicists.
Includes Feynman's Tips on Physics (with Michael Gottlieb and Ralph Leighton), which includes four previously unreleased lectures on problem solving, exercises by Robert Leighton and Rochus Vogt, and a historical essay by Matthew Sands. Three volumes; originally published as separate volumes in 1964 and 1966.
.
Popular works
No Ordinary Genius: The Illustrated Richard Feynman, ed. Christopher Sykes, W. W. Norton & Company, 1996, .
Six Easy Pieces: Essentials of Physics Explained by Its Most Brilliant Teacher, Perseus Books, 1994, . Listed by the board of directors of the Modern Library as one of the 100 best nonfiction books.
Six Not So Easy Pieces: Einstein's Relativity, Symmetry and Space-Time, Addison Wesley, 1997, .
Classic Feynman: All the Adventures of a Curious Character, edited by Ralph Leighton, W. W. Norton & Company, 2005, . Chronologically reordered omnibus volume of Surely You're Joking, Mr. Feynman! and What Do You Care What Other People Think?, with a bundled CD containing one of Feynman's signature lectures.
Audio and video recordings
Safecracker Suite (a collection of drum pieces interspersed with Feynman telling anecdotes)
Los Alamos From Below (audio, talk given by Feynman at Santa Barbara on February 6, 1975)
The Feynman Lectures on Physics: The Complete Audio Collection, selections from which were also released as Six Easy Pieces and Six Not So Easy Pieces
The Messenger Lectures (link), given at Cornell in 1964, in which he explains basic topics in physics; they were also adapted into the book The Character of Physical Law
The Douglas Robb Memorial Lectures, four public lectures of which the four chapters of the book QED: The Strange Theory of Light and Matter are transcripts. (1979)
The Pleasure of Finding Things Out, BBC Horizon episode (1981) (not to be confused with the later published book of the same title)
Richard Feynman: Fun to Imagine Collection, BBC Archive of six short films of Feynman talking, in a style that is accessible to all, about the physics behind everyday experiences. (1983)
Elementary Particles and the Laws of Physics, from the 1986 Dirac Memorial Lectures (video, 1986)
Tiny Machines: The Feynman Talk on Nanotechnology (video, 1984)
Computers From the Inside Out (video)
Quantum Mechanical View of Reality: Workshop at Esalen (video, 1983)
Idiosyncratic Thinking Workshop (video, 1985)
Bits and Pieces—From Richard's Life and Times (video, 1988)
Strangeness Minus Three (video, BBC Horizon 1964)
No Ordinary Genius (video, Christopher Sykes Documentary)
Four NOVA episodes were made about or with him. (TV program, 1975, 1983, 1989, 1993)
The Motion of Planets Around the Sun (audio, sometimes titled "Feynman's Lost Lecture")
Nature of Matter (audio)
References
Sources
Further reading
Articles
Physics Today, American Institute of Physics magazine, February 1989 Issue. (Vol. 42, No. 2.) Special Feynman memorial issue containing non-technical articles on Feynman's life and work in physics.
Books
Brown, Laurie M. and Rigden, John S. (editors) (1993) Most of the Good Stuff: Memories of Richard Feynman Simon & Schuster, New York, . Commentary by Joan Feynman, John Wheeler, Hans Bethe, Julian Schwinger, Murray Gell-Mann, Daniel Hillis, David Goodstein, Freeman Dyson, and Laurie Brown
Dyson, Freeman (1979) Disturbing the Universe. Harper and Row. . Dyson's autobiography. The chapters "A Scientific Apprenticeship" and "A Ride to Albuquerque" describe his impressions of Feynman in the period 1947–1948 when Dyson was a graduate student at Cornell
for high school readers
Published in the United Kingdom as Some Time With Feynman
Films and plays
Infinity (1996), a movie both directed by and starring Matthew Broderick as Feynman, depicting his love affair with his first wife and ending with the Trinity test.
Parnell, Peter (2002), QED, Applause Books, (play)
Whittell, Crispin (2006), Clever Dick, Oberon Books, (play)
"The Quest for Tannu Tuva", with Richard Feynman and Ralph Leighton. 1987, BBC Horizon and PBS Nova (entitled "Last Journey of a Genius").
No Ordinary Genius, a two-part documentary about Feynman's life and work, with contributions from colleagues, friends and family. 1993, BBC Horizon and PBS Nova (a one-hour version, under the title The Best Mind Since Einstein) (2 × 50-minute films)
The Challenger (2013), a BBC Two factual drama starring William Hurt, tells the story of American Nobel prize-winning physicist Richard Feynman's determination to reveal the truth behind the 1986 Space Shuttle Challenger disaster.
The Fantastic Mr Feynman. One hour documentary. 2013, BBC TV
How We Built The Bomb, a docudrama about The Manhattan Project at Los Alamos. Feynman is played by actor/playwright Michael Raver. 2015
Oppenheimer (2023), a biopic based on the 2005 biography American Prometheus. Feynman is played by actor Jack Quaid.
External links
Online edition of The Feynman Lectures on Physics by California Institute of Technology, Michael A. Gottlieb, and Rudolf Pfeiffer
Oral history interview transcript with Richard Feynman on 4 March 1966 – Session I from Oral History Interviews, Niels Bohr Library & Archives, American Institute of Physics
Oral history interview transcript with Richard Feynman on 5 March 1966 – Session II from Oral History Interviews, Niels Bohr Library & Archives, American Institute of Physics
Oral history interview transcript with Richard Feynman on 27 June 1966 – Session III from Oral History Interviews, Niels Bohr Library & Archives, American Institute of Physics
Oral history interview transcript with Richard Feynman on 28 June 1966 – Session IV from Oral History Interviews, Niels Bohr Library & Archives, American Institute of Physics
Oral history interview transcript with Richard Feynman on 4 February 1973 – Session V from Oral History Interviews, Niels Bohr Library & Archives, American Institute of Physics
Richard Feynman – Scientist. Teacher. Raconteur. Musician A site dedicated to Richard Feynman
Los Alamos National laboratory page on Feynman
Regular expression
https://en.wikipedia.org/wiki/Regular_expression
A regular expression (shortened as regex or regexp), sometimes referred to as a rational expression, is a sequence of characters that specifies a match pattern in text. Usually such patterns are used by string-searching algorithms for "find" or "find and replace" operations on strings, or for input validation. Regular expression techniques are developed in theoretical computer science and formal language theory.
The concept of regular expressions began in the 1950s, when the American mathematician Stephen Cole Kleene formalized the concept of a regular language. They came into common use with Unix text-processing utilities. Different syntaxes for writing regular expressions have existed since the 1980s, one being the POSIX standard and another, widely used, being the Perl syntax.
Regular expressions are used in search engines, in search and replace dialogs of word processors and text editors, in text processing utilities such as sed and AWK, and in lexical analysis. Regular expressions are supported in many programming languages. Library implementations are often called an "engine", and many of these are available for reuse.
History
Regular expressions originated in 1951, when mathematician Stephen Cole Kleene described regular languages using his mathematical notation called regular events. These arose in theoretical computer science, in the subfields of automata theory (models of computation) and the description and classification of formal languages, motivated by Kleene's attempt to describe early artificial neural networks. (Kleene introduced it as an alternative to McCulloch & Pitts's "prehensible", but admitted "We would welcome any suggestions as to a more descriptive term" (Kleene 1951, p. 46).) Other early implementations of pattern matching include the SNOBOL language, which did not use regular expressions, but instead its own pattern matching constructs.
Regular expressions entered popular use from 1968 in two uses: pattern matching in a text editor and lexical analysis in a compiler. Among the first appearances of regular expressions in program form was when Ken Thompson built Kleene's notation into the editor QED as a means to match patterns in text files. For speed, Thompson implemented regular expression matching by just-in-time compilation (JIT) to IBM 7094 code on the Compatible Time-Sharing System, an important early example of JIT compilation. He later added this capability to the Unix editor ed, which eventually led to the popular search tool grep's use of regular expressions ("grep" is a word derived from the command for regular expression searching in the ed editor: g/re/p meaning "Global search for Regular Expression and Print matching lines"). Around the same time that Thompson developed QED, a group of researchers including Douglas T. Ross implemented a tool based on regular expressions that is used for lexical analysis in compiler design.
Many variations of these original forms of regular expressions were used in Unix programs at Bell Labs in the 1970s, including lex, sed, AWK, and expr, and in other programs such as vi, and Emacs (which has its own, incompatible syntax and behavior). Regexes were subsequently adopted by a wide range of programs, with these early forms standardized in the POSIX.2 standard in 1992.
In the 1980s, more complicated regexes arose in Perl, which originally derived from a regex library written by Henry Spencer (1986); Spencer later wrote an implementation for Tcl called Advanced Regular Expressions. The Tcl library is a hybrid NFA/DFA implementation with improved performance characteristics. Software projects that have adopted Spencer's Tcl regular expression implementation include PostgreSQL. Perl later expanded on Spencer's original library to add many new features. Part of the effort in the design of Raku (formerly named Perl 6) is to improve Perl's regex integration, and to increase their scope and capabilities to allow the definition of parsing expression grammars. The result is a mini-language called Raku rules, which are used to define Raku grammar as well as provide a tool to programmers in the language. These rules maintain existing features of Perl 5.x regexes, but also allow BNF-style definition of a recursive descent parser via sub-rules.
The use of regexes in structured information standards for document and database modeling started in the 1960s and expanded in the 1980s when industry standards like ISO SGML (preceded by ANSI "GCA 101-1983") consolidated. The kernel of the structure specification language standards consists of regexes. Its use is evident in the DTD element group syntax. Prior to the use of regular expressions, many search languages allowed simple wildcards, for example "*" to match any sequence of characters, and "?" to match a single character. Relics of this can be found today in the glob syntax for filenames, and in the SQL LIKE operator.
Starting in 1997, Philip Hazel developed PCRE (Perl Compatible Regular Expressions), which attempts to closely mimic Perl's regex functionality and is used by many modern tools including PHP and Apache HTTP Server.
Today, regexes are widely supported in programming languages, text processing programs (particularly lexers), advanced text editors, and some other programs. Regex support is part of the standard library of many programming languages, including Java and Python, and is built into the syntax of others, including Perl and ECMAScript. In the late 2010s, several companies started to offer hardware, FPGA, and GPU implementations of PCRE-compatible regex engines that are faster than CPU implementations.
Patterns
The phrase regular expressions, or regexes, is often used to mean the specific, standard textual syntax for representing patterns for matching text, as distinct from the mathematical notation described below. Each character in a regular expression (that is, each character in the string describing its pattern) is either a metacharacter, having a special meaning, or a regular character that has a literal meaning. For example, in the regex b., 'b' is a literal character that matches just 'b', while '.' is a metacharacter that matches every character except a newline. Therefore, this regex matches, for example, 'b%', or 'bx', or 'b5'. Together, metacharacters and literal characters can be used to identify text of a given pattern or process a number of instances of it. Pattern matches may vary from a precise equality to a very general similarity, as controlled by the metacharacters. For example, . is a very general pattern, [a-z] (match all lowercase letters from 'a' to 'z') is less general and b is a precise pattern (matches just 'b'). The metacharacter syntax is designed specifically to represent prescribed targets in a concise and flexible way to direct the automation of text processing of a variety of input data, in a form easy to type using a standard ASCII keyboard.
A very simple case of a regular expression in this syntax is to locate a word spelled two different ways in a text editor: the regular expression seriali[sz]e matches both "serialise" and "serialize". Wildcard characters also achieve this, but are more limited in what they can pattern, as they have fewer metacharacters and a simpler language base.
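For illustration, the bracket expression can be exercised directly; the following minimal Python sketch (the word list is arbitrary) uses the standard re module:

import re

pattern = re.compile(r"seriali[sz]e")  # [sz] matches either 's' or 'z'
for word in ["serialise", "serialize", "serialine"]:
    # fullmatch() succeeds only if the whole word fits the pattern
    print(word, bool(pattern.fullmatch(word)))
# prints: serialise True, serialize True, serialine False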
The usual context of wildcard characters is in globbing similar names in a list of files, whereas regexes are usually employed in applications that pattern-match text strings in general. For example, the regex ^[ \t]+|[ \t]+$ matches excess whitespace at the beginning or end of a line. An advanced regular expression that matches any numeral is [+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?.
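The two patterns above behave as follows in Python's re syntax, which supports the same constructs (the sample line and numerals are illustrative):

import re

# Remove excess whitespace at the beginning or end of a line.
line = "\t  some text  "
print(repr(re.sub(r"^[ \t]+|[ \t]+$", "", line)))  # 'some text'

# Recognize numerals such as 42, -3.5, .25, or 6.02e23.
numeral = re.compile(r"[+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?")
for s in ["42", "-3.5", ".25", "6.02e23", "abc"]:
    print(s, bool(numeral.fullmatch(s)))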
A regex processor translates a regular expression in the above syntax into an internal representation that can be executed and matched against a string representing the text being searched in. One possible approach is the Thompson's construction algorithm to construct a nondeterministic finite automaton (NFA), which is then made deterministic and the resulting deterministic finite automaton (DFA) is run on the target text string to recognize substrings that match the regular expression.
(Figure: the NFA scheme N(s*) obtained from the regular expression s*, where s denotes a simpler regular expression in turn, which has already been recursively translated to the NFA N(s).)
Basic concepts
A regular expression, often called a pattern, specifies a set of strings required for a particular purpose. A simple way to specify a finite set of strings is to list its elements or members. However, there are often more concise ways: for example, the set containing the three strings "Handel", "Händel", and "Haendel" can be specified by the pattern H(ä|ae?)ndel; we say that this pattern matches each of the three strings. However, there can be many ways to write a regular expression for the same set of strings: for example, (Hän|Han|Haen)del also specifies the same set of three strings in this example.
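That the two patterns describe the same set can be checked mechanically; a small Python sketch (the test names are arbitrary):

import re

p1 = re.compile(r"H(ä|ae?)ndel")
p2 = re.compile(r"(Hän|Han|Haen)del")
for name in ["Handel", "Händel", "Haendel", "Hndel"]:
    # Both patterns should agree on every input.
    print(name, bool(p1.fullmatch(name)), bool(p2.fullmatch(name)))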
Most formalisms provide the following operations to construct regular expressions.
Boolean "or"
A vertical bar separates alternatives. For example, gray|grey can match "gray" or "grey".
Grouping
Parentheses are used to define the scope and precedence of the operators (among other uses). For example, gray|grey and gr(a|e)y are equivalent patterns which both describe the set of "gray" or "grey".
Quantification
A quantifier after an element (such as a token, character, or group) specifies how many times the preceding element is allowed to repeat. The most common quantifiers are the question mark ?, the asterisk * (derived from the Kleene star), and the plus sign + (Kleene plus).
?  The question mark indicates zero or one occurrences of the preceding element. For example, colou?r matches both "color" and "colour".
*  The asterisk indicates zero or more occurrences of the preceding element. For example, ab*c matches "ac", "abc", "abbc", "abbbc", and so on.
+  The plus sign indicates one or more occurrences of the preceding element. For example, ab+c matches "abc", "abbc", "abbbc", and so on, but not "ac".
{n}  The preceding item is matched exactly n times.
{min,}  The preceding item is matched min or more times.
{,max}  The preceding item is matched up to max times.
{min,max}  The preceding item is matched at least min times, but not more than max times.
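The quantifiers above can be tried out with a short Python sketch (the test strings are arbitrary examples):

import re

tests = [
    (r"colou?r", ["color", "colour"]),        # ?      zero or one
    (r"ab*c",    ["ac", "abc", "abbbc"]),     # *      zero or more
    (r"ab+c",    ["ac", "abc", "abbbc"]),     # +      one or more ("ac" fails)
    (r"a{3}",    ["aa", "aaa"]),              # {n}    exactly n
    (r"a{2,}",   ["a", "aa", "aaaa"]),        # {min,} at least min
    (r"a{2,3}",  ["aa", "aaa", "aaaa"]),      # {min,max}
]
for pattern, strings in tests:
    for s in strings:
        print(pattern, s, bool(re.fullmatch(pattern, s)))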
Wildcard
The wildcard . matches any character. For example,
a.b matches any string that contains an "a", and then any character and then "b".
a.*b matches any string that contains an "a", and then the character "b" at some later point.
These constructions can be combined to form arbitrarily complex expressions, much like one can construct arithmetical expressions from numbers and the operations +, −, ×, and ÷.
The precise syntax for regular expressions varies among tools and with context; more detail is given in .
Formal language theory
Regular expressions describe regular languages in formal language theory. They have the same expressive power as regular grammars, although the set of syntactically valid regular expressions is itself a context-free language.
Formal definition
Regular expressions consist of constants, which denote sets of strings, and operator symbols, which denote operations over these sets. The following definition is standard, and found as such in most textbooks on formal language theory. Given a finite alphabet Σ, the following constants are defined
as regular expressions:
(empty set) ∅ denoting the set ∅.
(empty string) ε denoting the set containing only the "empty" string, which has no characters at all.
(literal character) a in Σ denoting the set containing only the character a.
Given regular expressions R and S, the following operations over them are defined
to produce regular expressions:
(concatenation) (RS) denotes the set of strings that can be obtained by concatenating a string accepted by R and a string accepted by S (in that order). For example, let R denote {"ab", "c"} and S denote {"d", "ef"}. Then, (RS) denotes {"abd", "abef", "cd", "cef"}.
(alternation) (R|S) denotes the set union of sets described by R and S. For example, if R describes {"ab", "c"} and S describes {"ab", "d", "ef"}, expression (R|S) describes {"ab", "c", "d", "ef"}.
(Kleene star) (R*) denotes the smallest superset of the set described by R that contains ε and is closed under string concatenation. This is the set of all strings that can be made by concatenating any finite number (including zero) of strings from the set described by R. For example, if R denotes {"0", "1"}, (R*) denotes the set of all finite binary strings (including the empty string). If R denotes {"ab", "c"}, (R*) denotes {ε, "ab", "c", "abab", "abc", "cab", "cc", "ababab", "abcab", ...}.
To avoid parentheses, it is assumed that the Kleene star has the highest priority followed by concatenation, then alternation. If there is no ambiguity, then parentheses may be omitted. For example, (ab)c can be written as abc, and a|(b(c*)) can be written as a|bc*. Many textbooks use the symbols ∪, +, or ∨ for alternation instead of the vertical bar.
Examples:
a|b* denotes {ε, "a", "b", "bb", "bbb", ...}
(a|b)* denotes the set of all strings with no symbols other than "a" and "b", including the empty string: {ε, "a", "b", "aa", "ab", "ba", "bb", "aaa", ...}
ab*(c|ε) denotes the set of strings starting with "a", then zero or more "b"s and finally optionally a "c": {"a", "ac", "ab", "abc", "abb", "abbc", ...}
(0|(1(01*0)*1))* denotes the set of binary numbers that are multiples of 3: { ε, "0", "00", "11", "000", "011", "110", "0000", "0011", "0110", "1001", "1100", "1111", "00000", ...}
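The last expression can be checked exhaustively for small values; a Python sketch (the bound of 64 is arbitrary):

import re

# Binary strings that are multiples of 3 (the expression also accepts the empty string).
pattern = re.compile(r"(0|(1(01*0)*1))*")
for n in range(64):
    s = format(n, "b")  # binary representation without leading zeros
    assert bool(pattern.fullmatch(s)) == (n % 3 == 0), (n, s)
print("the expression agrees with n % 3 == 0 for all n < 64")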
The derivative of a regular expression can be defined using the Brzozowski derivative.
Expressive power and compactness
The formal definition of regular expressions is minimal on purpose, and avoids defining ? and +—these can be expressed as follows: a+=aa*, and a?=(a|ε). Sometimes the complement operator is added, to give a generalized regular expression; here Rc matches all strings over Σ* that do not match R. In principle, the complement operator is redundant, because it does not grant any more expressive power. However, it can make a regular expression much more concise—eliminating a single complement operator can cause a double exponential blow-up of its length. (For example, a regular expression of length about 850 whose complement has a length of about 2^32 can be found at File:RegexComplementBlowup.png.)
Regular expressions in this sense can express the regular languages, exactly the class of languages accepted by deterministic finite automata. There is, however, a significant difference in compactness. Some classes of regular languages can only be described by deterministic finite automata whose size grows exponentially in the size of the shortest equivalent regular expressions. The standard example here is the languages
Lk consisting of all strings over the alphabet {a,b} whose kth-from-last letter equals a. On the one hand, a regular expression describing L4 is given by
(a|b)*a(a|b)(a|b)(a|b).
Generalizing this pattern to Lk gives an expression consisting of (a|b)*a followed by k−1 copies of (a|b).
On the other hand, it is known that every deterministic finite automaton accepting the language Lk must have at least 2^k states. Luckily, there is a simple mapping from regular expressions to the more general nondeterministic finite automata (NFAs) that does not lead to such a blowup in size; for this reason NFAs are often used as alternative representations of regular languages. NFAs are a simple variation of the type-3 grammars of the Chomsky hierarchy.
In the opposite direction, there are many languages easily described by a DFA that are not easily described by a regular expression. For instance, determining the validity of a given ISBN requires computing the modulus of the integer base 11, and can be easily implemented with an 11-state DFA. However, converting it to a regular expression results in a 2.14-megabyte file.
Given a regular expression, Thompson's construction algorithm computes an equivalent nondeterministic finite automaton. A conversion in the opposite direction is achieved by Kleene's algorithm.
Finally, many real-world "regular expression" engines implement features that cannot be described by the regular expressions in the sense of formal language theory; rather, they implement regexes. See below for more on this.
Deciding equivalence of regular expressions
As seen in many of the examples above, there is more than one way to construct a regular expression to achieve the same results.
It is possible to write an algorithm that, for two given regular expressions, decides whether the described languages are equal; the algorithm reduces each expression to a minimal deterministic finite state machine, and determines whether they are isomorphic (equivalent).
Algebraic laws for regular expressions can be obtained using a method by Gischer which is best explained along an example: In order to check whether (X+Y)∗ and (X∗ Y∗)∗ denote the same regular language, for all regular expressions X, Y, it is necessary and sufficient to check whether the particular regular expressions (a+b)∗ and (a∗ b∗)∗ denote the same language over the alphabet Σ={a,b}. More generally, an equation E=F between regular-expression terms with variables holds if, and only if, its instantiation with different variables replaced by different symbol constants holds.
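Exhaustive testing over short strings is not a proof, but it illustrates the substitution idea; a small Python sketch comparing (a|b)* with (a*b*)* over all strings of length at most 10 (the length bound is arbitrary):

import re
from itertools import product

p1 = re.compile(r"(a|b)*")
p2 = re.compile(r"(a*b*)*")
for length in range(11):
    for chars in product("ab", repeat=length):
        s = "".join(chars)
        # The two patterns must accept exactly the same strings.
        assert bool(p1.fullmatch(s)) == bool(p2.fullmatch(s)), s
print("the patterns agree on all strings over {a, b} of length <= 10")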
Every regular expression can be written solely in terms of the Kleene star and set unions over finite words. This is a surprisingly difficult problem. As simple as the regular expressions are, there is no method to systematically rewrite them to some normal form. The historical lack of a complete set of axioms led to the star height problem. In 1991, Dexter Kozen axiomatized regular expressions as a Kleene algebra, using equational and Horn clause axioms.
Already in 1964, Redko had proved that no finite set of purely equational axioms can characterize the algebra of regular languages.
Syntax
A regex pattern matches a target string. The pattern is composed of a sequence of atoms. An atom is a single point within the regex pattern which it tries to match to the target string. The simplest atom is a literal, but grouping parts of the pattern to match an atom will require using ( ) as metacharacters. Metacharacters help form: atoms; quantifiers telling how many atoms (and whether it is a greedy quantifier or not); a logical OR character, which offers a set of alternatives, and a logical NOT character, which negates an atom's existence; and backreferences to refer to previous atoms of a completing pattern of atoms. A match is made, not when all the atoms of the string are matched, but rather when all the pattern atoms in the regex have matched. The idea is to make a small pattern of characters stand for a large number of possible strings, rather than compiling a large list of all the literal possibilities.
Depending on the regex processor there are about fourteen metacharacters, characters that may or may not have their literal character meaning, depending on context, or whether they are "escaped", i.e. preceded by an escape sequence, in this case, the backslash \. Modern and POSIX extended regexes use metacharacters more often than their literal meaning, so to avoid "backslash-osis" or leaning toothpick syndrome, they have a metacharacter escape to a literal mode; starting out, however, they instead have the four bracketing metacharacters ( ) and { } be primarily literal, and "escape" this usual meaning to become metacharacters. Common standards implement both. The usual metacharacters are {}[]()^$.|*+? and \. The usual characters that become metacharacters when escaped are dswDSW and N.
Delimiters
When entering a regex in a programming language, they may be represented as a usual string literal, hence usually quoted; this is common in C, Java, and Python for instance, where the regex re is entered as "re". However, they are often written with slashes as delimiters, as in /re/ for the regex re. This originates in ed, where / is the editor command for searching, and an expression /re/ can be used to specify a range of lines (matching the pattern), which can be combined with other commands on either side, most famously g/re/p as in grep ("global regex print"), which is included in most Unix-based operating systems, such as Linux distributions. A similar convention is used in sed, where search and replace is given by s/re/replacement/ and patterns can be joined with a comma to specify a range of lines as in /re1/,/re2/. This notation is particularly well known due to its use in Perl, where it forms part of the syntax distinct from normal string literals. In some cases, such as sed and Perl, alternative delimiters can be used to avoid collision with contents, and to avoid having to escape occurrences of the delimiter character in the contents. For example, in sed the command s,/,X, will replace a / with an X, using commas as delimiters.
IEEE POSIX Standard
The IEEE POSIX standard has three sets of compliance: BRE (Basic Regular Expressions), ERE (Extended Regular Expressions), and SRE (Simple Regular Expressions). (The relevant standard is ISO/IEC 9945-2:1993 Information technology – Portable Operating System Interface (POSIX) – Part 2: Shell and Utilities, successively revised as ISO/IEC 9945-2:2002 Information technology – Portable Operating System Interface (POSIX) – Part 2: System Interfaces, ISO/IEC 9945-2:2003, and currently ISO/IEC/IEEE 9945:2009 Information technology – Portable Operating System Interface (POSIX) Base Specifications, Issue 7.) SRE is deprecated (The Single Unix Specification, Version 2) in favor of BRE, as both provide backward compatibility. The subsection below covering the character classes applies to both BRE and ERE.
BRE and ERE work together. ERE adds ?, +, and |, and it removes the need to escape the metacharacters ( ) and { }, which are required in BRE. Furthermore, as long as the POSIX standard syntax for regexes is adhered to, there can be, and often is, additional syntax to serve specific (yet POSIX compliant) applications. Although POSIX.2 leaves some implementation specifics undefined, BRE and ERE provide a "standard" which has since been adopted as the default syntax of many tools, where the choice of BRE or ERE modes is usually a supported option. For example, GNU grep has the following options: "grep -E" for ERE, "grep -G" for BRE (the default), and "grep -P" for Perl regexes.
Perl regexes have become a de facto standard, having a rich and powerful set of atomic expressions. Perl has no "basic" or "extended" levels. As in POSIX EREs, ( ) and { } are treated as metacharacters unless escaped; other metacharacters are known to be literal or symbolic based on context alone. Additional functionality includes lazy matching, backreferences, named capture groups, and recursive patterns.
POSIX basic and extended
In the POSIX standard, Basic Regular Syntax (BRE) requires that the metacharacters ( ) and { } be designated \(\) and \{\}, whereas Extended Regular Syntax (ERE) does not.
Metacharacter and description:
^  Matches the starting position within the string. In line-based tools, it matches the starting position of any line.
.  Matches any single character (many applications exclude newlines, and exactly which characters are considered newlines is flavor-, character-encoding-, and platform-specific, but it is safe to assume that the line feed character is included). Within POSIX bracket expressions, the dot character matches a literal dot. For example, a.c matches "abc", etc., but [a.c] matches only "a", ".", or "c".
[ ]  A bracket expression. Matches a single character that is contained within the brackets. For example, [abc] matches "a", "b", or "c". [a-z] specifies a range which matches any lowercase letter from "a" to "z". These forms can be mixed: [abcx-z] matches "a", "b", "c", "x", "y", or "z", as does [a-cx-z]. The - character is treated as a literal character if it is the last or the first (after the ^, if present) character within the brackets: [abc-], [-abc], [^-abc]. Backslash escapes are not allowed. The ] character can be included in a bracket expression if it is the first (after the ^, if present) character: []abc], [^]abc].
[^ ]  Matches a single character that is not contained within the brackets. For example, [^abc] matches any character other than "a", "b", or "c". [^a-z] matches any single character that is not a lowercase letter from "a" to "z". Likewise, literal characters and ranges can be mixed.
$  Matches the ending position of the string or the position just before a string-ending newline. In line-based tools, it matches the ending position of any line.
( )  Defines a marked subexpression, also called a capturing group, which is essential for extracting the desired part of the text (see also the next entry, \n). BRE mode requires \( \).
\n  Matches what the nth marked subexpression matched, where n is a digit from 1 to 9. This construct is defined in the POSIX standard. Some tools allow referencing more than nine capturing groups. Also known as a back-reference, this feature is supported in BRE mode.
*  Matches the preceding element zero or more times. For example, ab*c matches "ac", "abc", "abbbc", etc. [xyz]* matches "", "x", "y", "z", "zx", "zyx", "xyzzy", and so on. (ab)* matches "", "ab", "abab", "ababab", and so on.
{m,n}  Matches the preceding element at least m and not more than n times. For example, a{3,5} matches only "aaa", "aaaa", and "aaaaa". This is not found in a few older instances of regexes. BRE mode requires \{m,n\}.
Examples:
.at matches any three-character string ending with "at", including "hat", "cat", "bat", "4at", "#at" and " at" (starting with a space).
[hc]at matches "hat" and "cat".
[^b]at matches all strings matched by .at except "bat".
[^hc]at matches all strings matched by .at other than "hat" and "cat".
^[hc]at matches "hat" and "cat", but only at the beginning of the string or line.
[hc]at$ matches "hat" and "cat", but only at the end of the string or line.
\[.\] matches any single character surrounded by "[" and "]" since the brackets are escaped, for example: "[a]", "[b]", "[7]", "[@]", "[]]", and "[ ]" (bracket space bracket).
s.* matches s followed by zero or more characters, for example: "s", "saw", "seed", "s3w96.7", and "s6#h%(>>>m n mQ".
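Several of these examples use constructs that Python's re shares with POSIX; a short sketch (the word list is arbitrary, and the POSIX-only bracket forms such as []abc] are omitted):

import re

words = ["hat", "cat", "bat", "4at", "at", "chat"]
for pat in [r".at", r"[hc]at", r"[^b]at", r"^[hc]at$"]:
    matched = [w for w in words if re.fullmatch(pat, w)]
    print(pat, matched)
# .at       -> ['hat', 'cat', 'bat', '4at']
# [hc]at    -> ['hat', 'cat']
# [^b]at    -> ['hat', 'cat', '4at']
# ^[hc]at$  -> ['hat', 'cat']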
According to Russ Cox, the POSIX specification requires ambiguous subexpressions to be handled in a way different from Perl's. The committee replaced Perl's rules with one that is simple to explain, but the new "simple" rules are actually more complex to implement: they were incompatible with pre-existing tooling and made it essentially impossible to define a "lazy match" (see below) extension. As a result, very few programs actually implement the POSIX subexpression rules (even when they implement other parts of the POSIX syntax).
Metacharacters in POSIX extended
The meaning of metacharacters escaped with a backslash is reversed for some characters in the POSIX Extended Regular Expression (ERE) syntax. With this syntax, a backslash causes the metacharacter to be treated as a literal character. So, for example, \( \) is now ( ) and \{ \} is now { }. Additionally, support is removed for \n backreferences and the following metacharacters are added:
Metacharacter and description:
?  Matches the preceding element zero or one time. For example, ab?c matches only "ac" or "abc".
+  Matches the preceding element one or more times. For example, ab+c matches "abc", "abbc", "abbbc", and so on, but not "ac".
|  The choice (also known as alternation or set union) operator matches either the expression before or the expression after the operator. For example, abc|def matches "abc" or "def".
Examples:
[hc]?at matches "at", "hat", and "cat".
[hc]*at matches "at", "hat", "cat", "hhat", "chat", "hcat", "cchchat", and so on.
[hc]+at matches "hat", "cat", "hhat", "chat", "hcat", "cchchat", and so on, but not "at".
cat|dog matches "cat" or "dog".
POSIX Extended Regular Expressions can often be used with modern Unix utilities by including the command line flag -E.
Character classes
The character class is the most basic regex concept after a literal match. It makes one small sequence of characters match a larger set of characters. For example, [A-Z] could stand for any uppercase letter in the English alphabet, and \d could mean any digit. Character classes apply to both POSIX levels.
When specifying a range of characters, such as [a-Z] (i.e. lowercase a to uppercase Z), the computer's locale settings determine the contents by the numeric ordering of the character encoding. They could store digits in that sequence, or the ordering could be abc...zABC...Z, or aAbBcC...zZ. So the POSIX standard defines a character class, which will be known by the regex processor installed. Those definitions are in the following table:
Description, followed by the available POSIX, Perl/Tcl, Vim, Java, and ASCII notations:
ASCII characters: \p{ASCII} [\x00-\x7F]
Alphanumeric characters: [:alnum:] \p{Alnum} [A-Za-z0-9]
Alphanumeric characters plus "_": \w \w \w [A-Za-z0-9_]
Non-word characters: \W \W \W [^A-Za-z0-9_]
Alphabetic characters: [:alpha:] \a \p{Alpha} [A-Za-z]
Space and tab: [:blank:] \s \p{Blank} [ \t]
Word boundaries: \b \< \> \b (?<=\W)(?=\w)|(?<=\w)(?=\W)
Non-word boundaries: \B (?<=\W)(?=\W)|(?<=\w)(?=\w)
Control characters: [:cntrl:] \p{Cntrl} [\x00-\x1F\x7F]
Digits: [:digit:] \d \d \p{Digit} or \d [0-9]
Non-digits: \D \D \D [^0-9]
Visible characters: [:graph:] \p{Graph} [\x21-\x7E]
Lowercase letters: [:lower:] \l \p{Lower} [a-z]
Visible characters and the space character: [:print:] \p \p{Print} [\x20-\x7E]
Punctuation characters: [:punct:] \p{Punct} [][!"#$%&'()*+,./:;<=>?@\^_`{|}~-]
Whitespace characters: [:space:] \s \_s \p{Space} or \s [ \t\r\n\v\f]
Non-whitespace characters: \S \S \S [^ \t\r\n\v\f]
Uppercase letters: [:upper:] \u \p{Upper} [A-Z]
Hexadecimal digits: [:xdigit:] \x \p{XDigit} [A-Fa-f0-9]
POSIX character classes can only be used within bracket expressions. For example, [[:upper:]ab] matches the uppercase letters and lowercase "a" and "b".
An additional non-POSIX class understood by some tools is [:word:], which is usually defined as [:alnum:] plus underscore. This reflects the fact that in many programming languages these are the characters that may be used in identifiers. The editor Vim further distinguishes word and word-head classes (using the notation \w and \h) since in many programming languages the characters that can begin an identifier are not the same as those that can occur in other positions: numbers are generally excluded, so an identifier would look like \h\w* or [[:alpha:]_][[:alnum:]_]* in POSIX notation.
Note that what the POSIX regex standards call character classes are commonly referred to as POSIX character classes in other regex flavors which support them. With most other regex flavors, the term character class is used to describe what POSIX calls bracket expressions.
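Python's re module does not recognize the POSIX [:class:] names, but its Perl-style shorthands cover the common cases; a quick sketch (the sample string is arbitrary, and re.ASCII is used so \w matches the ASCII definition given in the table above):

import re

sample = "Grep_2 the file!"
print(re.findall(r"[A-Za-z0-9_]+", sample))        # explicit bracket expression
print(re.findall(r"\w+", sample, flags=re.ASCII))  # \w shorthand, ASCII-only
print(re.findall(r"\d", sample))                   # digits: ['2']
print(re.findall(r"\s", sample))                   # whitespace characters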
Perl and PCRE
Because of its expressive power and (relative) ease of reading, many other utilities and programming languages have adopted syntax similar to Perl's—for example, Java, JavaScript, Julia, Python, Ruby, Qt, Microsoft's .NET Framework, and XML Schema. Some languages and tools such as Boost and PHP support multiple regex flavors. Perl-derivative regex implementations are not identical and usually implement a subset of features found in Perl 5.0, released in 1994. Perl sometimes does incorporate features initially found in other languages. For example, Perl 5.10 implements syntactic extensions originally developed in PCRE and Python.
Lazy matching
In Python and some other implementations (e.g. Java), the three common quantifiers (*, +, and ?) are greedy by default because they match as many characters as possible. The regex ".+" (including the double-quotes) applied to the string
"Ganymede," he continued, "is the largest moon in the Solar System."
matches the entire line (because the entire line begins and ends with a double-quote) instead of matching only the first part, "Ganymede,". The aforementioned quantifiers may, however, be made lazy or minimal or reluctant, matching as few characters as possible, by appending a question mark: ".+?" matches only "Ganymede,".
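The difference is easy to reproduce; a minimal Python sketch using the example sentence:

import re

line = '"Ganymede," he continued, "is the largest moon in the Solar System."'
greedy = re.search(r'".+"', line).group()   # expands to the last double-quote
lazy = re.search(r'".+?"', line).group()    # stops at the earliest closing double-quote
print(greedy)  # the entire line
print(lazy)    # "Ganymede,"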
Possessive matching
In Java and Python 3.11+ (see CPython issue #34627, "SRE: Atomic Grouping (?>...) is not supported"), quantifiers may be made possessive by appending a plus sign, which disables backing off (in a backtracking engine), even if doing so would allow the overall match to succeed: while the regex ".*" applied to the string
"Ganymede," he continued, "is the largest moon in the Solar System."
matches the entire line, the regex ".*+" does not, because .*+ consumes the entire input, including the final ". Thus, possessive quantifiers are most useful with negated character classes, e.g. "[^"]*+", which matches "Ganymede," when applied to the same string.
Another common extension serving the same function is atomic grouping, which disables backtracking for a parenthesized group. The typical syntax is (?>group). For example, while (wi|w)i matches both "wi" and "wii", (?>wi|w)i only matches "wii" because the engine is forbidden from backtracking and so cannot try setting the group to "w" after matching "wi".
Possessive quantifiers are easier to implement than greedy and lazy quantifiers, and are typically more efficient at runtime.
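A sketch of both extensions in Python (this requires Python 3.11 or newer, as noted above; earlier versions reject the syntax, and the sample strings are illustrative):

import re  # possessive quantifiers and atomic groups need Python 3.11+

line = '"Ganymede," he continued, "is the largest moon in the Solar System."'
print(re.search(r'".*"', line))      # backtracking lets this match the whole line
print(re.search(r'".*+"', line))     # None: .*+ also consumes the closing quote and cannot give it back
print(re.search(r'"[^"]*+"', line))  # matches "Ganymede," with no backtracking needed

# Atomic grouping: (?>wi|w) refuses to retry the shorter alternative.
print(re.fullmatch(r"(wi|w)i", "wi"))     # match: the engine backtracks the group to 'w'
print(re.fullmatch(r"(?>wi|w)i", "wi"))   # None: no backtracking into the atomic group
print(re.fullmatch(r"(?>wi|w)i", "wii"))  # match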
IETF I-Regexp
IETF RFC 9485 describes "I-Regexp: An Interoperable Regular Expression Format". It specifies a limited subset of regular-expression idioms designed to be interoperable, i.e. produce the same effect, in a large number of regular-expression libraries. I-Regexp is also limited to matching, i.e. providing a true or false match between a regular expression and a given piece of text. Thus, it lacks advanced features such as capture groups, lookahead, and backreferences.
Patterns for non-regular languages
Many features found in virtually all modern regular expression libraries provide an expressive power that exceeds the regular languages. For example, many implementations allow grouping subexpressions with parentheses and recalling the value they match in the same expression (). This means that, among other things, a pattern can match strings of repeated words like "papa" or "WikiWiki", called squares in formal language theory. The pattern for these strings is (.+)\1.
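A short Python sketch of the square-matching pattern (the sample strings are arbitrary):

import re

square = re.compile(r"(.+)\1")  # \1 must repeat exactly what group 1 captured
for s in ["papa", "WikiWiki", "banana", "abc"]:
    print(s, bool(square.fullmatch(s)))
# papa and WikiWiki are squares; banana and abc are not.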
The language of squares is not regular, nor is it context-free, due to the pumping lemma. However, pattern matching with an unbounded number of backreferences, as supported by numerous modern tools, is still context sensitive. Theorem 3 (p.9) The general problem of matching any number of backreferences is NP-complete, and the execution time for known algorithms grows exponentially by the number of backreference groups used.
However, many tools, libraries, and engines that provide such constructions still use the term regular expression for their patterns. This has led to a nomenclature where the term regular expression has different meanings in formal language theory and pattern matching. For this reason, some people have taken to using the term regex, regexp, or simply pattern to describe the latter. Larry Wall, author of the Perl programming language, writes in an essay about the design of Raku:
Assertions
Lookbehind and lookahead assertions in Perl regular expressions:
Positive lookbehind: (?<=)   Positive lookahead: (?=)
Negative lookbehind: (?<!)   Negative lookahead: (?!)
Other features not found in descriptions of regular languages include assertions. These include the ubiquitous ^ and $, used since at least 1970 (reprinted as "QED Text Editor Reference Manual", MHCC-004, Murray Hill Computing, Bell Laboratories, October 1972), as well as some more sophisticated extensions like lookaround that appeared in 1994. Lookarounds define the surrounding of a match and do not spill into the match itself, a feature only relevant for the use case of string searching. Some of them can be simulated in a regular language by treating the surroundings as a part of the language as well.
The lookahead assertions (?=...) and (?!...) have been attested since at least 1994, starting with Perl 5. The lookbehind assertions (?<=...) and (?<!...) are attested since 1997 in a commit by Ilya Zakharevich to Perl 5.005.
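Python's re module uses the same four assertion forms; a brief sketch (the sample text is arbitrary):

import re

text = "price: 42 USD, 13 EUR, 7 USD"
print(re.findall(r"\d+(?= USD)", text))      # positive lookahead: ['42', '7']
print(re.findall(r"\d+(?! USD|\d)", text))   # negative lookahead: ['13']
print(re.findall(r"(?<=price: )\d+", text))  # positive lookbehind: ['42']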
Implementations and running times
There are at least three different algorithms that decide whether and how a given regex matches a string.
The oldest and fastest relies on a result in formal language theory that allows every nondeterministic finite automaton (NFA) to be transformed into a deterministic finite automaton (DFA). The DFA can be constructed explicitly and then run on the resulting input string one symbol at a time. Constructing the DFA for a regular expression of size m has the time and memory cost of O(2m), but it can be run on a string of size n in time O(n). Note that the size of the expression is the size after abbreviations, such as numeric quantifiers, have been expanded.
An alternative approach is to simulate the NFA directly, essentially building each DFA state on demand and then discarding it at the next step. This keeps the DFA implicit and avoids the exponential construction cost, but running cost rises to O(mn). The explicit approach is called the DFA algorithm and the implicit approach the NFA algorithm. Adding caching to the NFA algorithm is often called the "lazy DFA" algorithm, or just the DFA algorithm without making a distinction. These algorithms are fast, but using them for recalling grouped subexpressions, lazy quantification, and similar features is tricky. Modern implementations include the re1-re2-sregex family based on Cox's code.
The third algorithm is to match the pattern against the input string by backtracking. This algorithm is commonly called NFA, but this terminology can be confusing. Its running time can be exponential, which simple implementations exhibit when matching against expressions like (a|aa)*b that contain both alternation and unbounded quantification and force the algorithm to consider an exponentially increasing number of sub-cases. This behavior can cause a security problem called Regular expression Denial of Service (ReDoS).
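The blow-up is easy to provoke in a backtracking engine such as Python's re; the sketch below times failing matches against (a+)+b, another classic pathological pattern (keep n small, since the time grows roughly exponentially with each additional character):

import re
import time

pattern = re.compile(r"(a+)+b")  # nested unbounded quantifiers invite catastrophic backtracking
for n in (10, 15, 20, 24):
    text = "a" * n  # no 'b', so every way of splitting the a's is tried before failing
    start = time.perf_counter()
    pattern.search(text)
    print(n, f"{time.perf_counter() - start:.3f}s")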
Although backtracking implementations only give an exponential guarantee in the worst case, they provide much greater flexibility and expressive power. For example, any implementation which allows the use of backreferences, or implements the various extensions introduced by Perl, must include some kind of backtracking. Some implementations try to provide the best of both algorithms by first running a fast DFA algorithm, and revert to a potentially slower backtracking algorithm only when a backreference is encountered during the match. GNU grep (and the underlying gnulib DFA) uses such a strategy.
Sublinear runtime algorithms have been achieved using Boyer-Moore (BM) based algorithms and related DFA optimization techniques such as the reverse scan. GNU grep, which supports a wide variety of POSIX syntaxes and extensions, uses BM for a first-pass prefiltering, and then uses an implicit DFA. Wu agrep, which implements approximate matching, combines the prefiltering into the DFA in BDM (backward DAWG matching). NR-grep's BNDM extends the BDM technique with Shift-Or bit-level parallelism.
A few theoretical alternatives to backtracking for backreferences exist, and their "exponents" are tamer in that they are only related to the number of backreferences, a fixed property of some regexp languages such as POSIX. One naive method that duplicates a non-backtracking NFA for each backreference has time and space complexity that is polynomial in the haystack length n, with an exponent that grows with the number k of backreferences in the regexp. A very recent theoretical work based on memory automata gives a tighter bound based on "active" variable nodes used, and a polynomial possibility for some backreferenced regexps.
Unicode
In theoretical terms, any token set can be matched by regular expressions as long as it is pre-defined. In terms of historical implementations, regexes were originally written to use ASCII characters as their token set though regex libraries have supported numerous other character sets. Many modern regex engines offer at least some support for Unicode. In most respects it makes no difference what the character set is, but some issues do arise when extending regexes to support Unicode.
Supported encoding. Some regex libraries expect to work on some particular encoding instead of on abstract Unicode characters. Many of these require the UTF-8 encoding, while others might expect UTF-16, or UTF-32. In contrast, Perl and Java are agnostic on encodings, instead operating on decoded characters internally.
Supported Unicode range. Many regex engines support only the Basic Multilingual Plane, that is, the characters which can be encoded with only 16 bits. Currently, only a few regex engines (e.g., Perl's and Java's) can handle the full 21-bit Unicode range.
Extending ASCII-oriented constructs to Unicode. For example, in ASCII-based implementations, character ranges of the form [x-y] are valid wherever x and y have code points in the range [0x00,0x7F] and codepoint(x) ≤ codepoint(y). The natural extension of such character ranges to Unicode would simply change the requirement that the endpoints lie in [0x00,0x7F] to the requirement that they lie in [0x0000,0x10FFFF]. However, in practice this is often not the case. Some implementations, such as that of gawk, do not allow character ranges to cross Unicode blocks. A range like [0x61,0x7F] is valid since both endpoints fall within the Basic Latin block, as is [0x0530,0x0560] since both endpoints fall within the Armenian block, but a range like [0x0061,0x0532] is invalid since it includes multiple Unicode blocks. Other engines, such as that of the Vim editor, allow block-crossing but the character values must not be more than 256 apart.
Case insensitivity. Some case-insensitivity flags affect only the ASCII characters. Other flags affect all characters. Some engines have two different flags, one for ASCII, the other for Unicode. Exactly which characters belong to the POSIX classes also varies.
Cousins of case insensitivity. As ASCII has case distinction, case insensitivity became a logical feature in text searching. Unicode introduced alphabetic scripts without case like Devanagari. For these, case sensitivity is not applicable. For scripts like Chinese, another distinction seems logical: between traditional and simplified. In Arabic scripts, insensitivity to initial, medial, final, and isolated position may be desired. In Japanese, insensitivity between hiragana and katakana is sometimes useful.
Normalization. Unicode has combining characters. Like old typewriters, plain base characters (white spaces, punctuation characters, symbols, digits, or letters) can be followed by one or more non-spacing symbols (usually diacritics, like accent marks modifying letters) to form a single printable character; but Unicode also provides a limited set of precomposed characters, i.e. characters that already include one or more combining characters. A sequence of a base character + combining characters should be matched with the identical single precomposed character (only some of these combining sequences can be precomposed into a single Unicode character, but infinitely many other combining sequences are possible in Unicode, and needed for various languages, using one or more combining characters after an initial base character; these combining sequences may include a base character or combining characters partially precomposed, but not necessarily in canonical order and not necessarily using the canonical precompositions). The process of standardizing sequences of a base character + combining characters by decomposing these canonically equivalent sequences, before reordering them into canonical order (and optionally recomposing some combining characters into the leading base character) is called normalization.
New control codes. Unicode introduced, among other codes, byte order marks and text direction markers. These codes might have to be dealt with in a special way.
Introduction of character classes for Unicode blocks, scripts, and numerous other character properties. Block properties are much less useful than script properties, because a block can have code points from several different scripts, and a script can have code points from several different blocks. In Perl and the java.util.regex library, properties of the form \p{InX} or \p{Block=X} match characters in block X and \P{InX} or \P{Block=X} matches code points not in that block. Similarly, \p{Armenian}, \p{IsArmenian}, or \p{Script=Armenian} matches any character in the Armenian script. In general, \p{X} matches any character with either the binary property X or the general category X. For example, \p{Lu}, \p{Uppercase_Letter}, or \p{GC=Lu} matches any uppercase letter. Binary properties that are not general categories include \p{White_Space}, \p{Alphabetic}, \p{Math}, and \p{Dash}. Examples of non-binary properties are \p{Bidi_Class=Right_to_Left}, \p{Word_Break=A_Letter}, and \p{Numeric_Value=10}.
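Python's standard re module does not support the \p{...} syntax, but the third-party regex package does (assuming it is installed, e.g. via pip install regex); a brief sketch with an arbitrary sample string:

import regex  # third-party package; supports Unicode property classes

text = "Größe ΔT = 3 K"
print(regex.findall(r"\p{Lu}", text))     # uppercase letters: ['G', 'Δ', 'T', 'K']
print(regex.findall(r"\p{Greek}", text))  # characters in the Greek script: ['Δ']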
Language support
Most general-purpose programming languages support regex capabilities, either natively or via libraries.
Uses
Regexes are useful in a wide variety of text processing tasks, and more generally string processing, where the data need not be textual. Common applications include data validation, data scraping (especially web scraping), data wrangling, simple parsing, the production of syntax highlighting systems, and many other tasks.
Some high-end desktop publishing software has the ability to use regexes to automatically apply text styling, saving the person doing the layout from laboriously doing this by hand for anything that can be matched by a regex. For example, by defining a character style that makes text into small caps and then using the regex [A-Z]{4,} to apply that style, any word of four or more consecutive capital letters will be automatically rendered as small caps instead.
While regexes would be useful on Internet search engines, processing them across the entire database could consume excessive computer resources depending on the complexity and design of the regex. Although in many cases system administrators can run regex-based queries internally, most search engines do not offer regex support to the public. Notable exceptions include Google Code Search and Exalead. However, Google Code Search was shut down in January 2012.
Examples
The specific syntax rules vary depending on the specific implementation, programming language, or library in use. Additionally, the functionality of regex implementations can vary between versions.
Because regexes can be difficult to both explain and understand without examples, interactive websites for testing regexes are a useful resource for learning regexes by experimentation.
This section provides a basic description of some of the properties of regexes by way of illustration.
The following conventions are used in the examples. (The character 'm' is not always required to specify a Perl match operation. For example, m/[^abc]/ could also be rendered as /[^abc]/. The 'm' is only necessary if the user wishes to specify a match operation without using a forward slash as the regex delimiter. Sometimes it is useful to specify an alternate regex delimiter in order to avoid "delimiter collision". See 'perldoc perlre' for more details.)
metacharacter(s) ;; the metacharacters column specifies the regex syntax being demonstrated
=~ m// ;; indicates a regex match operation in Perl
=~ s/// ;; indicates a regex substitution operation in Perl
These regexes are all Perl-like syntax. Standard POSIX regular expressions are different.
Unless otherwise indicated, the following examples conform to the Perl programming language, release 5.8.8, January 31, 2006. This means that other implementations may lack support for some parts of the syntax shown here (e.g. basic vs. extended regex, \( \) vs. (), or lack of \d instead of POSIX [:digit:]).
The syntax and conventions used in these examples coincide with those of other programming environments as well (e.g., see Java in a Nutshell, p. 213; Python Scripting for Computational Science, p. 320; Programming PHP, p. 106).
Metacharacter(s), description, and example (all the if statements return a TRUE value):
.  Normally matches any character except a newline. Within square brackets the dot is literal.
$string1 = "Hello World\n";
if ($string1 =~ m/...../) {
print "$string1 has length >= 5.\n";
}
Output:
Hello World
has length >= 5.
( )  Groups a series of pattern elements to a single element. When you match a pattern within parentheses, you can use any of $1, $2, ... later to refer to the previously matched pattern. Some implementations may use a backslash notation instead, like \1, \2.
$string1 = "Hello World\n";
if ($string1 =~ m/(H..).(o..)/) {
print "We matched '$1' and '$2'.\n";
}
Output:
We matched 'Hel' and 'o W'.
+  Matches the preceding pattern element one or more times.
$string1 = "Hello World\n";
if ($string1 =~ m/l+/) {
print "There are one or more consecutive letter \"l\"'s in $string1.\n";
}
Output:
There are one or more consecutive letter "l"'s in Hello World.

? ;; Matches the preceding pattern element zero or one time.
$string1 = "Hello World\n";
if ($string1 =~ m/H.?e/) {
print "There is an 'H' and a 'e' separated by ";
print "0-1 characters (e.g., He Hue Hee).\n";
}
Output:
There is an 'H' and a 'e' separated by 0-1 characters (e.g., He Hue Hee).

? ;; Modifies the *, +, ? or {M,N}'d regex that comes before to match as few times as possible.
$string1 = "Hello World\n";
if ($string1 =~ m/(l.+?o)/) {
print "The non-greedy match with 'l' followed by one or ";
print "more characters is 'llo' rather than 'llo Wo'.\n";
}
Output:
The non-greedy match with 'l' followed by one or more characters is 'llo' rather than 'llo Wo'.

* ;; Matches the preceding pattern element zero or more times.
$string1 = "Hello World\n";
if ($string1 =~ m/el*o/) {
print "There is an 'e' followed by zero to many ";
print "'l' followed by 'o' (e.g., eo, elo, ello, elllo).\n";
}
Output:
There is an 'e' followed by zero to many 'l' followed by 'o' (e.g., eo, elo, ello, elllo).

{M,N} ;; Denotes the minimum M and the maximum N match count. N can be omitted and M can be 0: {M} matches "exactly" M times; {M,} matches "at least" M times; {0,N} matches "at most" N times. x* y+ z? is thus equivalent to x{0,} y{1,} z{0,1}.
$string1 = "Hello World\n";
if ($string1 =~ m/l{1,2}/) {
print "There exists a substring with at least 1 ";
print "and at most 2 l's in $string1\n";
}
Output:
There exists a substring with at least 1 and at most 2 l's in Hello World

[…] ;; Denotes a set of possible character matches.
$string1 = "Hello World\n";
if ($string1 =~ m/[aeiou]+/) {
print "$string1 contains one or more vowels.\n";
}
Output:
Hello World
contains one or more vowels.

| ;; Separates alternate possibilities.
$string1 = "Hello World\n";
if ($string1 =~ m/(Hello|Hi|Pogo)/) {
print "$string1 contains at least one of Hello, Hi, or Pogo.";
}
Output:
Hello World
contains at least one of Hello, Hi, or Pogo.

\b ;; Matches a zero-width boundary between a word-class character (see next) and either a non-word-class character or an edge; same as (^\w|\w$|\W\w|\w\W).
$string1 = "Hello World\n";
if ($string1 =~ m/llo\b/) {
print "There is a word that ends with 'llo'.\n";
}
Output:
There is a word that ends with 'llo'.

\w ;; Matches an alphanumeric character, including "_"; same as [A-Za-z0-9_] in ASCII, and [\p{Alphabetic}\p{GC=Mark}\p{GC=Decimal_Number}\p{GC=Connector_Punctuation}] in Unicode, where the Alphabetic property contains more than Latin letters, and the Decimal_Number property contains more than Arabic digits.
$string1 = "Hello World\n";
if ($string1 =~ m/\w/) {
print "There is at least one alphanumeric ";
print "character in $string1 (A-Z, a-z, 0-9, _).\n";
}
Output:
There is at least one alphanumeric character in Hello World
(A-Z, a-z, 0-9, _).

\W ;; Matches a non-alphanumeric character, excluding "_"; same as [^A-Za-z0-9_] in ASCII, and [^\p{Alphabetic}\p{GC=Mark}\p{GC=Decimal_Number}\p{GC=Connector_Punctuation}] in Unicode.
$string1 = "Hello World\n";
if ($string1 =~ m/\W/) {
print "The space between Hello and ";
print "World is not alphanumeric.\n";
}
Output:
The space between Hello and World is not alphanumeric.

\s ;; Matches a whitespace character, which in ASCII are tab, line feed, form feed, carriage return, and space; in Unicode, also matches no-break spaces, next line, and the variable-width spaces (among others).
$string1 = "Hello World\n";
if ($string1 =~ m/\s.*\s/) {
print "In $string1 there are TWO whitespace characters, which may";
print " be separated by other characters.\n";
}
Output:
In Hello World
there are TWO whitespace characters, which may be separated by other characters.

\S ;; Matches anything but a whitespace.
$string1 = "Hello World\n";
if ($string1 =~ m/\S.*\S/) {
print "In $string1 there are TWO non-whitespace characters, which";
print " may be separated by other characters.\n";
}
Output:
In Hello World
there are TWO non-whitespace characters, which may be separated by other characters.

\d ;; Matches a digit; same as [0-9] in ASCII; in Unicode, same as the \p{Digit} or \p{GC=Decimal_Number} property, which is itself the same as the \p{Numeric_Type=Decimal} property.
$string1 = "99 bottles of beer on the wall.";
if ($string1 =~ m/(\d+)/) {
print "$1 is the first number in '$string1'\n";
}
Output:
99 is the first number in '99 bottles of beer on the wall.'

\D ;; Matches a non-digit; same as [^0-9] in ASCII or \P{Digit} in Unicode.
$string1 = "Hello World\n";
if ($string1 =~ m/\D/) {
print "At least one character in $string1";
print " is not a digit.\n";
}
Output:
At least one character in Hello World
is not a digit.

^ ;; Matches the beginning of a line or string.
$string1 = "Hello World\n";
if ($string1 =~ m/^He/) {
print "$string1 starts with the characters 'He'.\n";
}
Output:
Hello World
starts with the characters 'He'.

$ ;; Matches the end of a line or string.
$string1 = "Hello World\n";
if ($string1 =~ m/rld$/) {
print "$string1 is a line or string ";
print "that ends with 'rld'.\n";
}
Output:
Hello World
is a line or string that ends with 'rld'.

\A ;; Matches the beginning of a string (but not an internal line).
$string1 = "Hello\nWorld\n";
if ($string1 =~ m/\AH/) {
print "$string1 is a string ";
print "that starts with 'H'.\n";
}
Output:
Hello
World
is a string that starts with 'H'.

\z ;; Matches the end of a string (but not an internal line).
$string1 = "Hello\nWorld\n";
if ($string1 =~ m/d\n\z/) {
print "$string1 is a string ";
print "that ends with 'd\\n'.\n";
}
Output:
Hello
World
is a string that ends with 'd\n'.

[^…] ;; Matches every character except the ones inside brackets.
$string1 = "Hello World\n";
if ($string1 =~ m/[^abc]/) {
print "$string1 contains a character other than ";
print "a, b, and c.\n";
}
Output:
Hello World
contains a character other than a, b, and c.
Induction
Regular expressions can often be created ("induced" or "learned") based on a set of example strings. This is known as the induction of regular languages and is part of the general problem of grammar induction in computational learning theory. Formally, given examples of strings in a regular language, and perhaps also given examples of strings not in that regular language, it is possible to induce a grammar for the language, i.e., a regular expression that generates that language. Not all regular languages can be induced in this way (see language identification in the limit), but many can. For example, the set of examples {1, 10, 100}, and negative set (of counterexamples) {11, 1001, 101, 0} can be used to induce the regular expression 1⋅0* (1 followed by zero or more 0s).
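A minimal sketch of checking such an induced expression against the example sets from the text (this only verifies the candidate 1⋅0*, written below as the Perl pattern \A10*\z; it does not implement an induction algorithm):
# Candidate regex induced from the examples: 1 followed by zero or more 0s.
$induced = qr/\A10*\z/;
@positive = ("1", "10", "100");          # strings in the language
@negative = ("11", "1001", "101", "0");  # counterexamples
foreach $s (@positive) {
print "accepts $s\n" if $s =~ $induced;
}
foreach $s (@negative) {
print "rejects $s\n" unless $s =~ $induced;
}
Every positive example is accepted and every counterexample rejected, which is the consistency condition an induction procedure aims for.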
See also
Comparison of regular expression engines
Extended Backus–Naur form
Matching wildcards
Regular tree grammar
Thompson's construction – converts a regular expression into an equivalent nondeterministic finite automaton (NFA)
Notes
References
External links
ISO/IEC/IEEE 9945:2009 Information technology – Portable Operating System Interface (POSIX) Base Specifications, Issue 7
Regular Expressions, IEEE Std 1003.1-2024, Open Group
Open source list of regular expression resources
Category:1951 introductions
Category:Articles with example code
Category:Automata (computation)
Category:Formal languages
Category:Pattern matching
Category:Programming constructs
Russian Revolution
https://en.wikipedia.org/wiki/Russian_Revolution
The Russian Revolution was a period of political and social change in Russia, starting in 1917. This period saw Russia abolish its monarchy and adopt a socialist form of government following two successive revolutions and a civil war. It can be seen as the precursor for other revolutions that occurred in the aftermath of World War I, such as the German Revolution of 1918–1919. The Russian Revolution was a key event of the 20th century.
The Russian Revolution was inaugurated with the February Revolution in 1917, in the midst of World War I. With the German Empire inflicting defeats on the front, and increasing logistical problems causing shortages of bread and grain, the Russian Army was losing morale, with large-scale mutiny looming. Officials were convinced that if Tsar Nicholas II abdicated, the unrest would subside. Nicholas stepped down in March 1917, ushering in a provisional government led by the Duma (parliament). During the unrest, Soviet councils were formed by locals in Petrograd (now Saint Petersburg) that initially did not oppose the new government; however, the Soviets insisted on their influence in the government and control over militias. By March, Russia had two rival governments. The Provisional Government held state power in military and international affairs, whereas the network of Soviets held domestic power. Critically, the Soviets held the allegiance of the working class and the urban middle class. There were mutinies, protests and strikes. Socialist and other leftist political organizations competed for influence within the Provisional Government and Soviets. Factions included the Mensheviks, Social Revolutionaries, Anarchists, and the Bolsheviks, a far-left party led by Vladimir Lenin.
The Bolsheviks won popularity with their program promising peace, land, and bread: an end to the war, land for the peasantry, and ending famine. After assuming power, the Provisional Government continued fighting the war in spite of public opposition. Taking advantage, the Bolsheviks and other factions gained popular support to advance the revolution. Responding to discontent in Petrograd, the Provisional Government repressed protestors leading to the July Days. The Bolsheviks merged workers' militias loyal to them into the Red Guards. The volatile situation reached its climax with the October Revolution, a Bolshevik armed insurrection in Petrograd beginning that overthrew the Provisional Government. The Bolsheviks established their own government and proclaimed the establishment of the Russian Soviet Federative Socialist Republic (RSFSR). Under pressure from German military offensives, the Bolsheviks relocated the capital to Moscow. The RSFSR began reorganizing the empire into the world's first socialist state, to practice soviet democracy on a national and international scale. Their promise to end Russia's participation in World War I was fulfilled when Bolshevik leaders signed the Treaty of Brest-Litovsk with Germany in March 1918. The Bolsheviks established the Cheka, a secret police and revolutionary security service working to uncover, punish, and eliminate those considered to be "enemies of the people" in campaigns called the Red Terror.
Although the Bolsheviks held large support in urban areas, they had foreign and domestic enemies that refused to recognize their government. Russia erupted into a bloody civil war, which pitted the Reds (Bolsheviks), against their enemies, which included nationalist movements, anti-Bolshevik socialist parties, anarchists, monarchists and liberals; the latter two parties strongly supported the Russian White movement which was led mainly by right-leaning officers and seen as fighting for the restoration of the imperial order. The Bolshevik commissar Leon Trotsky began organizing workers' militias loyal to the Bolsheviks into the Red Army. While key events occurred in Moscow and Petrograd, every city in the empire was convulsed, including the provinces of national minorities, and in the rural areas peasants took over and redistributed land.
As the war progressed, the RSFSR established Soviet power in Armenia, Azerbaijan, Byelorussia, Georgia, and Ukraine. Wartime cohesion and intervention from foreign powers prompted the RSFSR to begin unifying these nations under one flag and created the Soviet Union. Historians consider the end of the revolutionary period to be in 1922, when the civil war concluded with the defeat of the White Army and separatist factions, leading to mass emigration from Russia. The victorious Bolshevik Party reconstituted itself into the All-Union Communist Party (Bolsheviks) and remained in power for six decades.
Background
The Russian Revolution of 1905 was a major factor contributing to the cause of the Revolutions of 1917. The events of Bloody Sunday triggered nationwide protests and soldier mutinies. A council of workers called the St. Petersburg Soviet was created in this chaos. While the 1905 Revolution was ultimately crushed, and the leaders of the St. Petersburg Soviet were arrested, this laid the groundwork for the later Petrograd Soviet and other revolutionary movements during the leadup to 1917. The 1905 Revolution also led to the creation of a Duma (parliament) that would later form the Provisional Government following February 1917.
Russia's poor performance in 1914–1915 prompted growing complaints directed at Tsar Nicholas II and the Romanov family. A short wave of patriotic nationalism ended in the face of defeats and poor conditions on the Eastern Front of World War I. The Tsar made the situation worse by taking personal control of the Imperial Russian Army in 1915, a challenge far beyond his skills. He was now held personally responsible for Russia's continuing defeats and losses. In addition, Tsarina Alexandra, left to rule while the Tsar commanded at the front, was German born, leading to suspicion of collusion, only to be exacerbated by rumors relating to her relationship with the controversial mystic Grigori Rasputin. Rasputin's influence led to disastrous ministerial appointments and corruption, resulting in a worsening of conditions within Russia.
After the entry of the Ottoman Empire on the side of the Central Powers in October 1914, Russia was deprived of a major trade route to the Mediterranean Sea, which worsened the economic crisis and the munitions shortages. Meanwhile, Germany was able to produce great amounts of munitions whilst constantly fighting on two major battlefronts.
The conditions during the war resulted in a devastating loss of morale within the Russian army and the general population of Russia itself. This was particularly apparent in the cities, owing to a lack of food in response to the disruption of agriculture. Food scarcity had become a considerable problem in Russia, but the cause of this did not lie in any failure of the harvests, which had not been significantly altered during wartime. The indirect reason was that the government, in order to finance the war, printed millions of roubles, and by 1917, inflation had made prices increase up to four times what they had been in 1914. Farmers were consequently faced with a higher cost of living, but with little increase in income. As a result, they tended to hoard their grain and to revert to subsistence farming. Thus the cities were constantly short of food. At the same time, rising prices led to demands for higher wages in the factories, and in January and February 1916, revolutionary propaganda, in part aided by German funds, led to widespread strikes. This resulted in growing criticism of the government, including an increased participation of workers in revolutionary parties.
Liberal parties too had an increased platform to voice their complaints, as the initial fervor of the war resulted in the Tsarist government creating a variety of political organizations. In July 1915, a Central War Industries Committee was established under the chairmanship of prominent Octobrist, Alexander Guchkov (1862–1936), which included ten workers' representatives. The Petrograd Mensheviks agreed to join despite the objections of their leaders abroad. All this activity gave renewed encouragement to political ambitions, and in September 1915, a combination of Octobrists and Kadets in the Duma demanded the forming of a responsible government, which the Tsar rejected.
All these factors had given rise to a sharp loss of confidence in the regime, even within the ruling class, growing throughout the war. Early in 1916, Guchkov discussed with senior army officers and members of the Central War Industries Committee about a possible coup to force the abdication of the Tsar. In December, a small group of nobles assassinated Rasputin, and in January 1917 the Tsar's cousin, Grand Duke Nicholas, was asked indirectly by Prince Lvov whether he would be prepared to take over the throne from his nephew, Tsar Nicholas II. None of these incidents were in themselves the immediate cause of the February Revolution, but they do help to explain why the monarchy survived only a few days after it had broken out.
Meanwhile, Socialist Revolutionary leaders in exile, many of them living in Switzerland, had been the glum spectators of the collapse of international socialist solidarity. French and German Social Democrats had voted in favour of their respective governments' war efforts. Georgi Plekhanov in Paris had adopted a violently anti-German stand, while Alexander Parvus supported the German war effort as the best means of ensuring a revolution in Russia. The Mensheviks largely maintained that Russia had the right to defend herself against Germany, although Julius Martov (a prominent Menshevik), now on the left of his group, demanded an end to the war and a settlement on the basis of national self-determination, with no annexations or indemnities.
It was these views of Martov that predominated in a manifesto drawn up by Leon Trotsky (at the time a Menshevik) at a conference in Zimmerwald, attended by 35 Socialist leaders in September 1915. Inevitably, Vladimir Lenin supported by Zinoviev and Radek, strongly contested them. Their attitudes became known as the Zimmerwald Left. Lenin rejected both the defence of Russia and the cry for peace. Since the autumn of 1914, he had insisted that "from the standpoint of the working class and of the labouring masses the lesser evil would be the defeat of the Tsarist Monarchy"; the war must be turned into a civil war of the proletarian soldiers against their own governments, and if a proletarian victory should emerge from this in Russia, then their duty would be to wage a revolutionary war for the liberation of the masses throughout Europe.
Economic and social changes
An elementary theory of property, believed by many peasants, was that land should belong to those who work on it. At the same time, peasant life and culture were changing constantly. Change was facilitated by the physical movement of growing numbers of peasant villagers who migrated to and from industrial and urban environments, but also by the introduction of city culture into the village through material goods, the press, and word of mouth.
Workers also had good reasons for discontent: overcrowded housing with often deplorable sanitary conditions, long hours at work (on the eve of the war, a 10-hour workday six days a week was the average and many were working 11–12 hours a day by 1916), constant risk of injury and death from poor safety and sanitary conditions, harsh discipline (not only rules and fines, but foremen's fists), and inadequate wages (made worse after 1914 by steep wartime increases in the cost of living). At the same time, urban industrial life had its benefits, though these could be just as dangerous (in terms of social and political stability) as the hardships. There were many encouragements to expect more from life. Acquiring new skills gave many workers a sense of self-respect and confidence, heightening expectations and desires. Living in cities, workers encountered material goods they had never seen in villages. Most importantly, workers living in cities were exposed to new ideas about the social and political order.
The social causes of the Russian Revolution can be derived from centuries of oppression of the lower classes by the Tsarist regime and Nicholas's failures in World War I. While rural agrarian peasants had been emancipated from serfdom in 1861, they still resented paying redemption payments to the state, and demanded communal tender of the land they worked. The problem was further compounded by the failure of Sergei Witte's land reforms of the early 20th century. Increasing peasant disturbances and sometimes actual revolts occurred, with the goal of securing ownership of the land they worked. Russia consisted mainly of poor farming peasants and substantial inequality of land ownership, with 1.5% of the population owning 25% of the land.
The rapid industrialization of Russia also resulted in urban overcrowding and poor conditions for urban industrial workers (as mentioned above). Between 1890 and 1910, the population of the capital, Saint Petersburg, nearly doubled from 1,033,600 to 1,905,600, with Moscow experiencing similar growth. This created a new 'proletariat' which, due to being crowded together in the cities, was much more likely to protest and go on strike than the peasantry had been in previous times. One 1904 survey found that an average of 16 people shared each apartment in Saint Petersburg, with six people per room. There was also no running water, and piles of human waste were a threat to the health of the workers. The poor conditions only aggravated the situation, with the number of strikes and incidents of public disorder rapidly increasing in the years shortly before World War I. Because of late industrialization, Russia's workers were highly concentrated. By 1914, 40% of Russian workers were employed in factories of 1,000+ workers (32% in 1901). 42% worked in 100–1,000 worker enterprises, 18% in 1–100 worker businesses (in the US, 1914, the figures were 18%, 47% and 35% respectively).
Years Average annual strikes 1862–1869 6 1870–1884 20 1885–1894 33 1895–1905 176
World War I added to the chaos. Conscription across Russia resulted in unwilling citizens being sent off to war. The vast demand for factory production of war supplies and workers resulted in many more labor riots and strikes. Conscription stripped skilled workers from the cities, who had to be replaced with unskilled peasants. When famine began to hit due to the poor railway system, workers abandoned the cities in droves seeking food. Finally, the soldiers themselves, who suffered from a lack of equipment and protection from the elements, began to turn against the Tsar. This was mainly because, as the war progressed, many of the officers who were loyal to the Tsar were killed, being replaced by discontented conscripts from the major cities who had little loyalty to the Tsar.
Political issues
Many sections of the country had reason to be dissatisfied with the existing autocracy. Nicholas II was a deeply conservative ruler and maintained a strict authoritarian system. Individuals and society in general were expected to show self-restraint, devotion to community, deference to the social hierarchy and a sense of duty to the country. Religious faith helped bind all of these tenets together as a source of comfort and reassurance in the face of difficult conditions and as a means of political authority exercised through the clergy. Perhaps more than any other modern monarch, Nicholas II attached his fate and the future of his dynasty to the notion of the ruler as a saintly and infallible father to his people.
This vision of the Romanov monarchy left him unaware of the state of his country. With a firm belief that his power to rule was granted by Divine Right, Nicholas assumed that the Russian people were devoted to him with unquestioning loyalty. This ironclad belief rendered Nicholas unwilling to allow the progressive reforms that might have alleviated the suffering of the Russian people. Even after the 1905 Revolution spurred the Tsar to decree limited civil rights and democratic representation, he worked to limit even these liberties in order to preserve the ultimate authority of the crown.
Despite constant oppression, the desire of the people for democratic participation in government decisions was strong. Since the Age of Enlightenment, Russian intellectuals had promoted Enlightenment ideals such as the dignity of the individual and the rectitude of democratic representation. These ideals were championed most vociferously by Russia's liberals, although populists, Marxists, and anarchists also claimed to support democratic reforms. A growing opposition movement had begun to challenge the Romanov monarchy openly well before the turmoil of World War I.
Dissatisfaction with Russian autocracy culminated in the huge national upheaval that followed the Bloody Sunday massacre of January 1905, in which hundreds of unarmed protesters were shot by the Tsar's troops. Workers responded to the massacre with a crippling general strike, forcing Nicholas to put forth the October Manifesto, which established a democratically elected parliament (the State Duma). Although the Tsar accepted the 1906 Fundamental State Laws one year later, he subsequently dismissed the first two Dumas when they proved uncooperative. Unfulfilled hopes of democracy fueled revolutionary ideas and violent outbursts targeted at the monarchy.
One of the Tsar's principal rationales for risking war in 1914 was his desire to restore the prestige that Russia had lost amid the debacles of the Russo-Japanese War (1904–1905). Nicholas also sought to foster a greater sense of national unity with a war against a common and old enemy. The Russian Empire was an agglomeration of diverse ethnicities that had demonstrated significant signs of disunity in the years before the First World War. Nicholas believed in part that the shared peril and tribulation of a foreign war would mitigate the social unrest over the persistent issues of poverty, inequality, and inhumane working conditions. Instead of restoring Russia's political and military standing, World War I led to the slaughter of Russian troops and military defeats that undermined both the monarchy and Russian society to the point of collapse.
World War I
The outbreak of war in August 1914 initially served to quiet the prevalent social and political protests, focusing hostilities against a common external enemy, but this patriotic unity did not last long. As the war dragged on inconclusively, war-weariness gradually took its toll. Although many ordinary Russians joined anti-German demonstrations in the first few weeks of the war, hostility toward the Kaiser and the desire to defend their land and their lives did not necessarily translate into enthusiasm for the Tsar or the government.
Russia's first major battle of the war was a disaster; in the 1914 Battle of Tannenberg, over 30,000 Russian troops were killed or wounded and 90,000 captured, while Germany suffered just 12,000 casualties. However, Austro-Hungarian forces allied to Germany were driven back deep into the Galicia region by the end of the year. In the autumn of 1915, Nicholas had taken direct command of the army, personally overseeing Russia's main theatre of war and leaving his ambitious but incapable wife Alexandra in charge of the government. Reports of corruption and incompetence in the Imperial government began to emerge, and the growing influence of Grigori Rasputin in the Imperial family was widely resented.
In 1915, things took a critical turn for the worse when Germany shifted its focus of attack to the Eastern Front. The superior German Army – better led, better trained, and better supplied – was quite effective against the ill-equipped Russian forces, driving the Russians out of Galicia, as well as Russian Poland during the Gorlice–Tarnów Offensive campaign. By the end of October 1916, Russia had lost between 1,600,000 and 1,800,000 soldiers, with an additional 2,000,000 prisoners of war and 1,000,000 missing, all making up a total of nearly 5,000,000 men.
These staggering losses played a definite role in the mutinies and revolts that began to occur. In 1916, reports of fraternizing with the enemy began to circulate. Soldiers went hungry, lacked shoes, munitions, and even weapons. Rampant discontent lowered morale, which was further undermined by a series of military defeats.
Casualty rates were the most vivid sign of this disaster. By the end of 1914, only five months into the war, around 390,000 Russian men had lost their lives and nearly 1,000,000 were injured. Far sooner than expected, inadequately trained recruits were called for active duty, a process repeated throughout the war as staggering losses continued to mount. The officer class also saw remarkable changes, especially within the lower echelons, which were quickly filled with soldiers rising up through the ranks. These men, usually of peasant or working-class backgrounds, were to play a large role in the politicization of the troops in 1917.
The army quickly ran short of rifles and ammunition (as well as uniforms and food), and by mid-1915, men were being sent to the front bearing no arms. It was hoped that they could equip themselves with arms recovered from fallen soldiers, of both sides, on the battlefields. The soldiers did not feel as if they were valuable, rather they felt as if they were expendable.
By the spring of 1915, the army was in steady retreat, which was not always orderly; desertion, plundering, and chaotic flight were not uncommon. By 1916, however, the situation had improved in many respects. Russian troops stopped retreating, and there were even some modest successes in the offensives that were staged that year, albeit at great loss of life. Also, the problem of shortages was largely solved by a major effort to increase domestic production. Nevertheless, by the end of 1916, morale among soldiers was even worse than it had been during the great retreat of 1915. The fortunes of war may have improved, but the fact of war remained which continually took Russian lives. The crisis in morale (as was argued by Allan Wildman, a leading historian of the Russian army in war and revolution) "was rooted fundamentally in the feeling of utter despair that the slaughter would ever end and that anything resembling victory could be achieved."
The war did not only devastate soldiers. By the end of 1915, there were manifold signs that the economy was breaking down under the heightened strain of wartime demand. The main problems were food shortages and rising prices. Inflation dragged incomes down at an alarmingly rapid rate, and shortages made it difficult for an individual to sustain oneself. These shortages were a problem especially in the capital, St. Petersburg, where distance from supplies and poor transportation networks made matters particularly worse. Shops closed early or entirely for lack of bread, sugar, meat, and other provisions, and lines lengthened massively for what remained. Conditions became increasingly difficult to afford food and physically obtain it.
Strikes increased steadily from the middle of 1915, and so did crime, but, for the most part, people suffered and endured, scouring the city for food. Working-class women in St. Petersburg reportedly spent about forty hours a week in food lines, begging, turning to prostitution or crime, tearing down wooden fences to keep stoves heated for warmth, and continued to resent the rich.
Government officials responsible for public order worried about how long people's patience would last. A report by the St. Petersburg branch of the security police, the Okhrana, in October 1916, warned bluntly of "the possibility in the near future of riots by the lower classes of the empire enraged by the burdens of daily existence." ("Doklad petrogradskogo okhrannogo otdeleniia osobomu otdelu departamenta politsii" ["Report of the Petrograd Okhrana to the Special Department of the Department of the Police"], October 1916, Krasnyi arkhiv 17 (1926), 4–35; quotation at 4.)
Tsar Nicholas was blamed for all of these crises, and what little support he had left began to crumble. As discontent grew, the State Duma issued a warning to Nicholas in November 1916, stating that, inevitably, a terrible disaster would grip the country unless a constitutional form of government was put in place. Nicholas ignored these warnings and Russia's Tsarist regime collapsed a few months later during the February Revolution of 1917. One year later, the Tsar and his entire family were executed.
February Revolution
At the beginning of February, Petrograd workers began several strikes and demonstrations. Putilov, Petrograd's largest industrial plant, was closed by a workers' strike. The next day, a series of meetings and rallies were held for International Women's Day, which gradually turned into economic and political gatherings. Demonstrations were organised to demand bread, and these were supported by the industrial working force, who considered them a reason for continuing the strikes. The women workers marched to nearby factories, bringing out over 50,000 workers on strike. (When women set Russia ablaze, Fifth International, 11 July 2007.) Within days, virtually every industrial enterprise in Petrograd had been shut down, together with many commercial and service enterprises. Students, white-collar workers, and teachers joined the workers in the streets and at public meetings.
To quell the riots, the Tsar looked to the army. At least 180,000 troops were available in the capital, but most were either untrained or injured. Historian Ian Beckett suggests around 12,000 could be regarded as reliable, but even these proved reluctant to move in on the crowd, since it included so many women. It was for this reason that, when the Tsar ordered the army to suppress the rioting by force, troops began to revolt. Although few actively joined the rioting, many officers were either shot or went into hiding; the ability of the garrison to hold back the protests was all but nullified, symbols of the Tsarist regime were rapidly torn down around the city, and governmental authority in the capital collapsed – not helped by the fact that Nicholas had prorogued the Duma that morning, leaving it with no legal authority to act. The response of the Duma, urged on by the liberal bloc, was to establish a Temporary Committee to restore law and order; meanwhile, the socialist parties established the Petrograd Soviet to represent workers and soldiers. The remaining loyal units switched allegiance the next day.
The Tsar directed the royal train back towards Petrograd, but it was stopped by a group of revolutionaries at Malaya Vishera. When the Tsar finally arrived at Pskov, the Army Chief Nikolai Ruzsky and the Duma deputies Alexander Guchkov and Vasily Shulgin suggested in unison that he abdicate the throne. He did so, on behalf of himself and then, having taken advice, on behalf of his son, the Tsarevich. Nicholas nominated his brother, the Grand Duke Michael Alexandrovich, to succeed him. But the Grand Duke realised that he would have little support as ruler, so he declined the crown, stating that he would take it only if that was the consensus of democratic action. Six days later, Nicholas, no longer Tsar and addressed with contempt by the sentries as "Nicholas Romanov", was reunited with his family at the Alexander Palace at Tsarskoye Selo. He was placed under house arrest with his family by the Provisional Government.
The immediate effect of the February Revolution was a widespread atmosphere of elation and excitement in Petrograd. A provisional government was announced soon afterwards. The center-left was well represented, and the government was initially chaired by a liberal aristocrat, Prince Georgy Yevgenievich Lvov, a member of the Constitutional Democratic Party (KD). The socialists had formed their rival body, the Petrograd Soviet (or workers' council) four days earlier. The Petrograd Soviet and the Provisional Government competed for power over Russia.
Dual power: Dvoyevlastiye
The effective power of the Provisional Government was challenged by the authority of an institution that claimed to represent the will of workers and soldiers and could, in fact, mobilize and control these groups during the early months of the revolution – the Petrograd Soviet Council of Workers' Deputies. The model for the Soviets were workers' councils that had been established in scores of Russian cities during the 1905 Revolution. In February 1917, striking workers elected deputies to represent them and socialist activists began organizing a citywide council to unite these deputies with representatives of the socialist parties. On 27 February, socialist Duma deputies, mainly Mensheviks and Socialist Revolutionaries, took the lead in organizing a citywide council. The Petrograd Soviet met in the Tauride Palace, room 13, permitted by the Provisional Government.
The leaders of the Petrograd Soviet believed that they represented particular classes of the population, not the whole nation. They also believed Russia was not ready for socialism. They viewed their role as limited to pressuring hesitant "bourgeoisie" to rule and to introduce extensive democratic reforms in Russia (the replacement of the monarchy by a republic, guaranteed civil rights, a democratic police and army, abolition of religious and ethnic discrimination, preparation of elections to a constituent assembly, and so on). They met in the same building as the emerging Provisional Government not to compete with the Duma Committee for state power, but to best exert pressure on the new government, to act, in other words, as a popular democratic lobby.
The relationship between these two major powers was complex from the beginning and would shape the politics of 1917. The representatives of the Provisional Government agreed to "take into account the opinions of the Soviet of Workers' Deputies", though they were also determined to prevent interference which would create an unacceptable situation of dual power. In fact, this was precisely what was being created, though this "dual power" (dvoyevlastiye) was the result less of the actions or attitudes of the leaders of these two institutions than of actions outside their control, especially the ongoing social movement taking place on the streets of Russia's cities, factories, shops, barracks, villages, and in the trenches.
A series of political crises (see the chronology below) in the relationship between the population and the government, and between the Provisional Government and the Soviets (which developed into a nationwide movement with a national leadership, the All-Russian Central Executive Committee of Soviets, or VTsIK), undermined the authority of the Provisional Government but also that of the moderate socialist leaders of the Soviets. Although the Soviet leadership initially refused to participate in the "bourgeois" Provisional Government, Alexander Kerensky, a young, popular lawyer and a member of the Socialist Revolutionary Party (SRP), agreed to join the new cabinet, and became an increasingly central figure in the government, eventually taking leadership of the Provisional Government. As minister of war and later Prime Minister, Kerensky promoted freedom of speech, released thousands of political prisoners, and continued the war effort, even organizing another offensive (which, however, was no more successful than its predecessors). Nevertheless, Kerensky still faced several great challenges, highlighted by the soldiers, urban workers, and peasants, who claimed that they had gained nothing by the revolution:
Other political groups were trying to undermine him.
Heavy military losses were being suffered on the front.
The soldiers were dissatisfied and demoralised and had started to defect. (On arrival back in Russia, these soldiers were either imprisoned or sent straight back to the front.)
There was enormous discontent with Russia's involvement in the war, and many were calling for an end to it.
There were great shortages of food and supplies, which was difficult to remedy because of the wartime economic conditions.
The political group that proved most troublesome for Kerensky, and would eventually overthrow him, was the Bolshevik Party, led by Vladimir Lenin. Lenin had been living in exile in neutral Switzerland and, due to democratization of politics after the February Revolution, which legalized formerly banned political parties, he perceived the opportunity for his Marxist revolution. Although return to Russia had become a possibility, the war made it logistically difficult. Eventually, German officials arranged for Lenin to pass through their territory, hoping that his activities would weaken Russia or even – if the Bolsheviks came to power – lead to Russia's withdrawal from the war. Lenin and his associates, however, had to agree to travel to Russia in a sealed train: Germany would not take the chance that he would foment revolution in Germany. After passing through the front, he arrived in Petrograd in April 1917.
On the way to Russia, Lenin prepared the April Theses, which outlined central Bolshevik policies. These included that the Soviets take power (as seen in the slogan "all power to the Soviets") and denouncing the liberals and social revolutionaries in the Provisional Government, forbidding co-operation with it. Many Bolsheviks, however, had supported the Provisional Government, including Lev Kamenev.
With Lenin's arrival, the popularity of the Bolsheviks increased steadily. Over the course of the spring, public dissatisfaction with the Provisional Government and the war, in particular among workers, soldiers and peasants, pushed these groups to radical parties. Despite growing support for the Bolsheviks, buoyed by maxims that called most famously for "all power to the Soviets", the party held very little real power in the moderate-dominated Petrograd Soviet. In fact, historians such as Sheila Fitzpatrick have asserted that Lenin's exhortations for the Soviet Council to take power were intended to arouse indignation both with the Provisional Government, whose policies were viewed as conservative, and the Soviets themselves, which were viewed as subservient to the conservative government. By some other historians' accounts, Lenin and his followers were unprepared for how their groundswell of support, especially among influential worker and soldier groups, would translate into real power in the summer of 1917.
On 18 June, the Provisional Government launched an attack against Germany that failed miserably. Soon after, the government ordered soldiers to go to the front, reneging on a promise. The soldiers refused to follow the new orders. The arrival of radical Kronstadt sailors – who had tried and executed many officers, including one admiral – further fueled the growing revolutionary atmosphere. Sailors and soldiers, along with Petrograd workers, took to the streets in violent protest, calling for "all power to the Soviets". The revolt, however, was disowned by Lenin and the Bolshevik leaders and dissipated within a few days. In the aftermath, Lenin fled to Finland under threat of arrest while Trotsky, among other prominent Bolsheviks, was arrested. The July Days confirmed the popularity of the anti-war, radical Bolsheviks, but their unpreparedness at the moment of revolt was an embarrassing gaffe that lost them support among their main constituent groups: soldiers and workers.
The Bolshevik failure in the July Days proved temporary. The Bolsheviks had undergone a spectacular growth in membership. Whereas, in February 1917, the Bolsheviks were limited to only 24,000 members, by September 1917 there were 200,000 members of the Bolshevik faction. Previously, the Bolsheviks had been in the minority in the two leading cities of Russia, St. Petersburg and Moscow, behind the Mensheviks and the Socialist Revolutionaries; by September the Bolsheviks were in the majority in both cities. Furthermore, the Bolshevik-controlled Moscow Regional Bureau of the Party also controlled the Party organizations of the 13 provinces around Moscow. These 13 provinces held 37% of Russia's population and 20% of the membership of the Bolshevik faction.
In August, poor and misleading communication led General Lavr Kornilov, the recently appointed Supreme Commander of Russian military forces, to believe that the Petrograd government had already been captured by radicals, or was in serious danger thereof. In response, he ordered troops to Petrograd to pacify the city. To secure his position, Kerensky had to ask for Bolshevik assistance. He also sought help from the Petrograd Soviet, which called upon armed Red Guards to "defend the revolution". The Kornilov Affair failed largely due to the efforts of the Bolsheviks, whose influence over railroad and telegraph workers proved vital in stopping the movement of troops. With his coup failing, Kornilov surrendered and was relieved of his position. The Bolsheviks' role in stopping the attempted coup further strengthened their position.
In early September, the Petrograd Soviet freed all jailed Bolsheviks and Trotsky became chairman of the Petrograd Soviet. Growing numbers of socialists and lower-class Russians viewed the government less as a force in support of their needs and interests. The Bolsheviks benefited as the only major organized opposition party that had refused to compromise with the Provisional Government, and they benefited from growing frustration and even disgust with other parties, such as the Mensheviks and Socialist Revolutionaries, who stubbornly refused to break with the idea of national unity across all classes.
In Finland, Lenin had worked on his book State and Revolution and continued to lead his party, writing newspaper articles and policy decrees. By October, he returned to Petrograd (present-day St. Petersburg), aware that the increasingly radical city presented him no legal danger and a second opportunity for revolution. Recognising the strength of the Bolsheviks, Lenin began pressing for the immediate overthrow of the Kerensky government by the Bolsheviks. Lenin was of the opinion that taking power should occur in both St. Petersburg and Moscow simultaneously, parenthetically stating that it made no difference which city rose up first. The Bolshevik Central Committee drafted a resolution, calling for the dissolution of the Provisional Government in favor of the Petrograd Soviet. The resolution was passed 10–2 (Lev Kamenev and Grigory Zinoviev prominently dissenting) promoting the October Revolution.
October Revolution
The October Revolution, which unfolded on 25 October (7 November, New Style) 1917, was organized by the Bolshevik party. Lenin did not have any direct role in the revolution, as he was in hiding for his personal safety. However, in late October, Lenin secretly and at great personal risk entered Petrograd and attended a private gathering of the Bolshevik Central Committee on the evening of 23 October. The Revolutionary Military Committee established by the Bolshevik party was organizing the insurrection, and Leon Trotsky was its chairman. 50,000 workers had passed a resolution in favour of the Bolshevik demand for a transfer of power to the soviets. Lenin also played a crucial role in the debate within the Bolshevik leadership over a revolutionary insurrection, as the party received a majority in the soviets in the autumn of 1917. An ally, the left faction of the Socialist-Revolutionary Party, which had huge support among the peasants who opposed Russia's participation in the war, supported the slogan 'All power to the Soviets'. The initial stage of the October Revolution, which involved the assault on Petrograd, occurred largely without any human casualties.
Liberal and monarchist forces, loosely organized into the White Army, immediately went to war against the Bolsheviks' Red Army, in a series of battles that would become known as the Russian Civil War. The Civil War began in early 1918 with domestic anti-Bolshevik forces confronting the nascent Red Army. In the autumn of 1918, the Allied countries needed to block German access to Russian supplies. They sent troops to support the "Whites", with weapons, ammunition and logistical equipment sent from the main Western countries, but this effort was not at all coordinated. Germany did not participate in the civil war, as it surrendered to the Allies.
The Provisional Government, in its second and third coalitions, was led by the right-wing faction of the Socialist-Revolutionary Party (SR). This non-elected government confronted the revolutionary situation and the growing anti-war mood by avoiding elections, and it supported continuing the war on the side of the Allied forces. The October Revolution, however, forced the political parties behind the newly dissolved Provisional Government to move quickly toward immediate elections. Everything happened so fast that the left SR faction did not have time to organize and be represented on the ballots of the SR party, which had been part of the coalition in the Provisional Government. The elections to the Constituent Assembly on 25 November 1917 therefore did not mirror the true political situation among the peasants, although it is not known what the outcome would have been had the anti-war left SR faction had a fair chance to challenge the party leaders. In the elections, the Bolshevik party received 25% of the votes and the Socialist-Revolutionaries as much as 58%. It is possible that the left SRs would have reached more than 25% of the votes and thereby legitimized the October Revolution, but this remains speculation.
Lenin did not believe that a socialist revolution necessarily presupposed a fully developed capitalist economy. A semi-capitalist country would suffice and Russia had a working class base of 5% of the population.
Though Lenin was the leader of the Bolshevik Party, it has been argued that since Lenin was not present during the actual takeover of the Winter Palace, it was really Trotsky's organization and direction that led the revolution, merely spurred by the motivation Lenin instigated within his party. Bolshevik figures such as Anatoly Lunacharsky, Moisei Uritsky and Dmitry Manuilsky agreed that Lenin's influence on the Bolshevik party was decisive, but that the October insurrection was carried out according to Trotsky's plan, not Lenin's.
Critics on the Right have long argued that the financial and logistical assistance of German intelligence via their key agent, Alexander Parvus, was a key component as well, though historians are divided, since there is little evidence supporting that claim. (Isaac Deutscher, The Prophet Armed.)
Soviet membership was initially freely elected, but many members of the Socialist Revolutionary Party, anarchists, and other leftists created opposition to the Bolsheviks through the Soviets themselves. The elections to the Russian Constituent Assembly took place 25 November 1917. The Bolsheviks gained 25% of the vote. When it became clear that the Bolsheviks had little support outside of the industrialized areas of Saint Petersburg and Moscow, they simply barred non-Bolsheviks from membership in the Soviets. The Bolsheviks dissolved the Constituent Assembly in January 1918.
Russian Civil War
The October Revolution led by the Bolsheviks was not recognized by a variety of social and political groups, including army officers and cossacks, the "bourgeoisie" and the landowners, and political groups ranging from the far Right to the moderate socialists, the Socialist Revolutionaries and the Mensheviks, who opposed the drastic restructuring championed by the Bolsheviks following the collapse of the Provisional Government. (Article "Civil War and military intervention in Russia 1918–20", Great Soviet Encyclopedia, third edition, 30 volumes, 1969–78.)
The Russian Civil War, which broke out in the months following the revolution, resulted in the deaths and suffering of millions of people regardless of their political orientation. The war was fought mainly between the Red Army ("Reds"), consisting of the Bolsheviks and the supporters of the Soviets, and the White movement ("Whites"), and their loosely allied "White Armies" led mainly by the right-leaning and conservative officers of the Russian Empire and the Cossacks and supported by the classes which lost their power and privileges with the Bolshevik revolution; the Civil War also included armed conflicts with nationalist movements for independence, armed struggle and terrorism by anti-Bolshevik socialists and anarchists, and uprisings of the peasants who organized themselves into the "Green armies". Although the views within the Russian Whites ranged from monarchism to socialism, the Whites generally preferred the Russian Empire to the revolution, and they were commonly seen as restorers of the old order as they fought the movements of the non-Russian nationalities in favour of "indivisible Russia" and opposed the land reform and defended the property rights of the upper classes; the socialists who opposed both factions saw the rule of the Whites (a military dictatorship headed by Alexander Kolchak and by the commanders of the White forces) as a right-wing dictatorship. The Russian Whites had backing from other countries such as the United Kingdom, France, the United States, and Japan, while the Reds possessed internal support, proving to be much more effective. Though the Allied nations, using external interference, provided substantial military aid to the Whites, they were ultimately defeated.
The Bolsheviks first assumed power in Petrograd, expanding their rule outwards. They eventually reached the eastern Siberian coast of Russia at Vladivostok four years after the war began, an occupation that is believed to have ended all significant military campaigns in the nation. Less than one year later, the last area controlled by the White Army, the Ayano-Maysky District, directly to the north of the Krai containing Vladivostok, was given up when General Anatoly Pepelyayev capitulated in 1923.
Several revolts were initiated against the Bolsheviks and their army near the end of the war, notably the Kronstadt Rebellion. This was a naval mutiny engineered by Soviet Baltic sailors, former Red Army soldiers, and the people of Kronstadt. The armed uprising was directed against the antagonizing Bolshevik economic policies to which farmers were subjected, including seizures of grain crops by the Communists, all of which amounted to large-scale discontent. When delegates representing the Kronstadt sailors arrived at Petrograd for negotiations, they raised 15 demands primarily pertaining to the Russian right to freedom. (Petrograd on the Eve of Kronstadt Rising 1921, Flag.blackened.net, 10 March 1921; retrieved 26 July 2013.) The Government firmly denounced the rebellions and labelled the requests as reminiscent of the Social Revolutionaries, a political party that was popular among Soviets before Lenin but had refused to cooperate with the Bolshevik Army. The Government then responded with an armed suppression of these revolts, suffering ten thousand casualties before entering the city of Kronstadt. This ended the rebellions fairly quickly, causing many of the rebels to flee into political exile.
During the Civil War, Nestor Makhno led a Ukrainian anarchist movement. Makhno's Insurgent Army allied to the Bolsheviks thrice, with one of the powers ending the alliance each time. However, a Bolshevik force under Mikhail Frunze destroyed the Makhnovshchina, when the Makhnovists refused to merge into the Red Army. In addition, the so-called "Green Army" (peasants defending their property against the opposing forces) played a secondary role in the war, mainly in Ukraine.
Revolutionary tribunals
Revolutionary tribunals were present during both the Revolution and the Civil War, intended for the purpose of combatting forces of counter-revolution. At the Civil War's zenith, it is reported that upwards of 200,000 cases were investigated by approximately 200 tribunals. The tribunals distinguished themselves from the Cheka as a more moderate force that acted under the banner of revolutionary justice, rather than relying on the strict brute force of the former. However, these tribunals came with their own set of inefficiencies, such as taking months to respond to cases and lacking a concrete definition of "counter-revolution", which was instead determined on a case-by-case basis. The "Decree on Revolutionary Tribunals" used by the People's Commissar of Justice states in article 2 that "In fixing the penalty, the Revolutionary Tribunal shall be guided by the circumstances of the case and the dictates of the revolutionary conscience." Revolutionary tribunals ultimately demonstrated that a form of justice was still prevalent in Russian society where the Russian Provisional Government had failed. This, in part, triggered the political transition of the October Revolution and the Civil War that followed in its aftermath.
Murder of the imperial family
The Bolsheviks murdered the Tsar and his family on the night of 16–17 July 1918. In early March 1917, the Provisional Government had placed Nicholas and his family under house arrest in the Alexander Palace at Tsarskoye Selo, south of Petrograd. In August 1917, the government evacuated the Romanovs to Tobolsk in western Siberia to protect them from the rising tide of revolution. After the Bolsheviks came to power in October 1917, the conditions of their imprisonment grew stricter and talk of putting Nicholas on trial increased. In April and May 1918, the looming civil war led the Bolsheviks to move the family to the stronghold of Yekaterinburg in the Urals.
During the early morning of 17 July, Nicholas, Alexandra, their children, their physician, and several servants were taken into the basement and shot. According to Edvard Radzinsky and Dmitrii Volkogonov, the order came directly from Lenin and Yakov Sverdlov in Moscow. However, this claim has never been confirmed. The murder may have been carried out on the initiative of local Bolshevik officials, or it may have been an option pre-approved in Moscow as White troops were rapidly approaching Yekaterinburg. Radzinsky noted that Lenin's bodyguard personally delivered the telegram ordering the killing and that he was ordered to destroy the evidence.
Symbolism
The Russian Revolution gave rise to many instances of symbolism, both physical and non-physical. Communist symbolism is perhaps the most notable of this time period, such as the debut of the iconic hammer and sickle as a representation of the October Revolution in 1917; it became the official symbol of the USSR in 1924 and later the symbol of Communism as a whole. Although the Bolsheviks did not have extensive political experience, their portrayal of the revolution as both a political and a symbolic order resulted in Communism being presented as a messianic faith, formally known as communist messianism. Portrayals of notable revolutionary figures such as Lenin were done in iconographic style, treating them much like religious figures, even though religion itself was banned in the USSR and groups such as the Russian Orthodox Church were persecuted.
The revolution and the world
The revolution ultimately led to the establishment of the future Soviet Union as an ideocracy; however, the establishment of such a state came as an ideological paradox, as Marx's ideals of how a socialist state ought to be created were based on the formation being natural and not artificially incited (i.e. by means of revolution). Leon Trotsky said that the goal of socialism in Russia would not be realized without the success of the world revolution. A revolutionary wave caused by the Russian Revolution lasted until 1923, but despite initial hopes for success in the German Revolution of 1918–19, the short-lived Hungarian Soviet Republic, and others like it, only the Mongolian Revolution of 1921 saw a Marxist movement at the time succeed in keeping power in its hands.
This issue is subject to conflicting views on communist history by various Marxist groups and parties. Joseph Stalin later rejected this concept, stating that socialism was possible in one country. (The confusion regarding Stalin's position on the issue stems from the fact that, after Lenin's death in 1924, he successfully used Lenin's argument – the argument that socialism's success needs the support of workers of other countries in order to happen – to defeat his competitors within the party by accusing them of betraying Lenin and, therefore, the ideals of the October Revolution.)
The Russian Revolution was perceived by various civil rights and decolonization struggles as a rupture with imperialism and as providing a space for oppressed groups across the world. This was given further credence when the Soviet Union supported many anti-colonial Third World movements with funding against European colonial powers.
Historiography
Few events in historical research have been as conditioned by political influences as the October Revolution. The historiography of the Revolution generally divides into three schools of thought: the Soviet-Marxist (Marxist-Leninist) view, the Western 'totalitarian' view, and the 'revisionist' view. The 'totalitarian' historians are also referred to as 'traditionalists' and 'Cold War historians' for relying on interpretations rooted in the early years of the Cold War, and their school is described as a conservative direction; the Western revisionists have lacked a full-fledged doctrine or philosophy of history, but were distinguished in the 1960s–1970s by their criticism of the 'traditionalist' bias against the USSR and the left in general and by their focus on "history from below" and social perspectives. While the 'totalitarian' historians described the Bolshevik revolution as a coup carried out by a minority which turned Russia into a totalitarian dictatorship, the 'revisionists' opposed such a description and stressed the genuinely 'popular' nature of the Revolution. Since the Revolutions of 1989 and the dissolution of the Soviet Union in 1991, the Western 'totalitarian' view has again become dominant and the Soviet-Marxist view has practically vanished from mainstream political analysis. The 'revisionists' achieved some success in challenging the 'traditionalists' and became accepted in academic circles, while the 'totalitarian' historians retained popularity and influence in politics and the public sphere.
Following the death of Vladimir Lenin, the Bolshevik government was thrown into a crisis, as Lenin had failed to designate who his successor would be or how one should be chosen. A power struggle broke out in the party between Leon Trotsky and his enemies. Trotsky was defeated by the anti-Trotsky bloc by the mid-1920s and his hopes for party leadership were dashed. Among Trotsky's opponents, Joseph Stalin rose to assume unchallenged party leadership by 1928. In 1927, Trotsky was expelled from the party, and in 1929 he was deported into exile; he was stripped of his Soviet citizenship in 1932. While in exile he began honing his own interpretation of Marxism, called Trotskyism. The schism between Trotsky and Stalin is the focal point from which the revisionist view emerged. Trotsky traveled across the world denouncing Stalin and the Soviet Union under his leadership. He focused his criticism in particular on Stalin's doctrine of Socialism in One Country, claiming that it was incongruent with the ideology of the revolution. Eventually, Trotsky settled in Mexico City and founded a base of operations for himself and his supporters. In 1937, at the height of the Great Purge, he published The Revolution Betrayed, which outlined his ideological disagreements with Stalin and argued that Stalin was guilty of subverting and debasing the 1917 revolution. He continued to vocally criticize Stalin and Stalinism until his assassination in 1940 on Stalin's orders.
The Soviet-Marxist interpretation is the belief that the Russian Revolution under the Bolsheviks was a proud and glorious effort of the working class which saw the removal of the Tsar, nobility, and capitalists from positions of power. The Bolsheviks, and later the Communist Party, took the first steps in liberating the proletariat and building a workers' state that practiced equality. Outside of Eastern Europe, this view was heavily criticized because, following the death of Lenin, the Soviet Union became more authoritarian. Even though the Soviet Union no longer exists, the Soviet-Marxist view is still used as an interpretation in academia today. Both academics and Soviet supporters argue this view is supported by several events. First, the RSFSR made substantial advances in women's rights: it was the first country to decriminalize abortion, and it allowed women to be educated, which had been forbidden under the Tsar. Furthermore, the RSFSR decriminalized homosexuality between consenting adults, which was seen as radical for the time period. The Bolshevik government also actively recruited working-class citizens into positions of party leadership, thereby ensuring the proletariat was represented in policymaking. One of the most important aspects of this view was the Bolshevik victory in the Russian Civil War. On paper, the Bolsheviks should have been defeated, in part because of the broad international support their enemies were receiving: Britain, France, the United States, Japan, and other countries sent aid to the White Army and expeditionary forces against the Bolsheviks. The Bolsheviks were further disadvantaged by the small land area under their control, a lack of professional officers, and supply shortages. In spite of this, the Red Army prevailed. Unlike many White factions, the Red Army maintained high morale among its troops and civilians throughout the civil war, in part through its skillful use of propaganda, which portrayed the Red Army as liberators and stewards of the poor and downtrodden. Bolshevik support was further elevated by Lenin's initiatives to distribute land to the peasantry and to end the war with Germany. During the civil war, the Bolsheviks were able to raise an army numbering around five million active soldiers. Domestic support and patriotism played a decisive role in the Russian Civil War. By 1923 the Bolsheviks had defeated the last of the White Army holdouts and the Russian Civil War concluded with a Bolshevik victory. This victory ultimately influenced how the Soviet Union interpreted its own ideology and the October Revolution itself. Starting in 1919, the Soviets commemorated the event with a military parade and a public holiday, a tradition that lasted until the collapse of the Soviet Union. As time went on, an "anti-Stalinist" version of the Soviet-Marxist interpretation evolved, which attempts to draw a distinction between the "Lenin period" (1917–24) and the "Stalin period" (1928–53).
Nikita Khrushchev, Stalin's successor, argued in his "Secret Speech", delivered in 1956, that Stalin's regime differed greatly from the leadership of Lenin. He was critical of the cult of the individual which had been constructed around Stalin, whereas Lenin had stressed "the role of the people as the creator of history". He also emphasized that Lenin favored a collective leadership which relied on personal persuasion and had recommended the removal of Stalin from the position of General Secretary. Khrushchev contrasted this with the "despotism" of Stalin, which required absolute submission to his position, and highlighted that many of the people who were later annihilated as "enemies of the party" "had worked with Lenin during his life". He also contrasted the "severe methods" used by Lenin in the "most necessary cases" as a "struggle for survival" during the Civil War with the extreme methods and mass repressions used by Stalin even when the Revolution was "already victorious".
Views from the West were mixed. Socialists and labor organizations tended to support the October Revolution and the Bolshevik seizure of power. Western governments, on the other hand, were mortified. Western leaders, and later some academics, concluded that the Russian Revolution had only replaced one form of tyranny (Tsarism) with another (communism). Initially, the Bolsheviks were tolerant of opposing political factions. Upon seizing state power, they organized a parliament, the Russian Constituent Assembly. On 25 November, an election was held. Despite being the party that had overthrown the Provisional Government and organized the assembly, the Bolsheviks lost the election. Rather than govern as a coalition, the Bolsheviks banned all political opposition. Historians point to this as the start of communist authoritarianism. The conservative historian Robert Service states that Lenin "aided the foundations of dictatorship and lawlessness. He had consolidated the principle of state penetration of the whole society, its economy and its culture. Lenin had practiced terror and advocated revolutionary amoralism." Lenin allowed for a certain amount of disagreement and debate, but only within the highest organs of the Bolshevik party, in keeping with the practice of democratic centralism. The RSFSR, and later the Soviet Union, continued to practice political repression until its dissolution in 1991.
Trotskyist theoreticians have disputed the view that a one-party state was a natural outgrowth of the Bolsheviks' actions. George Novack stressed the initial efforts by the Bolsheviks to form a government with the Left Socialist Revolutionaries and to bring other parties, such as the Mensheviks, into political legality. Tony Cliff argued that the Bolshevik–Left Socialist Revolutionary coalition government dissolved the Constituent Assembly for a number of reasons, citing the outdated voter rolls, which did not acknowledge the split within the Socialist Revolutionary Party, and the assembly's conflict with the Congress of Soviets as an alternative democratic structure. The Trotskyist historian Vadim Rogovin believed Stalinism had "discredited the idea of socialism in the eyes of millions of people throughout the world". Rogovin also argued that the Left Opposition, led by Leon Trotsky, was a political movement "which offered a real alternative to Stalinism, and that to crush this movement was the primary function of the Stalinist terror".
Cultural portrayal
Literature
The Twelve (1918) by the Symbolist poet Aleksandr Blok and Mystery-Bouffe (1918) and 150 000 000 by the Futurist poet Vladimir Mayakovsky were among the first poetic responses to the Revolution.
The White Guard (1925) by Mikhail Bulgakov, a partially autobiographical novel portraying the life of one family torn apart by the uncertainty of the Civil War years; his short novel Heart of a Dog (1925) has been interpreted as a satirical allegory of the Revolution.
The Life of Klim Samgin (1927–1936) by Maxim Gorky, a novel with a controversial reputation sometimes described as an example of Modernist literature, portrays the decline of the Russian intelligentsia from the early 1870s to the Revolution as seen by a middle-class intellectual over the course of his life.
Chevengur (1929) by Andrei Platonov depicts the Revolution and the Civil War in a grotesque way, in the form of a Modernist parable, as a struggle between utopia and dystopia that confounds the two, accompanied by motifs of death and apocalypse.
Mikhail Sholokhov's novel Quiet Flows the Don (1928–1940) describes the lives of the Don Cossacks during World War I, the Revolution, and the Civil War.
George Orwell's classic novella Animal Farm (1945) is an allegory of the Russian Revolution and its aftermath. It depicts the dictator Joseph Stalin as a large Berkshire boar named "Napoleon". Trotsky is represented by a pig called Snowball, a brilliant talker who makes magnificent speeches. However, Napoleon overthrows Snowball as Stalin overthrew Trotsky, and Napoleon takes over the farm the animals live on. Napoleon becomes a tyrant and uses force and propaganda to oppress the animals, while teaching them that they are free.
Doctor Zhivago (1957) by Boris Pasternak describes the fate of the Russian intelligentsia; the events take place between the Revolution of 1905 and World War II.
The Red Wheel (1984–1991) by Aleksandr Solzhenitsyn, a cycle of novels that describes the fall of the Russian Empire and the establishment of the Soviet Union.
Film
The Russian Revolution has been portrayed in or served as backdrop for many films. Among them, in order of release date:
The End of Saint Petersburg. 1927. Directed by Vsevolod Pudovkin and Mikhail Doller, USSR
October: Ten Days That Shook the World. 1927. Directed by Sergei Eisenstein and Grigori Aleksandrov. Soviet Union. Black and white. Silent.
Scarlet Dawn, a 1932 Pre-Code American romantic drama starring Douglas Fairbanks, Jr. and Nancy Carroll caught up in the fallout of the Russian Revolution.
Knight Without Armour. 1937. A British historical drama starring Marlene Dietrich and Robert Donat, with Dietrich as an imperiled aristocrat on the eve of the Russian Revolution.
Lenin in 1918. 1939. Directed by Mikhail Romm, E. Aron, and I. Simkov. Historical-revolutionary film about Lenin's activities in the first years of Soviet power.
Doctor Zhivago. 1965. A drama-romance-war film directed by David Lean, filmed in Europe with a largely European cast, loosely based on the famous novel of the same name by Boris Pasternak.
Reds. 1981. Directed by Warren Beatty, it is based on the book Ten Days that Shook the World.
Anastasia. 1997. An American animated feature, directed by Don Bluth and Gary Goldman.
See also
Index of articles related to the Russian Revolution and Civil War
April Crisis
Foreign relations of the Soviet Union
Iranian Revolution
Arthur Ransome
Paris Commune
Preference falsification
Ten Days That Shook the World
Explanatory footnotes
References
Sources
Further reading
Historiography
Participants' accounts
Primary sources
Includes private letters, press editorials, government decrees, diaries, philosophical tracts, belles-lettres, and memoirs.
External links
Albert, Gleb: Labour Movements, Trade Unions and Strikes (Russian Empire), in: 1914-1918-online. International Encyclopedia of the First World War.
The Bolsheviks and workers' control: the state and counter-revolution - Maurice Brinton
Brudek, Paweł: Revolutions (East Central Europe), in: 1914-1918-online. International Encyclopedia of the First World War.
Orlando Figes's free educational website on the Russian Revolution and Soviet history, May 2014
Gaida, Fedor Aleksandrovich: Governments, Parliaments and Parties (Russian Empire), in: 1914-1918-online. International Encyclopedia of the First World War.
Gatrell, Peter: Organization of War Economies (Russian Empire), in: 1914-1918-online. International Encyclopedia of the First World War.
Violence and Revolution in 1917. Mike Haynes for Jacobin. 17 July 2017.
Marks, Steven G.: War Finance (Russian Empire), in: 1914-1918-online. International Encyclopedia of the First World War.
Mawdsley, Evan: International Responses to the Russian Civil War (Russian Empire), in: 1914-1918-online. International Encyclopedia of the First World War.
Melancon, Michael S.: Social Conflict and Control, Protest and Repression (Russian Empire), in: 1914-1918-online. International Encyclopedia of the First World War.
Kevin Murphy's Isaac and Tamara Deutscher Memorial Prize lecture "Can we Write the History of the Russian Revolution?", which examines historical accounts of 1917 in the light of newly accessible archive material.
Read, Christopher: Revolutions (Russian Empire), in: 1914-1918-online. International Encyclopedia of the First World War.
Sanborn, Joshua A.: Russian Empire, in: 1914-1918-online. International Encyclopedia of the First World War.
Schell, Jonathan "The Mass Minority in Action: France and Russia"—Chapter 6 of The Unconquerable World: Power, Nonviolence and the Will of the People. Thanks to Trotsky, the 'insurrection' was bloodless.
Sumpf, Alexandre: Russian Civil War, in: 1914-1918-online. International Encyclopedia of the First World War.
Soviet history archive at www.marxists.org
Archival footage of the Russian Revolution // Net-Film Newsreels and Documentary Films Archive
A summary of the key events and factors of the 1917 Russian Revolution.
Red wolf
https://en.wikipedia.org/wiki/Red_wolf
The red wolf (Canis rufus) is a canine native to the southeastern United States. Its size is intermediate between the coyote (Canis latrans) and gray wolf (Canis lupus).
The red wolf's taxonomic classification as a separate species has been contentious for nearly a century; it has been classified either as a subspecies of the gray wolf, Canis lupus rufus, or as a coywolf (a genetic admixture of wolf and coyote). Because of this, it is sometimes excluded from endangered species lists despite its critically low numbers. Under the Endangered Species Act of 1973, the U.S. Fish and Wildlife Service recognizes the red wolf as an endangered species and grants it protected status. Since 1996, the IUCN has listed the red wolf as a Critically Endangered species; however, it is not listed in the CITES Appendices of endangered species.
History
Red wolves were once distributed throughout the southeastern and south-central United States from the Atlantic Ocean to central Texas, southeastern Oklahoma and southwestern Illinois in the west, and in the north from the Ohio River Valley, northern Pennsylvania, southern New York, and extreme southern Ontario in Canada south to the Gulf of Mexico. The red wolf was nearly driven to extinction by the mid-1900s due to aggressive predator-control programs, habitat destruction, and extensive hybridization with coyotes. By the late 1960s, it occurred in small numbers in the Gulf Coast of western Louisiana and eastern Texas.
Fourteen of these survivors were selected to be the founders of a captive-bred population, which was established in the Point Defiance Zoo and Aquarium between 1974 and 1980. After a successful experimental relocation to Bulls Island off the coast of South Carolina in 1978, the red wolf was declared extinct in the wild in 1980 so that restoration efforts could proceed. In 1987, the captive animals were released into the Alligator River National Wildlife Refuge (ARNWR) on the Albemarle Peninsula in North Carolina, with a second unsuccessful release taking place two years later in the Great Smoky Mountains National Park. Of 63 red wolves released from 1987 to 1994, the population rose to as many as 100–120 individuals in 2012, but due to the lack of regulation enforcement by the US Fish and Wildlife Service, the population has declined to 40 individuals in 2018, about 14 in 2019 and 8 as of October 2021. No wild litters were born between 2019 and 2020.
Under pressure from conservation groups, the US Fish and Wildlife Service resumed reintroductions in 2021 and increased protection. In 2022, the first wild litter was born since 2018. As of 2023, there are between 15 and 17 wild red wolves in ARNWR.
Description and behavior
The red wolf's appearance is typical of the genus Canis, and is generally intermediate in size between the coyote and gray wolf, though some specimens may overlap in size with small gray wolves. A study of Canis morphometrics conducted in eastern North Carolina reported that red wolves are morphometrically distinct from coyotes and hybrids. Adults measure 136–165 cm (53.5–65 in) in length, comprising a tail of about 37 cm (14.6 in). Their weight ranges from 20 to 39 kg (44–85 lbs) with males averaging 29 kg (64 lbs) and females 25 kg (55 lbs). Its pelage is typically more reddish and sparsely furred than the coyote's and gray wolf's, though melanistic individuals do occur. Its fur is generally tawny to grayish in color, with light markings around the lips and eyes. The red wolf has been compared by some authors to the greyhound in general form, owing to its relatively long and slender limbs. The ears are also proportionately larger than the coyote's and gray wolf's. The skull is typically narrow, with a long and slender rostrum, a small braincase and a well developed sagittal crest. Its cerebellum is unlike that of other Canis species, being closer in form to that of canids of the Vulpes and Urocyon genera, thus indicating that the red wolf is one of the more plesiomorphic members of its genus.
The red wolf is more sociable than the coyote, but less so than the gray wolf. It mates in January–February, with an average of 6–7 pups being born in March, April, and May. It is monogamous, with both parents participating in the rearing of young. Denning sites include hollow tree trunks, along stream banks and the abandoned earths of other animals. By the age of six weeks, the pups distance themselves from the den, and reach full size at the age of one year, becoming sexually mature two years later.
Using long-term data on red wolf individuals of known pedigree, it was found that inbreeding among first-degree relatives was rare. A likely mechanism for avoidance of inbreeding is independent dispersal trajectories from the natal pack. Many of the young wolves spend time alone or in small non-breeding packs composed of unrelated individuals. The union of two unrelated individuals in a new home range is the predominant pattern of breeding pair formation. Inbreeding is avoided because it results in progeny with reduced fitness (inbreeding depression) that is predominantly caused by the homozygous expression of recessive deleterious alleles.
Prior to its extinction in the wild, the red wolf's diet consisted of rabbits, rodents, and nutria (an introduced species). In contrast, the red wolves from the restored population rely on white-tailed deer, pig, raccoon, rice rats, muskrats, nutria, rabbits and carrion. White-tailed deer were largely absent from the last wild refuge of red wolves on the Gulf Coast between Texas and Louisiana (where specimens were trapped from the last wild population for captive breeding), which likely accounts for the discrepancy in their dietary habits listed here. Historical accounts of wolves in the southeast by early explorers such as William Hilton, who sailed along the Cape Fear River in what is now North Carolina in 1644, also note that they ate deer.
Predation
In Florida, red wolves may be preyed upon, at some stage of their growth, by invasive snakes such as Burmese pythons, reticulated pythons, Southern African rock pythons, Central African rock pythons, boa constrictors, yellow anacondas, Bolivian anacondas, dark-spotted anacondas, and green anacondas.
Range and habitat
The originally recognized red wolf range extended throughout the southeastern United States from the Atlantic and Gulf Coasts, north to the Ohio River Valley and central Pennsylvania, and west to Central Texas and southeastern Missouri. Research into paleontological, archaeological and historical specimens of red wolves by Ronald Nowak expanded their known range to include land south of the Saint Lawrence River in Canada, along the eastern seaboard, and west to Missouri and mid-Illinois, terminating in the southern latitudes of Central Texas.
Given their wide historical distribution, red wolves probably used a large suite of habitat types at one time. The last naturally occurring population used coastal prairie marshes, swamps, and agricultural fields used to grow rice and cotton. However, this environment probably does not typify preferred red wolf habitat. Some evidence shows the species was found in highest numbers in the once extensive bottom-land river forests and swamps of the southeastern United States. Red wolves reintroduced into northeastern North Carolina have used habitat types ranging from agricultural lands to forest/wetland mosaics characterized by an overstory of pine and an understory of evergreen shrubs. This suggests that red wolves are habitat generalists and can thrive in most settings where prey populations are adequate and persecution by humans is slight.
Extirpation in the wild
In 1940, the biologist Stanley P. Young noted that the red wolf was still common in eastern Texas, where more than 800 had been caught in 1939 because of their attacks on livestock. He did not believe that they could be exterminated because of their habit of living concealed in thickets. In 1962 a study of skull morphology of wild Canis in the states of Arkansas, Louisiana, Oklahoma, and Texas indicated that the red wolf existed in only a few populations due to hybridization with the coyote. The explanation was that either the red wolf could not adapt to changes to its environment due to human land-use along with its accompanying influx of competing coyotes from the west, or that the red wolf was being hybridized out of existence by the coyote.
Reintroduced habitat
Since 1987, red wolves have been released into northeastern North Carolina, where they roam 1.7 million acres. These lands span five counties (Dare, Hyde, Tyrrell, Washington, and Beaufort) and include three national wildlife refuges, a U.S. Air Force bombing range, and private land. The red wolf recovery program is unique for a large carnivore reintroduction in that more than half of the land used for reintroduction lies on private property. Approximately are federal and state lands, and are private lands.
Beginning in 1991, red wolves were also released into the Great Smoky Mountains National Park in eastern Tennessee. However, due to exposure to environmental disease (parvovirus), parasites, and competition (with coyotes as well as intraspecific aggression), the red wolf was unable to successfully establish a wild population in the park. Low prey density was also a problem, forcing the wolves to leave the park boundaries in pursuit of food in lower elevations. In 1998, the FWS took away the remaining red wolves in the Great Smoky Mountains National Park, relocating them to Alligator River National Wildlife Refuge in eastern North Carolina. Other red wolves have been released on the coastal islands in Florida, Mississippi, and South Carolina as part of the captive breeding management plan. St. Vincent Island in Florida is currently the only active island propagation site.
Captive breeding and reintroduction
After the passage of the Endangered Species Act of 1973, formal efforts backed by the U.S. Fish and Wildlife Service began to save the red wolf from extinction, when a captive-breeding program was established at the Point Defiance Zoological Gardens, Tacoma, Washington. Four hundred animals were captured from southwestern Louisiana and southeastern Texas from 1973 to 1980 by the USFWS.
Measurements, vocalization analyses, and skull X-rays were used to distinguish red wolves from coyotes and red wolf × coyote hybrids. Of the 400 canids captured, only 43 were believed to be red wolves and sent to the breeding facility. The first litters were produced in captivity in May 1977. Some of the pups were determined to be hybrids, and they and their parents were removed from the program. Of the original 43 animals, only 17 were considered pure red wolves and since three were unable to breed, 14 became the breeding stock for the captive-breeding program. These 14 were so closely related that they had the genetic effect of being only eight individuals.
In 1996, the red wolf was listed by the International Union for Conservation of Nature as a critically endangered species.
20th century releases
1976 release in Cape Romain National Wildlife Refuge
In December 1976, two wolves were released onto Cape Romain National Wildlife Refuge's Bulls Island in South Carolina with the intent of testing and honing reintroduction methods. They were not released with the intent of beginning a permanent population on the island. The first experimental translocation lasted for 11 days, during which a mated pair of red wolves was monitored day and night with remote telemetry. A second experimental translocation was tried in 1978 with a different mated pair, and they were allowed to remain on the island for close to nine months. After that, a larger project was executed in 1987 to reintroduce a permanent population of red wolves back to the wild in the Alligator River National Wildlife Refuge (ARNWR) on the eastern coast of North Carolina. Also in 1987, Bulls Island became the first island breeding site. Pups were raised on the island and relocated to North Carolina until 2005.
1986 release in Alligator River National Wildlife Refuge
In September 1987, four male-female pairs of red wolves were released in the Alligator River National Wildlife Refuge, in northeastern North Carolina, and designated as an experimental population. Since then, the experimental population has grown and the recovery area expanded to include four national wildlife refuges, a Department of Defense bombing range, state-owned lands, and private lands, encompassing about .
1989 release on Horn Island, Mississippi
In 1989, the second island propagation project was initiated with release of a population on Horn Island off the Mississippi coast. This population was removed in 1998 because of a likelihood of encounters with humans. The third island propagation project introduced a population on St. Vincent Island, Florida, offshore between Cape San Blas and Apalachicola, Florida, in 1990, and in 1997, the fourth island propagation program introduced a population to Cape St. George Island, Florida, south of Apalachicola.
1991 release in the Great Smoky Mountains
In 1991, two pairs were reintroduced into the Great Smoky Mountains National Park, where the last known red wolf was killed in 1905. Despite some early success, the wolves were relocated to eastern North Carolina in 1998, ending the effort to reintroduce the species to the park.
21st century status
Over 30 facilities participate in the red wolf Species Survival Plan and oversee the breeding and reintroduction of over 150 wolves.
In 2007, the USFWS estimated that 300 red wolves remained in the world, with 207 of those in captivity. By late 2020, the number of wild individuals had shrunk to only about 7 radio-collared and a dozen uncollared individuals, with no wild pups born since 2018. This decline has been linked to shooting and poisoning of wolves by landowners, and suspended conservation efforts by the USFWS.
A 2019 analysis by the Center for Biological Diversity of available habitat throughout the red wolf's former range found that over 20,000 square miles of public land across five sites had viable habitat for red wolves to be reintroduced to in the future. These sites were chosen based on prey levels, isolation from coyotes and human development, and connectivity with other sites. These sites include: the Apalachicola and Osceola National Forests along with the Okefenokee National Wildlife Refuge and nearby protected lands; numerous national parks and national forests in the Appalachian Mountains including the Monongahela, George Washington & Jefferson, Cherokee, Pisgah, Nantahala, Chattahoochee, and Talladega National Forests along with Shenandoah National Park and the lower elevations of Great Smoky Mountains National Park; Croatan National Forest and Hofmann Forest on the North Carolina coast; and the Ozark, Ouachita, and Mark Twain National Forests in the central United States.
In late 2018, two canids that are largely coyote were found on Galveston Island, Texas with red wolf alleles (gene variants) left from a ghost population of red wolves. Since these alleles are from a different population from the red wolves in the North Carolina captive breeding program, there has been a proposal to selectively cross-breed the Galveston Island coyotes into the captive red wolf population. Another study published around the same time analyzing canid scat and hair samples in southwestern Louisiana found genetic evidence of red wolf ancestry in about 55% of sampled canids, with one such individual having between 78 and 100% red wolf ancestry, suggesting the possibility of more red wolf genes in the wild that may not be present in the captive population.
From 2015 to 2019, there were no red wolves released into the wild. But in March 2020, the FWS released a new breeding pair of red wolves, including a young male from St. Vincent Island, Florida, into the Alligator River National Wildlife Refuge. The pair were unsuccessful at producing a litter of pups in the wild. On March 1, 2021, two male red wolves from Florida were paired with two female wild red wolves from eastern North Carolina and released into the wild. One of the male wolves was killed by a car shortly after being released. On April 30 and May 1, four adult red wolves were released into the wild and four red wolf pups were fostered by a wild female red wolf. Counting the eight released wolves, the total number of red wolves living in the wild amounted to nearly thirty individuals, including a dozen other wolves not wearing radio collars.
A study published in 2020 reported camera traps recorded "the presence of a large canid possessing wolf-like characters" in northeast Texas and later hair samples and tracks from the area indicated the presence of red wolves.
By fall of 2021, a total of six red wolves had been killed, including the four adults that had been released in the spring. Three of the released adults had been killed in vehicle collisions, two had died from unknown causes, and the fourth released adult had been shot by a landowner who feared the wolf was attempting to get at his chickens. These losses dropped the number of wolves in the wild down to about 20 wild individuals. In the winter of 2021–2022, the Fish and Wildlife Service selected nine captive adult red wolves to be released into the wild. A family of five red wolves was released into the Pocosin Lakes National Wildlife Refuge, while two new breeding pairs of adult wolves were released into the Alligator River National Wildlife Refuge. The release of these new wolves brought the number of wild red wolves in eastern North Carolina to just under 30 individuals.
On April 22, 2022, one of the breeding pairs of adult red wolves produced a litter of six wolf pups, four females and two males. This new litter of red wolf pups became the first litter born in the wild since 2018. As of 2023, there are between 15 and 17 wild red wolves in Alligator River National Wildlife Refuge.
Existing population
In April and May 2023, two captive male red wolves were paired with two wild female wolves in acclimation pens and were later released into the wild. At the same time, the wild breeding pair that produced a litter of pups the previous year gave birth to a second litter of 5 pups, 2 males and 3 females. A male wolf pup from a captive litter was fostered into the pack, and with this new addition, the family of red wolves, which the FWS named the Milltail pack, grew to 13 wild individuals. These six new pups have brought the wild population of red wolves up to 23–25 wild individuals.
In May 2023, two families of red wolves were placed in acclimation pens to be released into the wild in the Pocosin Lakes National Wildlife Refuge in Tyrrell County. One family consisted of a breeding pair and three pups, while the other consisted of a breeding pair, a yearling female, and four young pups that were born in the acclimation pen. In early June 2023, the two families of red wolves were released into the wild to roam through PLNWR. With the addition of these two separate packs, the wild population of red wolves had increased to about 35 wild individuals. In addition to the wild population, there are approximately 270 red wolves in zoos and captive breeding programs across the U.S.
Coyote × re-introduced red wolf issues
Interbreeding with the coyote has been recognized as a threat affecting the restoration of red wolves. Adaptive management efforts are making progress in reducing the threat of coyotes to the red wolf population in northeastern North Carolina. Other threats, such as habitat fragmentation, disease, and human-caused mortality, are of concern in the restoration of red wolves. Efforts to reduce the threats are presently being explored.
By 1999, introgression of coyote genes was recognized as the single greatest threat to wild red wolf recovery and an adaptive management plan which included coyote sterilization has been successful, with coyote genes being reduced by 2015 to less than 4% of the wild red wolf population.
Since the 2014 programmatic review, the USFWS has ceased implementing the red wolf adaptive management plan, which had been responsible for preventing red wolf hybridization with coyotes and had allowed the release of captive-born red wolves into the wild population. Since then, the wild population has decreased from 100–115 red wolves to fewer than 30. Despite the controversy over the red wolf's status as a unique taxon, as well as the USFWS's apparent lack of interest in wild wolf conservation, the vast majority of public comments (including those from North Carolina residents) submitted to the USFWS in 2017 over its new wolf management plan were in favor of the original wild conservation plan.
A 2016 genetic study of canid scats found that despite high coyote density inside the Red Wolf Experimental Population Area (RWEPA), hybridization occurs rarely (4% are hybrids).
Contested killing of re-introduced red wolves
High wolf mortality related to anthropogenic causes appeared to be the main factor limiting wolf dispersal westward from the RWEPA. High anthropogenic wolf mortality similarly limits expansion of eastern wolves outside of protected areas in south-eastern Canada.
In 2012, the Southern Environmental Law Center filed a lawsuit against the North Carolina Wildlife Resources Commission for jeopardizing the existence of the wild red wolf population by allowing nighttime hunting of coyotes in the five-county restoration area in eastern North Carolina. A 2014 court-approved settlement agreement was reached that banned nighttime hunting of coyotes and requires permitting and reporting coyote hunting. In response to the settlement, the North Carolina Wildlife Resources Commission adopted a resolution requesting the USFWS to remove all wild red wolves from private lands, terminate recovery efforts, and declare red wolves extinct in the wild. This resolution came in the wake of a 2014 programmatic review of the red wolf conservation program conducted by The Wildlife Management Institute. The Wildlife Management Institute indicated the reintroduction of the red wolf was an incredible achievement. The report indicated that red wolves could be released and survive in the wild, but that illegal killing of red wolves threatens the long-term persistence of the population. The report stated that the USFWS needed to update its red wolf recovery plan, thoroughly evaluate its strategy for preventing coyote hybridization and increase its public outreach.
In 2014, the USFWS issued the first take permit for a red wolf to a private landowner. Since then, the USFWS issued several other take permits to landowners in the five-county restoration area. During June 2015, a landowner shot and killed a female red wolf after being authorized a take permit, causing a public outcry. In response, the Southern Environmental Law Center filed a lawsuit against the USFWS for violating the Endangered Species Act.
By 2016, the red wolf population of North Carolina had declined to 45–60 wolves. The largest cause of this decline was gunshot mortality.
In June 2018, the USFWS announced a proposal that would limit the wolves' safe range to only Alligator River National Wildlife Refuge, where only about 35 wolves remain, thus allowing hunting on private land. In November 2018, Chief Judge Terrence W. Boyle found that the USFWS had violated its congressional mandate to protect the red wolf, and ruled that USFWS had no power to give landowners the right to shoot them.
Relationship to humans
Since before European colonization of the Americas, the red wolf has featured prominently in Cherokee spiritual beliefs, where it is known as wa'ya (ᏩᏯ) and is said to be the companion of Kana'ti, the hunter and father of the Aniwaya, or Wolf Clan. Traditionally, Cherokee people generally avoid killing red wolves, as such an act is believed to bring about the vengeance of the killed animals' pack-mates.
Taxonomy
The taxonomic status of the red wolf is debated. It has been described as either a species with a distinct lineage, a recent hybrid of the gray wolf and the coyote, an ancient hybrid of the gray wolf and the coyote which warrants species status, or a distinct species that has undergone recent hybridization with the coyote.
The naturalists John James Audubon and John Bachman were the first to suggest that the wolves of the southern United States were different from wolves in its other regions. In 1851, they recorded the "Black American Wolf" as C. l. var. ater that existed in Florida, South Carolina, North Carolina, Kentucky, southern Indiana, southern Missouri, Louisiana, and northern Texas. They also recorded the "Red Texan Wolf" as C. l. var. rufus that existed from northern Arkansas, through Texas, and into Mexico. In 1912, the zoologist Gerrit Smith Miller Jr. noted that the designation ater was unavailable and recorded these wolves as C. l. floridanus.
In 1937, the zoologist Edward Alphonso Goldman proposed a new species of wolf Canis rufus. Three subspecies of red wolf were originally recognized by Goldman, with two of these subspecies now being extinct. The Florida black wolf (Canis rufus floridanus) (Maine to Florida) has been extinct since 1908 and the Texas red wolf (Canis rufus rufus) (south-central United States) was declared extinct by 1970. By the 1970s, the Mississippi Valley red wolf (Canis rufus gregoryi) existed only in the coastal prairies and marshes of extreme southeastern Texas and southwestern Louisiana. These were removed from the wild to form a captive breeding program and reintroduced into eastern North Carolina in 1987.
In 1967, the zoologists Barbara Lawrence and William H. Bossert believed that the case for classifying C. rufus as a species was based too heavily on the small red wolves of central Texas, where hybridization with the coyote was known to occur. They said that if an adequate number of specimens had been included from Florida, the separation of C. rufus from C. lupus would have been unlikely. The taxonomic reference Catalogue of Life classifies the red wolf as a subspecies of Canis lupus. The mammalogist W. Christopher Wozencraft, writing in Mammal Species of the World (2005), regarded the red wolf as a hybrid of the gray wolf and the coyote but, because of its uncertain status, compromised by recognizing it as a subspecies of the gray wolf, Canis lupus rufus.
In 2021, the American Society of Mammalogists considered the red wolf as its own species (Canis rufus).
Taxonomic debate
When European settlers first arrived to North America, the coyote's range was limited to the western half of the continent. They existed in the arid areas and across the open plains, including the prairie regions of the midwestern states. Early explorers found some in Indiana and Wisconsin. From the mid-1800s onward, coyotes began expanding beyond their original range.
The taxonomic debate regarding North American wolves is outlined in the sections below.
Fossil evidence
The paleontologist Ronald M. Nowak notes that the oldest fossil remains of the red wolf are 10,000 years old and were found in Florida near Melbourne, Brevard County, Withlacoochee River, Citrus County, and Devil's Den Cave, Levy County. He notes that there are only a few, but questionable, fossil remains of the gray wolf found in the southeastern states. He proposes that following the extinction of the dire wolf, the coyote appears to have been displaced from the southeastern US by the red wolf until the last century, when the extirpation of wolves allowed the coyote to expand its range. He also proposes that the ancestor of all North American and Eurasian wolves was C. mosbachensis, which lived in the Middle Pleistocene 700,000–300,000 years ago.
C. mosbachensis was a wolf that once lived across Eurasia before going extinct. It was smaller than most North American wolf populations and smaller than C. rufus, and has been described as being similar in size to the small Indian wolf, Canis lupus pallipes. He further proposes that C. mosbachensis invaded North America where it became isolated by the later glaciation and there gave rise to C. rufus. In Eurasia, C. mosbachensis evolved into C. lupus, which later invaded North America.
The paleontologist Xiaoming Wang, an expert on the natural history of the genus Canis, examined red wolf fossil material but could not state whether it was or was not a separate species. He said that Nowak had put together more morphometric data on red wolves than anybody else, but that Nowak's statistical analysis of the data revealed a red wolf that is difficult to deal with. Wang proposes that studies of ancient DNA taken from fossils might help settle the debate. In 2009, Tedford, Wang and Taylor reclassified the purported red wolf fossils as Canis armbrusteri and Canis edwardii.
Morphological evidence
In 1771, the English naturalist Mark Catesby referred to Florida and the Carolinas when he wrote that "The Wolves in America are like those of Europe, in shape and colour, but are somewhat smaller." They were described as being more timid and less voracious. In 1791, the American naturalist William Bartram wrote in his book Travels about a wolf which he had encountered in Florida that was larger than a dog, but was black in contrast to the larger yellow-brown wolves of Pennsylvania and Canada. In 1851, the naturalists John James Audubon and John Bachman described the "Red Texan Wolf" in detail. They noted that it could be found in Florida and other southeastern states, but it differed from other North American wolves and named it Canis lupus rufus. It was described as being more fox-like than the gray wolf, but retaining the same "sneaking, cowardly, yet ferocious disposition".
In 1905, the mammalogist Vernon Bailey referred to the "Texan Red Wolf" with the first use of the name Canis rufus. In 1937, the zoologist Edward Goldman undertook a morphological study of southeastern wolf specimens. He noted that their skulls and dentition differed from those of gray wolves and closely approached those of coyotes. He identified the specimens as all belonging to the one species which he referred to as Canis rufus. Goldman then examined a large number of southeastern wolf specimens and identified three subspecies, noting that their colors ranged from black, gray, and cinnamon-buff.
It is difficult to distinguish the red wolf from a red wolf × coyote hybrid. During the 1960s, two studies of the skull morphology of wild Canis in the southeastern states found them to belong to the red wolf, the coyote, or many variations in between. The conclusion was that there had been recent massive hybridization with the coyote. In contrast, another 1960s study of Canis morphology concluded that the red wolf, eastern wolf, and domestic dog were closer to the gray wolf than to the coyote, while still remaining clearly distinct from each other. The study regarded these three canids as subspecies of the gray wolf. However, the study noted that "red wolf" specimens taken from the edge of the range they shared with the coyote could not be attributed to any one species because the cranial variation was very wide. The study proposed further research to ascertain whether hybridization had occurred.
In 1971, a study of the skulls of C. rufus, C. lupus and C. latrans indicated that C. rufus was distinguishable by being in size and shape midway between the gray wolf and the coyote. A re-examination of museum canine skulls collected from central Texas between 1915 and 1918 showed variations spanning from C. rufus through to C. latrans. The study proposes that by 1930 due to human habitat modification, the red wolf had disappeared from this region and had been replaced by a hybrid swarm. By 1969, this hybrid swarm was moving eastwards into eastern Texas and Louisiana.
In the late 19th century, sheep farmers in Kerr County, Texas, stated that the coyotes in the region were larger than normal coyotes, and they believed that they were a gray wolf and coyote cross. In 1970, the wolf mammalogist L. David Mech proposed that the red wolf was a hybrid of the gray wolf and coyote. However, a 1971 study compared the cerebellum within the brain of six Canis species and found that the cerebellum of the red wolf indicated a distinct species, was closest to that of the gray wolf, but in contrast indicated some characteristics that were more primitive than those found in any of the other Canis species. In 2014, a three-dimensional morphometrics study of Canis species accepted only six red wolf specimens for analysis from those on offer, due to the impact of hybridization on the others.
DNA studies
Different DNA studies may give conflicting results because of the specimens selected, the technology used, and the assumptions made by the researchers.
Phylogenetic trees compiled using different genetic markers have given conflicting results on the relationship between the wolf, dog, and coyote. One study based on SNPs (single-nucleotide mutations), and another based on nuclear gene sequences (taken from the cell nucleus), showed dogs clustering with coyotes and separate from wolves. Another study based on SNPs showed wolves clustering with coyotes and separate from dogs. Other studies based on a number of markers show the more widely accepted result of wolves clustering with dogs, separate from coyotes. These results demonstrate that caution is needed when interpreting the results provided by genetic markers.
Genetic marker evidence
In 1980, a study used gel electrophoresis to look at fragments of DNA taken from dogs, coyotes, and wolves from the red wolf's core range. The study found that a unique allele (a variant form of a gene) associated with lactate dehydrogenase could be found in red wolves, but not in dogs and coyotes. The study suggested that this allele survives in the red wolf. The study did not compare gray wolves for the existence of this allele.
Mitochondrial DNA (mDNA) passes along the maternal line and can date back thousands of years. In 1991, a study of red wolf mDNA indicated that red wolf genotypes matched those known to belong to either the gray wolf or the coyote. The study concluded that the red wolf is either a wolf × coyote hybrid or a species that has hybridized with the wolf and the coyote across its entire range. The study proposed that the red wolf is a southeastern-occurring subspecies of the gray wolf that has undergone hybridization due to an expanding coyote population, but that, being unique and threatened, it should remain protected. This conclusion led to debate for the remainder of the decade.
In 2000, a study looked at red wolves and eastern Canadian wolves. The study agreed that these two wolves readily hybridize with the coyote. The study used eight microsatellites (genetic markers taken from across the genome of a specimen). The phylogenetic tree produced from the genetic sequences showed red wolves and eastern Canadian wolves clustering together; these then clustered more closely with the coyote than with the gray wolf. A further analysis using mDNA sequences indicated the presence of coyote ancestry in both of these wolves, and that the two wolves had diverged from the coyote 150,000–300,000 years ago. No gray wolf sequences were detected in the samples. The study proposed that these findings are inconsistent with the two wolves being subspecies of the gray wolf, that red wolves and eastern Canadian wolves evolved in North America after having diverged from the coyote, and that this is why they are more likely to hybridize with coyotes.
In 2009, a study of eastern Canadian wolves using microsatellites, mDNA, and the paternally-inherited yDNA markers found that the eastern Canadian wolf was a unique ecotype of the gray wolf that had undergone recent hybridization with other gray wolves and coyotes. It could find no evidence to support the findings of the earlier 2000 study regarding the eastern Canadian wolf. The study did not include the red wolf.
In 2011, a study compared the genetic sequences of 48,000 single nucleotide polymorphisms (mutations) taken from the genomes of canids from around the world. The comparison indicated that the red wolf was about 76% coyote and 24% gray wolf, with hybridization having occurred 287–430 years ago. The eastern wolf was 58% gray wolf and 42% coyote, with hybridization having occurred 546–963 years ago. The study rejected the theory of a common ancestry for the red and eastern wolves. However, the next year a study reviewed a subset of the 2011 study's single-nucleotide polymorphism (SNP) data and proposed that its methodology had skewed the results and that the red and eastern wolves are not hybrids but are in fact the same species, separate from the gray wolf. The 2012 study proposed that there are three true Canis species in North America: the gray wolf, the western coyote, and the red wolf / eastern wolf. The eastern wolf was represented by the Algonquin wolf. The Great Lakes wolf was found to be a hybrid of the eastern wolf and the gray wolf. Finally, the study found the eastern coyote itself to be yet another hybrid, between the western coyote and the eastern (Algonquin) wolf (for more on eastern North American wolf-coyote hybrids, see coywolf).
Also in 2011, a scientific literature review was undertaken to help assess the taxonomy of North American wolves. One of the findings proposed was that the eastern wolf is supported as a separate species by morphological and genetic data. Genetic data supports a close relationship between the eastern and red wolves, but not close enough to support these as one species. It was "likely" that these were the separate descendants of a common ancestor shared with coyotes. This review was published in 2012. In 2014, the National Center for Ecological Analysis and Synthesis was invited by the United States Fish and Wildlife Service to provide an independent review of its proposed rule relating to gray wolves. The center's panel findings were that the proposed rule depended heavily upon a single analysis contained in a scientific literature review by Chambers et al. (2011), that that study was not universally accepted, that the issue was "not settled", and that the rule does not represent the "best available science".
Brzeski et al. (2016) conducted an mDNA analysis of three ancient (300–1,900 years old) wolf-like samples from the southeastern United States and found that they grouped with the coyote clade, although their teeth were wolf-like. The study proposed that the specimens were either coyotes, which would mean that coyotes had occupied this region continuously rather than intermittently; a North American-evolved red wolf lineage related to coyotes; or an ancient coyote–wolf hybrid. Given the age of these samples, ancient hybridization between wolves and coyotes would likely have been due to natural events or early human activities, not landscape changes associated with European colonization. Coyote–wolf hybrids may have occupied the southeastern United States for a long time, filling an important niche as a medium-large predator.
Whole-genome evidence
In July 2016, a whole-genome DNA study proposed, based on the assumptions made, that all of the North American wolves and coyotes diverged from a common ancestor as recently as 6,000–117,000 years ago. The study also indicated that all North American wolves have a significant amount of coyote ancestry and all coyotes some degree of wolf ancestry, and that the red wolf and Great Lakes region wolf are highly admixed with different proportions of gray wolf and coyote ancestry. One test indicated a wolf/coyote divergence time of 51,000 years before present that matched other studies indicating that the extant wolf came into being around this time. Another test indicated that the red wolf diverged from the coyote between 55,000 and 117,000 years before present and the Great Lakes region wolf 32,000 years before present. Other tests and modelling showed various divergence ranges and the conclusion was a range of between 6,000 and 117,000 years before present. The study found that coyote ancestry was highest in red wolves from the southeast of the United States and lowest among the Great Lakes region wolves.
The theory proposed was that this pattern matched the south-to-north disappearance of the wolf due to European colonization and its resulting loss of habitat. Bounties led to the extirpation of wolves initially in the southeast, and as the wolf population declined wolf-coyote admixture increased. Later, this process occurred in the Great Lakes region with the influx of coyotes replacing wolves, followed by the expansion of coyotes and their hybrids across the wider region. The red wolf may possess some genomic elements that were unique to gray wolf and coyote lineages from the American South. The proposed timing of the wolf/coyote divergence conflicts with the finding of a coyote-like specimen in strata dated to 1 million years before present, and red wolf fossil specimens dating back 10,000 years. The study concluded by stating that because of the extirpation of gray wolves in the American Southeast, "the reintroduced population of red wolves in eastern North Carolina is doomed to genetic swamping by coyotes without the extensive management of hybrids, as is currently practiced by the USFWS."
In September 2016, the USFWS announced a program of changes to the red wolf recovery program and "will begin implementing a series of actions based on the best and latest scientific information". The service will secure the captive population which is regarded as not sustainable, determine new sites for additional experimental wild populations, revise the application of the existing experimental population rule in North Carolina, and complete a comprehensive Species Status Assessment.
In 2017, a group of canid researchers challenged the recent finding that the red wolf and the eastern wolf were the result of recent coyote-wolf hybridization. The group highlighted that no testing had been undertaken to ascertain the time period in which hybridization had occurred and that, by the previous study's own figures, the hybridization could not have occurred recently and instead supports a much more ancient hybridization. The group found deficiencies in the previous study's selection of specimens and the findings drawn from the different techniques used. Therefore, the group argued that both the red wolf and the eastern wolf remain genetically distinct North American taxa. This was rebutted by the authors of the earlier study. Another study in late 2018 of wild canids in southwestern Louisiana also supported the red wolf as a separate species, citing distinct red wolf DNA within hybrid canids.
In 2019, a literature review of the previous studies was undertaken by the National Academies of Sciences, Engineering, and Medicine. The position of the National Academies is that the historical red wolf forms a valid taxonomic species, the modern red wolf is distinct from wolves and coyotes, and modern red wolves trace some of their ancestry to historic red wolves. The species Canis rufus is supported for the modern red wolf, unless genomic evidence from historical red wolf specimens changes this assessment, due to a lack of continuity between the historic and the modern red wolves.
Wolf genome
Genetic studies relating to wolves or dogs have inferred phylogenetic relationships based on the only reference genome available, that of the Boxer dog. In 2017, the first reference genome of the wolf Canis lupus lupus was mapped to aid future research. In 2018, a study looked at the genomic structure and admixture of North American wolves, wolf-like canids, and coyotes using specimens from across their entire range that mapped the largest dataset of nuclear genome sequences against the wolf reference genome. The study supports the findings of previous studies that North American gray wolves and wolf-like canids were the result of complex gray wolf and coyote mixing. A polar wolf from Greenland and a coyote from Mexico represented the purest specimens. The coyotes from Alaska, California, Alabama, and Quebec show almost no wolf ancestry. Coyotes from Missouri, Illinois, and Florida exhibit 5–10% wolf ancestry. There was 40%:60% wolf to coyote ancestry in red wolves, 60%:40% in Eastern timber wolves, and 75%:25% in the Great Lakes wolves. There was 10% coyote ancestry in Mexican wolves and Atlantic Coast wolves, 5% in Pacific Coast and Yellowstone wolves, and less than 3% in Canadian archipelago wolves.
The study shows that the genomic ancestry of red, eastern timber and Great Lakes wolves were the result of admixture between modern gray wolves and modern coyotes. This was then followed by development into local populations. Individuals within each group showed consistent levels of coyote to wolf inheritance, indicating that this was the result of relatively ancient admixture. The eastern timber wolf (Algonquin Provincial Park) is genetically closely related to the Great Lakes wolf (Minnesota, Isle Royale National Park). If a third canid had been involved in the admixture of the North American wolf-like canids, then its genetic signature would have been found in coyotes and wolves, which it has not.
Gray wolves suffered a species-wide population bottleneck (reduction) approximately 25,000 YBP during the Last Glacial Maximum. This was followed by a single population of modern wolves expanding out of a Beringia refuge to repopulate the wolf's former range, replacing the remaining Late Pleistocene wolf populations across Eurasia and North America as they did so. This implies that if the coyote and red wolf were derived from this invasion, their histories date only tens of thousands and not hundreds of thousands of years ago, which is consistent with other studies.
The Endangered Species Act provides protection to endangered species, but does not provide protection for endangered admixed individuals, even if these serve as reservoirs for extinct genetic variation. Researchers on both sides of the red wolf debate argue that admixed canids warrant full protection under this Act.
Separate species that can be strengthened from hybrids
In 2020, a study conducted DNA sequencing of canines across the southeastern US to detect those with any red wolf ancestry. The study found that red wolf ancestry exists in the coyote populations of southwestern Louisiana and southeastern Texas, and was also newly detected in North Carolina. These populations possess unique red wolf alleles not found in the current captive red wolf population. The study proposed that the expanding coyotes admixed with red wolves to gain genetic material that was suited to the southeastern environment and would aid their adaptation to it, and that surviving red wolves admixed with coyotes because the red wolves were suffering from inbreeding.
In 2021, a study conducted DNA sequencing of canines across the remnant red wolf hybrid zone of southwestern Louisiana and southeastern Texas. The study found red wolf ancestry in the coyote genomes which increases up to 60% in a westward gradient. This was due to introgression from the remnant red wolf population over the past 100 years. The study proposes that coyotes expanded into the gulf region and admixed with red wolves prior to the red wolf going extinct in the wild due to loss of habitat and persecution. In the past two decades the hybrid region has expanded. The study presented the genetic evidence that the red wolf is a separate species, based on the structure of one of the loci of its X-chromosome which is accepted as a marker for distinct species. As such, the study suggested that the introgressed red wolf ancestry could be de-introgressed back as a basis for breeding further red wolves from the hybrids.
Pre-dates the coyote in North America
In 2021, a study of mitochondrial genomes sourced from specimens dated before the 20th century revealed that red wolves could be found across North America. With the arrival of the gray wolf between 80,000 and 60,000 years ago, the red wolf's range shrank to the eastern forests and California, and the coyote replaced the red wolf mid-continent between 60,000 and 30,000 years ago. The coyote expanded into California at the beginning of the Holocene era 12,000–10,000 years ago and admixed with the red wolf, phenotypically replacing them. The study proposes that the red wolf may pre-date the coyote in North America.
Explanatory footnotes
References
Further reading
External links
Red wolf, U.S. Fish and Wildlife Service
Sorting algorithm
https://en.wikipedia.org/wiki/Sorting_algorithm
In computer science, a sorting algorithm is an algorithm that puts elements of a list into an order. The most frequently used orders are numerical order and lexicographical order, and either ascending or descending. Efficient sorting is important for optimizing the efficiency of other algorithms (such as search and merge algorithms) that require input data to be in sorted lists. Sorting is also often useful for canonicalizing data and for producing human-readable output.
Formally, the output of any sorting algorithm must satisfy two conditions:
The output is in monotonic order (each element is no smaller/larger than the previous element, according to the required order).
The output is a permutation (a reordering, yet retaining all of the original elements) of the input.
Although some algorithms are designed for sequential access, the highest-performing algorithms assume data is stored in a data structure which allows random access.
History and concepts
From the beginning of computing, the sorting problem has attracted a great deal of research, perhaps due to the complexity of solving it efficiently despite its simple, familiar statement. Among the authors of early sorting algorithms around 1951 was Betty Holberton, who worked on ENIAC and UNIVAC. Bubble sort was analyzed as early as 1956. Asymptotically optimal algorithms have been known since the mid-20th century, yet new algorithms are still being invented, with the widely used Timsort dating to 2002 and the library sort being first published in 2006.
Comparison sorting algorithms have a fundamental requirement of Ω(n log n) comparisons; any comparison sort needs at least log2(n!) ≈ n log2 n - 1.4427n + O(log n) comparisons in the worst case, where log2 denotes the base-2 logarithm. Algorithms not based on comparisons, such as counting sort, can have better performance.
Sorting algorithms are prevalent in introductory computer science classes, where the abundance of algorithms for the problem provides a gentle introduction to a variety of core algorithm concepts, such as big O notation, divide-and-conquer algorithms, data structures such as heaps and binary trees, randomized algorithms, best, worst and average case analysis, time–space tradeoffs, and upper and lower bounds.
Sorting small arrays optimally (in the fewest comparisons and swaps) or fast (i.e., taking into account machine-specific details) is still an open research problem, with solutions only known for very small arrays (<20 elements). Similarly, optimal (by various definitions) sorting on a parallel machine is an open research topic.
Classification
Sorting algorithms can be classified by:
Computational complexity
Best, worst and average case behavior in terms of the size of the list. For typical serial sorting algorithms, good behavior is O(n log n), with parallel sort in O(log² n), and bad behavior is O(n²). Ideal behavior for a serial sort is O(n), but this is not possible in the average case. Optimal parallel sorting is O(log n).
Swaps for "in-place" algorithms.
Memory usage (and use of other computer resources). In particular, some sorting algorithms are "in-place". Strictly, an in-place sort needs only O(1) memory beyond the items being sorted; sometimes O(log n) additional memory is considered "in-place".
Recursion: Some algorithms are either typically recursive or typically non-recursive, while others may typically be both (e.g., merge sort).
Stability: stable sorting algorithms maintain the relative order of records with equal keys (i.e., values).
Whether or not they are a comparison sort. A comparison sort examines the data only by comparing two elements with a comparison operator.
General method: insertion, exchange, selection, merging, etc. Exchange sorts include bubble sort and quicksort. Selection sorts include cycle sort and heapsort.
Whether the algorithm is serial or parallel. The remainder of this discussion almost exclusively concentrates on serial algorithms and assumes serial operation.
Adaptability: Whether or not the presortedness of the input affects the running time. Algorithms that take this into account are known as adaptive.
Online: An algorithm such as insertion sort that is online can sort a continuous stream of input.
Stability
Stable sorting algorithms sort equal elements in the same order that they appear in the input. For example, when a hand of cards is sorted by rank while suit is ignored, there are multiple different correctly sorted versions of the original list. Stable sorting algorithms choose one of these, according to the following rule: if two items compare as equal (like two cards of the same rank), then their relative order will be preserved, i.e. if one comes before the other in the input, it will come before the other in the output.
Stability is important to preserve order over multiple sorts on the same data set. For example, say that student records consisting of name and class section are sorted dynamically, first by name, then by class section. If a stable sorting algorithm is used in both cases, the sort-by-class-section operation will not change the name order; with an unstable sort, it could be that sorting by section shuffles the name order, resulting in a nonalphabetical list of students.
More formally, the data being sorted can be represented as a record or tuple of values, and the part of the data that is used for sorting is called the key. In the card example, cards are represented as a record (rank, suit), and the key is the rank. A sorting algorithm is stable if whenever there are two records R and S with the same key, and R appears before S in the original list, then R will always appear before S in the sorted list.
When equal elements are indistinguishable, such as with integers, or more generally, any data where the entire element is the key, stability is not an issue. Stability is also not an issue if all keys are different.
Unstable sorting algorithms can be specially implemented to be stable. One way of doing this is to artificially extend the key comparison so that comparisons between two objects with otherwise equal keys are decided using the order of the entries in the original input list as a tie-breaker. Remembering this order, however, may require additional time and space.
One application for stable sorting algorithms is sorting a list using a primary and secondary key. For example, suppose we wish to sort a hand of cards such that the suits are in the order clubs (♣), diamonds (♦), hearts (♥), spades (♠), and within each suit, the cards are sorted by rank. This can be done by first sorting the cards by rank (using any sort), and then doing a stable sort by suit:
(Figure: the hand sorted first by rank, then stably by suit; within each suit the cards remain in rank order.)
Within each suit, the stable sort preserves the ordering by rank that was already done. This idea can be extended to any number of keys and is utilised by radix sort. The same effect can be achieved with an unstable sort by using a lexicographic key comparison, which, e.g., compares first by suit, and then compares by rank if the suits are the same.
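As a concrete sketch of this two-key idea, the following Python fragment (the example hand and the suit ordering are illustrative assumptions, not taken from the article) sorts first by rank and then performs a stable sort by suit, relying on the stability of Python's built-in sort:

    # Illustrative data: a hand of (suit, rank) cards and an assumed suit ordering.
    suit_order = {"clubs": 0, "diamonds": 1, "hearts": 2, "spades": 3}
    hand = [("hearts", 5), ("clubs", 9), ("spades", 5), ("clubs", 2)]

    hand.sort(key=lambda card: card[1])              # first sort by rank (any sort would do)
    hand.sort(key=lambda card: suit_order[card[0]])  # then a stable sort by suit
    print(hand)  # [('clubs', 2), ('clubs', 9), ('hearts', 5), ('spades', 5)]

Because the second sort is stable, cards of the same suit keep the rank order established by the first sort.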
Comparison of algorithms
This analysis assumes that the length of each key is constant and that all comparisons, swaps and other operations can proceed in constant time.
Legend:
is the number of records to be sorted.
The "Best", "Average" and "Worst" columns give the time complexity for each of those cases.
"Memory" denotes the amount of additional storage required by the algorithm.
The run times and the memory requirements listed are inside big O notation, hence the base of the logarithms does not matter.
The notation log² n means (log n)².
Comparison sorts
Below is a summary of comparison sorts. Mathematical analysis demonstrates that a comparison sort cannot perform better than O(n log n) comparisons on average.
Heapsort: Selection; not stable; in-place. An optimized version of selection sort that constructs and maintains a max heap to find the maximum in O(log n) time.
Introsort: Partitioning and selection; not stable; in-place. Used in several STL implementations. Performs a combination of quicksort, heapsort, and insertion sort.
Merge sort: Merging; stable; not in-place. Highly parallelizable (up to O(log n) using the Three Hungarians' Algorithm).
In-place merge sort: Merging; stable; in-place. A variation of merge sort which uses an in-place stable merge algorithm, such as rotate merge or symmerge.
Tournament sort: Selection; stable; not in-place. An optimization of selection sort which uses a tournament tree to select the min/max.
Tree sort: Insertion; stable; not in-place. When using a self-balancing binary search tree.
Block sort: Insertion and merging; stable; in-place. Combines a block-based in-place merge algorithm with a bottom-up merge sort.
Smoothsort: Selection; not stable; in-place. Adaptive variant of heapsort based on the Leonardo sequence instead of a binary heap.
Timsort: Insertion and merging; stable; not in-place. Makes only O(n) comparisons when the data is already sorted or reverse sorted.
Patience sorting: Insertion and selection; not stable; not in-place. Finds all the longest increasing subsequences in O(n log n).
Cubesort: Insertion; stable; not in-place. Makes only O(n) comparisons when the data is already sorted or reverse sorted.
Quicksort: Partitioning; not stable; in-place. Can be done in-place with O(log n) stack space.
Fluxsort: Partitioning and merging; stable; not in-place. An adaptive branchless stable introsort.
Crumsort: Partitioning and merging; not stable; in-place. An in-place but unstable variant of Fluxsort.
Library sort: Insertion; not stable; not in-place. Similar to a gapped insertion sort.
Shellsort: Insertion; not stable; in-place. Small code size. Complexity may vary depending on the gap sequence; Pratt's sequence has a worst case of O(n log² n), and the (extended) Ciura sequence performs well empirically.
Comb sort: Exchanging; not stable; in-place. Faster than bubble sort on average.
Insertion sort: Insertion; stable; in-place. Takes O(n + d) time in the worst case over sequences that have d inversions.
Bubble sort: Exchanging; stable; in-place. Tiny code size.
Cocktail shaker sort: Exchanging; stable; in-place. A bi-directional variant of bubble sort.
Gnome sort: Exchanging; stable; in-place. Tiny code size.
Odd–even sort: Exchanging; stable; in-place. Can be run on parallel processors easily.
Strand sort: Selection; stable; not in-place.
Selection sort: Selection; not stable; in-place. Tiny code size. Noted for its simplicity and small number of element moves; makes at most n - 1 swaps.
Exchange sort: Exchanging; not stable; in-place. Tiny code size.
Cycle sort: Selection; not stable; in-place. In-place with a theoretically optimal number of writes.
Non-comparison sorts
The following list describes integer sorting algorithms and other sorting algorithms that are not comparison sorts. These algorithms are not limited to Ω(n log n) unless they are analyzed under the unit-cost random-access machine model described below.
Complexities below assume n items to be sorted, with keys of size k, digit size d, and r the range of numbers to be sorted.
Many of them are based on the assumption that the key size is large enough that all entries have unique key values, and hence that n ≪ 2^k, where ≪ means "much less than".
In the unit-cost random-access machine model, algorithms with running time of n·k/d, such as radix sort, still take time proportional to Θ(n log n), because n is limited to be not more than 2^(k/d), and a larger number of elements to sort would require a bigger k in order to store them in the memory.
Pigeonhole sort: Cannot sort non-integers.
Bucket sort (uniform keys): Assumes uniform distribution of elements from the domain in the array; also cannot sort non-integers.
Bucket sort (integer keys): If r is O(n), then the average time complexity is O(n).
Counting sort: If r is O(n), then the average time complexity is O(n).
LSD radix sort: Needs one counting pass per digit group (k/d recursion levels) and a count array of size 2^d. Unlike most distribution sorts, it can sort non-integers.
MSD radix sort: The stable version uses an external array of size n to hold all of the bins. As with the LSD variant, it can sort non-integers.
MSD radix sort (in-place): d = 1 for the in-place version; k/d recursion levels; no count array.
Spreadsort: Asymptotics are based on the assumption that n ≪ 2^k, but the algorithm does not require this.
Burstsort: Has a better constant factor than radix sort for sorting strings, though it relies somewhat on specifics of commonly encountered strings.
Flashsort: Requires uniform distribution of elements from the domain in the array to run in linear time. If the distribution is extremely skewed then it can go quadratic if the underlying sort is quadratic (it is usually an insertion sort). The in-place version is not stable.
Postman sort: A variation of bucket sort, which works very similarly to MSD radix sort; specific to postal service needs.
Recombinant sort: Combines hashing, counting, and dynamic programming; handles multidimensional data.
Samplesort can be used to parallelize any of the non-comparison sorts, by efficiently distributing data into several buckets and then passing down sorting to several processors, with no need to merge as buckets are already sorted between each other.
Others
Some algorithms are slow compared to those discussed above, such as the bogosort with unbounded run time and the stooge sort, which has O(n^2.7) run time. These sorts are usually described for educational purposes to demonstrate how the run time of algorithms is estimated. The following list describes some sorting algorithms that are impractical for real-life use in traditional software contexts due to extremely poor performance or specialized hardware requirements.
Bead sort: Works only with positive integers and requires specialized hardware for it to run in guaranteed time. There is a possibility of a software implementation, but the running time will be O(S), where S is the sum of all integers to be sorted; in the case of small integers, it can be considered to be linear.
Merge-insertion sort: Makes very few comparisons in the worst case compared to other sorting algorithms; mostly of theoretical interest due to implementational complexity and suboptimal data moves.
"I Can't Believe It Can Sort": Notable primarily for appearing to be an erroneous implementation of either insertion sort or exchange sort.
Spaghetti (poll) sort: A linear-time, analog algorithm for sorting a sequence of items, requiring O(n) stack space; the sort is stable and requires n parallel processors.
Sorting network: The order of comparisons is set in advance based on a fixed network size; stable sorting networks require more comparisons.
Bitonic sorter: An effective variation of sorting networks.
Bogosort: Random shuffling; used for example purposes only, as even the expected best-case runtime is awful. The worst case is unbounded when using randomization, but a deterministic version guarantees a bounded worst case.
LinearSort: A parody sorting algorithm that shows the risk of overly relying on big O notation: it runs a merge sort and then sleeps until a fixed constant amount of time has elapsed from the function call, thus (wastefully) guaranteeing a fixed runtime below the hardcoded minimum.
Stooge sort: Slower than most of the sorting algorithms (even naive ones), with a time complexity of O(n^2.7); can be made stable, and is also a sorting network.
Slowsort: A "multiply and surrender" algorithm, antonymous with divide-and-conquer algorithms.
Franceschini's method: Makes O(n) data moves in the worst case; possesses ideal comparison sort asymptotic bounds but is only of theoretical interest.
Heat Death Sort: Best, average, and worst cases depend on the underlying sorting algorithm; created as an April Fools' joke to highlight the weakness of ignoring the size of the constant in big O notation.
Theoretical computer scientists have invented other sorting algorithms that provide better than O(n log n) time complexity assuming certain constraints, including:
Thorup's algorithm, a randomized integer sorting algorithm, taking O(n log log n) time and O(n) space.
AHNR algorithm, an integer sorting algorithm which runs in O(n log log n) time deterministically, and also has a randomized version which runs in linear time when words are large enough, specifically when w is at least on the order of log^(2+ε) n (where w is the word size).
A randomized integer sorting algorithm taking O(n √(log log n)) expected time and O(n) space.
Popular sorting algorithms
While there are a large number of sorting algorithms, in practical implementations a few algorithms predominate. Insertion sort is widely used for small data sets, while for large data sets an asymptotically efficient sort is used, primarily heapsort, merge sort, or quicksort. Efficient implementations generally use a hybrid algorithm, combining an asymptotically efficient algorithm for the overall sort with insertion sort for small lists at the bottom of a recursion. Highly tuned implementations use more sophisticated variants, such as Timsort (merge sort, insertion sort, and additional logic), used in Android, Java, and Python, and introsort (quicksort and heapsort), used (in variant forms) in some C++ sort implementations and in .NET.
For more restricted data, such as numbers in a fixed interval, distribution sorts such as counting sort or radix sort are widely used. Bubble sort and variants are rarely used in practice, but are commonly found in teaching and theoretical discussions.
When physically sorting objects (such as alphabetizing papers, tests or books) people intuitively generally use insertion sorts for small sets. For larger sets, people often first bucket, such as by initial letter, and multiple bucketing allows practical sorting of very large sets. Often space is relatively cheap, such as by spreading objects out on the floor or over a large area, but operations are expensive, particularly moving an object a large distance – locality of reference is important. Merge sorts are also practical for physical objects, particularly as two hands can be used, one for each list to merge, while other algorithms, such as heapsort or quicksort, are poorly suited for human use. Other algorithms, such as library sort, a variant of insertion sort that leaves spaces, are also practical for physical use.
Simple sorts
Two of the simplest sorts are insertion sort and selection sort, both of which are efficient on small data, due to low overhead, but not efficient on large data. Insertion sort is generally faster than selection sort in practice, due to fewer comparisons and good performance on almost-sorted data, and thus is preferred in practice, but selection sort uses fewer writes, and thus is used when write performance is a limiting factor.
Insertion sort
Insertion sort is a simple sorting algorithm that is relatively efficient for small lists and mostly sorted lists, and is often used as part of more sophisticated algorithms. It works by taking elements from the list one by one and inserting them in their correct position into a new sorted list similar to how one puts money in their wallet. In arrays, the new list and the remaining elements can share the array's space, but insertion is expensive, requiring shifting all following elements over by one. Shellsort is a variant of insertion sort that is more efficient for larger lists.
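A minimal Python sketch of this insertion step, sorting a list in place and assuming mutually comparable items, might look like the following:

    def insertion_sort(a):
        """Sort the list a in place; stable, O(n²) worst case, fast on nearly sorted input."""
        for i in range(1, len(a)):
            item = a[i]
            j = i - 1
            # Shift larger elements one position to the right to open a slot for item.
            while j >= 0 and a[j] > item:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = item
        return a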
Selection sort
Selection sort is an in-place comparison sort. It has O(n2) complexity, making it inefficient on large lists, and generally performs worse than the similar insertion sort. Selection sort is noted for its simplicity and also has performance advantages over more complicated algorithms in certain situations.
The algorithm finds the minimum value, swaps it with the value in the first position, and repeats these steps for the remainder of the list. It does no more than n swaps and thus is useful where swapping is very expensive.
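A minimal Python sketch of this find-the-minimum-and-swap loop could be:

    def selection_sort(a):
        """Sort the list a in place; performs at most n - 1 swaps."""
        n = len(a)
        for i in range(n - 1):
            # Find the index of the smallest element in the unsorted suffix a[i:].
            m = i
            for j in range(i + 1, n):
                if a[j] < a[m]:
                    m = j
            if m != i:
                a[i], a[m] = a[m], a[i]
        return a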
Efficient sorts
Practical general sorting algorithms are almost always based on an algorithm with average time complexity (and generally worst-case complexity) O(n log n), of which the most common are heapsort, merge sort, and quicksort. Each has advantages and drawbacks, with the most significant being that simple implementation of merge sort uses O(n) additional space, and simple implementation of quicksort has O(n2) worst-case complexity. These problems can be solved or ameliorated at the cost of a more complex algorithm.
While these algorithms are asymptotically efficient on random data, for practical efficiency on real-world data various modifications are used. First, the overhead of these algorithms becomes significant on smaller data, so often a hybrid algorithm is used, commonly switching to insertion sort once the data is small enough. Second, the algorithms often perform poorly on already sorted data or almost sorted data – these are common in real-world data and can be sorted in O(n) time by appropriate algorithms. Finally, they may also be unstable, and stability is often a desirable property in a sort. Thus more sophisticated algorithms are often employed, such as Timsort (based on merge sort) or introsort (based on quicksort, falling back to heapsort).
Merge sort
Merge sort takes advantage of the ease of merging already sorted lists into a new sorted list. It starts by comparing every two elements (i.e., 1 with 2, then 3 with 4...) and swapping them if the first should come after the second. It then merges each of the resulting lists of two into lists of four, then merges those lists of four, and so on; until at last two lists are merged into the final sorted list. Of the algorithms described here, this is the first that scales well to very large lists, because its worst-case running time is O(n log n). It is also easily applied to lists, not only arrays, as it only requires sequential access, not random access. However, it has additional O(n) space complexity and involves a large number of copies in simple implementations.
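The following is a short Python sketch of the idea. It is a top-down recursive variant rather than the bottom-up pass-by-pass scheme described above, and it allocates new lists instead of sorting in place:

    def merge_sort(a):
        """Return a new sorted list; stable, O(n log n) time, O(n) extra space."""
        if len(a) <= 1:
            return list(a)
        mid = len(a) // 2
        left = merge_sort(a[:mid])
        right = merge_sort(a[mid:])
        # Merge the two sorted halves, taking from the left on ties to preserve stability.
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i])
                i += 1
            else:
                out.append(right[j])
                j += 1
        out.extend(left[i:])
        out.extend(right[j:])
        return out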
Merge sort has seen a relatively recent surge in popularity for practical implementations, due to its use in the sophisticated algorithm Timsort, which is used for the standard sort routine in the programming languages Python and Java (as of JDK7). Merge sort itself is the standard routine in Perl, among others, and has been used in Java at least since 2000 in JDK 1.3 (Merge sort in Java 1.3, Sun).
Heapsort
Heapsort is a much more efficient version of selection sort. It also works by determining the largest (or smallest) element of the list, placing that at the end (or beginning) of the list, then continuing with the rest of the list, but accomplishes this task efficiently by using a data structure called a heap, a special type of binary tree. Once the data list has been made into a heap, the root node is guaranteed to be the largest (or smallest) element. When it is removed and placed at the end of the list, the heap is rearranged so the largest element remaining moves to the root. Using the heap, finding the next largest element takes O(log n) time, instead of O(n) for a linear scan as in simple selection sort. This allows Heapsort to run in O(n log n) time, and this is also the worst-case complexity.
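A compact Python sketch of heapsort with an explicit sift-down (one of several equivalent formulations) is shown below:

    def heapsort(a):
        """Sort the list a in place in O(n log n) time using a max-heap; not stable."""
        n = len(a)

        def sift_down(root, end):
            # Restore the max-heap property for the subtree rooted at root, within a[:end].
            while 2 * root + 1 < end:
                child = 2 * root + 1
                if child + 1 < end and a[child] < a[child + 1]:
                    child += 1
                if a[root] < a[child]:
                    a[root], a[child] = a[child], a[root]
                    root = child
                else:
                    return

        for start in range(n // 2 - 1, -1, -1):   # build the heap
            sift_down(start, n)
        for end in range(n - 1, 0, -1):           # repeatedly move the maximum to the end
            a[0], a[end] = a[end], a[0]
            sift_down(0, end)
        return a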
Recombinant sort
Recombinant sort is a non-comparison-based sorting algorithm developed by Peeyush Kumar et al. in 2020. The algorithm combines bucket sort, counting sort, radix sort, hashing, and dynamic programming techniques. It employs an n-dimensional Cartesian space mapping approach consisting of two primary phases: a Hashing cycle that maps elements to a multidimensional array using a special hash function, and an Extraction cycle that retrieves elements in sorted order. Recombinant Sort achieves O(n) time complexity for best, average, and worst cases, and can process both numerical and string data types, including mixed decimal and non-decimal numbers.
Quicksort
Quicksort is a divide-and-conquer algorithm which relies on a partition operation: to partition an array, an element called a pivot is selected. All elements smaller than the pivot are moved before it and all greater elements are moved after it. This can be done efficiently in linear time and in-place. The lesser and greater sublists are then recursively sorted. This yields an average time complexity of O(n log n), with low overhead, and thus this is a popular algorithm. Efficient implementations of quicksort (with in-place partitioning) are typically unstable sorts and somewhat complex but are among the fastest sorting algorithms in practice. Together with its modest O(log n) space usage, quicksort is one of the most popular sorting algorithms and is available in many standard programming libraries.
The important caveat about quicksort is that its worst-case performance is O(n2); while this is rare, in naive implementations (choosing the first or last element as pivot) this occurs for sorted data, which is a common case. The most complex issue in quicksort is thus choosing a good pivot element, as consistently poor choices of pivots can result in drastically slower O(n2) performance, but good choice of pivots yields O(n log n) performance, which is asymptotically optimal. For example, if at each step the median is chosen as the pivot then the algorithm works in O(n log n). Finding the median, such as by the median of medians selection algorithm is however an O(n) operation on unsorted lists and therefore exacts significant overhead with sorting. In practice choosing a random pivot almost certainly yields O(n log n) performance.
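A deliberately simple Python sketch with a random pivot is given below; it shows the partition-and-recurse structure, but unlike the in-place partitioning described above it allocates new lists:

    import random

    def quicksort(a):
        """Return a sorted copy; expected O(n log n) time with a random pivot."""
        if len(a) <= 1:
            return list(a)
        pivot = random.choice(a)
        # Partition into elements less than, equal to, and greater than the pivot.
        less = [x for x in a if x < pivot]
        equal = [x for x in a if x == pivot]
        greater = [x for x in a if x > pivot]
        return quicksort(less) + equal + quicksort(greater)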
If a guarantee of O(n log n) performance is important, there is a simple modification to achieve that. The idea, due to Musser, is to set a limit on the maximum depth of recursion. If that limit is exceeded, then sorting is continued using the heapsort algorithm. Musser proposed that the limit should be 2⌊log2 n⌋, which is approximately twice the maximum recursion depth one would expect on average with a randomly ordered array.
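A hedged sketch of Musser's idea follows; it uses the same simple list-based partition as the quicksort sketch above and falls back to a heap-based sort via Python's heapq module once the depth limit is reached (a production introsort would instead partition in place and switch to insertion sort for tiny ranges):

    import heapq
    import math

    def introsort(a, maxdepth=None):
        """Depth-limited quicksort that falls back to a heap-based sort, following Musser's idea."""
        if maxdepth is None:
            maxdepth = 2 * int(math.log2(len(a))) if a else 0
        if len(a) <= 1:
            return list(a)
        if maxdepth == 0:
            # Depth limit hit: finish with a heap-based sort to guarantee O(n log n).
            heap = list(a)
            heapq.heapify(heap)
            return [heapq.heappop(heap) for _ in range(len(heap))]
        pivot = a[len(a) // 2]
        less = [x for x in a if x < pivot]
        equal = [x for x in a if x == pivot]
        greater = [x for x in a if x > pivot]
        return introsort(less, maxdepth - 1) + equal + introsort(greater, maxdepth - 1)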
Shellsort
Shellsort was invented by Donald Shell in 1959. It improves upon insertion sort by moving out-of-order elements more than one position at a time. The concept behind Shellsort is that insertion sort performs in O(kn) time, where k is the greatest distance between two out-of-place elements. This means that generally, it performs in O(n²), but for data that is mostly sorted, with only a few elements out of place, it performs faster. So, by first sorting elements far away, and progressively shrinking the gap between the elements to sort, the final sort computes much faster. One implementation can be described as arranging the data sequence in a two-dimensional array and then sorting the columns of the array using insertion sort.
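A minimal Python sketch using Shell's original halving gap sequence (better sequences exist, as discussed below) is:

    def shellsort(a):
        """Sort the list a in place using a simple halving gap sequence (illustrative only)."""
        gap = len(a) // 2
        while gap > 0:
            # A gapped insertion sort: every slice a[i::gap] becomes sorted for this gap.
            for i in range(gap, len(a)):
                item = a[i]
                j = i
                while j >= gap and a[j - gap] > item:
                    a[j] = a[j - gap]
                    j -= gap
                a[j] = item
            gap //= 2
        return a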
The worst-case time complexity of Shellsort is an open problem and depends on the gap sequence used, with known complexities ranging from O(n²) to O(n^(4/3)) and Θ(n log² n). This, combined with the fact that Shellsort is in-place, only needs a relatively small amount of code, and does not require use of the call stack, makes it useful in situations where memory is at a premium, such as in embedded systems and operating system kernels.
Bubble sort and variants
Bubble sort, and variants such as the Comb sort and cocktail sort, are simple, highly inefficient sorting algorithms. They are frequently seen in introductory texts due to ease of analysis, but they are rarely used in practice.
Bubble sort
Bubble sort is a simple sorting algorithm. The algorithm starts at the beginning of the data set. It compares the first two elements, and if the first is greater than the second, it swaps them. It continues doing this for each pair of adjacent elements to the end of the data set. It then starts again with the first two elements, repeating until no swaps have occurred on the last pass. This algorithm's average time and worst-case performance is O(n²), so it is rarely used to sort large, unordered data sets. Bubble sort can be used to sort a small number of items (where its asymptotic inefficiency is not a high penalty). Bubble sort can also be used efficiently on a list of any length that is nearly sorted (that is, the elements are not significantly out of place). For example, if any number of elements are out of place by only one position (e.g. 0123546789 and 1032547698), bubble sort's exchange will get them in order on the first pass, and the second pass will find all elements in order, so the sort will take only 2n time.
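A short Python sketch with the early-exit behavior described above (the sort stops as soon as a full pass performs no swaps):

    def bubble_sort(a):
        """Sort the list a in place; stops early when a pass makes no swaps."""
        n = len(a)
        while True:
            swapped = False
            for i in range(1, n):
                if a[i - 1] > a[i]:
                    a[i - 1], a[i] = a[i], a[i - 1]
                    swapped = True
            if not swapped:
                return a
            n -= 1  # the largest element of this pass has bubbled into its final place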
Comb sort
Comb sort is a relatively simple sorting algorithm based on bubble sort and originally designed by Włodzimierz Dobosiewicz in 1980. It was later rediscovered and popularized by Stephen Lacey and Richard Box with a Byte Magazine article published in April 1991. The basic idea is to eliminate turtles, or small values near the end of the list, since in a bubble sort these slow the sorting down tremendously. (Rabbits, large values around the beginning of the list, do not pose a problem in bubble sort) It accomplishes this by initially swapping elements that are a certain distance from one another in the array, rather than only swapping elements if they are adjacent to one another, and then shrinking the chosen distance until it is operating as a normal bubble sort. Thus, if Shellsort can be thought of as a generalized version of insertion sort that swaps elements spaced a certain distance away from one another, comb sort can be thought of as the same generalization applied to bubble sort.
Exchange sort
Exchange sort is sometimes confused with bubble sort, although the algorithms are in fact distinct. Exchange sort works by comparing the first element with all elements above it, swapping where needed, thereby guaranteeing that the first element is correct for the final sort order; it then proceeds to do the same for the second element, and so on. It lacks the advantage that bubble sort has of detecting in one pass if the list is already sorted, but it can be faster than bubble sort by a constant factor (one less pass over the data to be sorted; half as many total comparisons) in worst-case situations. Like any simple O(n2) sort it can be reasonably fast over very small data sets, though in general insertion sort will be faster.
Distribution sorts
Distribution sort refers to any sorting algorithm where data is distributed from their input to multiple intermediate structures which are then gathered and placed on the output. For example, both bucket sort and flashsort are distribution-based sorting algorithms. Distribution sorting algorithms can be used on a single processor, or they can be a distributed algorithm, where individual subsets are separately sorted on different processors, then combined. This allows external sorting of data too large to fit into a single computer's memory.
Counting sort
Counting sort is applicable when each input is known to belong to a particular set, S, of possibilities. The algorithm runs in O(|S| + n) time and O(|S|) memory where n is the length of the input. It works by creating an integer array of size |S| and using the ith bin to count the occurrences of the ith member of S in the input. Each input is then counted by incrementing the value of its corresponding bin. Afterward, the counting array is looped through to arrange all of the inputs in order. This sorting algorithm often cannot be used because S needs to be reasonably small for the algorithm to be efficient, but it is extremely fast and demonstrates great asymptotic behavior as n increases. It also can be modified to provide stable behavior.
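A minimal stable counting sort in Python, assuming non-negative integer keys smaller than a known bound k:

    def counting_sort(a, k):
        """Stable sort of integers in range(k); O(n + k) time, O(n + k) extra space."""
        counts = [0] * k
        for x in a:
            counts[x] += 1
        # Prefix sums turn counts into the first output position of each key value.
        total = 0
        for value in range(k):
            count = counts[value]
            counts[value] = total
            total += count
        out = [None] * len(a)
        for x in a:              # scanning the input in order keeps the sort stable
            out[counts[x]] = x
            counts[x] += 1
        return out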
Bucket sort
Bucket sort is a divide-and-conquer sorting algorithm that generalizes counting sort by partitioning an array into a finite number of buckets. Each bucket is then sorted individually, either using a different sorting algorithm or by recursively applying the bucket sorting algorithm.
A bucket sort works best when the elements of the data set are evenly distributed across all buckets.
Radix sort
Radix sort is an algorithm that sorts numbers by processing individual digits. n numbers consisting of k digits each are sorted in O(n · k) time. Radix sort can process digits of each number either starting from the least significant digit (LSD) or starting from the most significant digit (MSD). The LSD algorithm first sorts the list by the least significant digit while preserving their relative order using a stable sort. Then it sorts them by the next digit, and so on from the least significant to the most significant, ending up with a sorted list. While the LSD radix sort requires the use of a stable sort, the MSD radix sort algorithm does not (unless stable sorting is desired). In-place MSD radix sort is not stable. It is common for the counting sort algorithm to be used internally by the radix sort. A hybrid sorting approach, such as using insertion sort for small bins, improves performance of radix sort significantly.
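A minimal LSD radix sort sketch in Python for non-negative integers, using simple per-digit bucket lists as a stand-in for the counting sort that is usually used internally:

    def radix_sort_lsd(a, base=10):
        """Sort non-negative integers by bucketing on each digit, least significant first."""
        if not a:
            return list(a)
        a = list(a)
        digit = 1
        while max(a) // digit > 0:
            buckets = [[] for _ in range(base)]
            for x in a:
                # Appending in input order keeps each pass stable, which LSD radix sort requires.
                buckets[(x // digit) % base].append(x)
            a = [x for bucket in buckets for x in bucket]
            digit *= base
        return a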
Memory usage patterns and index sorting
When the size of the array to be sorted approaches or exceeds the available primary memory, so that (much slower) disk or swap space must be employed, the memory usage pattern of a sorting algorithm becomes important, and an algorithm that might have been fairly efficient when the array fit easily in RAM may become impractical. In this scenario, the total number of comparisons becomes (relatively) less important, and the number of times sections of memory must be copied or swapped to and from the disk can dominate the performance characteristics of an algorithm. Thus, the number of passes and the localization of comparisons can be more important than the raw number of comparisons, since comparisons of nearby elements to one another happen at system bus speed (or, with caching, even at CPU speed), which, compared to disk speed, is virtually instantaneous.
For example, the popular recursive quicksort algorithm provides quite reasonable performance with adequate RAM, but due to the recursive way that it copies portions of the array it becomes much less practical when the array does not fit in RAM, because it may cause a number of slow copy or move operations to and from disk. In that scenario, another algorithm may be preferable even if it requires more total comparisons.
One way to work around this problem, which works well when complex records (such as in a relational database) are being sorted by a relatively small key field, is to create an index into the array and then sort the index, rather than the entire array. (A sorted version of the entire array can then be produced with one pass, reading from the index, but often even that is unnecessary, as having the sorted index is adequate.) Because the index is much smaller than the entire array, it may fit easily in memory where the entire array would not, effectively eliminating the disk-swapping problem. This procedure is sometimes called "tag sort".
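A small Python sketch of this "tag sort" idea, with a hypothetical key function and example records, sorts only an index array and reads the full records through it:

    def tag_sort(records, key):
        """Return an index ('tag') array ordered by key, leaving the records themselves untouched."""
        return sorted(range(len(records)), key=lambda i: key(records[i]))

    # Usage sketch with hypothetical records: only the small index is rearranged.
    rows = [{"id": 3, "name": "c"}, {"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
    order = tag_sort(rows, key=lambda r: r["id"])
    sorted_view = [rows[i] for i in order]   # records read in sorted order via the index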
Another technique for overcoming the memory-size problem is using external sorting, for example, one of the ways is to combine two algorithms in a way that takes advantage of the strength of each to improve overall performance. For instance, the array might be subdivided into chunks of a size that will fit in RAM, the contents of each chunk sorted using an efficient algorithm (such as quicksort), and the results merged using a k-way merge similar to that used in merge sort. This is faster than performing either merge sort or quicksort over the entire list (Donald Knuth, The Art of Computer Programming, Volume 3: Sorting and Searching, Second Edition, Addison-Wesley, 1998, Section 5.4: External Sorting, pp. 248–379; Ellis Horowitz and Sartaj Sahni, Fundamentals of Data Structures, H. Freeman & Co.).
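A hedged Python sketch of this chunk-and-merge approach (the chunk size, temporary-file format, and helper names are illustrative assumptions): it sorts RAM-sized runs, spills each run to a temporary file, and then performs a lazy k-way merge with heapq.merge:

    import heapq
    import tempfile

    def external_sort(items, chunk_size=100_000):
        """Sort a stream of integers too large for memory by sorting runs and k-way merging them."""
        run_files = []
        chunk = []
        for x in items:                      # 'items' may be a generator far larger than memory
            chunk.append(x)
            if len(chunk) >= chunk_size:
                run_files.append(_spill(sorted(chunk)))
                chunk = []
        if chunk:
            run_files.append(_spill(sorted(chunk)))
        runs = (_read(f) for f in run_files)
        return heapq.merge(*runs)            # yields values in sorted order, one at a time

    def _spill(sorted_run):
        f = tempfile.TemporaryFile(mode="w+t")
        f.writelines(f"{x}\n" for x in sorted_run)
        f.seek(0)
        return f

    def _read(f):
        for line in f:                       # streams one value at a time from disk
            yield int(line)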
Techniques can also be combined. For sorting very large sets of data that vastly exceed system memory, even the index may need to be sorted using an algorithm or combination of algorithms designed to perform reasonably with virtual memory, i.e., to reduce the amount of swapping required.
Related algorithms
Related problems include approximate sorting (sorting a sequence to within a certain amount of the correct order), partial sorting (sorting only the k smallest elements of a list, or finding the k smallest elements, but unordered) and selection (computing the kth smallest element). These can be solved inefficiently by a total sort, but more efficient algorithms exist, often derived by generalizing a sorting algorithm. The most notable example is quickselect, which is related to quicksort. Conversely, some sorting algorithms can be derived by repeated application of a selection algorithm; quicksort and quickselect can be seen as the same pivoting move, differing only in whether one recurses on both sides (quicksort, divide-and-conquer) or one side (quickselect, decrease-and-conquer).
A kind of opposite of a sorting algorithm is a shuffling algorithm. These are fundamentally different because they require a source of random numbers. Shuffling can also be implemented by a sorting algorithm, namely by a random sort: assigning a random number to each element of the list and then sorting based on the random numbers. This is generally not done in practice, however, and there is a well-known simple and efficient algorithm for shuffling: the Fisher–Yates shuffle.
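For completeness, a minimal Python sketch of the Fisher–Yates shuffle mentioned above:

    import random

    def fisher_yates_shuffle(a):
        """Shuffle the list a in place in O(n) time; every permutation is equally likely."""
        for i in range(len(a) - 1, 0, -1):
            j = random.randint(0, i)   # pick a position from the not-yet-fixed prefix a[0..i]
            a[i], a[j] = a[j], a[i]
        return a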
Sorting algorithms are ineffective for finding an order in many situations, usually when elements have no reliable comparison function (crowdsourced preferences like voting systems), when comparisons are very costly (sports), or when it would be impossible to pairwise compare all elements for all criteria (search engines). In these cases, the problem is usually referred to as ranking and the goal is to find the "best" result for some criteria according to probabilities inferred from comparisons or rankings. A common example is in chess, where players are ranked with the Elo rating system, and rankings are determined by a tournament system instead of a sorting algorithm.
There are sorting algorithms for a "noisy" (potentially incorrect) comparator and sorting algorithms for a pair of "fast and dirty" (i.e. "noisy") and "clean" comparators. This can be useful when the full comparison function is costly.
See also
K-sorted sequence
References
Further reading
External links
Sequential and parallel sorting algorithms – Explanations and analyses of many sorting algorithms.
Dictionary of Algorithms, Data Structures, and Problems – Dictionary of algorithms, techniques, common functions, and problems.
Slightly Skeptical View on Sorting Algorithms – Discusses several classic algorithms and promotes alternatives to the quicksort algorithm.
15 Sorting Algorithms in 6 Minutes (YouTube) – Visualization and "audibilization" of 15 Sorting Algorithms in 6 Minutes.
A036604 sequence in OEIS database titled "Sorting numbers: minimal number of comparisons needed to sort n elements" – Performed by Ford–Johnson algorithm.
XiSort – External merge sort with symbolic key transformation – A variant of merge sort applied to large datasets using symbolic techniques.
XiSort reference implementation – C/C++ Library of the xisort algorithm in reference
Sorting Algorithms Used on Famous Paintings (YouTube) – Visualization of Sorting Algorithms on Many Famous Paintings.
A Comparison of Sorting Algorithms – Runs a series of tests of 9 of the main sorting algorithms using Python timeit and Google Colab.
Six-Day War
https://en.wikipedia.org/wiki/Six-Day_War
The Six-Day War, also known as the June War, the 1967 Arab–Israeli War or the Third Arab–Israeli War, was fought between Israel and a coalition of Arab states, primarily Egypt, Syria, and Jordan, from 5 to 10 June 1967.
Military hostilities broke out amid poor relations between Israel and its Arab neighbors, who had been observing the 1949 Armistice Agreements signed at the end of the First Arab–Israeli War. In 1956, regional tensions over the Straits of Tiran (giving access to Eilat, a port on the southeast tip of Israel) escalated in what became known as the Suez Crisis, when Israel invaded Egypt over the Egyptian closure of maritime passageways to Israeli shipping, ultimately resulting in the re-opening of the Straits of Tiran to Israel as well as the deployment of the United Nations Emergency Force (UNEF) along the Egypt–Israel border.
In the months prior to the outbreak of the Six-Day War in June 1967, tensions again became dangerously heightened: Israel reiterated its post-1956 position that another Egyptian closure of the Straits of Tiran to Israeli shipping would be a definite casus belli. In May 1967, Egyptian president Gamal Abdel Nasser announced that the Straits of Tiran would again be closed to Israeli vessels. He subsequently mobilized the Egyptian military into defensive lines along the border with Israel and ordered the immediate withdrawal of all UNEF personnel.
On 5 June 1967, as the UNEF was in the process of leaving the zone, Israel launched a series of airstrikes against Egyptian airfields and other facilities in what is known as Operation Focus. Egyptian forces were caught by surprise, and nearly all of Egypt's military aerial assets were destroyed, giving Israel air supremacy. Simultaneously, the Israeli military launched a ground offensive into Egypt's Sinai Peninsula as well as the Egyptian-occupied Gaza Strip. After some initial resistance, Nasser ordered an evacuation of the Sinai Peninsula; by the sixth day of the conflict, Israel had occupied the entire Sinai Peninsula. Jordan, which had entered into a defense pact with Egypt just a week before the war began, did not take on an all-out offensive role against Israel, but launched attacks against Israeli forces to slow Israel's advance. On the fifth day, Syria joined the war by shelling Israeli positions in the north.
Egypt and Jordan agreed to a ceasefire on 8 June, and Syria on 9 June, and it was signed with Israel on 11 June. The Six-Day War resulted in more than 15,000 Arab fatalities, while Israel suffered fewer than 1,000. Alongside the combatant casualties were the deaths of 20 Israeli civilians killed in Arab forces air strikes on Jerusalem, 15 UN peacekeepers killed by Israeli strikes in the Sinai at the outset of the war, and 34 US personnel killed in the USS Liberty incident in which Israeli air forces struck a United States Navy technical research ship.
At the time of the cessation of hostilities, Israel had occupied the Golan Heights from Syria, the West Bank including East Jerusalem from Jordan, and the Sinai Peninsula and the Gaza Strip from Egypt. The displacement of civilian populations as a result of the Six-Day War would have long-term consequences, as around 280,000 to 325,000 Palestinians and 100,000 Syrians fled or were expelled from the West Bank and the Golan Heights, respectively. Nasser resigned in shame after Israel's victory, but was later reinstated following a series of protests across Egypt. In the aftermath of the conflict, Egypt closed the Suez Canal from 1967 to 1975.
Background
After the 1956 Suez Crisis, Egypt agreed to the stationing of a United Nations Emergency Force (UNEF) in the Sinai to ensure all parties would comply with the 1949 Armistice Agreements. In the following years there were numerous minor border clashes between Israel and its Arab neighbors, particularly Syria. In early November 1966, Syria signed a mutual defense agreement with Egypt (some sources date the agreement to 4 November, others to 7 November; most sources simply say November). Soon after this, in response to Palestine Liberation Organisation (PLO) guerrilla activity (Schiff, Zeev (1974), History of the Israeli Army, Straight Arrow Books, p. 145), including a mine attack that left three dead, the Israel Defense Forces (IDF) attacked the village of as-Samu in the Jordanian-ruled West Bank. Jordanian units that engaged the Israelis were quickly beaten back. King Hussein of Jordan criticized Egyptian President Gamal Abdel Nasser for failing to come to Jordan's aid and for "hiding behind UNEF skirts". One account notes: "Towards the War of June 1967: Growing tensions in the region were clearly visible long before Israel's November attack on Samu and two other West Bank towns. An escalating spiral of raid and retaliation had already been set in motion..."
In May 1967, Nasser received false reports from the Soviet Union that Israel was massing on the Syrian border. Nasser began massing his troops in two defensive lines in the Sinai Peninsula on Israel's border (16 May), expelled the UNEF force from Gaza and Sinai (19 May) and took over UNEF positions at Sharm el-Sheikh, overlooking the Straits of Tiran. Israel repeated declarations it had made in 1957 that any closure of the Straits would be considered an act of war, or justification for war, but Nasser closed the Straits to Israeli shipping on 22–23 May. After the war, U.S. President Lyndon Johnson commented on the crisis in a 19 June 1967 address at the State Department's Foreign Policy Conference for Educators ("LBJ Pledges U.S. to Peace Effort", Eugene Register-Guard, 19 June 1967).
On 30 May, Jordan and Egypt signed a defense pact. The following day, at Jordan's invitation, the Iraqi army began deploying troops and armored units in Jordan. They were later reinforced by an Egyptian contingent. On 1 June, Israel formed a National Unity Government by widening its cabinet, and on 4 June the decision was made to go to war. The next morning, Israel launched Operation Focus, a large-scale, surprise air strike that launched the Six-Day War.
Military preparation
Before the war, Israeli pilots and ground crews had trained extensively in rapid refitting of aircraft returning from sorties, enabling a single aircraft to sortie up to four times a day, as opposed to the norm in Arab air forces of one or two sorties per day. This enabled the Israeli Air Force (IAF) to send several attack waves against Egyptian airfields on the first day of the war, overwhelming the Egyptian Air Force and allowing the IAF to knock out other Arab air forces on the same day. This has contributed to the Arab belief that the IAF was helped by foreign air forces (see Controversies relating to the Six-Day War). Pilots were extensively schooled about their targets, memorized layouts in detail, and rehearsed the operation multiple times on dummy runways in total secrecy.
The Egyptians had constructed fortified defenses in the Sinai. These designs were based on the assumption that an attack would come along the few roads leading through the desert, rather than through the difficult desert terrain. The Israelis chose not to risk attacking the Egyptian defenses head-on, and instead surprised them from an unexpected direction.
James Reston, writing in The New York Times on 23 May 1967, noted, "In discipline, training, morale, equipment and general competence his [Nasser's] army and the other Arab forces, without the direct assistance of the Soviet Union, are no match for the Israelis. ... Even with 50,000 troops and the best of his generals and air force in Yemen, he has not been able to work his way in that small and primitive country, and even his effort to help the Congo rebels was a flop."
On the eve of the war, Israel believed it could win a war in 3–4 days. The United States estimated Israel would need 7–10 days to win, with British estimates supporting the U.S. view.
Armies and weapons
Armies
The Israeli army had a total strength, including reservists, of 264,000, though this number could not be sustained during a long conflict, as the reservists were vital to civilian life.
Against Jordan's forces on the West Bank, Israel deployed about 40,000 troops and 200 tanks (eight brigades). Israeli Central Command forces consisted of five brigades. The first two were permanently stationed near Jerusalem and were the Jerusalem Brigade and the mechanized Harel Brigade. Mordechai Gur's 55th Paratroopers Brigade was summoned from the Sinai front. The 10th Armored Brigade was stationed north of the West Bank. The Israeli Northern Command comprised a division of three brigades led by Major General Elad Peled which was stationed in the Jezreel Valley to the north of the West Bank.
On the eve of the war, Egypt massed approximately 100,000 of its 160,000 troops in the Sinai, including all seven of its divisions (four infantry, two armored and one mechanized), four independent infantry brigades and four independent armored brigades. Over a third of these soldiers were veterans of Egypt's continuing intervention into the North Yemen Civil War and another third were reservists. These forces had 950 tanks, 1,100 APCs, and more than 1,000 artillery pieces.
Syria's army had a total strength of 75,000 and was deployed along the border with Israel. Professor David W. Lesch wrote that "One would be hard-pressed to find a military less prepared for war with a clearly superior foe", since Syria's army had been decimated in the preceding months and years by coups and attempted coups that had resulted in a series of purges, fracturings, and uprisings within the armed forces, leaving an inexperienced officer corps and deep distrust between officers and the rank and file.
The Jordanian Armed Forces included 11 brigades, totaling 55,000 troops. Nine brigades (45,000 troops, 270 tanks, 200 artillery pieces) were deployed in the West Bank, including the elite armored 40th, and two in the Jordan Valley. They possessed sizable numbers of M113 APCs and were equipped with some 300 modern Western tanks, 250 of which were U.S. M48 Pattons. They also had 12 battalions of artillery, six batteries of 81 mm and 120 mm mortars, a paratrooper battalion trained in the new U.S.-built school and a new battalion of mechanized infantry. The Jordanian Army was a long-term-service, professional army, relatively well-equipped and well-trained. Israeli post-war briefings said that the Jordanian staff acted professionally, but was always left "half a step" behind by the Israeli moves. The small Royal Jordanian Air Force consisted of only 24 British-made Hawker Hunter fighters, six transport aircraft and two helicopters. According to the Israelis, the Hawker Hunter was essentially on par with the French-built Dassault Mirage III – the IAF's best plane.
One hundred Iraqi tanks and an infantry division were readied near the Jordanian border. Two squadrons of Iraqi fighter-aircraft, Hawker Hunters and MiG 21s, were rebased adjacent to the Jordanian border.
In the weeks leading up to the war, Saudi Arabia mobilized forces for deployment to the Jordanian front. A Saudi infantry battalion entered Jordan on 6 June, followed by another on 8 June. Both were based in Jordan's southernmost city, Ma'an. By 17 June, the Saudi contingent in Jordan had grown to include a single infantry brigade, a tank company, two artillery batteries, a heavy mortar company, and a maintenance and support unit. By the end of July, a second tank company and a third artillery battery had been added. These forces remained in Jordan until the end of 1977, when they were recalled for re-equipment and retraining in the Karak region near the Dead Sea.
The Arab air forces were reinforced by aircraft from Libya, Algeria, Morocco, Kuwait, and Saudi Arabia to make up for the massive losses suffered on the first day of the war. They were also aided by volunteer pilots from the Pakistan Air Force acting in an independent capacity. PAF pilots like Saiful Azam shot down several Israeli planes.
Weapons
With the exception of Jordan, the Arabs relied principally on Soviet weaponry. Jordan's army was equipped with American weaponry, and its air force was composed of British aircraft.
Egypt had by far the largest and the most modern of all the Arab air forces, consisting of about 420 combat aircraft, all of them Soviet-built and with a large number of top-of-the-line MiG-21s. Of particular concern to the Israelis were the 30 Tu-16 "Badger" medium bombers, capable of inflicting heavy damage on Israeli military and civilian centers.
Israeli weapons were mainly of Western origin. Its air force was composed principally of French aircraft, while its armored units were mostly of British and American design and manufacture. Some light infantry weapons, including the ubiquitous Uzi, were of Israeli origin.
AFVs – Arab armies: Egypt, Syria and Iraq used T-34/85, T-54, T-55 and PT-76 tanks, as well as SU-100/152 World War II-vintage Soviet self-propelled guns; Jordan used US M47, M48, and M48A1 Patton tanks; Syria also fielded ex-German Panzer IV, Sturmgeschütz III and Jagdpanzer IV vehicles (de Mazarrasa, Javier (1994). Blindados en España 2ª Parte: La Dificil Postguerra 1939–1960 (in Spanish). Valladolid, Spain: Quiron Ediciones. p. 50; Perrett, Bryan (1999). Panzerkampfwagen IV medium tank: 1936–1945. Oxford, United Kingdom: Osprey. p. 44). IDF: M50 and M51 Shermans, M48A3 Patton, Centurion, AMX-13, and the M32 tank recovery vehicle. The Centurion was upgraded with the British 105 mm L7 gun prior to the war; the Sherman also underwent extensive modifications, including a 105 mm medium-velocity French gun, redesigned turret, wider tracks, more armor, and an upgraded engine and suspension.
APCs/IFVs – Arab armies: BTR-40, BTR-152, BTR-50 and BTR-60 APCs. IDF: M2/M3 half-tracks, Panhard AML.
Artillery – Arab armies: M1937 howitzer, BM-21, D-30 (2A18) howitzer, M1954 field gun, and the M-52 105 mm self-propelled howitzer (used by Jordan). IDF: M50 self-propelled howitzer and Makmat 160 mm self-propelled mortar, M7 Priest, Obusier de 155 mm Modèle 50, AMX 105 mm self-propelled howitzer.
Aircraft – Arab air forces: MiG-21, MiG-19, MiG-17, Su-7B, Tu-16, Il-28, Il-18, Il-14, An-12; the Hawker Hunter was used by Jordan and Iraq. IAF: Dassault Mirage III, Dassault Super Mystère, Sud Aviation Vautour, Mystère IV, Dassault Ouragan, the Fouga Magister trainer outfitted for attack missions, and the Nord 2501IS military cargo plane.
Helicopters – Arab armies: Mi-6, Mi-4. IDF: Super Frelon, Sikorsky S-58.
Anti-aircraft weapons – Arab armies: SA-2 Guideline, ZSU-57-2 mobile anti-aircraft cannon. IDF: MIM-23 Hawk, Bofors 40 mm.
Infantry weapons – Arab armies: Port Said submachine gun, AK-47, RPK, RPD, DShK HMG, B-10 and B-11 recoilless rifles. IDF: Uzi, FN FAL, FN MAG, AK-47, M2 Browning, Cobra, Nord SS.10 and SS.11 missiles, RL-83 Blindicide anti-tank infantry weapon, and the jeep-mounted 106 mm recoilless rifle.
Nuclear weapons
Fighting fronts
Initial attack
The first and most critical move of the conflict was a surprise Israeli attack on the Egyptian Air Force. Initially, both Egypt and Israel announced that they had been attacked by the other country.
On 5 June at 7:45 Israeli time, with civil defense sirens sounding all over Israel, the IAF launched Operation Focus (Moked). All but 12 of its nearly 200 operational jets launched a mass attack against Egypt's airfields (author interview with Mordechai Hod, 7 May 2002). The Egyptian defensive infrastructure was extremely poor, and no airfields were yet equipped with hardened aircraft shelters capable of protecting Egypt's warplanes. Most of the Israeli warplanes headed out over the Mediterranean Sea, flying low to avoid radar detection, before turning toward Egypt. Others flew over the Red Sea.
Meanwhile, the Egyptians hindered their own defense by effectively shutting down their entire air defense system: they were worried that rebel Egyptian forces would shoot down the plane carrying Field Marshal Abdel Hakim Amer and Lt-Gen. Sidqi Mahmoud, who were en route from al Maza to Bir Tamada in the Sinai to meet the commanders of the troops stationed there. It did not make a great deal of difference as the Israeli pilots came in below Egyptian radar cover and well below the lowest point at which its SA-2 surface-to-air missile batteries could bring down an aircraft. (author interview with General Salahadeen Hadidi who presided over the first court-martial of the heads of the air force and the air defense system after the war).
Although the powerful Jordanian radar facility at Ajloun detected waves of aircraft approaching Egypt and reported the code word for "war" up the Egyptian command chain, Egyptian command and communications problems prevented the warning from reaching the targeted airfields. The Israelis employed a mixed-attack strategy: bombing and strafing runs against planes parked on the ground, and bombing to disable runways with special tarmac-shredding penetration bombs developed jointly with France, leaving surviving aircraft unable to take off.
The runway at the Arish airfield was spared, as the Israelis expected to turn it into a military airport for their transports after the war. Surviving aircraft were taken out by later attack waves. The operation was more successful than expected, catching the Egyptians by surprise and destroying virtually all of the Egyptian Air Force on the ground, with few Israeli losses. Only four unarmed Egyptian training flights were in the air when the strike began. A total of 338 Egyptian aircraft were destroyed and 100 pilots were killed, although the number of aircraft lost by the Egyptians is disputed; estimates by different sources range from 282 out of 420 and 304 out of 419 to over 350 planes destroyed.
Among the Egyptian planes lost were all 30 Tu-16 bombers, 27 out of 40 Il-28 bombers, 12 Su-7 fighter-bombers, over 90 MiG-21s, 20 MiG-19s, 25 MiG-17 fighters, and around 32 transport planes and helicopters. In addition, Egyptian radars and SAM missiles were also attacked and destroyed. The Israelis lost 19 planes, including two destroyed in air-to-air combat and 13 downed by anti-aircraft artillery. One Israeli plane, which was damaged and unable to break radio silence, was shot down by Israeli Hawk missiles after it strayed over the Negev Nuclear Research Center. Another was destroyed by an exploding Egyptian bomber.
The attack guaranteed Israeli air supremacy for the rest of the war. Attacks on other Arab air forces by Israel took place later in the day as hostilities broke out on other fronts.
The large numbers of Arab aircraft claimed destroyed by Israel on that day were at first regarded as "greatly exaggerated" by the Western press, but the fact that the Egyptian Air Force, along with other Arab air forces attacked by Israel, made practically no appearance for the remaining days of the conflict proved that the numbers were most likely authentic. Throughout the war, Israeli aircraft continued strafing Arab airfield runways to prevent their return to usability. Meanwhile, Egyptian state-run radio had reported an Egyptian victory, falsely claiming that 70 Israeli planes had been downed on the first day of fighting.
Gaza Strip and Sinai Peninsula
The Egyptian forces consisted of seven divisions: four armored, two infantry, and one mechanized infantry. Overall, Egypt had around 100,000 troops and 900–950 tanks in the Sinai, backed by 1,100 APCs and 1,000 artillery pieces. This arrangement was thought to be based on the Soviet doctrine, where mobile armor units at strategic depth provide a dynamic defense while infantry units engage in defensive battles.
Israeli forces concentrated on the border with Egypt comprised six armored brigades, one infantry brigade, one mechanized infantry brigade, and three paratrooper brigades, giving a total of around 70,000 men and 700 tanks organized in three armored divisions. They had massed on the border the night before the war, camouflaging themselves and observing radio silence before being ordered to advance.
The Israeli plan was to surprise the Egyptian forces in timing (the attack coinciding exactly with the IAF strike on Egyptian airfields), in location (attacking via the northern and central Sinai routes, as opposed to the Egyptian expectation of a repeat of the 1956 war, when the IDF attacked via the central and southern routes), and in method (using a combined-force flanking approach rather than direct tank assaults).
Northern (El Arish) Israeli division
On 5 June, at 7:50 am, the northernmost Israeli division, consisting of three brigades and commanded by Major General Israel Tal, one of Israel's most prominent armor commanders, crossed the border at two points, opposite Nahal Oz and south of Khan Yunis. They advanced swiftly, holding their fire to maintain the element of surprise. Tal's forces assaulted the "Rafah Gap", a stretch containing the shortest of three main routes through the Sinai towards El Qantara and the Suez Canal. The Egyptians had four divisions in the area, backed by minefields, pillboxes, underground bunkers, hidden gun emplacements and trenches. The terrain on either side of the route was impassable. The Israeli plan was to hit the Egyptians at selected key points with concentrated armor.
Tal's advance was led by the 7th Armored Brigade, under Colonel Shmuel Gonen. The Israeli plan called for the 7th Brigade to outflank Khan Yunis from the north and the 60th Armored Brigade, under Colonel Menachem Aviram, to advance from the south. The two brigades would link up and surround Khan Yunis, while the paratroopers would take Rafah. Gonen entrusted the breakthrough to a single battalion of his brigade.
Initially, the advance encountered light resistance, as Egyptian intelligence had concluded that it was a diversion for the main attack. As Gonen's lead battalion advanced, it suddenly came under intense fire and took heavy losses. A second battalion was brought up, but was also pinned down. Meanwhile, the 60th Brigade became bogged down in the sand, while the paratroopers had trouble navigating through the dunes. The Israelis continued to press their attack, and despite heavy losses, cleared the Egyptian positions and reached the Khan Yunis railway junction in a little over four hours.
Gonen's brigade then advanced to Rafah in twin columns. Rafah itself was circumvented, and the Israelis attacked Sheikh Zuweid, to the southwest, which was defended by two brigades. Though inferior in numbers and equipment, the Egyptians were deeply entrenched and camouflaged. The Israelis were pinned down by fierce Egyptian resistance and called in air and artillery support to enable their lead elements to advance. Many Egyptians abandoned their positions after their commander and several of his staff were killed.
The Israelis broke through with tank-led assaults, but Aviram's forces misjudged the Egyptians' flank and were pinned between strongholds before being extracted after several hours. By nightfall, the Israelis had finished mopping up resistance. The Israelis had taken significant losses, with Colonel Gonen later telling reporters that "we left many of our dead soldiers in Rafah and many burnt-out tanks." The Egyptians suffered some 2,000 casualties and lost 40 tanks.
Advance on Arish
On 5 June, with the road open, Israeli forces continued advancing towards Arish. Already by late afternoon, elements of the 79th Armored Battalion had charged through the Jiradi defile, a narrow pass defended by well-emplaced troops of the Egyptian 112th Infantry Brigade. In fierce fighting, which saw the pass change hands several times, the Israelis charged through the position. The Egyptians suffered heavy casualties and tank losses, while Israeli losses stood at 66 dead, 93 wounded and 28 tanks. Emerging at the western end, Israeli forces advanced to the outskirts of Arish. As it reached the outskirts of Arish, Tal's division also consolidated its hold on Rafah and Khan Yunis.
The following day, 6 June, the Israeli forces on the outskirts of Arish were reinforced by the 7th Brigade, which fought its way through the Jiradi pass. After receiving supplies via an airdrop, the Israelis captured the airport at 7:50 am and entered the city at 8:00 am. Company commander Yossi Peled recounted that "Al-Arish was totally quiet, desolate. Suddenly, the city turned into a madhouse. Shots came at us from every alley, every corner, every window and house." An IDF record stated that "clearing the city was hard fighting. The Egyptians fired from the rooftops, from balconies and windows. They dropped grenades into our half-tracks and blocked the streets with trucks. Our men threw the grenades back and crushed the trucks with their tanks." Gonen sent additional units to Arish, and the city was eventually taken.
Brigadier-General Avraham Yoffe's assignment was to penetrate the Sinai south of Tal's forces and north of Sharon's. Yoffe's attack allowed Tal to complete the capture of the Jiradi defile and Khan Yunis, both of which were taken after fierce fighting. Gonen subsequently dispatched a force of tanks, infantry and engineers under Colonel Yisrael Granit to continue down the Mediterranean coast towards the Suez Canal, while a second force led by Gonen himself turned south and captured Bir Lahfan and Jabal Libni.
Mid-front (Abu-Ageila) Israeli division
Further south, on 6 June, the Israeli 38th Armored Division under Major-General Ariel Sharon assaulted Um-Katef, a heavily fortified area defended by the Egyptian 2nd Infantry Division under Major-General Sa'adi Naguib (though Naguib was actually absent), which fielded Soviet World War II-era armor, including 90 T-34-85 tanks and 22 SU-100 tank destroyers, and about 16,000 men. The Israelis had about 14,000 men and 150 post-World War II tanks, including AMX-13s, Centurions, and M50 Super Shermans (modified M-4 Sherman tanks).
Two armored brigades in the meantime, under Avraham Yoffe, slipped across the border through sandy wastes that Egypt had left undefended because they were considered impassable. Simultaneously, Sharon's tanks from the west were to engage Egyptian forces on Um-Katef ridge and block any reinforcements. Israeli infantry would clear the three trenches, while heliborne paratroopers would land behind Egyptian lines and silence their artillery. An armored thrust would be made at al-Qusmaya to unnerve and isolate its garrison.
As Sharon's division advanced into the Sinai, Egyptian forces staged successful delaying actions at Tarat Umm, Umm Tarfa, and Hill 181. An Israeli jet was downed by anti-aircraft fire, and Sharon's forces came under heavy shelling as they advanced from the north and west. The Israeli advance, which had to cope with extensive minefields, took a large number of casualties. A column of Israeli tanks managed to penetrate the northern flank of Abu Ageila, and by dusk, all units were in position. The Israelis then brought up ninety 105 mm and 155 mm artillery cannon for a preparatory barrage, while civilian buses brought reserve infantrymen under Colonel Yekutiel Adam and helicopters arrived to ferry the paratroopers. These movements were unobserved by the Egyptians, who were preoccupied with Israeli probes against their perimeter.
As night fell, the Israeli assault troops lit flashlights, each battalion a different colour, to prevent friendly fire incidents. At 10:00 pm, Israeli artillery began a barrage on Um-Katef, firing some 6,000 shells in less than twenty minutes, the most concentrated artillery barrage in Israel's history.Leslie Stein,The Making of Modern Israel: 1948–1967 , Polity Press, 2013 p. 181 Israeli tanks assaulted the northernmost Egyptian defenses and were largely successful, though an entire armored brigade was stalled by mines, and had only one mine-clearance tank. Israeli infantrymen assaulted the triple line of trenches in the east. To the west, paratroopers commanded by Colonel Danny Matt landed behind Egyptian lines, though half the helicopters got lost and never found the battlefield, while others were unable to land due to mortar fire.
Those that successfully landed on target destroyed Egyptian artillery and ammunition dumps and separated gun crews from their batteries, sowing enough confusion to significantly reduce Egyptian artillery fire. Egyptian reinforcements from Jabal Libni advanced towards Um-Katef to counterattack but failed to reach their objective, being subjected to heavy air attacks and encountering Israeli lodgements on the roads. Egyptian commanders then called in artillery attacks on their own positions. The Israelis accomplished and sometimes exceeded their overall plan, and had largely succeeded by the following day. The Egyptians suffered about 2,000 casualties, while the Israelis lost 42 dead and 140 wounded.
Yoffe's attack allowed Sharon to complete the capture of Um-Katef after fierce fighting. The main thrust at Um-Katef had stalled due to mines and craters. After IDF engineers had cleared a path by 4:00 pm, Israeli and Egyptian tanks engaged in fierce combat, often at ranges as close as ten yards. The battle ended in an Israeli victory, with 40 Egyptian and 19 Israeli tanks destroyed. Meanwhile, Israeli infantry finished clearing out the Egyptian trenches, with Israeli casualties standing at 14 dead and 41 wounded and Egyptian casualties at 300 dead and 100 taken prisoner.
Other Israeli forces
Further south, on 5 June, the 8th Armored Brigade under Colonel Albert Mandler, initially positioned as a ruse to draw Egyptian forces away from the real invasion routes, attacked the fortified bunkers at Kuntilla, a strategically valuable position whose capture would enable Mandler to block reinforcements from reaching Um-Katef and to join Sharon's upcoming attack on Nakhl. The defending Egyptian battalion, outnumbered and outgunned, fiercely resisted the attack, hitting several Israeli tanks. Most of the defenders were killed, and only three Egyptian tanks, one of them damaged, survived. By nightfall, Mandler's forces had taken Kuntilla.
With the exceptions of Rafah and Khan Yunis, Israeli forces had initially avoided entering the Gaza Strip. Israeli Defense Minister Moshe Dayan had expressly forbidden entry into the area. After Palestinian positions in Gaza opened fire on the Negev settlements of Nirim and Kissufim, IDF Chief of Staff Yitzhak Rabin overrode Dayan's instructions and ordered the 11th Mechanized Brigade under Colonel Yehuda Reshef to enter the Strip. The force was immediately met with heavy artillery fire and fierce resistance from Palestinian forces and remnants of the Egyptian forces from Rafah.
By sunset, the Israelis had taken the strategically vital Ali Muntar ridge, overlooking Gaza City, but were beaten back from the city itself. Some 70 Israelis were killed, along with Israeli journalist Ben Oyserman and American journalist Paul Schutzer. Twelve members of UNEF were also killed. On the war's second day, 6 June, the Israelis were bolstered by the 35th Paratroopers Brigade under Colonel Rafael Eitan and took Gaza City along with the entire Strip. The fighting was fierce and accounted for nearly half of all Israeli casualties on the southern front. Gaza rapidly fell to the Israelis.
Meanwhile, on 6 June, two Israeli reserve brigades under Yoffe, each equipped with 100 tanks, penetrated the Sinai south of Tal's division and north of Sharon's, and captured the road junctions of Abu Ageila, Bir Lahfan, and Arish before midnight. Two Egyptian armored brigades counterattacked, and a fierce battle took place until the following morning. The Egyptians were beaten back by fierce resistance coupled with airstrikes, sustaining heavy tank losses, and fled west towards Jabal Libni.
The Egyptian Army
During the ground fighting, remnants of the Egyptian Air Force attacked Israeli ground forces but took losses from the Israeli Air Force and from Israeli anti-aircraft units. Throughout the last four days, Egyptian aircraft flew 150 sorties against Israeli units in the Sinai.
Many of the Egyptian units remained intact and could have tried to prevent the Israelis from reaching the Suez Canal, or engaged in combat in the attempt to reach the canal, but when the Egyptian Field Marshal Abdel Hakim Amer heard about the fall of Abu-Ageila, he panicked and ordered all units in the Sinai to retreat. This order effectively meant the defeat of Egypt.
Meanwhile, President Nasser, having learned of the results of the Israeli air strikes, decided together with Field Marshal Amer to order a general retreat from the Sinai within 24 hours. No detailed instructions were given concerning the manner and sequence of withdrawal.
Next fighting days
As Egyptian columns retreated, Israeli aircraft and artillery attacked them. Israeli jets used napalm bombs during their sorties. The attacks destroyed hundreds of vehicles and caused heavy casualties. At Jabal Libni, retreating Egyptian soldiers were fired upon by their own artillery. At Bir Gafgafa, the Egyptians fiercely resisted advancing Israeli forces, knocking out three tanks and eight half-tracks, and killing 20 soldiers. Due to the Egyptians' retreat, the Israeli High Command decided not to pursue the Egyptian units but rather to bypass and destroy them in the mountainous passes of West Sinai.
Therefore, in the following two days (6 and 7 June), all three Israeli divisions (Sharon and Tal were reinforced by an armored brigade each) rushed westwards and reached the passes. Sharon's division first went southward then westward, via An-Nakhl, to Mitla Pass with air support. It was joined there by parts of Yoffe's division, while its other units blocked the Gidi Pass. These passes became killing grounds for the Egyptians, who ran right into waiting Israeli positions and suffered heavy losses in both soldiers and vehicles. According to Egyptian diplomat Mahmoud Riad, 10,000 men were killed in one day alone, and many others died from thirst. Tal's units stopped at various points to the length of the Suez Canal.
Israel's blocking action was partially successful. Only the Gidi pass was captured before the Egyptians approached it; at other places, Egyptian units managed to pass through and cross the canal to safety. Due to the haste of the retreat, soldiers often abandoned weapons, military equipment, and hundreds of vehicles. Many Egyptian soldiers were cut off from their units and had to walk to the Suez Canal on foot with limited supplies of food and water, exposed to intense heat. Thousands died as a result. Many Egyptian soldiers chose instead to surrender to the Israelis, whose capacity to care for prisoners was eventually overwhelmed. As a result, the Israelis began directing soldiers towards the Suez Canal and imprisoned only high-ranking officers, who were expected to be exchanged for captured Israeli pilots.
According to some accounts, during the Egyptian retreat from the Sinai, a unit of Soviet Marines based on a Soviet warship in Port Said at the time came ashore and attempted to cross the Suez Canal eastward. The Soviet force was reportedly decimated by an Israeli air attack and lost 17 dead and 34 wounded. Among the wounded was the commander, Lt. Col. Victor Shevchenko.Ginor, Isabella and Remez, Gideon: The Soviet-Israeli War, 1967–1973: The USSR's Military Intervention in the Egyptian-Israeli Conflict, p. 23
During the offensive, the Israeli Navy landed six combat divers from the Shayetet 13 naval commando unit to infiltrate Alexandria harbor. The divers sank an Egyptian minesweeper before being taken prisoner. Shayetet 13 commandos also infiltrated Port Said harbor, but found no ships there. A planned commando raid against the Syrian Navy never materialized. Both Egyptian and Israeli warships made movements at sea to intimidate the other side throughout the war but did not engage each other. Israeli warships and aircraft hunted for Egyptian submarines throughout the war.
On 7 June, Israel began its attack on Sharm el-Sheikh. The Israeli Navy started the operation with a probe of Egyptian naval defenses. An aerial reconnaissance flight found that the area was less defended than originally thought. At about 4:30 am, three Israeli missile boats opened fire on Egyptian shore batteries, while paratroopers and commandos boarded helicopters and Nord Noratlas transport planes for an assault on Al-Tur, as Chief of Staff Rabin was convinced it was too risky to land them directly in Sharm el-Sheikh. The city had been largely abandoned the day before, and reports from air and naval forces finally convinced Rabin to divert the aircraft to Sharm el-Sheikh. There, the Israelis engaged in a pitched battle with the Egyptians and took the city, killing 20 Egyptian soldiers and taking eight more prisoners. At 12:15 pm, Defense Minister Dayan announced that the Straits of Tiran constituted an international waterway open to all ships without restriction.
On 8 June, Israel completed the capture of the Sinai by sending infantry units to Ras Sudar on the western coast of the peninsula.
Several tactical elements made the swift Israeli advance possible:
The surprise attack that quickly gave the Israeli Air Force complete air superiority over the Egyptian Air Force.
The determined implementation of an innovative battle plan.
The lack of coordination among Egyptian troops.
These factors would prove to be decisive elements on Israel's other fronts as well.
West Bank
Egyptian control of Jordanian forces
King Hussein had given control of his army to Egypt on 1 June, on which date Egyptian General Riad arrived in Amman to take control of the Jordanian military.
Egyptian Field Marshal Amer used the confusion of the first hours of the conflict to send a cable to Amman claiming that he was victorious; as evidence he cited a radar sighting of a squadron of Israeli aircraft returning from bombing raids in Egypt, which he said was an Egyptian squadron en route to attack Israel. In this cable, sent shortly before 9:00 am, Riad was ordered to attack.
Initial attack
One of the Jordanian brigades stationed in the West Bank was sent to the Hebron area in order to link with the Egyptians.
The IDF's strategic plan was to remain on the defensive along the Jordanian front, so that it could focus on the expected campaign against Egypt.
Intermittent machine-gun exchanges began taking place in Jerusalem at 9:30 am, and the fighting gradually escalated as the Jordanians introduced mortar and recoilless rifle fire. Under orders from General Narkis, the Israelis responded only with small-arms fire, firing in a flat trajectory to avoid hitting civilians, holy sites or the Old City. At 10:00 am on 5 June, the Jordanian Army began shelling Israel. Two batteries of 155 mm Long Tom cannons opened fire on the suburbs of Tel Aviv and Ramat David Airbase. The commanders of these batteries were instructed to lay a two-hour barrage against military and civilian settlements in central Israel. Some shells hit the outskirts of Tel Aviv (Michael Oren, "The Six-Day War and Its Enduring Legacy", summary of remarks at the Washington Institute for Near East Policy, 29 May 2002).
By 10:30 am, Eshkol had sent a message via Odd Bull to King Hussein promising not to initiate any action against Jordan if it stayed out of the war. King Hussein replied that it was too late, and "the die was cast". At 11:15 am, Jordanian howitzers began a 6,000-shell barrage at Israeli Jerusalem. The Jordanians initially targeted kibbutz Ramat Rachel in the south and Mount Scopus in the north, then ranged into the city center and outlying neighborhoods. Military installations, the Prime Minister's Residence, and the Knesset compound were also targeted. Jordanian forces shelled the Beit HaNassi and the Biblical Zoo, killing fifteen civilians. Israeli civilian casualties totalled 20 dead and over 1,000 wounded. Some 900 buildings were damaged, including Hadassah Ein Kerem Hospital, which had its Chagall-made windows destroyed.
Around midday, eight Iraqi Hawker Hunters attacked the Kfar Sirkin airfield, destroying a Noratlas transport aircraft and a Piper Super Cub. Four Jordanian Hunters also hit a factory hall in Netanya, killing one civilian and wounding seven.
Israeli cabinet meets
When the Israeli cabinet convened to decide on a plan of action, Yigal Allon and Menahem Begin argued that this was an opportunity to take the Old City of Jerusalem, but Eshkol decided to defer any decision until Moshe Dayan and Yitzhak Rabin could be consulted. Uzi Narkiss made proposals for military action, including the capture of Latrun, but the cabinet turned him down. Dayan rejected multiple requests from Narkiss for permission to mount an infantry assault towards Mount Scopus but sanctioned some limited retaliatory actions.
Initial response
Shortly before 12:30 pm, the Israeli Air Force attacked Jordan's two airbases. The Hawker Hunters were refueling at the time of the attack. The Israeli aircraft attacked in two waves, the first of which cratered the runways and knocked out the control towers, and the second wave destroyed all 21 of Jordan's Hawker Hunter fighters, along with six transport aircraft and two helicopters. One Israeli jet was shot down by ground fire.
Three Israeli Vautours also attacked H-3, an airfield in western Iraq used by the Iraqi Air Force. During the attack, three MiG-21s, one Hunter, one de Havilland Dove and one Antonov An-12 were destroyed on the ground. They also damaged the runway, although it was repaired by the next morning. The Jordanian radar facility at Ajloun was also destroyed in an Israeli airstrike.
Israeli Fouga Magister jets attacked the Jordanian 40th Brigade with rockets as it moved south from the Damia Bridge. Dozens of tanks were knocked out, and a convoy of 26 trucks carrying ammunition was destroyed. In Jerusalem, Israel responded to Jordanian shelling with a missile strike that devastated Jordanian positions. The Israelis used the L missile, a surface-to-surface missile developed jointly with France in secret.
The next morning, three Iraqi Hawker Hunters attacked a group of tanks in the process of refueling next to the road between Nazareth and Haifa. An Iraqi Tupolev Tu-16 also bombed a military installation 10 kilometers southeast of Afula, killing two Israeli soldiers, while another attacked Netanya and Ramat David Airbase, before being shot down near the Megiddo airfield. The aircraft crashed into a military storage complex hidden in a forest, killing its crew and 16 Israeli soldiers. Four Israeli Vautours escorted by two Mirages re-attacked the H-3 airfield, resulting in one Hunter crashing on take-off, and a Hunter and a MiG-21 being damaged in air combat.
On 7 June, four Vautours escorted by four Mirages attacked the H-3 airfield for the third time. This resulted in an air combat with Hunters, piloted by Iraqis, as well as a Jordanian and Pakistani pilot Saiful Azam. One Iraqi Hunter was shot down and its pilot killed, while the Israelis lost two Vautours and one Mirage, with three crewmen dead and two taken prisoner.
Jordanian battalion at Government House
A Jordanian battalion advanced up Government House ridge and dug in at the perimeter of Government House, the headquarters of the United Nations observers, and opened fire on Ramat Rachel, the Allenby Barracks and the Jewish section of Abu Tor with mortars and recoilless rifles. UN observers fiercely protested the incursion into the neutral zone, and several manhandled a Jordanian machine gun out of Government House after the crew had set it up in a second-floor window. After the Jordanians occupied Jabel Mukaber, an advance patrol was sent out and approached Ramat Rachel, where they came under fire from four civilians, including the wife of the director, who were armed with old Czech-made weapons.
The immediate Israeli response was an offensive to retake Government House and its ridge. The Jerusalem Brigade's Reserve Battalion 161, under Lieutenant-Colonel Asher Dreizin, was given the task. Dreizin had two infantry companies and eight tanks under his command, several of which broke down or became stuck in the mud at Ramat Rachel, leaving three for the assault. The Jordanians mounted fierce resistance, knocking out two tanks.
The Israelis broke through the compound's western gate and began clearing the building with grenades, before General Odd Bull, commander of the UN observers, compelled the Israelis to hold their fire, telling them that the Jordanians had already fled. The Israelis proceeded to take Antenna Hill, directly behind Government House, and clear out a series of bunkers to the west and south. The fighting, often conducted hand-to-hand, continued for nearly four hours before the surviving Jordanians fell back to trenches held by the Hittin Brigade, which were steadily overwhelmed. By 6:30 am, the Jordanians had retreated to Bethlehem, having suffered about 100 casualties. All but ten of Dreizin's soldiers were casualties, and Dreizin himself was wounded three times.
Israeli invasion
During the late afternoon of 5 June, the Israelis launched an offensive to encircle Jerusalem, which lasted into the following day. During the night, they were supported by intense tank, artillery and mortar fire to soften up Jordanian positions. Searchlights placed atop the Labor Federation building, then the tallest in Israeli Jerusalem, exposed and blinded the Jordanians. The Jerusalem Brigade moved south of Jerusalem, while the mechanized Harel Brigade and 55th Paratroopers Brigade under Mordechai Gur encircled it from the north.
A combined force of tanks and paratroopers crossed no-man's land near the Mandelbaum Gate. Gur's 66th paratroop battalion approached the fortified Police Academy. The Israelis used Bangalore torpedoes to blast their way through barbed wire leading up to the position while exposed and under heavy fire. With the aid of two tanks borrowed from the Jerusalem Brigade, they captured the Police Academy. After receiving reinforcements, they moved up to attack Ammunition Hill.
The Jordanian defenders, who were heavily dug-in, fiercely resisted the attack. All of the Israeli officers except for two company commanders were killed, and the fighting was mostly led by individual soldiers. The fighting was conducted at close quarters in trenches and bunkers and was often hand-to-hand. The Israelis captured the position after four hours of heavy fighting. During the battle, 36 Israeli and 71 Jordanian soldiers were killed. Even after the fighting on Ammunition Hill had ended, Israeli soldiers were forced to remain in the trenches due to Jordanian sniper fire from Givat HaMivtar until the Harel Brigade overran that outpost in the afternoon.
The 66th Battalion subsequently drove east and linked up with the Israeli enclave on Mount Scopus and its Hebrew University campus. Gur's other battalions, the 71st and 28th, captured the other Jordanian positions around the American Colony, despite being short on men and equipment and having come under a Jordanian mortar bombardment while waiting for the signal to advance.
At the same time, the IDF's 4th Brigade attacked the fortress at Latrun, which the Jordanians had abandoned due to heavy Israeli tank fire. The mechanized Harel Brigade attacked Har Adar, but seven tanks were knocked out by mines, forcing the infantry to mount an assault without armored cover. The Israeli soldiers advanced under heavy fire, jumping between rocks to avoid mines and the fighting was conducted at close quarters with knives and bayonets.
The Jordanians fell back after a battle that left two Israeli and eight Jordanian soldiers dead, and Israeli forces advanced through Beit Horon towards Ramallah, taking four fortified villages along the way. By the evening, the brigade arrived in Ramallah. Meanwhile, the 163rd Infantry Battalion secured Abu Tor following a fierce battle, severing the Old City from Bethlehem and Hebron.
Meanwhile, 600 Egyptian commandos stationed in the West Bank moved to attack Israeli airfields. Led by Jordanian intelligence scouts, they crossed the border and began infiltrating through Israeli settlements towards Ramla and Hatzor. They were soon detected and sought shelter in nearby fields, which the Israelis set on fire. Some 450 commandos were killed, and the remainder escaped to Jordan.
From the American Colony, the paratroopers moved towards the Old City. They planned to approach it via the lightly defended Salah al-Din Street, but made a wrong turn onto the heavily defended Nablus Road and ran into fierce resistance. Their tanks fired at point-blank range down the street, while the paratroopers mounted repeated charges. The Jordanians repelled several of these charges but gradually gave way to Israeli firepower and momentum. The Israelis suffered some 30 casualties – half the original force – while the Jordanians lost 45 dead and 142 wounded.
Meanwhile, the Israeli 71st Battalion breached barbed wire and minefields and emerged near Wadi Joz, near the base of Mount Scopus, from where the Old City could be cut off from Jericho and East Jerusalem from Ramallah. Israeli artillery targeted the one remaining route from Jerusalem to the West Bank, and shellfire deterred the Jordanians from counterattacking from their positions at Augusta-Victoria. An Israeli detachment then captured the Rockefeller Museum after a brief skirmish.
Afterwards, the Israelis broke through to the Jerusalem-Ramallah road. At Tel al-Ful, the Harel Brigade fought a running battle with up to thirty Jordanian tanks. The Jordanians stalled the advance and destroyed some half-tracks, but the Israelis launched air attacks and exploited the vulnerability of the external fuel tanks mounted on the Jordanian tanks. The Jordanians lost half their tanks, and retreated towards Jericho. Joining up with the 4th Brigade, the Israelis then descended through Shuafat and the site of what is now French Hill, through Jordanian defenses at Mivtar, emerging at Ammunition Hill.
With Jordanian defenses in Jerusalem crumbling, elements of the Jordanian 60th Brigade and an infantry battalion were sent from Jericho to reinforce Jerusalem. Its original orders were to repel the Israelis from the Latrun corridor, but due to the worsening situation in Jerusalem, the brigade was ordered to proceed to Jerusalem's Arab suburbs and attack Mount Scopus. Parallel to the brigade were infantrymen from the Imam Ali Brigade, who were approaching Issawiya. The brigades were spotted by Israeli aircraft and decimated by rocket and cannon fire. Other Jordanian attempts to reinforce Jerusalem were beaten back, either by armored ambushes or airstrikes.
Fearing damage to holy sites and the prospect of having to fight in built-up areas, Dayan ordered his troops not to enter the Old City. He also feared that Israel would be subjected to a fierce international backlash and the outrage of Christians worldwide if it forced its way into the Old City. Privately, he told David Ben-Gurion that he was also concerned over the prospect of Israel capturing Jerusalem's holy sites, only to be forced to give them up under the threat of international sanctions.
The West Bank
Israel was to gain almost total control of the West Bank by the evening of 7 June, and began its military occupation of the West Bank on that day, issuing a military order, the "Proclamation Regarding Law and Administration (The West Bank Area) (No. 2)—1967", which established the military government in the West Bank and granted the commander of the area full legislative, executive, and judicial power. Jordan had realised that it had no hope of defense as early as the morning of 6 June, just a day after the conflict had begun. At Nasser's request, Egypt's Abdul Munim Riad sent a situation update at midday on 6 June:
The situation on the West Bank is rapidly deteriorating. A concentrated attack has been launched on all axes, together with heavy fire, day and night. Jordanian, Syrian and Iraqi air forces in position H3 have been virtually destroyed. Upon consultation with King Hussein I have been asked to convey to you the following choices:
1. A political decision to cease fighting to be imposed by a third party (the USA, the Soviet Union or the Security Council).
2. To vacate the West Bank tonight.
3. To go on fighting for one more day, resulting in the isolation and destruction of the entire Jordanian Army.
King Hussein has asked me to refer this matter to you for an immediate reply.
An Egyptian order for Jordanian forces to withdraw across the Jordan River was issued at 10 am on 6 June; that afternoon King Hussein learned of the impending United Nations Security Council Resolution 233 and decided instead to hold out in the hope that a ceasefire would be implemented soon. It was already too late, as the counter-order caused confusion and in many cases, it was not possible to regain positions that had been left.
On 7 June, Dayan ordered his troops not to enter the Old City but, upon hearing that the UN was about to declare a ceasefire, he changed his mind, and without cabinet clearance, decided to capture it. Two paratroop battalions attacked Augusta-Victoria Hill, high ground overlooking the Old City from the east. One battalion attacked from Mount Scopus, and another attacked from the valley between it and the Old City. Another paratroop battalion, personally led by Gur, broke into the Old City and was joined by the other two battalions after their missions were complete. The paratroopers met little resistance. The fighting was conducted solely by the paratroopers; the Israelis did not use armor during the battle out of fear of severe damage to the Old City.
In the north, a battalion from Peled's division checked Jordanian defenses in the Jordan Valley. A brigade from Peled's division captured the western part of the West Bank. One brigade attacked Jordanian artillery positions around Jenin, which were shelling Ramat David Airbase. The Jordanian 12th Armored Battalion, which outnumbered the Israelis, held off repeated attempts to capture Jenin. Israeli air attacks took their toll, and the Jordanian M48 Pattons, with their external fuel tanks, proved vulnerable at short distances, even to the Israeli-modified Shermans. Twelve Jordanian tanks were destroyed, and only six remained operational.
Just after dusk, Israeli reinforcements arrived. The Jordanians continued to fiercely resist, and the Israelis were unable to advance without artillery and air support. One Israeli jet attacked the Jordanian commander's tank, wounding him and killing his radio operator and intelligence officer. The surviving Jordanian forces then withdrew to Jenin, where they were reinforced by the 25th Infantry Brigade. The Jordanians were effectively surrounded in Jenin.
The Jordanian infantry and their three remaining tanks managed to hold off the Israelis until 4:00 am, when three battalions arrived to reinforce them. The Jordanian tanks charged and knocked out multiple Israeli vehicles, and the tide began to shift. After sunrise, Israeli jets and artillery conducted a two-hour bombardment against the Jordanians. The Jordanians lost 10 dead and 250 wounded, and had only seven tanks left, including two without fuel, and sixteen APCs. The Israelis then fought their way into Jenin and captured the city after fierce fighting.
After the Old City fell, the Jerusalem Brigade reinforced the paratroopers, and continued to the south, capturing Judea and Gush Etzion. Hebron was taken without any resistance. Fearful that Israeli soldiers would exact retribution for the 1929 massacre of the city's Jewish community, Hebron's residents flew white sheets from their windows and rooftops. The Harel Brigade proceeded eastward, descending to the Jordan River.
On 7 June, Israeli forces seized Bethlehem, taking the city after a brief battle that left some 40 Jordanian soldiers dead, with the remainder fleeing. On the same day, one of Peled's brigades seized Nablus and then joined one of Central Command's armored brigades to fight the Jordanian forces, which held the advantage of superior equipment and were equal in numbers to the Israelis.
Again, the air superiority of the IAF proved paramount, as it immobilized the Jordanians and led to their defeat. One of Peled's brigades joined with its Central Command counterparts coming from Ramallah, and the remaining two blocked the Jordan River crossings together with Central Command's 10th Brigade. Engineering Corps sappers blew up the Abdullah and Hussein bridges with captured Jordanian mortar shells, while elements of the Harel Brigade crossed the river and occupied positions along the east bank to cover them, but quickly pulled back due to American pressure. The Jordanians, anticipating an Israeli offensive deep into Jordan, assembled the remnants of their army and the Iraqi units in Jordan to protect the western approaches to Amman and the southern slopes of the Golan Heights.
As Israel continued its offensive on 7 June, taking no account of the UN ceasefire resolution, the Egyptian-Jordanian command ordered a full Jordanian withdrawal for the second time, in order to avoid annihilation of the Jordanian Army. The withdrawal was complete by nightfall on 7 June.
After the Old City was captured, Dayan told his troops to "dig in" to hold it. When an armored brigade commander entered the West Bank on his own initiative, and stated that he could see Jericho, Dayan ordered him back. It was only after intelligence reports indicated that Hussein had withdrawn his forces across the Jordan River that Dayan ordered his troops to capture the West Bank. According to Narkis:
First, the Israeli government had no intention of capturing the West Bank. On the contrary, it was opposed to it. Second, there was not any provocation on the part of the IDF. Third, the rein was only loosened when a real threat to Jerusalem's security emerged. This is truly how things happened on June 5, although it is difficult to believe. The result was something that no one had planned.
Golan Heights
In May–June 1967, in preparation for conflict, the Israeli government planned to confine the confrontation to the Egyptian front, whilst taking into account the possibility of some fighting on the Syrian front.
Syrian front 5–8 June
Syria largely stayed out of the conflict for the first four days, apart from some sporadic shelling of Israeli settlements along the border. The Syrians were confounded by the scale of the destruction on the Egyptian front and, lacking air support and an experienced officer corps, reasoned that by sitting tight they could emerge with little damage.
False Egyptian reports of a crushing victory against the Israeli army and forecasts that Egyptian forces would soon be attacking Tel Aviv influenced Syria's decision to enter the war – in a sporadic manner – during this period. Syrian artillery began shelling northern Israel, and twelve Syrian jets attacked Israeli settlements in the Galilee. Israeli fighter jets intercepted the Syrian aircraft, shooting down three and driving off the rest. In addition, two of Lebanon's twelve Hawker Hunter jets crossed into Israeli airspace and began strafing Israeli positions in the Galilee. They were intercepted by Israeli fighter jets, and one was shot down.
On the evening of 5 June, the Israeli Air Force attacked Syrian airfields. The Syrian Air Force lost some 32 MiG 21s, 23 MiG-15 and MiG-17 fighters, and two Ilyushin Il-28 bombers, two-thirds of its fighting strength. The Syrian aircraft that survived the attack retreated to distant bases and played no further role in the war. Following the attack, Syria realized that the news it had received from Egypt of the near-total destruction of the Israeli military could not have been true.
On 6 June, a minor Syrian force tried to capture the water plants at Tel Dan (the subject of a fierce escalation two years earlier), Dan, and She'ar Yashuv. These attacks were repulsed with the loss of twenty soldiers and seven tanks. An Israeli officer was also killed. But a broader Syrian offensive quickly failed. Syrian reserve units were broken up by Israeli air attacks, and several tanks were reported to have sunk in the Jordan River.
Other problems included tanks being too wide for bridges, lack of radio communications between tanks and infantry, and units ignoring orders to advance. A post-war Syrian army report concluded:
Our forces did not go on the offensive either because they did not arrive or were not wholly prepared or because they could not find shelter from the enemy's aircraft. The reserves could not withstand the air attacks; they dispersed after their morale plummeted.
The Syrians bombarded Israeli civilian settlements in the Galilee Panhandle with two battalions of M-46 130 mm guns, four companies of heavy mortars, and dug-in Panzer IV tanks. The Syrian bombardment killed two civilians and hit 205 houses as well as farming installations. An inaccurate report from a Syrian officer claimed that, as a result of the bombardment, "the enemy appears to have suffered heavy losses and is retreating".
Israelis debate whether the Golan Heights should be attacked
On 7 and 8 June, the Israeli leadership debated whether to attack the Golan Heights as well. Syria had supported pre-war raids that had helped raise tensions and had routinely shelled Israel from the Heights, so some Israeli leaders wanted to see Syria punished. Military opinion was that the attack would be extremely costly, since it would entail an uphill battle against a strongly fortified enemy. The western side of the Golan Heights consists of a rock escarpment that rises 500 meters (about 1,640 ft) above the Sea of Galilee and the Jordan River, and then flattens into a gently sloping plateau. Dayan opposed the operation bitterly at first, believing such an undertaking would result in losses of 30,000 and might trigger Soviet intervention. Prime Minister Eshkol, on the other hand, was more open to the possibility, as was the head of the Northern Command, David Elazar, whose unbridled enthusiasm for and confidence in the operation may have eroded Dayan's reluctance.
Eventually, the situation on the Southern and Central fronts cleared up, intelligence estimated that the likelihood of Soviet intervention had been reduced, reconnaissance showed some Syrian defenses in the Golan region collapsing, and an intercepted cable revealed that Nasser was urging the President of Syria to immediately accept a ceasefire. At 3 am on 9 June, Syria announced its acceptance of the ceasefire. Despite this announcement, Dayan became more enthusiastic about the idea and four hours later at 7 am, "gave the order to go into action against Syria" without consultation or government authorization.
The Syrian army consisted of about 75,000 men grouped in nine brigades, supported by an adequate amount of artillery and armor. Israeli forces used in combat consisted of two brigades (the 8th Armored Brigade and the Golani Brigade) in the northern part of the front at Givat HaEm, and another two (infantry and one of Peled's brigades summoned from Jenin) in the center. The Golan Heights' unique terrain (mountainous slopes crossed by parallel streams every several kilometers running east to west), and the general lack of roads in the area channeled both forces along east–west axes of movement and restricted the ability of units to support those on either flank. Thus the Syrians could move north–south on the plateau itself, and the Israelis could move north–south at the base of the Golan escarpment. An advantage Israel possessed was the intelligence collected by Mossad operative Eli Cohen (who was captured and executed in Syria in 1965) regarding the Syrian battle positions. Syria had built extensive defensive fortifications to depths of up to 15 kilometers.
In contrast to all the other campaigns, the IAF was only partially effective in the Golan because the fixed fortifications held up well against air attack. The Syrian forces proved unable to put up an effective defense, largely because the officers were poor leaders and treated their soldiers badly; officers would often retreat from danger, leaving their men confused and ineffective. The Israelis also had the upper hand during close combat in the numerous Syrian bunkers along the Golan Heights, as they were armed with the Uzi, a submachine gun designed for close combat, while Syrian soldiers were armed with the heavier AK-47 assault rifle, designed for combat in more open areas.
Israeli attack: first day (9 June)
On the morning of 9 June, Israeli jets began carrying out dozens of sorties against Syrian positions from Mount Hermon to Tawfiq, using rockets salvaged from captured Egyptian stocks. The airstrikes knocked out artillery batteries and storehouses and forced transport columns off the roads. The Syrians suffered heavy casualties and a drop in morale, with some senior officers and troops deserting. The attacks also provided time as Israeli forces cleared paths through Syrian minefields. The airstrikes did not seriously damage the Syrians' bunkers and trench systems, and the bulk of Syrian forces on the Golan remained in their positions.
About two hours after the airstrikes began, the 8th Armored Brigade, led by Colonel Albert Mandler, advanced into the Golan Heights from Givat HaEm. Its advance was spearheaded by Engineering Corps sappers and eight bulldozers, which cleared away barbed wire and mines. As they advanced, the force came under fire, and five bulldozers were immediately hit. The Israeli tanks, with their manoeuvrability sharply reduced by the terrain, advanced slowly under fire toward the fortified village of Sir al-Dib, with their ultimate objective being the fortress at Qala. Israeli casualties steadily mounted.
Part of the attacking force lost its way and emerged opposite Za'ura, a redoubt manned by Syrian reservists. With the situation critical, Colonel Mandler ordered simultaneous assaults on Za'ura and Qala. Heavy and confused fighting followed, with Israeli and Syrian tanks struggling around obstacles and firing at extremely short ranges. Mandler recalled that "the Syrians fought well and bloodied us. We beat them only by crushing them under our treads and by blasting them with our cannons at very short range, from 100 to 500 meters." The first three Israeli tanks to enter Qala were stopped by a Syrian bazooka team, and a relief column of seven Syrian tanks arrived to repel the attackers.
The Israelis took heavy fire from the houses, but could not turn back, as other forces were advancing behind them, and they were on a narrow path with mines on either side. The Israelis continued pressing forward and called for air support. A pair of Israeli jets destroyed two of the Syrian tanks, and the remainder withdrew. The surviving defenders of Qala retreated after their commander was killed. Meanwhile, Za'ura fell in an Israeli assault, and the Israelis also captured the 'Ein Fit fortress.
In the central sector, the Israeli 181st Battalion captured the strongholds of Dardara and Tel Hillal after fierce fighting. Desperate fighting also broke out along the operation's northern axis, where the Golani Brigade attacked thirteen Syrian positions, including the formidable Tel Fakhr position. Navigational errors placed the Israelis directly under the Syrians' guns. In the fighting that followed, both sides took heavy casualties, with the Israelis losing all nineteen of their tanks and half-tracks. The Israeli battalion commander then ordered his twenty-five remaining men to dismount, divide into two groups, and charge the northern and southern flanks of Tel Fakhr. The first Israelis to reach the perimeter of the southern approach lay on the barbed wire, allowing their comrades to vault over them. From there, they assaulted the fortified Syrian positions. The fighting was waged at extremely close quarters, often hand-to-hand.
On the northern flank, the Israelis broke through within minutes and cleared out the trenches and bunkers. During the seven-hour battle, the Israelis lost 31 dead and 82 wounded, while the Syrians lost 62 dead and 20 captured. Among the dead was the Israeli battalion commander. The Golani Brigade's 51st Battalion took Tel 'Azzaziat, and Darbashiya also fell to Israeli forces.
By the evening of 9 June, the four Israeli brigades had all broken through to the plateau, where they could be reinforced and replaced. Thousands of reinforcements began reaching the front, those tanks and half-tracks that had survived the previous day's fighting were refuelled and replenished with ammunition, and the wounded were evacuated. By dawn, the Israelis had eight brigades in the sector.
Syria's first line of defense had been shattered, but the defenses beyond that remained largely intact. Mount Hermon and the Banias in the north, and the entire sector between Tawfiq and Customs House Road in the south remained in Syrian hands. In a meeting early on the night of 9 June, Syrian leaders decided to reinforce those positions as quickly as possible and to maintain a steady barrage on Israeli civilian settlements.
Israeli attack: second day (10 June)
Throughout the night, the Israelis continued their advance, though it was slowed by fierce resistance. An anticipated Syrian counterattack never materialized. At the fortified village of Jalabina, a garrison of Syrian reservists, levelling their anti-aircraft guns, held off the Israeli 65th Paratroop Battalion for four hours before a small detachment managed to penetrate the village and knock out the heavy guns.
Meanwhile, the 8th Brigade's tanks moved south from Qala, advancing to Wasit under heavy artillery and tank bombardment. At the Banias in the north, Syrian mortar batteries opened fire on advancing Israeli forces only after Golani Brigade sappers had cleared a path through a minefield, killing 16 Israeli soldiers and wounding four.
On the next day, 10 June, the central and northern groups joined in a pincer movement on the plateau, but that fell mainly on empty territory as the Syrian forces retreated. At 8:30 am, the Syrians began blowing up their own bunkers, burning documents and retreating. Several units joined by Elad Peled's troops climbed to the Golan from the south, only to find the positions mostly empty. When the 8th Brigade reached Mansura, from Wasit, the Israelis met no opposition and found abandoned equipment, including tanks, in perfect working condition. In the fortified Banias village, Golani Brigade troops found only several Syrian soldiers chained to their positions.
During the day, the Israeli units stopped after obtaining manoeuvre room between their positions and a line of volcanic hills to the west. In some locations, Israeli troops advanced after an agreed-upon cease-fire to occupy strategically strong positions. To the east, the terrain is an open, gently sloping plain. This position later became the cease-fire line known as the "Purple Line".
Time magazine reported: "In an effort to pressure the United Nations into enforcing a ceasefire, Damascus Radio undercut its own army by broadcasting the fall of the city of Quneitra three hours before it actually capitulated. That premature report of the surrender of their headquarters destroyed the morale of the Syrian troops left in the Golan area."
Conclusion
By 10 June, Israel had completed its final offensive in the Golan Heights, and a ceasefire was signed the day after. Israel had seized the Gaza Strip, the Sinai Peninsula, the West Bank of the Jordan River (including East Jerusalem), and the Golan Heights. About one million Arabs were placed under Israel's direct control in the newly captured territories. Israel's strategic depth grew to at least 300 kilometers in the south, 60 kilometers in the east, and 20 kilometers of extremely rugged terrain in the north, a security asset that would prove useful in the Yom Kippur War six years later.
Speaking three weeks after the war ended, as he accepted an honorary degree from Hebrew University, Yitzhak Rabin gave his reasoning behind the success of Israel:
Our airmen, who struck the enemies' planes so accurately that no one in the world understands how it was done and people seek technological explanations or secret weapons; our armoured troops who beat the enemy even when their equipment was inferior to his; our soldiers in all other branches ... who overcame our enemies everywhere, despite the latter's superior numbers and fortifications—all these revealed not only coolness and courage in the battle but ... an understanding that only their personal stand against the greatest dangers would achieve victory for their country and for their families, and that if victory was not theirs the alternative was annihilation.
In recognition of his contributions, Rabin was given the honor of naming the war for the Israelis. From the suggestions proposed, including the "War of Daring", "War of Salvation", and "War of the Sons of Light", he "chose the least ostentatious, the Six-Day War, evoking the days of creation".
Dayan's final report on the war to the Israeli general staff listed several shortcomings in Israel's actions, including misinterpretation of Nasser's intentions, overdependence on the United States, and reluctance to act when Egypt closed the Straits. He also credited several factors for Israel's success: Egypt did not appreciate the advantage of striking first, and Israel's adversaries did not accurately gauge its strength or its willingness to use it.
In Egypt, according to Heikal, Nasser had admitted his responsibility for the military defeat in June 1967. According to historian Abd al-Azim Ramadan, Nasser's mistaken decisions to expel the international peacekeeping force from the Sinai Peninsula and close the Straits of Tiran in 1967 led to a state of war with Israel, despite Egypt's lack of military preparedness.
After the 1973 Yom Kippur War, Egypt reviewed the causes of its loss of the 1967 war. Issues that were identified included "the individualistic bureaucratic leadership"; "promotions on the basis of loyalty, not expertise, and the army's fear of telling Nasser the truth"; lack of intelligence; and better Israeli weapons, command, organization, and will to fight.
Casualties and losses
Between 776 and 983 Israelis were killed and 4,517 were wounded. Fifteen Israeli soldiers were captured. Arab casualties were far greater. Between 9,800 (El Gamasy 1993, p. 79) and 15,000 Egyptian soldiers were listed as killed or missing in action. An additional 4,338 Egyptian soldiers were captured. Jordanian losses are estimated to be 700 killed in action with another 2,500 wounded. The Syrians were estimated to have sustained between 1,000 and 2,500 killed in action. Between 367 and 591 Syrians were captured.
Casualties were also suffered by UNEF, the United Nations Emergency Force that was stationed on the Egyptian side of the border. In three different episodes, Israeli forces attacked a UNEF convoy, camps in which UNEF personnel were concentrated, and the UNEF headquarters in Gaza, killing one Brazilian peacekeeper and 14 Indian personnel and wounding a further seventeen peacekeepers from the two contingents.
Regarding material losses, 46 Israeli aircraft and 400 tanks were destroyed (Zaloga, Steven (1981). Armour of the Middle East Wars 1948–78. Osprey Publishing). Egyptian losses were reported at 700 tanks according to President Nasser (Dunstan, Simon. The Six Day War 1967: Sinai. Osprey Publishing, 2009, p. 88), although Israeli officials claimed to have destroyed 509 Egyptian tanks (Aloni, Shlomo. Six-Day War 1967: Operation Focus and the 12 hours that changed the Middle East. Osprey Publishing, 2019, p. 92). Egyptian aircraft losses range from 282 to 350. Syria lost close to 60 aircraft. Jordanian losses amounted to 21 aircraft, including 17 military aircraft, 1 helicopter and 3 passenger aircraft (Cooper, Tom & Salti, Patricia. Hawker Hunters at War: Iraq and Jordan, 1958–1967. Helion & Company, 2016, pp. 45, 59). Iraq lost 9 aircraft (Cooper & Salti 2016, pp. 46–53); Lebanon lost two.
Controversies
Preemptive war vs war of aggression
At the commencement of hostilities, both Egypt and Israel announced that they had been attacked by the other country. The Israeli government later abandoned its initial position, acknowledging that Israel had struck first and describing the attack as a preemptive move to prevent an anticipated invasion by Egypt. The Arab view was that it was unjustified to attack Egypt (UN Security Council meeting 1347, 5 June 1967). Many scholars consider the war a case of preventive war as a form of self-defense; Terence Taylor wrote in 2004 that "many scholars" considered Israel to have "conducted the (1967) action in anticipatory self-defense". The war has been assessed by others as a war of aggression.
Allegations of atrocities committed against Egyptian soldiers
It has been alleged that Nasser did not want Egypt to learn of the true extent of his defeat and so ordered the killing of Egyptian army stragglers making their way back to the Suez canal zone. There have also been allegations from both Israeli and Egyptian sources that Israeli troops killed unarmed Egyptian prisoners (Bar-Zohar, Michael, "The Reactions of Journalists to the Army's Murders of POWs", Maariv, 17 August 1995; Fisher, Ronal, "Mass Murder in the 1956 War", Ma'ariv, 8 August 1995).
Allegations of military support from the US, UK and Soviet Union
There have been allegations of direct military support of Israel during the war by the US and the UK, including the supply of equipment (despite an embargo) and the participation of US forces in the conflict. Many of these allegations and conspiracy theories have been disputed and it has been claimed that some were given currency in the Arab world to explain the Arab defeat.
It has also been claimed that the Soviet Union, in support of its Arab allies, used its naval strength in the Mediterranean to act as a major restraint on the US Navy.
America features prominently in Arab conspiracy theories purporting to explain the June 1967 defeat. Mohamed Hassanein Heikal, a confidant of Nasser, claims that President Lyndon B. Johnson was obsessed with Nasser and that Johnson conspired with Israel to bring him down. The reported Israeli troop movements seemed all the more threatening because they were perceived in the context of a US conspiracy against Egypt. Salah Bassiouny of the Foreign Ministry claims that the ministry saw the reported Israeli troop movements as credible because Israel had reached the level at which it could find strategic alliance with the United States.
During the war, Cairo announced that American and British planes were participating in the Israeli attack, and Nasser broke off diplomatic relations following this allegation. Nasser's image of the United States was such that he might well have believed the worst. Anwar Sadat implied that Nasser made the accusation against the United States deliberately, as a political cover-up for domestic consumption. Lutfi Abd al-Qadir, the director of Radio Cairo during the late 1960s, who accompanied Nasser on his visits to Moscow, advanced his own conspiracy theory that both the Soviets and the Western powers wanted to topple Nasser or reduce his influence.
USS Liberty incident
On 8 June 1967, USS Liberty, a United States Navy electronic intelligence vessel sailing off Arish (just outside Egypt's territorial waters), was attacked by Israeli jets and torpedo boats, nearly sinking the ship, killing 34 sailors and wounding 171. Israel said the attack was a case of mistaken identity, and that the ship had been misidentified as the Egyptian vessel El Quseir. Israel apologized for the mistake and paid compensation to the victims or their families, and to the United States for damage to the ship. After an investigation, the U.S. accepted the explanation that the incident was an accident and the issue was closed by the exchange of diplomatic notes in 1987. Others, including the then United States Secretary of State Dean Rusk and Chief of Naval Operations Admiral Thomas Moorer, some survivors of the attack, and intelligence officials familiar with transcripts of intercepted signals on the day, have rejected these conclusions as unsatisfactory and maintain that the attack was made in the knowledge that the ship was American (Tim Fischer, "Six days of war, 40 years of secrecy", The Age, 27 May 2007; cf. Dean Rusk, As I Saw It: A Secretary of State's Memoirs, W. W. Norton, 1990, pp. 386–388).
Aftermath
The political importance of the 1967 War was immense. Israel demonstrated again that it was able and willing to initiate strategic strikes that could change the regional balance. Egypt and Syria learned tactical lessons and would launch an attack in 1973 in an attempt to reclaim their lost territories.
After following other Arab nations in declaring war, Mauritania remained in a declared state of war with Israel until about 1999 ("War and its Legacy: Amos Oz", In Bed with Phillip, 10 September 1991; re-broadcast on ABC Radio National, 23 December 2011). The United States imposed an embargo on new arms agreements to all Middle East countries, including Israel. The embargo remained in force until the end of 1967, despite urgent Israeli requests to lift it.
Exodus of Arabs from Israeli-occupied territories
There was extensive displacement of populations in the occupied territories: of about one million Palestinians in the West Bank and Gaza, 280,000 to 325,000 were displaced from their homes. Most of them settled in Jordan. The other 700,000 remained. In the Golan Heights, over 100,000 fled. Israel allowed only the inhabitants of East Jerusalem and the Golan Heights to receive full Israeli citizenship, applying its law, administration and jurisdiction to these territories in 1967 and 1981, respectively. The vast majority of the populations in both territories declined to take citizenship. See also Israeli–Palestinian conflict and Golan Heights.
In his book Righteous Victims (1999), Israeli "New Historian" Benny Morris writes:
In addition, between 80,000 and 110,000 Syrians fled the Golan Heights, of whom about 20,000 were from the city of Quneitra. According to more recent research by the Israeli daily Haaretz, a total of 130,000 Syrian inhabitants fled or were expelled from the territory, most of them pushed out by the Israeli army (Shay Fogelman, "The disinherited", Haaretz, 30 July 2010).
Israel and Zionism
Following the war, Israel experienced a wave of national euphoria, and the press praised the military's performance for weeks afterwards. New "victory coins" were minted to celebrate. In addition, the world's interest in Israel grew, and the country's economy, which had been in crisis before the war, flourished due to an influx of tourists and donations, as well as the extraction of oil from the Sinai's wells. The aftermath of the war also saw a baby boom, which lasted for four years.
The aftermath of the war is also of religious significance. Under Jordanian rule, Jews were expelled from Jerusalem and were effectively barred from visiting the Western Wall, despite Article VIII of the 1949 Armistice Agreement, which required arrangements for Israeli Jewish access to the Western Wall. Jewish holy sites were not maintained, and Jewish cemeteries had been desecrated. After Israel's annexation of East Jerusalem, each religious group was granted administration over its holy sites. For the first time since 1948, Jews could visit the Old City of Jerusalem and pray at the Western Wall, the holiest site where Jews are permitted to pray, an event celebrated every year during Yom Yerushalayim.
Although the Temple Mount, where the Al-Aqsa compound is located, is the most important holy site in Jewish tradition, it has remained under the sole administration of the Jordanian Muslim Waqf, and Jews are barred from praying there, although they are allowed to visit (The "Status Quo" on the Temple Mount, November–December 2014; "Jerusalem in the unholy grip of religious fervor", The Times of Israel, 6 November 2014). In Hebron, Jews gained access to the Cave of the Patriarchs – the second-most holy site in Judaism, after the Temple Mount – for the first time since the 14th century; previously, Jews had been allowed to pray only at the entrance (Cave of the Patriarchs, Chabad.org). Other Jewish holy sites, such as Rachel's Tomb in Bethlehem and Joseph's Tomb in Nablus, also became accessible.
The war inspired the Jewish diaspora, which was swept up in overwhelming support for Israel. According to Michael Oren, the war enabled American Jews to "walk with their backs straight and flex their political muscle as never before. American Jewish organizations which had previously kept Israel at arm's length suddenly proclaimed their Zionism." Thousands of Jewish immigrants arrived from Western countries such as the United States, United Kingdom, Canada, France and South Africa after the war. Many of them returned to their countries of origin after a few years; one survey found that 58% of American Jews who immigrated to Israel between 1961 and 1972 returned to the United States. Nevertheless, immigration to Israel of Jews from Western countries, which had previously been only a trickle, was a significant force for the first time (Judy Maltz, "The Rise – and Rise – of French Jewry's Immigration to Israel", Haaretz, 13 January 2015).
Most notably, the war stirred Zionist passions among Jews in the Soviet Union, who had by that time been forcibly assimilated. Many Soviet Jews subsequently applied for exit visas and began protesting for their right to immigrate to Israel. Following diplomatic pressure from the West, the Soviet government began granting exit visas to Jews in growing numbers. From 1970 to 1988, some 291,000 Soviet Jews were granted exit visas, of whom 165,000 immigrated to Israel and 126,000 immigrated to the United States (Tolts, Mark, Post-Soviet Aliyah and Jewish Demographic Transformation). The great rise in Jewish pride in the wake of Israel's victory also fueled the beginnings of the baal teshuva movement, the return of secular Jews to religious Judaism. The war gave impetus to a campaign in which the leader of the Hasidic Lubavitch movement directed his male followers around the world to wear tefillin (small leather boxes) during morning prayers.
Jews in Arab countries
In the Arab nations, populations of minority Jews faced persecution and expulsion following the Israeli victory, contributing to the Jewish exodus from Arab lands, which had been ongoing since 1948. As a result, Jewish populations in Arab countries further diminished as many Jews emigrated to Israel and other Western countries. According to historian and ambassador Michael Oren:
Mobs attacked Jewish neighborhoods in Egypt, Yemen, Lebanon, Tunisia, and Morocco, burning synagogues and assaulting residents. A pogrom in Tripoli, Libya, left 18 Jews dead and 25 injured; the survivors were herded into detention centers. Of Egypt's 4,000 Jews, 800 were arrested, including the chief rabbis of both Cairo and Alexandria, and their property sequestered by the government. The ancient communities of Damascus and Baghdad were placed under house arrest, their leaders imprisoned and fined. A total of 7,000 Jews were expelled, many with merely a satchel.
Antisemitism in Communist countries
Following the war, a series of antisemitic purges began in Communist countries (Włodzimierz Rozenbaum, CIAO: Intermarium, National Convention of the American Association for the Advancement of Slavic Studies, Atlanta, Ga., 8–11 October 1975). Some 11,200 Jews from Poland immigrated to Israel during the 1968 Polish political crisis and the following year (Communiqué: Investigation regarding communist state officers who publicly incited hatred towards people of different nationality, Institute of National Remembrance, Warsaw, published on the Polish site of the IPN, 25 July 2007).
War of Attrition
Following the war, Egypt initiated clashes along the Suez Canal in what became known as the War of Attrition (Dalia Gavriely-Nuri, The Normalization of War in Israeli Discourse, 1967–2008, Lexington Books, p. 107).
Palestinian terrorism
As a result of Israel's victory, the Palestinian leadership concluded that the Arab world was not able to defeat Israel in open warfare, which in turn led to an increase in terrorist attacks with an international reach (Jenkins, B. M. (1978), "International terrorism: trends and potentialities", Journal of International Affairs, pp. 115–123: "Some perceive today's terrorism as the outgrowth of unique political circumstances prevailing in the late 1960s: the Israeli defeat of the Arabs in 1967, which caused Palestinians to abandon their dependence on Arab military power and turn to terrorism."). While the Palestine Liberation Organization (PLO) was established in 1964, it became more active after the Six-Day War; its actions gave credibility to those who claimed that only terror could end Israel's existence. Also after the war, the Popular Front for the Liberation of Palestine emerged, with its leader George Habash speaking of turning the occupied territories into an "inferno whose fires consume the usurpers". These events led to a series of hijackings, bombings, and kidnappings that culminated in the massacre of Israeli athletes during the 1972 Munich Olympics.
Peace and diplomacy
Some of the attending heads of state at the Arab League Summit in Khartoum following the Six-Day War, 2 September 1967. From left to right: Faisal of Saudi Arabia, Gamal Abdel Nasser of Egypt, Abdullah al-Sallal of Yemen, Sabah Al-Salim Al-Sabah of Kuwait and Abd al-Rahman Arif of Iraq.
Following the war, Israel made an offer for peace that included the return of most of the recently captured territories. According to Chaim Herzog:
The 19 June Israeli cabinet decision did not include the Gaza Strip and left open the possibility of Israel permanently acquiring parts of the West Bank. On 25–27 June, Israel incorporated East Jerusalem together with areas of the West Bank to the north and south into Jerusalem's new municipal boundaries.
The Israeli decision was to be conveyed to the Arab nations by the United States. The U.S. was informed of the decision, but not that it was to transmit it. There is no evidence of receipt from Egypt or Syria, and some historians claim that they may never have received the offer.
In September, the Khartoum Arab Summit resolved that there would be "no peace, no recognition and no negotiation with Israel". Avraham Sela notes that the Khartoum conference effectively marked a shift in the perception of the conflict by the Arab states away from one centered on the question of Israel's legitimacy, toward one focusing on territories and boundaries. This was shown on 22 November when Egypt and Jordan accepted United Nations Security Council Resolution 242. Nasser forestalled any movement toward direct negotiations with Israel. In dozens of speeches and statements, Nasser posited the equation that any direct peace talks with Israel were tantamount to surrender.
After the war, the entire Soviet bloc of Eastern Europe (with the exception of Romania) broke off diplomatic relations with Israel.
Mao-era China contended that the Arab defeat in the Six-Day War demonstrated that only people's war, not other strategies or methods, could defeat imperialism in the Middle East.
The 1967 War laid the foundation for future discord in the region, as the Arab states resented Israel's victory and did not want to give up territory.
On 22 November 1967, the United Nations Security Council adopted Resolution 242, the "land for peace" formula, which called for Israeli withdrawal "from territories occupied" in 1967 and "the termination of all claims or states of belligerency". Resolution 242 recognized the right of "every state in the area to live in peace within secure and recognized boundaries free from threats or acts of force." Israel returned the Sinai to Egypt in 1978, after the Camp David Accords. In the summer of 2005, Israel withdrew all military forces and evacuated all civilians from the Gaza Strip. Its army frequently re-enters Gaza for military operations and still retains control of the seaports, airports and most of the border crossings.
Long term
Israel made peace with Egypt following the Camp David Accords of 1978 and completed a staged withdrawal from the Sinai in 1982. The position of the other occupied territories has been a long-standing and bitter cause of conflict for decades between Israel and the Palestinians, and the Arab world in general. Jordan and Egypt eventually withdrew their claims to sovereignty over the West Bank and Gaza, respectively. Israel and Jordan signed a peace treaty in 1994 (Asher Susser, "Fifty Years since the Six-Day War", The RUSI Journal (2017) 162:3, pp. 40–48, DOI: 10.1080/03071847.2017.1353270).
After the Israeli occupation of these territories, the Gush Emunim movement launched a large settlement effort in these areas to secure a permanent foothold. There are now hundreds of thousands of Israeli settlers in the West Bank. They are a matter of controversy within Israel, both among the general population and among successive governments, which have supported them to varying degrees; Palestinians consider them a provocation. The Israeli settlements in Gaza were evacuated in August 2005 as a part of Israel's disengagement from Gaza (Susser, "Fifty Years since the Six-Day War"; Idith Zertal and Akiva Eldar, Lords of the Land: The War over Israel's Settlements in the Occupied Territories, 1967–2007, Nation Books, 2007, ch. 1).
See also
Catch 67, a 2017 Israeli philosophy book on the West Bank occupation that launched a public dialogue on the war's 50th anniversary
Abba Eban, Israeli Foreign Minister
Israeli MIAs
Ras Sedr massacre
Syrian towns and villages depopulated in the Arab–Israeli conflict
References
Explanatory notes
It was twenty minutes after the capture of the Western Wall that David Rubinger shot his "signature" photograph of three Israeli paratroopers gazing in wonder up at the wall. As part of the terms for his access to the front lines, Rubinger handed the negatives to the Israeli government, who then distributed this image widely. Although he was displeased with the violation of his copyright, the widespread use of his photo made it famous, and it is now considered a defining image of the conflict and one of the best-known in the history of Israel.
Gideon Rafael [Israeli Ambassador to the UN] received a message from the Israeli foreign office: "Inform immediately the President of the Sec. Co. that Israel is now engaged in repelling Egyptian land and air forces." At 3:10 am, Rafael woke ambassador Hans Tabor, the Danish President of the Security Council for June, with the news that Egyptian forces had "moved against Israel".
[At Security Council meeting of 5 June], both Israel and Egypt claimed to be repelling an invasion by the other.
"Egyptian sources claimed that Israel had initiated hostilities [...] but Israeli officials – Eban and Evron – swore that Egypt had fired first".
"Gideon Rafael phoned Danish ambassador Hans Tabor, Security Council president for the month of June, and informed him that Israel was responding to a 'cowardly and treacherous' attack from Egypt...".
Citations
: "the prominent historian and commentator Abd al-Azim Ramadan, In a series of articles published in AlWafd, subsequently compiled in a hook published in 2000, Ramadan criticized the Nasser cult, .... The events leading up to the nationalization of the Suez Canal Company, as other events during Nasser's rule, Ramadan wrote, showed Nasser to be far from a rational, responsible leader. ... His decision to nationalize the Suez Canal was his alone, made without political or military consultation. ... The source of all this evil. Ramadan noted, was Nasser's inclination to solitary decision making... the revolutionary regime led by the same individual—Nasser—repeated its mistakes when it decided to expel the international peacekeeping force from the Sinai Peninsula and close the Straits of Tiran in 1967. Both decisions led to a state of war with Israel, despite the lack of military preparedness."
: "The most outstanding exponent of the Nasserist narrative was Muhammad Hasanayn Haykal, who also embodied the revolutionary heritage personally as Nasser's closest aid and the editor in chief of the state-sponsored dailies Al-Akhbar and Al-Ahram.... Haykal acknowledged that Nasser had erred in various fields, noting that he had admitted, for example, his responsibility for the military defeat in the June 1967 War."
: "In May–June 1967 Eshkol's government did everything in its power to confine the confrontation to the Egyptian front. Eshkol and his colleagues took into account the possibility of some fighting on the Syrian front. But they wanted to avoid having a clash with Jordan and the inevitable complications of having to deal with the predominantly Palestinian population of the West Bank. The fighting on the eastern front was initiated by Jordan, not by Israel. King Hussein got carried along by a powerful current of Arab nationalism. On 30 May he flew to Cairo and signed a defense pact with Nasser. On 5 June, Jordan started shelling the Israeli side in Jerusalem. This could have been interpreted either as a salvo to uphold Jordanian honour or as a declaration of war. Eshkol decided to give King Hussein the benefit of the doubt. Through General Odd Bull, the Norwegian commander of UNTSO, he sent the following message the morning of 5 June: "We shall not initiate any action whatsoever against Jordan. However, should Jordan open hostilities, we shall react with all our might, and the king will have to bear the full responsibility of the consequences." King Hussein told General Bull that it was too late; the die was cast."
Israel Ministry of Foreign Affairs (2008). The Six-Day War (June 1967) .
Israel Ministry of Foreign Affairs (2004). Background on Israeli POWs and MIAs .
General and cited sources
Available in multiple PDF files from the Combat Studies Institute and the Combined Arms Research Library, CSI Publications in parts .
Further reading
al-Qusi, Abdallah Ahmad Hamid. (1999). Al-Wisam fi at-Ta'rikh. Cairo: Al-Mu'asasa al-'Arabiya al-Haditha. No ISBN available.
Aloni, Shlomo (2001). Arab–Israeli Air Wars 1947–1982. Osprey Aviation.
Alteras, Isaac. (1993). Eisenhower and Israel: U.S.–Israeli Relations, 1953–1960, University Press of Florida. .
Bachmutsky, Roi. "Otherwise occupied: The legal status of the Gaza strip 50 years after the six-day war." Virginia Journal of International Law 57 (2017): 413+ online .
Barzilai, Gad (1996). Wars, Internal Conflicts, and Political Order: A Jewish Democracy in the Middle East. New York University Press.
Ben-Gurion, David. (1999). Ben-Gurion diary: May–June 1967. Israel Studies 4(2), 199–220.
Black, Ian (1992). Israel's Secret Wars: A History of Israel's Intelligence Services. Grove Press.
Bober, Arie (ed.) (1972). The other Israel. Doubleday Anchor. .
Boczek, Boleslaw Adam (2005). International Law: A Dictionary. Scarecrow Press.
Borowiec, Andrew. (1998). Modern Tunisia: A Democratic Apprenticeship. Greenwood Publishing Group. .
Brecher, Michael. (1996). Eban and Israeli foreign policy: Diplomacy, war and disengagement. In A Restless Mind: Essays in Honor of Amos Perlmutter, Benjamin Frankel (ed.), pp. 104–117. Routledge.
Bregman, Ahron (2000). Israel's Wars, 1947–1993. Routledge. .
Bregman, Ahron (2002). Israel's Wars: A History Since 1947. London: Routledge.
Christie, Hazel (1999). Law of the Sea. Manchester: Manchester University Press.
Colaresi, Michael P. (2005). Scare Tactics: The politics of international rivalry. Syracuse University Press.
Cristol, A Jay (2002). Liberty Incident: The 1967 Israeli Attack on the U.S. Navy Spy Ship. Brassey's.
Eban, Abba (1977). Abba Eban: An Autobiography. Random House.
El-Gamasy, Mohamed Abdel Ghani. (1993). The October War. The American University in Cairo Press. .
Finkelstein, Norman (June 2017). Analysis of the war and its aftermath, on the 50th anniversary of the June 1967 war (3 parts, each about 30 min)
Gelpi, Christopher (2002). Power of Legitimacy: Assessing the Role of Norms in Crisis Bargaining. Princeton University Press.
Gerner, Deborah J. (1994). One Land, Two Peoples. Westview Press. , p. 112
Gerteiny, Alfred G. & Ziegler, Jean (2007). The Terrorist Conjunction: The United States, the Israeli-Palestinian Conflict, and Al-Qā'ida. Greenwood Publishing Group. , p. 142
Gilbert, Martin. (2008). Israel – A History. McNally & Loftin Publishers. . Chapter available online: Chapter 21: Nasser's Challenge .
Goldstein, Erik (1992). Wars and Peace Treaties, 1816–1991. Routledge.
Haddad, Yvonne. (1992). Islamists and the "Problem of Israel": The 1967 Awakening. Middle East Journal, Vol. 46, No. 2, pp. 266–285.
Hajjar, Sami G. The Israel-Syria Track, Middle East Policy, Volume VI, February 1999, Number 3. Retrieved 30 September 2006.
Handel, Michael I. (1973). Israel's political-military doctrine. Center for International Affairs, Harvard University.
Herbert, Nicholas (17 May 1967). Egyptian Forces On Full Alert: Ready to fight for Syria. The Times, p. 1; Issue 56943; col E.
Higham, Robin. (2003). 100 Years of Air Power and Aviation. TAMU Press. .
Hinnebusch, Raymond A. (2003). The international politics of the Middle East. Manchester University Press.
Hopwood, Derek (1991). Egypt: Politics and Society. London: Routledge.
Hussein of Jordan (1969). My "War" with Israel. London: Peter Owen.
James, Laura (2005). Nasser and His Enemies: Foreign Policy Decision Making in Egypt on the Eve of the Six Day War. The Middle East Review of International Affairs. Volume 9, No. 2, Article 2.
Jia, Bing Bing. (1998). The Regime of Straits in International Law (Oxford Monographs in International Law). Oxford University Press, USA. .
Katz, Samuel M. (1991) Israel's Air Force; The Power Series. Motorbooks International Publishers & Wholesalers, Osceola, WI.
Koboril, Iwao and Glantz, Michael H. (1998). Central Eurasian Water Crisis. United Nations University Press.
Lavoy, Peter R.; Sagan, Scott Douglas & Wirtz, James J. (Eds.) (2000). Planning the Unthinkable: How New Powers Will Use Nuclear, Biological, and Chemical Weapons. Cornell University Press. .
Leibler, Isi (1972). The Case For Israel. Australia: The Executive Council of Australian Jewry. .
Little, Douglas. "Nasser Delenda Est: Lyndon Johnson, The Arabs, and the 1967 Six-Day War," in H.W. Brands, ed. The foreign policies of Lyndon Johnson : beyond Vietnam (1999) pp 145–167. online
Lyndon Baines Johnson Library. (1994). Transcript, Robert S. McNamara Oral History, Special Interview I, 26 March 1993, by Robert Dallek, Internet Copy, LBJ Library. Retrieved 20 July 2010.
Makiya, Kanan (1998). Republic of Fear: The Politics of Modern Iraq. University of California Press.
Maoz, Zeev (2006). Defending the Holy Land: A Critical Analysis of Israel's Security & Foreign Policy. The University of Michigan Press.
Miller, Benjamin. (2007). States, Nations, and the Great Powers: The Sources of Regional War and Peace. Cambridge University Press.
Morris, Benny (1997). Israel's Border Wars, 1949–1956. Oxford: Oxford University Press.
Murakami, Masahiro. (1995). Managing Water for Peace in the Middle East: Alternative Strategies . United Nations University Press. .
Nordeen, Lon & Nicolle, David. (1996). Phoenix over the Nile: A history of Egyptian Air Power 1932–1994. Washington DC: Smithsonian Institution. .
Oren, Michael. (2005). The Revelations of 1967: New Research on the Six Day War and Its Lessons for the Contemporary Middle East , Israel Studies, volume 10, number 2. (Subscription required).
Oren, Michael. (2006). "The Six-Day War", in Bar-On, Mordechai (ed.), Never-Ending Conflict: Israeli Military History. Greenwood Publishing Group. .
Parker, Richard B. (1996). The Six-day War: A Retrospective. University Press of Florida. .
Pimlott, John. (1983). Middle East Conflicts: From 1945 to the Present. Orbis. .
Pressfield, Steven (2014). The Lion's Gate: On the Front Lines of the Six Day War. Sentinel HC, 2014.
Quandt, William B. (2005). Peace Process: American Diplomacy and the Arab–Israeli Conflict Since 1967. Brookings Institution Press and the University of California Press; 3 edition.
Quigley, John B. (2005). Case for Palestine: An International Law Perspective. Duke University Press.
Quigley, John B. (1990). Palestine and Israel: A Challenge to Justice. Duke University Press.
Rabil, Robert G. (2003). Embattled Neighbors: Syria, Israel, and Lebanon. Lynne Rienner Publishers.
Rabin, Yitzhak (1996). The Rabin Memoirs. University of California Press. .
Rezun, Miron (1990). "Iran and Afghanistan." In A. Kapur (Ed.). Diplomatic Ideas and Practices of Asian States (pp. 9–25). Brill Academic Publishers.
Rikhye, Indar Jit (1980). The Sinai Blunder. London: Routledge.
Robarge, David S. (2007). Getting It Right: CIA Analysis of the 1967 Arab-Israeli War , Center for the Study of Intelligence, Vol. 49 No. 1
Rubenberg, Cheryl A. (1989). Israel and the American National Interest. University of Illinois Press.
Sadeh, Eligar (1997). Militarization and State Power in the Arab–Israeli Conflict: Case Study of Israel, 1948–1982. Universal Publishers.
Sandler, Deborah; Aldy, Emad & Al-Khoshman Mahmoud A. (1993). Protecting the Gulf of Aqaba. – A regional environmental challenge. Environmental Law Institute. 0911937463.
Seale, Patrick (1988). Asad: The Struggle for Peace in the Middle East. University of California Press.
Shafqat, Saeed (2004). Islamic world and South Asia: Rise of Islamism and Terror, Causes and Consequences?. In Kaniz F. Yusuf (Ed.) Unipolar World & The Muslim States. Islamabad: Pakistan Forum, pp 217–246.
Shemesh, Moshe (2008). Arab Politics, Palestinian Nationalism and the Six Day War. Sussex Academic Press. .
Smith, Grant (2006). Deadly Dogma. Institute for Research: Middle Eastern Policy.
Stein, Janice Gross. (1991). The Arab-Israeli War of 1967: Inadvertent War Through Miscalculated Escalation, in Avoiding War: Problems of Crisis Management, Alexander L. George, ed. Boulder: Westview Press.
Stephens, Robert H. (1971). Nasser: A Political Biography. London: Allen Lane/The Penguin Press.
United Nations (1967, 5 June). 1347 Security Council MEETING : June 5, 1967. Provisional agenda (S/PV.1347/Rev.1). On a subpage of the website of The United Nations Information System on the Question of Palestine (UNISPAL).
van Creveld, Martin (2004). Defending Israel: A Controversial Plan Toward Peace. Thomas Dunne Books.
"Mediterranean Eskadra ". (September 7, 2000). Federation of American Scientists.
External links
The Photograph: A Search for June 1967. Retrieved 17 July 2010.
The three soldiers – background to that photograph
Six Day War Personal recollections & Timeline
"Six-Day War" in the Encyclopaedia of the Orient
All State Department documents related to the crisis
Letters from David Ben-Gurion on the Six-Day War Shapell Manuscript Foundation
UN Resolution 242. Retrieved 17 July 2010.
The status of Jerusalem, United Nations, New York, 1997 (Prepared for, and under the guidance of, the Committee on the Exercise of the Inalienable Rights of the Palestinian People)
Status of Jerusalem: Legal Aspects. Retrieved 22 July 2014.
Legal Aspects: The Six Day War – June 1967 and Its Aftermath – Professor Gerald Adler
General Uzi Narkiss – A historic radio interview with General Uzi Narkiss taken on 7 June – one day after the Six-Day War, describing the battle for Jerusalem
Liberation of the Temple Mount and Western Wall by Israel Defense Forces – Historic Live Broadcast on Voice of Israel Radio, 7 June 1967
"How the USSR Planned to Destroy Israel in 1967" by Isabella Ginor. Published by the journal Middle East Review of International Affairs (MERIA), Volume 7, Number 3 (September 2003)
Position of Arab forces May 1967 . Retrieved 22 July 2014.
Category:1967 in Egypt
Category:1967 in Israel
Category:1967 in Jordan
Category:1967 in Palestine
Category:1967 in Syria
Category:1967 in the Israeli Military Governorate
Category:Arab–Israeli conflict
Category:Articles containing video clips
Category:Cold War conflicts
Category:Conflicts in 1967
Category:Invasions by Israel
Category:Invasions of Syria
Category:Invasions of Egypt
Category:June 1967 in Asia
Category:Wars involving Egypt
Category:Wars involving Israel
Category:Wars involving Jordan
Category:Wars involving Syria
Category:June 1967 in Africa
Category:Yigal Allon
Tiger
The tiger (Panthera tigris) is a large cat and a member of the genus Panthera native to Asia. It has a powerful, muscular body with a large head and paws, a long tail and orange fur with black, mostly vertical stripes. It is traditionally classified into nine recent subspecies, though some recognise only two subspecies, mainland Asian tigers and the island tigers of the Sunda Islands.
Throughout the tiger's range, it inhabits mainly forests, from coniferous and temperate broadleaf and mixed forests in the Russian Far East and Northeast China to tropical and subtropical moist broadleaf forests on the Indian subcontinent and Southeast Asia. The tiger is an apex predator and preys mainly on ungulates, which it takes by ambush. It lives a mostly solitary life and occupies home ranges, defending these from individuals of the same sex. The range of a male tiger overlaps with that of multiple females with whom he mates. Females give birth to usually two or three cubs that stay with their mother for about two years. When becoming independent, they leave their mother's home range and establish their own.
Since the early 20th century, tiger populations have lost at least 93% of their historic range and are locally extinct in West and Central Asia, in large areas of China and on the islands of Java and Bali. Today, the tiger's range is severely fragmented. It is listed as Endangered on the IUCN Red List of Threatened Species, as its range is thought to have declined by 53% to 68% since the late 1990s. Major threats to tigers are habitat destruction and fragmentation due to deforestation, poaching for fur and the illegal trade of body parts for medicinal purposes. Tigers are also victims of human–wildlife conflict as they attack and prey on livestock in areas where natural prey is scarce. The tiger is legally protected in all range countries. National conservation measures consist of action plans, anti-poaching patrols and schemes for monitoring tiger populations. In several range countries, wildlife corridors have been established and tiger reintroduction is planned.
The tiger is among the most popular of the world's charismatic megafauna. It has been kept in captivity since ancient times and has been trained to perform in circuses and other entertainment shows. The tiger featured prominently in the ancient mythology and folklore of cultures throughout its historic range and has continued to appear in culture worldwide.
Etymology
The Old English tigras derives from Old French tigre, from Latin tigris, which was a borrowing from Ancient Greek tigris.
Since ancient times, the word has been suggested to originate from the Armenian or Persian word for 'arrow', which may also be the origin of the name for the river Tigris. However, today, the names are thought to be homonyms, and the connection between the tiger and the river is doubted.
Taxonomy
In 1758, Carl Linnaeus described the tiger in his work Systema Naturae and gave it the scientific name Felis tigris, as the genus Felis was being used for all cats at the time. His scientific description was based on descriptions by earlier naturalists such as Conrad Gessner and Ulisse Aldrovandi. In 1929, Reginald Innes Pocock placed the species in the genus Panthera using the scientific name Panthera tigris.
Subspecies
Nine recent tiger subspecies have been proposed between the early 19th and early 21st centuries, namely the Bengal, Malayan, Indochinese, South China, Siberian, Caspian, Javan, Bali and Sumatran tigers. The validity of several tiger subspecies was questioned in 1999, as most putative subspecies were distinguished on the basis of fur length and colouration, striping patterns and body size of specimens in natural history museum collections that are not necessarily representative of the entire population. It was proposed to recognise only two tiger subspecies as valid, namely P. t. tigris in mainland Asia and the smaller P. t. sondaica in the Greater Sunda Islands (Kitchener, A. (1999), "Tiger distribution, phenotypic variation and conservation issues").
This two-subspecies proposal was reaffirmed in 2015 through a comprehensive analysis of morphological, ecological and mitochondrial DNA (mtDNA) traits of all putative tiger subspecies.
In 2017, the Cat Classification Task Force of the IUCN Cat Specialist Group revised felid taxonomy in accordance with the 2015 two-subspecies proposal and recognised only P. t. tigris and P. t. sondaica. Results of a 2018 whole-genome sequencing study of 32 samples from the six living putative subspecies—the Bengal, Malayan, Indochinese, South China, Siberian and Sumatran tiger—found them to be distinct and separate clades. These results were corroborated in 2021 and 2023. A 2023 study found validity for all nine recent subspecies. The Cat Specialist Group states that "Given the varied interpretations of data, the [subspecific] taxonomy of this species is currently under review by the IUCN SSC Cat Specialist Group."
The following tables are based on the classification of the tiger as of 2005, and also reflect the classification recognised by the Cat Classification Task Force in 2017.
Panthera tigris tigris
Bengal tiger: This population inhabits the Indian subcontinent. The Bengal tiger has shorter fur than tigers further north, with a light tawny to orange-red colouration, and relatively long and narrow nostrils.
†Caspian tiger: This population occurred from Turkey to around the Caspian Sea. It had bright rusty-red fur with thin and closely spaced brownish stripes, and a broad occipital bone. Genetic analysis revealed that it was closely related to the Siberian tiger. It has been extinct since the 1970s.
Siberian tiger: This population lives in the Russian Far East, Northeast China and possibly North Korea. The Siberian tiger has long hair and dense fur. Its ground colour varies widely from ochre-yellow in winter to more reddish and vibrant after moulting. The skull is shorter and broader than the skulls of tigers further south.
South China tiger: This tiger historically lived in south-central China. The skulls of the five type specimens had shorter carnassials and molars than tigers from India, a smaller cranium, orbits set closer together and larger postorbital processes; skins were yellowish with rhombus-like stripes. It has a unique mtDNA haplotype due to interbreeding with ancient tiger lineages. It is extinct in the wild, as there has not been a confirmed sighting since the 1970s, and survives only in captivity.
Indochinese tiger: This tiger population occurs on the Indochinese Peninsula. Indochinese tiger specimens have smaller craniums than Bengal tigers and appear to have darker fur with somewhat thin stripes.
Malayan tiger: The Malayan tiger was proposed as a distinct subspecies on the basis of mtDNA and micro-satellite sequences that differ from the Indochinese tiger. It does not differ significantly in fur colour or skull size from Indochinese tigers. There is no clear geographical barrier between tiger populations in northern Malaysia and southern Thailand.
Panthera tigris sondaica
†Javan tiger: This tiger was described based on an unspecified number of skins with short and smooth hair. Tigers from Java were small compared to tigers of the Asian mainland, had relatively elongated skulls compared to the Sumatran tiger and longer, thinner and more numerous stripes. The Javan tiger is thought to have gone extinct by the 1980s.
†Bali tiger: This tiger occurred on Bali and had brighter fur and a smaller skull than the Javan tiger. A typical feature of Bali tiger skulls is the narrow occipital bone, which is similar to the Javan tiger's skull. This population went extinct in the 1940s (Seidensticker, J.; Christie, S. & Jackson, P. (1999), "Preface").
Sumatran tiger: The type specimen from Sumatra had dark fur. The Sumatran tiger has particularly long hair around the face, thick body stripes and a broader and smaller nasal bone than other island tigers.
Evolution
The tiger shares the genus Panthera with the lion, leopard, jaguar and snow leopard. Results of genetic analyses indicate that the tiger and snow leopard are sister species whose lineages split from each other between 2.70 and 3.70 million years ago. The tiger's whole genome sequencing shows repeated sequences that parallel those in other cat genomes.
The fossil species Panthera palaeosinensis of early Pleistocene northern China was described as a possible tiger ancestor when it was discovered in 1924, but modern cladistics places it as basal to modern Panthera. Panthera zdanskyi lived around the same time and place, and was suggested to be a sister species of the modern tiger when it was examined in 2014. However, as of 2023, at least two subsequent studies considered P. zdanskyi likely to be a synonym of P. palaeosinensis, noting that its proposed differences from that species fell within the range of individual variation. The earliest appearance of the modern tiger species in the fossil record consists of jaw fragments from Lantian in China that are dated to the early Pleistocene.
Middle- to late-Pleistocene tiger fossils have been found throughout China, Sumatra and Java. Prehistoric subspecies include Panthera tigris trinilensis and P. t. soloensis of Java and Sumatra and P. t. acutidens of China; late Pleistocene and early Holocene fossils of tigers have also been found in Borneo and Palawan, Philippines (Kitchener, A. & Yamaguchi, N. (2009), "What is a Tiger? Biogeography, Morphology, and Taxonomy"). Fossil specimens of tigers have also been reported from the Middle-Late Pleistocene of Japan (Hasegawa, Y., Takakuwa, Y., Nenoki, K. & Kimura, T., "Fossil tiger from limestone mine of Tsukumi City, Oita Prefecture, Kyushu Island, Japan", Bull. Gunma Museum Nat. Hist. 23 (2019), in Japanese with English abstract). Results of a phylogeographic study indicate that all living tigers have a common ancestor that lived between 108,000 and 72,000 years ago. Genetic studies suggest that the tiger population contracted around 115,000 years ago due to glaciation. Modern tiger populations originated from a refugium in Indochina and spread across Asia after the Last Glacial Maximum. As they colonised northeastern China, the ancestors of the South China tiger intermixed with a relict tiger population.
Hybrids
Tigers can interbreed with other Panthera cats and have done so in captivity. The liger is the offspring of a female tiger and a male lion and the tigon the offspring of a male tiger and a female lion. The lion sire passes on a growth-promoting gene, but the corresponding growth-inhibiting gene from the female tiger is absent, so that ligers grow far larger than either parent species. By contrast, the male tiger does not pass on a growth-promoting gene while the lioness passes on a growth inhibiting gene; hence, tigons are around the same size as their parents. Since they often develop life-threatening birth defects and can easily become obese, breeding these hybrids is regarded as unethical.
Description
The tiger has a typical felid morphology, with a muscular body, shortened legs, strong forelimbs with wide front paws, a large head and a tail that is about half the length of the rest of its body. It has five digits, including a dewclaw, on the front feet and four on the back, all of which have retractile claws that are compact and curved, and can reach long. The ears are rounded and the eyes have a round pupil. The snout ends in a triangular, pink tip with small black dots, the number of which increase with age. The tiger's skull is robust, with a constricted front region, proportionally small, elliptical orbits, long nasal bones and a lengthened cranium with a large sagittal crest. It resembles a lion's skull, but differs from it in the concave or flattened underside of the lower jaw and in its longer nasals. The tiger has 30 fairly robust teeth and its somewhat curved canines are the longest in the cat family at .
The tiger has a head-body length of with a tail and stands at the shoulder. The Siberian and Bengal tigers are the largest. Male Bengal tigers weigh , and females weigh ; island tigers are the smallest, likely due to insular dwarfism. Male Sumatran tigers weigh , and females weigh . The tiger is popularly thought to be the largest living felid species, but since tigers of the different subspecies and populations vary greatly in size and weight, the tiger's average size may be less than the lion's, while the largest tigers are bigger than their lion counterparts.
Coat
The tiger's coat usually has short hairs, reaching up to , though the hairs of the northern-living Siberian tiger can reach . Belly hairs tend to be longer than back hairs. Its fur is usually thin, though the Siberian tiger develops a particularly thick winter coat. The tiger has lines of fur around the face and long whiskers, especially in males. It has an orange colouration that varies from yellowish to reddish. White fur covers the underside, from head to tail, along with the inner surface of the legs and parts of the face. On the back of the ears, it has a prominent white spot, which is surrounded by black. The tiger is marked with distinctive black or dark brown stripes, which are uniquely patterned in each individual. The stripes are mostly vertical, but those on the limbs and forehead are horizontal. They are more concentrated towards the backside and those on the trunk may reach under the belly. The tips of the stripes are generally sharp and some may split up, or split and fuse again. Tail stripes are thick bands and a black tip marks the end.
The tiger is one of only a few striped cat species. Stripes are advantageous for camouflage in vegetation with vertical patterns of light and shade, such as trees, reeds and tall grass.Miquelle, D. "Tiger" in This is supported by a Fourier analysis study showing that the striping patterns line up with their environment.
The orange colour may also aid in concealment, as the tiger's prey is colour blind and possibly perceives the tiger as green and blended in with the vegetation.
Colour variations
The three colour variants of Bengal tigers – nearly stripeless snow-white, white and golden – are now virtually non-existent in the wild due to the reduction of wild tiger populations but continue in captive populations. The white tiger has a white background colour with sepia-brown stripes. The golden tiger is pale golden with reddish-brown stripes. The snow-white tiger is a morph with extremely faint stripes and a pale sepia-brown ringed tail. White and golden morphs are the result of an autosomal recessive trait with a white locus and a wideband locus, respectively. The snow-white variation is caused by polygenes with both white and wideband loci. The breeding of white tigers is controversial, as they have no use for conservation. Only 0.001% of wild tigers have the genes for this colour morph and the overrepresentation of white tigers in captivity is the result of inbreeding. Hence, their continued breeding will risk both inbreeding depression and loss of genetic variability in captive tigers.
Pseudo-melanistic tigers with thick, merged stripes have been recorded in Simlipal National Park and three Indian zoos; a population genetic analysis of Indian tiger samples revealed that this phenotype is caused by a mutation of a transmembrane aminopeptidase gene. Around 37% of the Simlipal tiger population has this feature, which has been linked to genetic isolation.
Distribution and habitat
The tiger historically ranged from eastern Turkey, northern Iran and Afghanistan to Central Asia and from northern Pakistan through the Indian subcontinent and Indochina to southeastern Siberia, Sumatra, Java and Bali. As of 2022, it inhabits less than 7% of its historical distribution and has a scattered range in the Indian subcontinent, the Indochinese Peninsula, Sumatra, northeastern China and the Russian Far East. As of 2020, India had the largest extent of global tiger habitat with , followed by Russia with .
The tiger mainly lives in forest habitats and is highly adaptable.Sunquist, M. (2010). "What is a Tiger? Ecology and Behaviour" in Records in Central Asia indicate that it primarily inhabited Tugay riverine forests and hilly and lowland forests in the Caucasus. In the Amur-Ussuri region of Russia and China, it inhabits Korean pine and temperate broadleaf and mixed forests; riparian forests serve as dispersal corridors, providing food and water for both tigers and ungulates.Miquelle, D. G.; Smirnov, E. N.; Merrill, T. W.; Myslenkov, A. E.; Quigley, H.; Hornocker, M. G. & Schleyer, B. (1999). "Hierarchical spatial analysis of Amur tiger relationships to habitat and prey" in On the Indian subcontinent, it inhabits mainly tropical and subtropical moist broadleaf forests, temperate broadleaf and mixed forests, tropical moist evergreen forests, tropical dry forests, alluvial plains and the mangrove forests of the Sundarbans.Wikramanayake, E. D.; Dinerstein, E.; Robinson, J. G.; Karanth, K. U.; Rabinowitz, A.; Olson, D.; Mathew, T.; Hedao, P.; Connor, M.; Hemley, G. & Bolze, D. (1999). "Where can tigers live in the future? A framework for identifying high-priority areas for the conservation of tigers in the wild" in In the Eastern Himalayas, it was documented in temperate forest up to an elevation of in Bhutan, of in the Mishmi Hills and of in Mêdog County, southeastern Tibet. In Thailand, it lives in deciduous and evergreen forests. In Sumatra, it inhabits lowland peat swamp forests and rugged montane forests.
Population density
Camera trapping during 2010–2015 in the deciduous and subtropical pine forest of Jim Corbett National Park, northern India revealed a stable tiger population density of 12–17 individuals per in an area of .
In northern Myanmar, the population density in a sampled area of roughly in a mosaic of tropical broadleaf forest and grassland was estimated to be 0.21–0.44 tigers per as of 2009.
Population density in mixed deciduous and semi-evergreen forests of Thailand's Huai Kha Khaeng Wildlife Sanctuary was estimated at 2.01 tigers per ; during the 1970s and 1980s, logging and poaching had occurred in the adjacent Mae Wong and Khlong Lan National Parks, where population density was much lower, estimated at only 0.359 tigers per as of 2016.
Population density in dipterocarp and montane forests in northern Malaysia was estimated at 1.47–2.43 adult tigers per in Royal Belum State Park, but 0.3–0.92 adult tigers per in the unprotected selectively logged Temengor Forest Reserve.
Behaviour and ecology
Camera trap data show that tigers in Chitwan National Park avoided locations frequented by people and were more active at night than during day.
In Sundarbans National Park, six radio-collared tigers were most active from dawn to early morning, with activity peaking around 7:00 in the morning.
A three-year-long camera trap survey in Shuklaphanta National Park revealed that tigers were most active from dusk until midnight.
In northeastern China, tigers were crepuscular and active at night with activity peaking at dawn and dusk; they were largely active at the same time as their prey.
The tiger is a powerful swimmer and easily traverses rivers as wide as ; it readily immerses itself in water, particularly on hot days. In general, it is less capable of climbing trees than many other cats due to its size, but cubs under 16 months old may routinely do so. An adult was recorded climbing up a smooth pipal tree.
Social spacing
Adult tigers lead largely solitary lives within home ranges or territories, the size of which mainly depends on prey abundance, geographic area and the sex of the individual. Males and females defend their home ranges against others of the same sex, and the home range of a male encompasses those of multiple females. Two females in the Sundarbans had home ranges of .
In Panna Tiger Reserve, the home ranges of five reintroduced females varied from in winter to in summer and to during the monsoon; three males had large home ranges in winter, in summer and during monsoon seasons.
In Sikhote-Alin Biosphere Reserve, 14 females had home ranges and five resident males had home ranges that overlapped with those of up to five females. When tigresses in the same reserve had cubs of up to four months of age, they reduced their home ranges to stay near their young and steadily enlarged them until their offspring were 13–18 months old.
The tiger is a long-ranging species and individuals disperse over distances of up to to reach tiger populations in other areas. Young tigresses establish their first home ranges close to their mothers' while males migrate further than their female counterparts. Four radio-collared females in Chitwan dispersed between and 10 males between . A subadult male lives as a transient in another male's home range until he is older and strong enough to challenge the resident male. Tigers mark their home ranges by spraying urine on vegetation and rocks, clawing or scent rubbing trees and marking trails with faeces, anal gland secretions and ground scrapings. Scent markings also allow an individual to pick up information on another's identity. Unclaimed home ranges, particularly those that belonged to a deceased individual, can be taken over in days or weeks.
Male tigers are generally less tolerant of other males within their home ranges than females are of other females. Disputes are usually solved by intimidation rather than fighting. Once dominance has been established, a male may tolerate a subordinate within his range, as long as they do not come near him. The most serious disputes tend to occur between two males competing for a female in oestrus. Though tigers mostly live alone, relationships between individuals can be complex. Tigers are particularly social at kills and a male tiger will sometimes share a carcass with the females and cubs within his home range and, unlike male lions, will allow them to feed on the kill before he is finished with it. However, a female is more tense when encountering another female at a kill.
Communication
During friendly encounters and bonding, tigers rub against each other's bodies. Facial expressions include the "defence threat", which involves a wrinkled face, bared teeth, pulled-back ears and widened pupils. Both males and females show a flehmen response, a characteristic curled-lip grimace, when smelling urine markings. Males also use the flehmen to detect the markings made by tigresses in oestrus. Tigers will move their ears around to display the white spots, particularly during aggressive encounters and between mothers and cubs. They also use their tails to signal their mood. To show cordiality, the tail sticks up and sways slowly, while an apprehensive tiger lowers its tail or wags it side-to-side. When calm, the tail hangs low.
Tigers are normally silent but can produce numerous vocalisations. They roar to signal their presence to other individuals over long distances. This vocalisation is forced through an open mouth as it closes and can be heard away. They roar multiple times in a row and others respond in kind. Tigers also roar during mating and a mother will roar to call her cubs to her. When tense, tigers moan, a sound similar to a roar but softer and made when the mouth is at least partially closed. Moaning can be heard away. Aggressive encounters involve growling, snarling and hissing. An explosive "coughing roar" or "coughing snarl" is emitted through an open mouth and exposed teeth. In friendlier situations, tigers prusten, a soft, low-frequency snorting sound similar to purring in smaller cats. Tiger mothers communicate with their cubs by grunting, while cubs call back with miaows. When startled, they "woof". They produce a deer-like "pok" sound for unknown reasons, but most often at kills.
Hunting and diet
The tiger is a carnivore and an apex predator. Abundance and body weight of prey species are assumed to be the main criteria for the tiger's prey selection, both inside and outside protected areas. It feeds mainly on large and medium-sized ungulates such as sambar deer, Manchurian wapiti, barasingha, gaur and wild boar. It also preys opportunistically on smaller species like monkeys, peafowl and other ground-based birds, porcupines and fish. Occasional attacks on Asian elephants and Indian rhinoceroses have also been reported.
More often, tigers take the more vulnerable calves.
They sometimes prey on livestock and dogs in close proximity to settlements. Tigers occasionally consume vegetation, fruit and minerals for dietary fibre and supplements.
Tigers learn to hunt from their mothers, though the ability to hunt may be partially inborn. Depending on the size of the prey, they typically kill weekly, though mothers must kill more often. Families hunt together when cubs are old enough. They search for prey using vision and hearing. A tiger will also wait at a watering hole for prey to come by, particularly during hot summer days. It is an ambush predator and when approaching potential prey, it crouches with the head lowered and hides in foliage. It switches between creeping forward and staying still. A tiger may even doze off and can stay in the same spot for as long as a day, waiting for prey, and it launches an attack when the prey is close enough, usually within . If the prey spots it before then, the cat does not pursue further. A tiger can sprint and leap ; it is not a long-distance runner and gives up a chase if prey outpaces it over a certain distance.
The tiger attacks from behind or at the sides and tries to knock the target off balance. It latches onto prey with its forelimbs, twisting and turning during the struggle and tries to pull it to the ground. The tiger generally applies a bite to the throat until its victim dies of strangulation. It has an average bite force at the canine tips of 1234.3 newtons. Holding onto the throat puts the cat out of reach of horns, antlers, tusks and hooves. Tigers are adaptable killers and may use other methods, including ripping the throat or breaking the neck. Large prey may be disabled by a bite to the back of the hock, severing the tendon. Swipes from the large paws are capable of stunning or breaking the skull of a water buffalo. They kill small prey with a bite to the back of the neck or head. Estimates of tigers' hunting success rate range from a low of 5% to a high of 50%. They are sometimes killed or injured by large or dangerous prey like gaur, buffalo and boar.
Tigers typically move kills to a private, usually vegetated spot no further than , though they have been recorded dragging them . They are strong enough to drag the carcass of a fully grown buffalo for some distance. They rest for a while before eating and can consume as much as of meat in one session, but feed on a carcass for several days, leaving little for scavengers.
Competitors
In much of their range, tigers share habitat with leopards and dholes. They typically dominate both of them, though with dholes it depends on their pack size. Interactions between the three predators involve chasing, stealing kills and direct killing. Large dhole packs may kill tigers. Tigers, leopards and dholes coexist by hunting different sized prey. In Nagarhole National Park, the average weight for tiger kills was found to be , compared to for leopards and for dholes. In Kui Buri National Park, following a reduction in prey numbers, tigers continued to kill favoured prey while leopards and dholes increased their consumption of small prey.
Both leopards and dholes can live successfully in tiger habitat when there is abundant food and vegetation cover. Otherwise, they appear to be less common where tigers are numerous. The recovery of the tiger population in Rajaji National Park during the 2000s led to a reduction in leopard population densities. Similarly, at two sites in central India the size of dhole packs was negatively correlated with tiger densities. Leopard and dhole distribution in Kui Buri correlated with both prey access and tiger scarcity. In Jigme Dorji National Park, tigers were found to inhabit the deeper parts of forests while the smaller predators were pushed closer to the fringes.
Reproduction and life cycle
The tiger generally mates all year round, particularly between November and April. A tigress is in oestrus for three to six days at a time, separated by intervals of three to nine weeks. A resident male mates with all the females within his home range, who signal their receptiveness by roaring and marking. Younger, transient males are also attracted, leading to a fight in which the more dominant, resident male drives the usurper off. During courtship, the male is cautious with the female until she shows readiness to mate by positioning herself in lordosis with her tail to the side. Copulation typically lasts no more than 20 seconds, with the male biting the female by the scruff of her neck. After it is finished, the male quickly pulls away as the female may turn and slap him. Tiger pairs may stay together for up to four days and mate multiple times. Gestation lasts around three months or slightly longer.
A tigress gives birth in a secluded location, be it in dense vegetation, in a cave or under a rocky shelter. Litters consist of as many as seven cubs, but two or three are more typical. Newborn cubs weigh and are blind and altricial. The mother licks and cleans her cubs, suckles them and viciously defends them from any potential threat. Cubs open their eyes at the age of three to 14 days and their vision becomes clear after a few more weeks. They can leave the denning site after two months and around the same time they start eating meat. The mother only leaves them alone to hunt and even then she does not travel far. When she suspects an area is no longer safe, she moves her cubs to a new spot, transporting them one by one by grabbing them by the scruff of the neck with her mouth.
A tigress in Sikhote-Alin Biosphere Reserve maximised the time spent with her cubs by reducing her home range, killing larger prey and returning to her den more rapidly than without cubs; when the cubs started to eat meat, she took them to kill sites, thereby optimising their protection and access to food.
In the same reserve, one of 21 cubs died in over eight years of monitoring and mortality did not differ between male and female juveniles.
Tiger monitoring over six years in Ranthambore Tiger Reserve indicated an average annual survival rate of around 85 percent for 74 male and female cubs; the survival rate increased to 97 percent for both male and female juveniles of one to two years of age.
Causes of cub mortality include predators, floods, fires, death of the mother and fatal injuries.
After around two months, the cubs are able to follow their mother. They still hide in vegetation when she goes hunting. Young bond through play fighting and practice stalking. A hierarchy develops in the litter, with the biggest cub, often a male, being the most dominant and the first to eat its fill at a kill. Around the age of six months, cubs are fully weaned and have more freedom to explore their environment. Between eight and ten months, they accompany their mother on hunts. A cub can make a kill as early as 11 months and reach independence as a juvenile of 18 to 24 months of age; males become independent earlier than females. Radio-collared tigers in Chitwan started leaving their natal areas at the age of 19 months. Young females are sexually mature at three to four years, whereas males are at four to five years. Generation length of the tiger is about 7–10 years.
Wild Bengal tigers live 12–15 years. Data from the International Tiger Studbook 1938–2018 indicate that captive tigers lived up to 19 years.
The father does not play a role in raising the young, but he encounters and interacts with them. The resident male appears to visit the female–cub families within his home range. They socialise and even share kills. One male was recorded looking after cubs whose mother had died. By defending his home range, the male protects the females and cubs from other males. When a new male takes over, dependent cubs are at risk of infanticide as the male attempts to sire his own young with the females. A seven-year long study in Chitwan National Park revealed that 12 of 56 detected cubs and juveniles were killed by new males taking over home ranges.
Health and diseases
Tigers are recorded as hosts for various parasites including tapeworms like Diphyllobothrium erinacei and Taenia pisiformis in India and nematodes like Toxocara species in India and Physaloptera preputialis, Dirofilaria ursi and Uiteinarta species in Siberia. Canine distemper is known to occur in Siberian tigers. A morbillivirus infection was the likely cause of death of a tigress in the Russian Far East that also tested positive for feline panleukopenia and feline coronavirus.
Blood samples from 11 adult tigers in Nepal showed antibodies for canine parvovirus-2, feline herpesvirus, feline coronavirus, leptospirosis and Toxoplasma gondii.
Threats
The tiger has been listed as Endangered on the IUCN Red List since 1986 and the global tiger population is thought to have continuously declined from an estimated population of 5,000–8,262 tigers in the late 1990s to 3,726–5,578 individuals estimated as of 2022. During 2001–2020, landscapes where tigers live declined from to . Habitat destruction, habitat fragmentation and poaching for fur and body parts are the major threats that contributed to the decrease of tiger populations in all range countries.
Protected areas in central India are highly fragmented due to linear infrastructure like roads, railway lines, transmission lines, irrigation channels and mining activities in their vicinity.
In the Tanintharyi Region of southern Myanmar, deforestation coupled with mining activities and high hunting pressure threatens the tiger population.
In Thailand, nine of 15 protected areas hosting tigers are isolated and fragmented, offering a low probability for dispersal between them; four of these have not harboured tigers since about 2013.
In Peninsular Malaysia, of tiger habitat was cleared during 1988–2012, most of it for industrial plantations.
Large-scale land acquisitions of about for commercial agriculture and timber extraction in Cambodia contributed to the fragmentation of potential tiger habitat, especially in the Eastern Plains.
Inbreeding depression coupled with habitat destruction, insufficient prey resources and poaching is a threat to the small and isolated tiger population in the Changbai Mountains along the China–Russia border.
In China, tigers became the target of large-scale 'anti-pest' campaigns in the early 1950s, during which suitable habitats were fragmented following deforestation and the resettlement of people to rural areas, where they hunted tigers and prey species. Though tiger hunting was prohibited in 1977, the population continued to decline, and the tiger has been considered extinct in South China since 2001.
Tiger populations in India have been targeted by poachers since the 1990s and were extirpated in two tiger reserves in 2005 and 2009.
Between March 2017 and January 2020, 630 instances of hunting activity involving snares, drift nets, hunting platforms and hunting dogs were discovered in a reserve forest of about in southern Myanmar. Nam Et-Phou Louey National Park was considered the last important site for the tiger in Laos, but it has not been recorded there at least since 2013; this population likely fell victim to indiscriminate snaring. Anti-poaching units in Sumatra's Kerinci Seblat landscape removed 362 tiger snare traps and seized 91 tiger skins during 2005–2016; annual poaching rates increased with rising skin prices.
Poaching is also the main threat to the tiger population in far eastern Russia, where logging roads facilitate access for poachers and people harvesting forest products that are important for prey species to survive in winter.
Body parts of 207 tigers were detected during 21 surveys in 1991–2014 in two wildlife markets in Myanmar catering to customers in Thailand and China.
During the years 2000–2022, at least 3,377 tigers were confiscated in 2,205 seizures in 28 countries; seizures encompassed 665 live and 654 dead individuals, 1,313 whole tiger skins, 16,214 body parts like bones, teeth, paws, claws, whiskers and of meat; 759 seizures in India encompassed body parts of 893 tigers; and 403 seizures in Thailand involved mostly captive-bred tigers. Seizures in Nepal between January 2011 and December 2015 obtained 585 pieces of tiger body parts and two whole carcasses in 19 districts. Seizure data from India during 2001–2021 indicate that tiger skins were the most often traded body parts, followed by claws, bones and teeth; trafficking routes mainly passed through the states of Maharashtra, Karnataka, Tamil Nadu and Assam.
A total of 292 illegal tiger parts were confiscated at US ports of entry from personal baggage, air cargo and mail between 2003 and 2012.
Demand for tiger parts for use in traditional Chinese medicine has also been cited as a major threat to tiger populations.
Interviews with local people in the Bangladeshi Sundarbans revealed that they kill tigers for local consumption and trade of skins, bones and meat, in retaliation for attacks by tigers and for excitement.
Tiger body parts like skins, bones, teeth and hair are consumed locally by wealthy Bangladeshis and are illegally trafficked from Bangladesh to 15 countries including India, China, Malaysia, Korea, Vietnam, Cambodia, Japan and the United Kingdom via land borders, airports and seaports.
Tiger bone glue is the prevailing tiger product purchased for medicinal purposes in Hanoi and Ho Chi Minh City. "Tiger farm" facilities in China and Southeast Asia breed tigers for their parts, but these appear to make the threat to wild populations worse by increasing the demand for tiger products.
The killing of tigers by local people in retaliation for attacks on livestock is a threat in several tiger range countries; this consequence of human–wildlife conflict also contributes to the decline of the population.
Conservation
Global wild tiger population (estimates by country and year)
India, 2022: 3,167–3,682
Russia, 2022: 573–600
Indonesia, 2022: 393
Nepal, 2022: 316–355
Thailand, 2022: 148–189
Malaysia, 2022: <150
Bhutan, 2022: 131
Bangladesh, 2022: 118–122
China, 2022: >60
Myanmar, 2022: 28
Total: 5,638–5,899
Internationally, the tiger is protected under CITES Appendix I, banning trade of live tigers and their body parts.
In Russia, hunting the tiger has been banned since 1952.
In Bhutan, it has been protected since 1969 and enlisted as totally protected since 1995. Since 1972, it has been afforded the highest protection level under India's Wild Life (Protection) Act, 1972.
In Nepal and Bangladesh, it has been protected since 1973.
Since 1976, it has been totally protected under Malaysia's Protection of Wild Life Act, and the country's Wildlife Conservation Act enacted in 2010 increased punishments for wildlife-related crimes.
In Indonesia, it has been protected since 1990.
In China, the trade in tiger body parts was banned in 1993.
The Thai Wildlife Preservation and Protection Act was enacted in 2019 to combat poaching and trading of body parts.
In 1973, Project Tiger was founded in India to gain public support for tiger conservation; the programme is now administered by the National Tiger Conservation Authority. Since then, 53 tiger reserves covering an area of have been established in the country up to 2022. These efforts contributed to the recovery of India's tiger population between 2006 and 2018 so that it occurs in an area of about .
Myanmar's national tiger conservation strategy developed in 2003 comprises management tasks such as restoration of degraded habitats, increasing the extent of protected areas and wildlife corridors, protecting tiger prey species, thwarting tiger killing and illegal trade of its body parts and promoting public awareness through wildlife education programmes.
Bhutan's first Tiger Action Plan implemented during 2006–2015 revolved around habitat conservation, human–wildlife conflict management, education and awareness; the second Action Plan aimed to increase the country's tiger population by 20% by 2023 compared to 2015.
In 2009, the Bangladesh Tiger Action Plan was initiated to stabilise the country's tiger population, maintain habitat and a sufficient prey base, improve law enforcement and foster cooperation between governmental agencies responsible for tiger conservation.
The Thailand Tiger Action Plan ratified in 2010 envisioned increasing the country's tiger populations by 50% in the Western Forest Complex and Dong Phayayen–Khao Yai Forest Complex and reestablishing populations in three potential landscapes by 2022.
The Indonesian National Tiger Recovery Program ratified in 2010 aimed at increasing the Sumatran tiger population by 2022. The third strategic and action plan for the conservation of the Sumatran tiger for the years 2020–2030 revolves around strengthening management of small tiger population units of less than 20 mature individuals and connectivity between 13 forest patches in North Sumatra and West Sumatra provinces.
Increases in anti-poaching patrol efforts in four Russian protected areas during 2011–2014 contributed to reducing poaching, stabilising the tiger population and improving protection of ungulate populations. Poaching and trafficking were declared to be moderate and serious crimes in 2019.
Anti-poaching operations were also established in Nepal in 2010, with increased cooperation and intelligence sharing between agencies. These policies have led to many years of "zero poaching" and the country's tiger population has doubled in a decade.
Anti-poaching patrols in the large core area of Taman Negara led to a decrease in poaching frequency from 34 detected incidents in 2015–2016 to 20 incidents during 2018–2019; the arrest of seven poaching teams and the removal of snares facilitated the survival of three resident female tigers and at least 11 cubs.
Army and police officers are deployed for patrolling together with staff of protected areas in Malaysia.
Wildlife corridors are important conservation measures, as they allow tiger populations in different protected areas to connect; tigers use at least nine corridors that were established in the Terai Arc Landscape and Sivalik Hills in both Nepal and India.
Corridors in forested areas with low human encroachment are highly suitable.
In West Sumatra, 12 wildlife corridors were identified as high priority for mitigating human–wildlife conflicts.
In 2019, China and Russia signed a memorandum of understanding for transboundary cooperation between two protected areas, Northeast China Tiger and Leopard National Park and Land of the Leopard National Park, that includes the creation of wildlife corridors and bilateral monitoring and patrolling along the Sino-Russian border.
Rescued and rehabilitated problem tigers and orphaned tiger cubs have been released into the wild and monitored in India, Sumatra and Russia.
In Kazakhstan, habitat restoration and reintroduction of prey species in Ile-Balkash Nature Reserve have progressed and tiger reintroduction is planned for 2025.
Reintroduction of tigers is considered possible in eastern Cambodia, once management of protected areas is improved and forest loss stabilized. South China tigers are kept and bred in Chinese zoos, with plans to reintroduce their offspring into remote protected areas. Coordinated breeding programs among zoos have led to enough genetic diversity in tigers to act as "insurance against extinction in the wild".
Relationship with humans
Hunting
Tigers have been hunted by humans for millennia, as indicated by a painting on the Bhimbetka rock shelters in India that is dated to 5,000–6,000 years ago. They were hunted throughout their range in Asia, chased on horseback, elephant-back or even with sled dogs and killed with spears and later firearms. Such hunts were conducted both by Asian governments and empires like the Mughal Empire, as well as European colonists. Tigers were often hunted as trophies and because of their perceived danger. An estimated 80,000 tigers were killed between 1875 and 1925.
Attacks
In most areas, tigers avoid humans, but attacks are a risk wherever people coexist with them. Dangerous encounters are more likely to occur in edge habitats between wild and agricultural areas.Nyhus, P. J. & Tilson, R. (2010). "Panthera tigris vs Homo sapiens: Conflict, coexistence, or extinction?" in Most attacks on humans are defensive, including protection of young; however, tigers do sometimes see people as prey. Man-eating tigers tend to be old and disabled. Tigers driven from their home ranges are also at risk of turning to man-eating.
At the beginning of the 20th century, the Champawat Tiger was responsible for over 430 human deaths in Nepal and India before she was shot by Jim Corbett. This tigress suffered from broken teeth and was unable to kill normal prey. Modern authors speculate that subsisting on meagre human flesh forced the cat to kill more and more. Tiger attacks were particularly high in Singapore during the mid-19th century, when plantations expanded into the tiger's habitat. In the 1840s, the number of deaths in the area ranged from 200 to 300 annually. Tiger attacks in the Sundarbans caused 1,396 human deaths in the period 1935–2006 according to official records of the Bangladesh Forest Department. Victims of these attacks are local villagers who enter the tiger's domain to collect resources like wood and honey. Fishermen have been particularly common targets. Methods to counter tiger attacks have included face masks worn backwards, protective clothes, sticks and carefully stationed electric dummies.
Captivity
Tigers have been kept in captivity since ancient times. In ancient Rome, tigers were displayed in amphitheatres; they were slaughtered in venatio hunts and used to kill criminals.Manfredi, P. "The Tiger in the Ancient World" in The Mongol ruler Kublai Khan is reported to have kept tigers in the 13th century. Starting in the Middle Ages, tigers were being kept in European menageries. Tigers and other exotic animals were mainly used for the entertainment of elites but from the 19th century onward, they were exhibited more to the public. Tigers were particularly big attractions and their captive population soared. In 2020, there were over 8,000 captive tigers in Asia, over 5,000 in the US and no less than 850 in Europe. There are more tigers in captivity than in the wild. Captive tigers may display stereotypical behaviours such as pacing or inactivity. Modern zoos are able to reduce such behaviours with exhibits designed so the animals can move between separate but connected enclosures. Enrichment items are also important for the cat's welfare and the stimulation of its natural behaviours.
Tigers have played prominent roles in circuses and other live performances. Ringling Bros featured many tiger tamers in the 20th century, including Mabel Stark, who became a big draw and had a long career. She was well known for being able to control the tigers despite being a small woman, using "manly" tools like whips and guns. Another trainer was Clyde Beatty, who used chairs, whips and guns to provoke tigers and other beasts into acting fierce, which allowed him to appear courageous. He would perform with as many as 40 tigers and lions in one act. From the 1960s onward, trainers like Gunther Gebel-Williams would use gentler methods to control their animals. Sara Houcke was dubbed "the Tiger Whisperer" as she trained the cats to obey her by whispering to them. Siegfried & Roy became famous for performing with white tigers in Las Vegas. The act ended in 2003 when a tiger attacked Roy during a performance. In 2009, tigers were the most traded circus animals. The use of tigers and other animals in shows eventually declined in many countries due to pressure from animal rights groups and a greater desire from the public to see them in more natural settings. Several countries restrict or ban such acts.
Tigers have become popular in the exotic pet trade, particularly in the United States where only 6% of the captive tiger population in 2020 were being housed in zoos and other facilities approved by the Association of Zoos and Aquariums. Private collectors are thought to be ill-equipped to provide proper care for tigers, which compromises their welfare. They can also threaten public safety by allowing people to interact with them. The keeping of tigers and other big cats by private people was banned in the US in 2022. Most countries in the European Union have banned breeding and keeping tigers outside of licensed zoos and rescue centres, but some still allow private holdings.
Cultural significance
The tiger is among the most famous of the charismatic megafauna. Kailash Sankhala has called it "a rare combination of courage, ferocity and brilliant colour", while Candy d'Sa calls it "fierce and commanding on the outside, but noble and discerning on the inside". In a 2004 online poll involving more than 50,000 people from 73 countries, the tiger was voted the world's favourite animal with 21% of the vote, narrowly beating the dog. Similarly, a 2018 study found the tiger to be the most popular wild animal based on surveys, as well as appearances on websites of major zoos and posters of some animated movies.
While the lion represented royalty and power in Western culture, the tiger played such a role in various Asian cultures. In ancient China, the tiger was seen as the "king of the forest" and symbolised the power of the emperor. In Chinese astrology, the tiger is the third of the 12 signs in the Chinese zodiac and controls the period between 3:00 and 5:00 in the morning. The Year of the Tiger is thought to bring "dramatic and extreme events". The White Tiger is one of the Four Symbols of the Chinese constellations, representing the west along with the yin and the season of autumn. It is the counterpart to the Azure Dragon, which conversely symbolises the east, yang and springtime. The tiger is one of the animals displayed on the Pashupati seal of the Indus Valley Civilisation. The big cat was depicted on seals and coins during the Chola dynasty of southern India, as it was the official emblem.Thapar, R. "In Times Past" in
Tigers have had religious and folkloric significance. In Buddhism, the tiger, monkey and deer are the Three Senseless Creatures, with the tiger symbolising anger. In Hinduism, the tiger is the vehicle of Durga, the goddess of feminine power and peace, whom the gods created to fight demons. Similarly, in the Greco-Roman world, the tiger was depicted being ridden by the god Dionysus. In Korean mythology, tigers are messengers of the Mountain Gods. In both Chinese and Korean culture, tigers are seen as protectors against evil spirits and their image was used to decorate homes, tombs and articles of clothing. In the folklore of Malaysia and Indonesia, "tiger shamans" heal the sick by invoking the big cat. Belief in people turning into tigers, and the reverse, has also been widespread; in particular, weretigers are people who could change into tigers and back again. The Mnong people of Indochina believed that tigers could shapeshift into humans. Among some indigenous peoples of Siberia, it was believed that men would seduce women by transforming into tigers.
William Blake's 1794 poem "The Tyger" portrays the animal as the duality of beauty and ferocity. It is the sister poem to "The Lamb" in Blake's Songs of Innocence and of Experience; in it, Blake ponders how God could create such different creatures. The tiger is featured in the mediaeval Chinese novel Water Margin, where the cat battles and is slain by the bandit Wu Song, while the tiger Shere Khan in Rudyard Kipling's The Jungle Book (1894) is the mortal enemy of the human protagonist Mowgli. Friendly tame tigers have also existed in culture, notably Tigger, the Winnie-the-Pooh character, and Tony the Tiger, the Kellogg's cereal mascot.
See also
List of largest cats
International Tiger Day
Tiger Temple
References
Bibliography
External links
Category:Apex predators
Category:Big cats
Category:Conservation-reliant species
Category:EDGE species
Category:Extant Pleistocene first appearances
Category:Fauna of South Asia
Category:Fauna of Southeast Asia
Category:Felids of Asia
Category:Mammals described in 1758
Category:Mammals of East Asia
Category:National symbols of India
Category:National symbols of Malaysia
Category:National symbols of Singapore
Category:Panthera
Category:Species that are or were threatened by agricultural development
Category:Species that are or were threatened by deliberate extirpation efforts
Category:Species that are or were threatened by deforestation
Category:Species that are or were threatened by urbanization
Category:Animal taxa named by Carl Linnaeus
Vikings
https://en.wikipedia.org/wiki/Vikings
Vikings were a seafaring people originally from Scandinavia (present-day Denmark, Norway, and Sweden), who from the late 8th to the late 11th centuries raided, pirated, traded, and settled throughout parts of Europe.Roesdahl, pp. 9–22. They voyaged as far as the Mediterranean, North Africa, the Middle East, Greenland, and Vinland (present-day Newfoundland in Canada, North America). In their countries of origin, and in some of the countries they raided and settled, this period of activity is popularly known as the Viking Age, and the term "Viking" also commonly includes the inhabitants of the Scandinavian homelands as a whole during the late 8th to the mid-11th centuries. The Vikings had a profound impact on the early medieval history of northern and Eastern Europe, including the political and social development of England (and the English language) and parts of France, and established the embryo of Russia in Kievan Rus'.
Expert sailors and navigators of their characteristic longships, Vikings established Norse settlements and governments in the British Isles, the Faroe Islands, Iceland, Greenland, Normandy, and the Baltic coast, as well as along the Dnieper and Volga trade routes across Eastern Europe where they were also known as Varangians. The Normans, Norse-Gaels, Rus, Faroese, and Icelanders emerged from these Norse colonies. At one point, a group of Rus Vikings went so far south that, after briefly being bodyguards for the Byzantine emperor, they attacked the Byzantine city of Constantinople. Vikings also voyaged to the Caspian Sea and Arabia. They were the first Europeans to reach North America, briefly settling in Newfoundland (Vinland). While spreading Norse culture to foreign lands, they simultaneously brought home slaves, concubines, and foreign cultural influences to Scandinavia, influencing the genetic and historical development of both. During the Viking Age, the Norse homelands were gradually consolidated from smaller kingdoms into three larger kingdoms: Denmark, Norway, and Sweden.
The Vikings spoke Old Norse and made inscriptions in runes. For most of the Viking Age, they followed the Old Norse religion, but became Christians over the 8th–12th centuries. The Vikings had their own laws, art, and architecture. Most Vikings were also farmers, fishermen, craftsmen, and traders. Popular conceptions of the Vikings often strongly differ from the complex, advanced civilisation of the Norsemen that emerges from archaeology and historical sources. A romanticised picture of Vikings as noble savages began to emerge in the 18th century; this developed and became widely propagated during the 19th-century Viking revival.Wawn 2000Johnni Langer, "The origins of the imaginary viking", Viking Heritage Magazine, Gotland University/Centre for Baltic Studies. Visby (Sweden), n. 4, 2002. Varying views of the Vikings—as violent, piratical heathens or as intrepid adventurers—reflect conflicting modern Viking myths that took shape by the early 20th century. Current popular representations are typically based on cultural clichés and stereotypes and are rarely accurate—for example, there is no evidence that they wore horned helmets, a costume element that first appeared in the 19th century.
Etymology
The etymology of the word Viking has been much debated by academics, with many origin theories being proposed. One theory suggests that the word's origin is from the Old English 'settlement' and the Old Frisian , attested almost 300 years prior. Another less popular theory is that came from the feminine 'creek', 'inlet', 'small bay'.The Syntax of Old Norse by Jan Terje Faarlund; p. 25 ; The Principles of English Etymology By Walter W. Skeat, published in 1892, defined Viking: better Wiking, Icel. Viking-r, O. Icel. *Viking-r, a creek-dweller; from Icel. vik, O. Icel. *wik, a creek, bay, with suffix -uig-r, belonging to Principles of English Etymology by Walter W. Skeat; Clarendon; p. 479 The Old Norse word víkingr does not appear in written sources until the 12th century, apart from a few runestones.
Another etymology that gained support in the early 21st century derives Viking from the same root as Old Norse 'sea mile', originally referring to the distance between two shifts of rowers, ultimately from the . This is found in the early Nordic verb *wikan 'to turn', similar to Old Icelandic 'to move, to turn', with "well-attested nautical usages", according to Bernard Mees. This theory is better attested linguistically, and the term most likely predates the use of the sail by the Germanic peoples of northwestern Europe.
In the Middle Ages, viking came to refer to Scandinavian pirates or raiders.Stafford, P. (2009). A companion to the Early Middle Ages. Wiley/Blackwell Publisher, chapter 13.Bjorvand, Harald (2000). Våre arveord: etymologisk ordbok. Oslo: Instituttet for sammenlignende kulturforskning (Institute for Comparative Research in Human Culture). p. 1051. . The earliest reference to in English sources is from the Épinal-Erfurt glossary (), about 93 years before the first known attack by Viking raiders in England. The glossary lists the Latin translation for as 'pirate'.Gretsch. The Cambridge Companion to Old English Literature. p. 278 In Old English, the word appears in the Anglo-Saxon poem Widsith, probably from the 9th century. The word was not regarded as a reference to nationality, with other terms such as and 'Danes' being used for that. In Asser's Latin work The Life of King Alfred, the Danes are referred to as 'pagans'; historian Janet Nelson states that became "the Vikings" in standard translations of this work, even though there is "clear evidence" that it was used as a synonym, while Eric Christiansen avers that it is a mistranslation made at the insistence of the publisher. The word does not occur in any preserved Middle English texts.
The word Viking was introduced into Modern English during the late 18th-century Viking revival, at which point it acquired romanticised heroic overtones of "barbarian warrior" or noble savage. During the 20th century, the meaning of the term was expanded to refer not only to seaborne raiders from Scandinavia and other places settled by them (like Iceland and the Faroe Islands), but also any member of the culture that produced the raiders during the period from the late 8th to the mid-11th centuries, or more loosely from about 700 to as late as about 1100. As an adjective, the word is used to refer to ideas, phenomena, or artefacts connected with those people and their cultural life, producing expressions like Viking age, Viking culture, Viking art, Viking religion, Viking ship and so on.
History
Viking Age
The Viking Age in Scandinavian history is taken to have been the period from the earliest recorded raids by Norsemen in 793 until the Norman conquest of England in 1066.Peter Sawyer, The Viking Expansion, The Cambridge History of Scandinavia, Issue 1 (Knut Helle, ed., 2003), p. 105. Vikings used the Norwegian Sea and Baltic Sea for sea routes to the south.
The Normans were descendants of those Vikings who had been given feudal overlordship of areas in northern France, namely the Duchy of Normandy, in the 10th century. In that respect, descendants of the Vikings continued to have an influence in northern Europe. Likewise, King Harold Godwinson, the last Anglo-Saxon king of England, had Danish ancestors. Two Vikings even ascended to the throne of England, with Sweyn Forkbeard claiming the English throne from 1013 until 1014 and his son Cnut the Great being king of England between 1016 and 1035.Lund, Niels "The Danish Empire and the End of the Viking Age", in Sawyer, History of the Vikings, pp. 167–81.The Royal Household, "Sweyn" , The official Website of The British Monarchy, 15 March 2015. Retrieved 15 March 2015Lawson, M K (2004). "Cnut: England's Viking King 1016–35". The History Press Ltd, 2005, .The Royal Household, "Canute The Great" , The official Website of The British Monarchy, 15 March 2015. Retrieved 15 March 2015Badsey, S. Nicolle, D, Turnbull, S (1999). "The Timechart of Military History". Worth Press Ltd, 2000, .
Geographically, the Viking Age covered Scandinavian lands (modern Denmark, Norway and Sweden), as well as territories under North Germanic dominance, mainly the Danelaw, including Scandinavian York, the administrative centre of the remains of the Kingdom of Northumbria,"History of Northumbria: Viking era 866 AD–1066 AD" www.englandnortheast.co.uk. parts of Mercia, and East Anglia.Toyne, Stanley Mease. The Scandinavians in history Pg.27. 1970. Viking navigators opened the road to new lands to the north, west and east, resulting in the foundation of independent settlements in the Shetland, Orkney, and Faroe Islands; Iceland; Greenland;The Fate of Greenland's Vikings , by Dale Mackenzie Brown, Archaeological Institute of America, 28 February 2000 and L'Anse aux Meadows, a short-lived settlement in Newfoundland, circa 1000. The Greenland settlement was established around 980, during the Medieval Warm Period, and its demise by the mid-15th century may have been partly due to climate change. The semi-legendary Viking Rurik is said to have taken control of Novgorod in 862, while his kinsman Oleg captured Kiev in 882 and made it the capital of the Rus. The Rurik dynasty would rule Russia until 1598.
As early as 839, when Swedish emissaries are first known to have visited Byzantium, Scandinavians served as mercenaries in the service of the Byzantine Empire.Hall, p. 98 In the late 10th century, a new unit of the imperial bodyguard formed. Traditionally containing large numbers of Scandinavians, it was known as the Varangian Guard. The word Varangian may have originated in Old Norse, but in Slavic and Greek it could refer either to Scandinavians or Franks. In these years, Swedish men left to enlist in the Byzantine Varangian Guard in such numbers that a medieval Swedish law, the Västgötalagen, from Västergötland declared no-one could inherit while staying in "Greece"—the then Scandinavian term for the Byzantine Empire—to stop the emigration,Jansson 1980:22 especially as two other European courts simultaneously also recruited Scandinavians:Pritsak 1981:386 Kievan Rus' and London 1018–1066 (the Þingalið).
There is archaeological evidence that Vikings reached Baghdad, the centre of the Islamic Empire. The Norse regularly plied the Volga with their trade goods: furs, tusks, seal fat for boat sealant, and slaves. Important trading ports during the period include Birka, Hedeby, Kaupang, Jorvik, Staraya Ladoga, Novgorod, and Kiev.
Scandinavian Norsemen explored Europe by its seas and rivers for trade, raids, colonisation, and conquest. In this period, voyaging from their homelands in Denmark, Norway and Sweden the Norsemen settled in the present-day Faroe Islands, Iceland, Norse Greenland, Newfoundland, the Netherlands, Germany, Normandy, Italy, Scotland, England, Wales, Ireland, the Isle of Man, Estonia, Latvia, Lithuania,Butrimas, Adomas. "Dešiniajame Savo Krante Svebų Jūra Skalauja Aisčių Gentis..." Lietuva iki Mindaugo [Lithuania Before Mindaugas] (in Lithuanian). 2003, p. 136. ISBN 9986571898. Ukraine, Russia and Turkey, as well as initiating the consolidation that resulted in the formation of the present-day Scandinavian countries.
In the Viking Age, the present-day nations of Norway, Sweden and Denmark did not exist, but the peoples who lived in what is now those countries were largely homogeneous and similar in culture and language, although somewhat distinct geographically. The names of Scandinavian kings are reliably known for only the later part of the Viking Age. After the end of the Viking Age, the separate kingdoms gradually acquired distinct identities as nations, which went hand-in-hand with their Christianisation. Thus, the end of the Viking Age for the Scandinavians also marks the start of their relatively brief Middle Ages.
Intermixing with the Slavs
Slavic and Viking tribes were "closely linked, fighting one another, intermixing and trading". In the Middle Ages, goods were transferred from Slavic areas to Scandinavia, and Denmark could be considered "a melting pot of Slavic and Scandinavian elements". Leszek Gardeła, of the Department of Scandinavian Languages and Literatures at the University of Bonn, posits that the presence of Slavs in Scandinavia is "more significant than previously thought", while Mats Roslund states that "the Slavs and their interaction with Scandinavia have not been adequately investigated".
A 10th-century grave of a female warrior in Denmark was long thought to belong to a Viking. A 2019 analysis suggested the woman may have been a Slav from present-day Poland. The first king of the Swedes, Eric, was married to Gunhild, of the Polish House of Piast. Likewise, his son, Olof, fell in love with Edla, a Slavic woman, and took her as his frilla (concubine). They had a son and a daughter: Emund the Old, King of Sweden, and Astrid, Queen of Norway. Cnut the Great, King of Denmark, England and Norway, was the son of a daughter of Mieszko I of Poland, possibly the former Polish queen of Sweden, wife of Eric.
Expansion
Colonisation of Iceland by Norwegian Vikings began in the 9th century. The first source mentioning Iceland and Greenland is a papal letter from 1053. Twenty years later, they appear in the Gesta of Adam of Bremen. It was not until after 1130, when the islands had become Christianised, that accounts of the history of the islands were written from the point of view of the inhabitants in sagas and chronicles.Sawyer, History of the Vikings, pp. 110, 114 The Vikings explored the northern islands and coasts of the North Atlantic, ventured south to North Africa, and brought slaves from the Baltic coast and European Russia.
They raided and pillaged, traded, acted as mercenaries and settled colonies over a wide area.John Haywood: Penguin Historical Atlas of the Vikings, Penguin (1996). Detailed maps of Viking settlements in Scotland, Ireland, England, Iceland and Normandy. Early Vikings probably returned home after their raids. Later in their history, they began to settle in other lands. Vikings under Leif Erikson, heir to Erik the Red, reached North America and set up short-lived settlements in present-day L'Anse aux Meadows, Newfoundland, Canada. This expansion occurred during the Medieval Warm Period.
Viking expansion into continental Europe was limited. Their realm was bordered by powerful tribes to the south. Early on, it was the Saxons who occupied Old Saxony, located in what is now Northern Germany. The Saxons were a fierce and powerful people and were often in conflict with the Vikings. To counter the Saxon aggression and solidify their own presence, the Danes constructed the huge defence fortification of Danevirke in and around Hedeby.
The Vikings witnessed the violent subduing of the Saxons by Charlemagne, in the thirty-year Saxon Wars of 772–804. The Saxon defeat resulted in their forced christening and the absorption of Old Saxony into the Carolingian Empire. Fear of the Franks led the Vikings to further expand Danevirke, and the defence constructions remained in use throughout the Viking Age and even up until 1864.
The southern coast of the Baltic Sea was ruled by the Obotrites, a federation of Slavic tribes loyal to the Carolingians and later the Frankish empire. The Vikings—led by King Gudfred—destroyed the Obotrite city of Reric on the southern Baltic coast in 808 AD and transferred the merchants and traders to Hedeby. This secured Viking supremacy in the Baltic Sea, which continued throughout the Viking Age.
Because of the expansion of the Vikings across Europe, a comparison of DNA and archaeology undertaken by scientists at the University of Cambridge and the University of Copenhagen suggested that the term "Viking" may have evolved to become "a job description, not a matter of heredity", at least in some Viking bands.
Motives
The motives driving the Viking expansion are a topic of much debate. The idea that Vikings may originally have started sailing and raiding out of a need to seek out women from foreign lands was expressed in the 11th century by the historian Dudo of Saint-Quentin in his semi-imaginary History of The Normans. As observed by Adam of Bremen, rich and powerful Viking men tended to have many wives and concubines, and these polygynous relationships may have led to a shortage of women available to the average Viking man, who may consequently have felt compelled to seek wealth and power in order to acquire suitable women. Several centuries after Dudo's observations, scholars revived this idea, and over time it became a commonplace of Viking Age scholarship. Viking men would often buy or capture women and make them into their wives or concubines; such polygynous marriages increased male-male competition in society because they created a pool of unmarried men willing to engage in risky, status-elevating and sex-seeking behaviours. The Annals of Ulster state that in 821 the Vikings plundered an Irish village and "carried off a great number of women into captivity".
One common theory posits that Charlemagne "used force and terror to Christianise all pagans", leading to baptism, conversion or execution, and that as a result Vikings and other pagans resisted and wanted revenge.Bruno Dumézil, maître de conférences at Paris X-Nanterre, normalien, agrégé in history, author of Conversion and Freedom in the Barbarian Kingdoms, 5th–8th Centuries (Fayard, 2005)."Royal Frankish Annals" cited in Sawyer, History of the Vikings, p. 20Dictionnaire d'histoire de France, Perrin, Alain Decaux and André Castelot, 1981, pp. 184–85.R. Boyer, Les Vikings: histoire, mythes, dictionnaire, Robert Laffont, 2008, p. 96 Professor Rudolf Simek states that "it is not a coincidence if the early Viking activity occurred during the reign of Charlemagne".François-Xavier Dillmann, "Viking civilisation and culture. A bibliography of French-language", Caen, Centre for research on the countries of the North and Northwest, University of Caen, 1975, p. 19, and "Les Vikings: the Scandinavian and European 800–1200", 22nd exhibition of art from the Council of Europe, 1992, p. 26 The ascendance of Christianity in Scandinavia did lead to serious conflict, dividing Norway for almost a century, but this did not begin until the 10th century. Norway was never subject to aggression by Charlemagne, and the period of strife was due to successive Norwegian kings embracing Christianity after encountering it overseas."History of the Kings of Norway" by Snorri Sturluson, translated by Professor of History François-Xavier Dillmann, Gallimard, pp. 15–16, 18, 24, 33–34, 38
Another explanation is that the Vikings exploited a moment of weakness in the surrounding regions. Contrary to Simek's assertion, Viking raids occurred sporadically long before the reign of Charlemagne, but exploded in frequency and size after his death, when his empire fragmented into multiple much weaker entities. England suffered from internal divisions and was relatively easy prey given the proximity of many towns to the sea or to navigable rivers. The lack of organised naval opposition throughout Western Europe allowed Viking ships to travel freely, raiding or trading as opportunity permitted. The decline in the profitability of old trade routes could also have played a role. Trade between Western Europe and the rest of Eurasia suffered a severe blow when the Western Roman Empire fell in the 5th century. The expansion of Islam in the 7th century had also affected trade with Western Europe.Crone, Patricia. Meccan Trade and the Rise of Islam. Gorgias Press, 2004.
Raids in Europe, including raids and settlements from Scandinavia, were not unprecedented and had occurred long before the Vikings arrived. The Jutes invaded the British Isles three centuries earlier, from Jutland during the Age of Migrations, before the Danes settled there. The Saxons and the Angles did the same, embarking from mainland Europe. The Viking raids were, however, the first to be documented by eyewitnesses, and they were much larger in scale and frequency than in previous times.
Vikings themselves were expanding; although their motives are unclear, historians believe that scarce resources or a lack of mating opportunities were contributing factors.
The slave trade was an important part of the Viking economy, with most slaves destined for Scandinavia, although many others were shipped east, where they could be sold for large profits. The "Highway of Slaves" was the term for a route the Vikings found that offered a direct pathway from Scandinavia to Constantinople and Baghdad, starting on the Baltic Sea. With the advancements in their ships during the 9th century, the Vikings were able to sail to Kievan Rus and some northern parts of Europe.
Jomsborg
Jomsborg was a semi-legendary Viking stronghold at the southern coast of the Baltic Sea (medieval Wendland, modern Pomerania), that existed between the 960s and 1043. Its inhabitants were known as Jomsvikings. Jomsborg's exact location, or its existence, has not yet been established, though it is often maintained that Jomsborg was somewhere on the islands of the Oder estuary.T. D. Kendrick, A History of the Vikings, Courier Dover Publications, 2004, pp. 179ff,
End of the Viking Age
While the Vikings were active beyond their Scandinavian homelands, Scandinavia was itself experiencing new influences and undergoing a variety of cultural changes.Roesdahl, pp. 295–97
Emergence of nation-states and monetary economies
By the late 11th century, royal dynasties legitimised by the Catholic Church (which had had little influence in Scandinavia 300 years earlier) were asserting their power with increasing authority and ambition, and the three kingdoms of Denmark, Norway, and Sweden were taking shape. Towns appeared that functioned as secular and ecclesiastical administrative centres and market sites, and monetary economies began to emerge based on English and German models.Gareth Williams, "Kingship, Christianity and coinage: monetary and political perspectives on silver economy in the Viking Age", in Silver Economy in the Viking Age, ed. James Graham-Campbell and Gareth Williams, pp. 177–214; By this time the influx of Islamic silver from the East had been absent for more than a century, and the flow of English silver had come to an end in the mid-11th century.Roesdahl, p. 296
Assimilation into Christendom
Christianity had taken root in Denmark and Norway with the establishment of dioceses in the 11th century, and the new religion was beginning to organise and assert itself more effectively in Sweden. Foreign churchmen and native elites were energetic in furthering the interests of Christianity, which was now no longer operating only on a missionary footing, and old ideologies and lifestyles were transforming. By 1103, the first archbishopric was founded in Scandinavia, at Lund, Scania, then part of Denmark.
The assimilation of the nascent Scandinavian kingdoms into the cultural mainstream of European Christendom altered the aspirations of Scandinavian rulers and of Scandinavians able to travel overseas, and changed their relations with their neighbours.
One of the primary sources of profit for the Vikings had been slave-taking from other European peoples. The medieval Church held that Christians should not own fellow Christians as slaves, so chattel slavery diminished as a practice throughout northern Europe. This took much of the economic incentive out of raiding, though sporadic slaving activity continued into the 11th century. Scandinavian predation in Christian lands around the North and Irish Seas diminished markedly.
The kings of Norway continued to assert power in parts of northern Britain and Ireland, and raids continued into the 12th century, but the military ambitions of Scandinavian rulers were now directed toward new paths. In 1107, Sigurd I of Norway sailed for the eastern Mediterranean with Norwegian crusaders to fight for the newly established Kingdom of Jerusalem; the kings of Denmark and Sweden participated actively in the Baltic Crusades of the 12th and 13th centuries.The Northern Crusades: Second Edition by Eric Christiansen;
Culture
A variety of sources illuminate the culture, activities, and beliefs of the Vikings. Although they were generally a non-literate culture that produced no literary legacy, they had an alphabet and described themselves and their world on runestones. Most contemporary literary and written sources on the Vikings come from other cultures that were in contact with them. Since the mid-20th century, archaeological findings have built a more complete and balanced picture of the lives of the Vikings.Hall, 2010, pp. 8 passim.Roesdahl, pp. 16–22. The archaeological record is particularly rich and varied, providing knowledge of their rural and urban settlement, crafts and production, ships and military equipment, trading networks, as well as their pagan and Christian religious artefacts and practices.
Literature and language
The most important primary sources on the Vikings are contemporary texts from Scandinavia and regions where the Vikings were active.Hall, pp. 8–11 Writing in Latin letters was introduced to Scandinavia with Christianity, so there are few native documentary sources from Scandinavia before the late 11th and early 12th centuries.Lindqvist, pp. 160–61 The Scandinavians did write inscriptions in runes, but these were usually very short and formulaic. Most contemporary documentary sources consist of texts written in Christian and Islamic communities outside Scandinavia, often by authors who had been negatively affected by Viking activity.
Later writings on the Vikings and the Viking Age can also be important for understanding them and their culture, although they need to be treated cautiously. After the consolidation of the church and the assimilation of Scandinavia and its colonies into mainstream medieval Christian culture in the 11th and 12th centuries, native written sources began to appear in Latin and Old Norse. In the Viking colony of Iceland, extraordinary vernacular literature blossomed in the 12th through 14th centuries, and many traditions connected with the Viking Age were written down for the first time in the Icelandic sagas. A literal interpretation of these medieval prose narratives about the Vikings and the Scandinavian past is doubtful, but many specific elements remain worthy of consideration, such as the great quantity of skaldic poetry attributed to court poets of the 10th and 11th centuries, the exposed family trees, the self-images, and the ethical values that are contained in these literary writings.
Indirectly, the Vikings have also left a window open onto their language, culture and activities, through many Old Norse place names and words found in their former sphere of influence. Some of these place names and words are still in direct use today, almost unchanged, and shed light on where they settled and what specific places meant to them. Examples include place names like Egilsay (from Eigils ey meaning Eigil's Island), Ormskirk (from Ormr kirkja meaning Orms Church or Church of the Worm), Meols (from merl meaning Sand Dunes), Snaefell (Snow Fell), Ravenscar (Ravens Rock), Vinland (Land of Wine or Land of Winberry), Kaupanger (Market Harbour), Tórshavn (Thor's Harbour), and the religious centre of Odense, meaning a place where Odin was worshipped. Viking influence is also evident in concepts like the present-day parliamentary body of the Tynwald on the Isle of Man.
Many common words in everyday English language stem from the Old Norse of the Vikings and give an opportunity to understand their interactions with the people and cultures of the British Isles.See List of English words of Old Norse origin for further explanations on specific words. In the Northern Isles of Shetland and Orkney, Old Norse completely replaced the local languages and over time evolved into the now extinct Norn language. Some modern words and names only emerge and contribute to our understanding after a more intense research of linguistic sources from medieval or later records, such as York (Horse Bay), Swansea (Sveinn's Isle) or some of the place names in Normandy like Tocqueville (Toki's farm).See Norman toponymy.
Linguistic and etymological studies continue to provide a vital source of information on Viking culture, their social structure and history, and how they interacted with the people and cultures they met, traded with, attacked or lived alongside in overseas settlements.Henriksen, Louise Kæmpe: Nordic place names in Europe Viking Ship Museum Roskilde; Viking Words The British Library Many Old Norse connections are evident in the modern-day languages of Swedish, Norwegian, Danish, Faroese and Icelandic.Department of Scandinavian Research, University of Copenhagen Old Norse did not exert any great influence on the Slavic languages in the Viking settlements of Eastern Europe. It has been speculated that the reasons for this were the great differences between the two languages, combined with the Rus Vikings' more peaceful dealings in these areas and the fact that they were outnumbered. The Norse named some of the rapids on the Dnieper, but this can hardly be seen from the modern names.See information on the "Slavonic and Norse names of the Dnieper rapids" on Trade route from the Varangians to the Greeks.Else Roesdahl (prof. in Arch. & Hist.): The Vikings, Penguin Books (1999),
Runestones
The Norse of the Viking Age could read and write and used a non-standardised alphabet, called runor, built upon sound values. While there are few remains of runic writing on paper from the Viking era, thousands of stones with runic inscriptions have been found where Vikings lived. They are usually in memory of the dead, though not necessarily placed at graves. The use of runor survived into the 15th century, used in parallel with the Latin alphabet.
The runestones are unevenly distributed in Scandinavia: Denmark has 250 runestones, Norway has 50, while Iceland has none. Sweden has between 1,700 and 2,500Zilmer 2005:38 depending on the definition. The Swedish district of Uppland has the highest concentration, with as many as 1,196 inscriptions in stone, whereas Södermanland is second with 391.
The majority of runic inscriptions from the Viking period are found in Sweden. Many runestones in Scandinavia record the names of participants in Viking expeditions, such as the Kjula runestone that tells of extensive warfare in Western Europe and the Turinge Runestone, which tells of a war band in Eastern Europe. Swedish runestones are mostly from the 11th century and often contain rich inscriptions, such as the Färentuna, Hillersjö, Snottsta and Vreta stones, which provide extensive detail on the life of one family, Gerlög and Inga.
Other runestones mention men who died on Viking expeditions. Among them are the England runestones (Swedish: Englandsstenarna), a group of about 30 runestones in Sweden that refer to Viking Age voyages to England. They constitute one of the largest groups of runestones that mention voyages to other countries, and they are comparable in number only to the approximately 30 Greece RunestonesJansson 1980:34. and the 26 Ingvar Runestones, the latter referring to a Viking expedition to the Middle East.Thunberg, Carl L. (2010). Ingvarståget och dess monument. Göteborgs universitet. CLTS. They were engraved in Old Norse with the Younger Futhark.Thunberg 2010:18–51. The runic inscription on the Piraeus Lion, cut within the outline of a curved lindworm, tells of Viking warriors, most likely Varangians, mercenaries in the service of the Byzantine (Eastern Roman) Emperor.
The Jelling stones date from between 960 and 985. The older, smaller stone was raised by King Gorm the Old, the last pagan king of Denmark, as a memorial honouring Queen Thyre. The larger stone was raised by his son, Harald Bluetooth, to celebrate the conquest of Denmark and Norway and the conversion of the Danes to Christianity. It has three sides: one with an animal image; one with an image of the crucified Jesus Christ; and a third bearing a runic inscription in which Harald declares that he won all of Denmark and Norway and made the Danes Christian.
Runic inscriptions are also found outside Scandinavia, in places as far away as Greenland and Istanbul. Runestones attest to voyages to locations such as Bath, Greece (the name the Vikings used for the Byzantine territories generally), Khwaresm, Jerusalem, Italy (as Langobardland), Serkland (i.e. the Muslim world),Thunberg, Carl L. (2011). Särkland och dess källmaterial. Göteborgs universitet. CLTS. pp. 23–58. and England (including London), as well as various places in Eastern Europe; the attested runic spellings of these place names are catalogued in the Nordiskt runnamnslexikon. Viking Age inscriptions have also been discovered on the Manx runestones on the Isle of Man. Not all runestones are from the Viking Age: the Kingittorsuaq Runestone in Greenland, for example, dates to the early 14th century.
Runic alphabet usage in modern times
The last known people to use the runic alphabet were an isolated group known as the Elfdalians, who lived in the locality of Älvdalen in the Swedish province of Dalarna. They spoke Elfdalian, a language unique to Älvdalen, which differs from the other Scandinavian languages in having remained much closer to Old Norse. The people of Älvdalen stopped using runes as late as the 1920s, so the use of runes survived longer in Älvdalen than anywhere else in the world. The last known record of the Elfdalian runes is from 1929; they are a variant of the Dalecarlian runes, runic inscriptions found in Dalarna.
Traditionally regarded as a Swedish dialect, but by several criteria more closely related to the West Scandinavian dialects, Elfdalian is a separate language by the standard of mutual intelligibility. Although it is not mutually intelligible with Swedish, native speakers are bilingual and speak Swedish at a native level, as schools and public administration in Älvdalen are conducted in Swedish. Residents of the area who speak only Swedish, neither speaking nor understanding Elfdalian, are also common. Älvdalen can be said to have had its own alphabet during the 17th and 18th centuries. Today there are about 2,000–3,000 native speakers of Elfdalian.
Burial sites
There are numerous burial sites associated with Vikings throughout Europe and their sphere of influence—in Scandinavia, the British Isles, Ireland, Greenland, Iceland, the Faroe Islands, Germany, Latvia, Estonia, Finland, Russia, etc. The burial practices of the Vikings were quite varied, from dug graves in the ground to tumuli, sometimes including so-called ship burials.
According to written sources, most of the funerals took place at sea. Funerals involved either burial or cremation, depending on local customs. In the area that is now Sweden, cremations were predominant; in Denmark burial was more common; and in Norway both were common. Viking barrows are one of the primary sources of evidence for circumstances in the Viking Age.Medieval Archaeology: An Encyclopaedia (Pamela Crabtree, ed., 2001), "Vikings," p. 510. The items buried with the dead give some indication as to what was considered important to possess in the afterlife.Roesdahl, p. 20. It is unknown what mortuary services were given to dead children by the Vikings.Roesdahl p. 70 (in Women, gender roles and children) Some of the most important burial sites for understanding the Vikings include:
Norway: Oseberg; Gokstad; Borrehaugene.
Sweden: Gettlinge gravfält; the cemeteries of Birka, a World Heritage Site;The Hemlanden cemetery located here is the largest Viking Period cemetery in Scandinavia Valsgärde; Gamla Uppsala; Hulterstad gravfält, near Alby and Hulterstad on Öland; Gotland.
Denmark: Jelling, a World Heritage Site; Lindholm Høje; Ladby ship; Mammen chamber tomb and hoard.
Estonia: Salme ships – The largest and earliest Viking ship burial ground ever uncovered.
Scotland: Port an Eilean Mhòir ship burial; Scar boat burial, Orkney.
Faroe Islands: Hov.
Iceland: Mosfellsbær in Capital Region; See also Jon M. Erlandson. the boat burial in Vatnsdalur, Austur-Húnavatnssýsla.Þór Magnússon: Bátkumlið í Vatnsdal , Árbók hins íslenzka fornleifafélags (1966), 1–32A comprehensive list of registered pagan graves in Iceland, can be found in Eldjárn & Fridriksson (2000): Kuml og haugfé.
Greenland: Brattahlíð.
Germany: Hedeby.
Latvia: Grobiņa.
Ukraine: the Black Grave.
Russia: Gnezdovo, Staraya Ladoga.
Ships
There have been several archaeological finds of Viking ships of all sizes, providing knowledge of the craftsmanship that went into building them. There were many types of Viking ships, built for various uses; the best-known type is probably the longship.Longships are sometimes erroneously called drakkar, a corruption of "dragon" in Norse. Longships were intended for warfare and exploration, designed for speed and agility, and were equipped with oars to complement the sail, making navigation possible independently of the wind. The longship had a long, narrow hull and shallow draught to facilitate landings and troop deployments in shallow water. Longships were used extensively by the Leidang, the Scandinavian defence fleets. The longship allowed the Norse to go Viking, which might explain why this type of ship has become almost synonymous with the concept of Vikings.Hadingham, Evan: Secrets of Viking Ships (05.09.00) NOVA science media.Durham, Keith: Viking Longship Osprey Publishing, Oxford, 2002.
The Vikings built many unique types of watercraft, often used for more peaceful tasks. The knarr was a dedicated merchant vessel designed to carry cargo in bulk. It had a broader hull, a deeper draught, and a small number of oars (used primarily to manoeuvre in harbours and similar situations). One Viking innovation was the 'beitass', a spar mounted to the sail that allowed their ships to sail effectively against the wind.Block, Leo, To Harness the Wind: A Short History of the Development of Sails , Naval Institute Press, 2002, It was common for seafaring Viking ships to tow or carry a smaller boat to transfer crew and cargo from the ship to shore.
Ships were an integral part of Viking culture. They facilitated everyday transportation across seas and waterways, exploration of new lands, raids, conquests, and trade with neighbouring cultures. They also held a major religious importance. People with high status were sometimes buried in a ship along with animal sacrifices, weapons, provisions and other items, as evidenced by the buried vessels at Gokstad and Oseberg in NorwayIan Heath, The Vikings, p. 4, Osprey Publishing, 1985. and the excavated ship burial at Ladby in Denmark. Ship burials were also practised by Vikings overseas, as evidenced by the excavations of the Salme ships on the Estonian island of Saaremaa.
Well-preserved remains of five Viking ships were excavated from Roskilde Fjord in the late 1960s, representing both the longship and the knarr. The ships were scuttled there in the 11th century to block a navigation channel and thus protect Roskilde, then the Danish capital, from a seaborne assault. The remains of these ships are on display at the Viking Ship Museum in Roskilde.
In 2019, archaeologists uncovered two Viking boat graves in Gamla Uppsala. They also discovered that one of the boats still holds the remains of a man, a dog, and a horse, along with other items. This has shed light on the death rituals of Viking communities in the region.
Social structure
Viking society was divided into the three socio-economic classes: thralls, karls and jarls. This is described vividly in the Eddic poem of Rígsþula, which also explains that it was the god Ríg—father of mankind also known as Heimdallr—who created the three classes. Archaeology has confirmed this social structure.Roesdahl, pp. 38–48, 61–71.
The lowest ranking class were thralls, Old Norse for slaves, who comprised as much as a quarter of the population. Slavery was vital to Viking society – for everyday chores and large-scale construction, and also for trading and for the economy. Thralls were servants and workers on the farms and in larger households of the karls and jarls, and they were used for constructing fortifications, ramps, canals, mounds, roads and similar projects built by hard labour. According to the Rígsþula, thralls were despised and looked down upon. New thralls were supplied by either the sons and daughters of thralls, or were captured abroad by the Vikings on their raids in Europe. The thralls were brought back to Scandinavia by boat, used on location or in newer settlements to build needed structures, or sold, often to the Arabs in exchange for silver dirhams or silk.
Free peasants (karlar) formed the middle class. They owned farms, land and cattle and engaged in chores like ploughing the fields, milking the cows, and building houses and wagons, but used thralls to make ends meet. Other names for karls were bonde or simply free men. Similar classes were the churls and huskarls.
The aristocracy (jarlar) were wealthy and owned large estates with huge longhouses, horses and many thralls. The thralls did most of the daily chores, while the jarls carried out administration, politics, hunting, and sports – they also visited other jarls or went abroad on expeditions. When a jarl died and was buried, his household thralls were sometimes sacrificially killed and buried next to him, as many excavations have revealed.
In daily life, there were many intermediate positions in the overall social structure and it appears that there was some social mobility between them. These details are unclear, but titles and positions like hauldr, thegn, and landmand, show mobility between the karls and the jarls.
Other social structures included the communities of félag in both the civil and the military spheres, to which their members (called félagi) were bound by obligation. A félag could be centred around certain trades, a common ownership of a sea vessel, or a military obligation under a specific leader. Members of the latter were referred to as drenge, one of the words for warrior. There were also official communities within towns and villages concerned with the overall defence, religion, the legal system and the Things.
Status of women
Like elsewhere in medieval Europe, most women in Viking society were subordinate to their husbands and fathers and had little political power.Magnúsdóttir, Auður. "Women and sexual politics", in The Viking World. Routledge, 2008. pp.40–45"Women in the Viking Age" . National Museum of Denmark. However, written sources portray free Viking women as having independence and rights. Viking women generally appear to have had more freedom than women elsewhere, as illustrated in the Icelandic Grágás and the Norwegian Frostating laws and Gulating laws.Borgström Eva : Makalösa kvinnor: könsöverskridare i myt och verklighet (Marvelous women : gender benders in myth and reality) Alfabeta/Anamma, Stockholm 2002. (inb.). Libris 8707902.
Most free Viking women were housewives, and a woman's standing in society was linked to that of her husband. Marriage gave a woman a degree of economic security and social standing encapsulated in the title húsfreyja (lady of the house). Norse laws assert the housewife's authority over the 'indoor household'. She had the important roles of managing the farm's resources, conducting business, as well as child-rearing, although some of this would be shared with her husband.Friðriksdóttir, Jóhanna. Valkyrie: The Women of the Viking World. Bloomsbury Publishing, 2020. pp.98–100.
After the age of 20, an unmarried woman, referred to as maer and mey, reached legal majority and had the right to decide her place of residence and was regarded as her own person before the law. An exception to her independence was the right to choose a husband, as marriages were normally arranged by the family.Borgström Eva: Makalösa kvinnor: könsöverskridare i myt och verklighet (Marvelous women : gender benders in myth and reality) Alfabeta/Anamma, Stockholm 2002. (inb.). Libris 8707902. The groom would pay a bride-price (mundr) to the bride's family, and the bride brought assets into the marriage, as a dowry. A married woman could divorce her husband and remarry.Ohlander, Ann-Sofie & Strömberg, Ulla-Britt, Tusen svenska kvinnoår: svensk kvinnohistoria från vikingatid till nutid, 3. (A Thousand Swedish Women's Years: Swedish Women's History from the Viking Age until now), [omarb. och utök.] uppl., Norstedts akademiska förlag, Stockholm, 2008
Concubinage was also part of Viking society, whereby a woman could live with a man and have children with him without marrying; such a woman was called a frilla. Usually she would be the mistress of a wealthy and powerful man who also had a wife. The wife had authority over the mistresses if they lived in her household. Through her relationship with a man of higher social standing, a concubine and her family could advance socially, although her position was less secure than that of a wife. Little distinction was made between children born inside and outside marriage: both had the right to inherit property from their parents, and there were no "legitimate" or "illegitimate" children as such, although children born in wedlock had stronger inheritance rights than those born out of wedlock.
A woman had the right to inherit part of her husband's property upon his death, and widows enjoyed the same independent status as unmarried women. The paternal aunt, paternal niece and paternal granddaughter, referred to as odalkvinna, all had the right to inherit property from a deceased man. A woman with no husband, sons or male relatives could inherit not only property but also the position as head of the family when her father or brother died. Such a woman was referred to as Baugrygr, and she exercised all the rights afforded to the head of a family clan until she married, whereupon her rights were transferred to her new husband.
Women had religious authority and were active as priestesses (gydja) and oracles (sejdkvinna).Ingelman-Sundberg, Catharina, Forntida kvinnor: jägare, vikingahustru, prästinna [Ancient women: hunters, viking wife, priestess], Prisma, Stockholm, 2004 They were active within art as poets (skalder) and rune masters, and as merchants and medicine women. There may also have been female entrepreneurs, who worked in textile production. Women may also have been active within military offices: the tales about shieldmaidens are unconfirmed, but some archaeological finds such as the Birka female Viking warrior may indicate that at least some women in military authority existed.
These liberties of the Viking women gradually disappeared after the introduction of Christianity, and from the late 13th century, they are no longer mentioned.
Examination of Viking Age burials suggests that women lived longer than in earlier times, with nearly all living well past the age of 35. Female graves from before the Viking Age in Scandinavia hold a proportionally large number of remains from women aged 20 to 35, presumably due to complications of childbirth.Jesch, 13
Examination of skeletal remains also allows the relative health and nutritional status of boys and girls in the past to be reconstructed, using anthropometric techniques. Burials from Scandinavia and other European countries suggest that, in comparison with other societies at the time, female equality was remarkably high in rural Scandinavia. Females in the rural periphery of Nordic countries during the Viking period and the later Middle Ages had relatively high status, resulting in substantial nutritional and health resources being allocated to girls, enabling them to grow stronger and healthier.
Appearance
Scandinavian Vikings were similar in appearance to modern Scandinavians: "their skin was fair and the hair color varied between blond, dark and reddish". Genetic studies suggest that people were mostly blond in what is now eastern Sweden, while red hair was mostly found in western Scandinavia.Hjardar, Kim. Vikings. Rosen Publishing, 2018. pp.37–41 Most Viking men had shoulder-length hair and beards, and slaves (thralls) were usually the only men with short hair.Sherrow, Victoria. Encyclopedia of Hair: A Cultural History. Greenwood Publishing, 2006. p.389 The length varied according to personal preference and occupation. Men involved in warfare, for example, may have had slightly shorter hair and beards for practical reasons. Men in some regions bleached their hair a golden saffron colour. Females also had long hair, with girls often wearing it loose or braided, and married women often wearing it in a bun. The average height is estimated to have been for men and for women.
The three classes were easily recognisable by their appearance. Men and women of the Jarls were well groomed with neat hairstyles and expressed their wealth and status by wearing expensive clothes (often silk) and well-crafted jewellery like brooches, belt buckles, necklaces and arm rings. Almost all of the jewellery was crafted in specific designs unique to the Norse (see Viking art). Finger rings were seldom used and earrings were not used at all, as they were seen as a Slavic phenomenon. Most karls expressed similar tastes and hygiene, but in a more relaxed and inexpensive way.
Archaeological finds from Scandinavia and Viking settlements in the British Isles support the idea of the well-groomed and hygienic Viking. Burial with grave goods was a common practice in the Scandinavian world, through the Viking Age and well past the Christianisation of the Norse peoples.Caroline Ahlström Arcini "Eight Viking Age Burials", The Viking Age: A Time With Many Faces, Oxbow Books (2018), pp. 5. Within these burial sites and homesteads, combs, often made from antler, are a common find.C. Paterson, "The combs, ornaments, weights and coins", Cille Pheadair: A Norse Farmstead and Pictish Burial Cairn in South Uist. Mike Parker Pearson, Mark Brennand, Jacqui Mulville and Helen Smith. Oxbow Books (2018), p. 293. The manufacturing of such antler combs was common, as at the Viking settlement at Dublin hundreds of examples of combs from the tenth-century have survived, suggesting that grooming was a common practice. The manufacture of such combs was also widespread throughout the Viking world, as examples of similar combs have been found at Viking settlements in Ireland,Selwyn Kittredge, "Digging up Viking and Medieval Dublin", Archaeology, Vol.27, No. 2 (April 1974), pp. 134–36. Archaeological Institute of America. England,Caroline Peterson, "A Tale of two cemeteries: Viking Burials at Cumwhitton and Carlisle, Cumbria", Crossing Boundaries: Interdisciplinary Approaches to the Art, Material Culture, Language and Literature of the Early Medieval World. Edited by, Eric Cambridge and Jane Hawkes. Oxbow Books (2017). and Scotland.C. Paterson, "The combs, ornaments, weights and coins", Cille Pheadair: A Norse Farmstead and Pictish Burial Cairn in South Uist. Mike Parker Pearson, Mark Brennand, Jacqui Mulville and Helen Smith. Oxbow Books (2018). The combs share a common visual appearance as well, with the extant examples often decorated with linear, interlacing, and geometric motifs, or other forms of ornamentation depending on the comb's period and type, but stylistically similar to Viking Age art.Ibid, pp. 296. All levels of Viking age society appear to have groomed their hair, as hair combs have been found in common graves as well as in aristocratic ones.
Farming and cuisine
The sagas tell about the diet and cuisine of the Vikings,Sk. V. Gudjonsson (1941): Folkekost og sundhedsforhold i gamle dage. Belyst igennem den oldnordiske Litteratur. (Dvs. først og fremmest de islandske sagaer). København. Short description in English: Diet and health in previous times, as revealed in the Old Norse Literature, especially the Icelandic Sagas. but first-hand evidence, like cesspits, kitchen middens and garbage dumps have proved to be of great value and importance. Undigested remains of plants from cesspits at Coppergate in York have provided much information in this respect. Overall, archaeo-botanical investigations have been undertaken increasingly in recent decades, as a collaboration between archaeologists and palaeoethno-botanists. This new approach sheds light on the agricultural and horticultural practices of the Vikings and their cuisine.
The combined information from various sources suggests a diverse cuisine and a wide range of ingredients. Meat products of all kinds, such as cured, smoked and whey-preserved meat,This will cause a lactic acid fermentation process to occur. sausages, and boiled or fried fresh meat cuts, were prepared and consumed. There was plenty of seafood, bread, porridges, dairy products, vegetables, fruits, berries and nuts. Alcoholic drinks like beer, mead, bjórr (a strong fruit wine) and, for the rich, imported wine, were served.Roesdahl, p. 54
Certain livestock were typical and unique to the Vikings, including the Icelandic horse, Icelandic cattle, a plethora of sheep breeds,See the article on the Northern European short-tailed sheep for specific information. In southern Scandinavia (i.e. Denmark), the heath sheep of Lüneburger Heidschnucke was raised and kept. the Danish hen and the Danish goose. The Vikings in York mostly ate beef, mutton, and pork with small amounts of horse meat. Most of the beef and horse leg bones were found split lengthways, to extract the marrow. The mutton and swine were cut into leg and shoulder joints and chops. The frequent remains of pig skull and foot bones found on house floors indicate that brawn and trotters were also popular. Hens were kept for both their meat and eggs, and the bones of game birds such as black grouse, golden plover, wild ducks, and geese have also been found.O'Conner, Terry. 1999? "The Home – Food and Meat." Viking Age York. Jorvik Viking Centre.
Seafood was important, in some places even more so than meat. Whales and walrus were hunted for food in Norway and the northwestern parts of the North Atlantic region, and seals were hunted nearly everywhere. Oysters, mussels and shrimp were eaten in large quantities and cod and salmon were popular fish. In the southern regions, herring was also important.Roesdahl pp. 102–17Nedkvitne, Arnved. "Fishing, Whaling and Seal Hunting." in
Milk and buttermilk were popular, both as cooking ingredients and drinks, but were not always available, even at farms. Milk came from cows, goats and sheep, with priorities varying from location to location,Roesdahl, pp. 110–11 and fermented milk products like skyr or surmjölk were produced as well as butter and cheese.
Food was often salted and enhanced with spices, some of which were imported like black pepper, while others were cultivated in herb gardens or harvested in the wild. Home grown spices included caraway, mustard and horseradish as evidenced from the Oseberg ship burial or dill, coriander, and wild celery, as found in cesspits at Coppergate in York. Thyme, juniper berry, sweet gale, yarrow, rue and peppercress were also used and cultivated in herb gardens.
Vikings collected and ate fruits, berries and nuts. Apples (wild crab apples), plums and cherries were part of the diet, as were rose hips and raspberry, wild strawberry, blackberry, elderberry, rowan, hawthorn and various wild berries specific to the locations. Hazelnuts were an important part of the diet in general, and large amounts of walnut shells have been found in cities like Hedeby. The shells were used for dyeing, and it is assumed that the nuts were consumed.
The invention and introduction of the mouldboard plough revolutionised agriculture in Scandinavia in the early Viking Age and made it possible to farm even poor soils. In Ribe, grains of rye, barley, oat and wheat dated to the 8th century have been found and examined, and are believed to have been cultivated locally. Grains and flour were used for making porridges, some cooked with milk, some cooked with fruit and sweetened with honey, and also various forms of bread. Remains of bread from primarily Birka in Sweden were made of barley and wheat. It is unclear if the Norse leavened their breads, but their ovens and baking utensils suggest that they did. Flax was a very important crop for the Vikings: it was used for oil extraction, food consumption, and most importantly, the production of linen. More than 40% of all known textile recoveries from the Viking Age can be traced as linen. This suggests a much higher actual percentage, as linen is poorly preserved compared to wool, for example.
The quality of food for common people was not always particularly high. The research at Coppergate shows that the Vikings in York made bread from wholemeal flour—probably both wheat and rye—but with the seeds of cornfield weeds included. Corncockle (Agrostemma) would have made the bread dark-coloured, but the seeds are poisonous, and people who ate the bread might have become ill. Seeds of carrots, parsnip, and brassicas were also discovered, but they were poor specimens and tended to come from white carrots and bitter-tasting cabbages.Hall, A. R. 1999 "The Home: Food – Fruit, Grain and Vegetable." Viking Age York. The Jorvik Viking Centre. The rotary querns often used in the Viking Age left tiny stone fragments (often from basalt rock) in the flour, which when eaten wore down the teeth. The effects of this can be seen on skeletal remains from that period.
Sports
Sports were widely practised and encouraged by the Vikings. Sports that involved weapons training and developing combat skills were popular. These included spear and stone throwing, building and testing physical strength through wrestling (see glima), fist fighting, and stone lifting. In areas with mountains, mountain climbing was practised as a sport. Agility and balance were built and tested by running and jumping for sport, and there is mention of a sport that involved jumping from oar to oar on the outside of a ship's railing as it was being rowed. Swimming was a popular sport – Snorri Sturluson describes three types: diving, long-distance swimming, and a contest in which two swimmers try to dunk one another. Children often participated in some of the sport disciplines, and women have also been mentioned as swimmers, although it is unclear if they took part in competitions. King Olaf Tryggvason was acclaimed for his skill in both mountain climbing and oar-jumping, and reputedly excelled in the art of knife juggling as well. Skiing and ice skating were the principal winter sports, and also provided transport on snow and ice for adults.
Horse fighting was practised for sport, although the rules are unclear. It appears to have involved two stallions pitted against each other, within smell and sight of fenced-off mares. Whatever the rules were, the fights often resulted in the death of one of the stallions.
Icelandic sources often mention knattleik, a ball game similar to hockey, played with a bat and a small hard ball, usually on a smooth surface of ice. Popular with both adults and children, it was a rugged game that often led to injuries. Knattleik appears to have been played only in Iceland, where it attracted many spectators, as did horse fighting.
Hunting was practised as a sport only in Denmark, where it was not an essential food source. Deer and hares were hunted for meat, along with partridges and sea birds, while foxes were hunted to stop their killing of farm animals and for their furs. Spears, bows, and later crossbows, were the weapons used; stalking was the most common method, although game was also chased with dogs. Numerous kinds of snares and traps were used as well.
Games and entertainment
Archaeological finds and written sources indicate that the Vikings participated in social gatherings and festivities. Board games and dice games were a popular pastime. Game boards were made of ornately carved wood, with gaming pieces fashioned mostly from wood, bone, or stone. Pieces were also made of glass, amber, and antler, along with materials such as walrus tusk and ivory from foreign sources. The Vikings played several types of tafl games; hnefatafl, nitavl (nine men's morris) and the less common kvatrutafl.
Hnefatafl was probably the oldest type of board game played in medieval Scandinavia. The archaeological record indicates that hnefatafl was popular by the early medieval period, with the Vikings introducing it to England, Scotland, Wales, and Ireland. The Ockelbo Runestone shows two men possibly playing hnefatafl, and one saga suggests that dice games involved gambling.
Beer and mead were served on festive occasions, where music was played, poetry was recited, and stories were told. Music was considered an art form and musical skill was viewed as suitable for a cultivated man. The Vikings are known to have played instruments including harps, lutes, lyres and fiddles.
Cultural assimilation
Elements of a Scandinavian identity and practices were maintained in settler societies, but they could be quite distinct as the groups assimilated into neighbouring societies. Assimilation to the Frankish culture in Normandy for example was rapid. Links to a Viking identity remained longer in the remote islands of Iceland and the Faroes.
Weapons and warfare
Knowledge about the arms and armour of the Viking age is based on archaeological finds, pictorial representation, and to some extent on the accounts in the Norse sagas and Norse laws recorded in the 13th century. According to custom, all free Norse men were required to own weapons and were permitted to carry them at all times. These arms indicated a Viking's social status: a wealthy Viking had a complete ensemble of a helmet, shield, mail shirt, and sword. However, swords were rarely used in battle; they were probably not sturdy enough for combat and most likely only used as symbolic or decorative items.
A typical bóndi (freeman) was more likely to fight with a spear and shield, and most also carried a seax as a utility knife and side-arm. Bows were used in the opening stages of land battles and at sea, but they tended to be considered less "honourable" than melee weapons. Vikings were relatively unusual for the time in their use of axes as a main battle weapon. The Húscarls, the elite guard of King Cnut (and later of King Harold II) were armed with two-handed axes that could split shields or metal helmets with ease.
The warfare and violence of the Vikings were often motivated and fuelled by their beliefs in Norse religion, focusing on Thor and Odin, the gods of war and death.
Violence was common in Viking Age Norway. An examination of Norwegian human remains from the Viking Age found that 72% of the examined males and 42% of the examined females had suffered weapon-related injuries. Violence was less common in Viking Age Denmark, where society was more centralized and complex than the clan-based Norwegian society.
The Viking warrior is often associated with violent fits of rage and frenzied fighting in modern popular culture, as reflected in meanings attached to the words berserkergang and berserker that would not have been the meanings understood by medieval Norse society. Such a fighting style may have been deployed intentionally by shock troops, and it has been proposed that the berserk-state may have been induced by consuming large amounts of alcohol,Robert Wernick. The Vikings. Alexandria VA: Time-Life Books. 1979. p. 285 or through ingestion of materials with psychoactive properties, such as the solanaceous plant Hyoscyamus niger, as speculated by Karsten FaturKarsten Fatur, Sagas of the Solanaceae: Speculative ethnobotanical perspectives on the Norse berserkers, Journal of Ethnopharmacology, Volume 244, 2019, https://doi.org/10.1016/j.jep.2019.112151 . or by consumption of the hallucinogenic mushroom Amanita muscaria, as first hypothesised by the Swedish theologian Samuel Ødman in 1784 and later by the botanist F.C. Schübeler in 1885.Howard D. Fabing. "On Going Berserk: A Neurochemical Inquiry." Scientific Monthly. 83 [Nov. 1956] p. 232 The Norwegian battlefield archaeologist Are Skarstein Kolberg asserts that "...Ödman's hypothesis is not supported by the saga literature or by the archaeological record", and according to Roderick Dale, there is no evidence for it from the Viking Age or from Old Norse literature.
Trade
The Vikings established and engaged in extensive trading networks throughout the known world and had a profound influence on the economic development of Europe and Scandinavia.Gareth Williams: Viking Money BBC HistoryGraham-Campbell, James: The Viking World, Frances Lincoln Ltd, London (2013). Maps of trade routes.
Other than in such trading centres as Ribe and Hedeby in Denmark, Scandinavia was unfamiliar with the use of coinage. Its economy was therefore based on bullion, that is, on the purity and weight of the precious metals used in exchange. Silver was the precious metal most commonly used, although gold was also used. Traders carried small portable scales, enabling them to measure weight precisely, which provided an accurate medium of exchange even in the absence of a regular coinage.
Goods
Organised trade covered everything from ordinary items in bulk to exotic luxury products. The Viking ship designs, like that of the knarr, were an important factor in their success as merchants. Imported goods from other cultures included:Vikings as traders , Teachers' notes 5. Royal Museums Greenwich
Spices were obtained from Chinese and Persian traders, who met with the Viking traders in Russia. Vikings used homegrown spices and herbs like caraway, thyme, horseradish and mustard, but imported cinnamon.
Glass was much prized by the Norse. The imported glass was often made into beads for decoration and these have been found in the thousands. Åhus in Scania and the old market town of Ribe were major centres of glass bead production.HL Renart of Berwick: Glass Beads of the Viking Age . An inquiry into the glass beads of the Vikings. Sourced information and pictures.Glass and Amber Regia Anglorum. Sourced information and pictures.
Silk was a very important commodity obtained from Byzantium (modern day Istanbul) and China. It was valued by many European cultures of the time, and the Vikings used it to indicate status such as wealth and nobility. Many of the archaeological finds in Scandinavia include silk.Marianne Vedeler: Silk for The Vikings , Oxbow 2014.
Wine was imported from France and Germany as a drink of the wealthy, augmenting the regular mead and beer.
To pay for these valuable imports, the Vikings exported a large variety of goods. These included:
Amber—the fossilised resin of the pine tree—was frequently found on the North Sea and Baltic coastline. It was worked into beads and ornamental objects, before being traded. (See also the Amber Road).
Fur was also exported as it provided warmth. This included the furs of pine martens, foxes, bears, otters and beavers.
Cloth and wool. The Vikings were skilled spinners and weavers and exported woollen cloth of a high quality.
Down was collected and exported. The Norwegian west coast supplied eiderdown, and feathers were sometimes bought from the Sámi. Down was used for bedding and quilted clothing. Fowling on the steep slopes and cliffs was dangerous work and was often lethal.
Slaves. The Muslim writer Ahmad ibn Rustah described how the Viking Rus' had "no cultivated fields and lived by pillaging alone". They were ruthless in enslaving many people. Most of the slaves were taken to Scandinavia, but others were sold in the markets of Atil, which fed the demand in many cities of Asia and North Africa. The surge in the slave trade of the 9th century is reflected in the number of coins minted in Central Asia that have been unearthed in Scandinavia.
Other exports included weapons, walrus ivory, wax, salt and cod. As one of the more exotic exports, hunting birds were sometimes provided from Norway to the European aristocracy, from the 10th century.
Many of these goods were also traded within the Viking world itself, as well as goods such as soapstone and whetstone. Soapstone was traded with the Norse on Iceland and in Jutland, who used it for pottery. Whetstones were traded and used for sharpening weapons, tools and knives. There are indications from Ribe and the surrounding area that the extensive medieval trade in oxen and cattle from Jutland (see Ox Road) reached as far back as c. 720 AD. This trade satisfied the Vikings' need for leather and meat to some extent, and perhaps supplied hides for parchment production on the European mainland. Wool was also very important as a domestic product for the Vikings, used to produce warm clothing for the cold Scandinavian and Nordic climate, and for sails. Sails for Viking ships required large amounts of wool, as evidenced by experimental archaeology. There are archaeological signs of organised textile production in Scandinavia, reaching as far back as the early Iron Age. Artisans and craftsmen in the larger towns were supplied with antlers from organised hunting with large-scale reindeer traps in the far north. The antlers were used as raw material for making everyday utensils like combs.
Legacy
English language
The Old Norse of the Vikings heavily influenced Old English, and through it Modern English. Nouns lost their grammatical gender, and grammatical conjugation was reduced to a simple -s added to the third-person verb. Preposition stranding, which is permitted in Old Norse, also entered English.
Medieval perceptions
In England the Viking Age began dramatically on 8 June 793, when Norsemen destroyed the abbey on the island of Lindisfarne. The devastation of Northumbria's Holy Island shocked and alerted the royal courts of Europe to the Viking presence. "Never before has such an atrocity been seen", declared the Northumbrian scholar Alcuin of York.English Historical Documents, c. 500–1042 by Dorothy Whitelock; p. 776 Medieval Christians in Europe were totally unprepared for the Viking incursions and could find no explanation for their arrival and the accompanying suffering save the "Wrath of God".Derry (2012). A History of Scandinavia: Norway, Sweden, Denmark, Finland, Iceland, p. 16. More than any other single event, the attack on Lindisfarne demonised the perception of the Vikings for the next twelve centuries. Not until the 1890s did scholars outside Scandinavia begin to seriously reassess the achievements of the Vikings, recognising their artistry, technological skills, and seamanship.Northern Shores by Alan Palmer; p. 21;
Norse mythology, sagas, and literature tell of Scandinavian culture and religion through tales of heroic and mythological figures. Early transmission of this information was primarily oral, and later texts relied on the writings and transcriptions of Christian scholars, including the Icelanders Snorri Sturluson and Sæmundur fróði. Many of these sagas were written in Iceland, and most of them, even if they had no Icelandic provenance, were preserved there after the Middle Ages due to the continued interest of Icelanders in Norse literature and legal codes.
The 200-year Viking influence on European history is filled with tales of plunder and colonisation, and the majority of these chronicles came from western European witnesses and their descendants. Less common, although equally relevant, are references to Vikings in chronicles that originated in the east, including the Nestor chronicles, Novgorod chronicles, Ibn Fadlan chronicles, Ibn Rusta chronicles, and brief mentions by Photius, patriarch of Constantinople, regarding the first Viking attack on the Byzantine Empire. Other chroniclers of Viking history include Adam of Bremen, who wrote, in the fourth volume of his Gesta Hammaburgensis Ecclesiae Pontificum, "[t]here is much gold here (in Zealand), accumulated by piracy. These pirates, which are called wichingi by their own people, and Ascomanni by our own people, pay tribute to the Danish king." In 991, the Battle of Maldon between Viking raiders and the inhabitants of Maldon in Essex was commemorated with a poem of the same name.
Post-medieval perceptions
Early modern publications, dealing with what is now called Viking culture, appeared in the 16th century, e.g. Historia de gentibus septentrionalibus (History of the northern people) of Olaus Magnus (1555), and the first edition of the 13th-century Gesta Danorum (Deeds of the Danes), by Saxo Grammaticus, in 1514. The pace of publication increased during the 17th century with Latin translations of the Edda (notably Peder Resen's Edda Islandorum of 1665).
In Scandinavia, the 17th-century Danish scholars Thomas Bartholin and Ole Worm and the Swede Olaus Rudbeck used runic inscriptions and Icelandic sagas as historical sources. An important early British contributor to the study of the Vikings was George Hickes, who published his Linguarum vett. septentrionalium thesaurus (Dictionary of the Old Northern Languages) in 1703–05. During the 18th century, British interest and enthusiasm for Iceland and early Scandinavian culture grew dramatically, expressed in English translations of Old Norse texts and in original poems that extolled the supposed Viking virtues.
The word "viking" was first popularised at the beginning of the 19th century by Erik Gustaf Geijer in his poem, The Viking. Geijer's poem did much to propagate the new romanticised ideal of the Viking, which had little basis in historical fact. The renewed interest of Romanticism in the Old North had contemporary political implications. The Geatish Society, of which Geijer was a member, popularised this myth to a great extent. Another Swedish author who had great influence on the perception of the Vikings was Esaias Tegnér, a member of the Geatish Society, who wrote a modern version of Friðþjófs saga hins frœkna, which became widely popular in the Nordic countries, the United Kingdom, and Germany.
Fascination with the Vikings reached a peak during the so-called Viking revival in the late 18th and 19th centuries as a form of Romantic nationalism. In Britain this was called Septentrionalism, in Germany "Wagnerian" pathos, and in the Scandinavian countries Scandinavism. Pioneering 19th-century scholarly editions of the Viking Age began to reach a small readership in Britain. Archaeologists began to dig up Britain's Viking past, and linguistic enthusiasts started to identify the Viking-Age origins of rural idioms and proverbs. The new dictionaries of the Old Norse language enabled the Victorians to grapple with the primary Icelandic sagas.The Viking Revival By Professor Andrew Wawn at BBC
Until recently, the history of the Viking Age was largely based on Icelandic sagas, the history of the Danes written by Saxo Grammaticus, the Primary Chronicle, and Cogad Gáedel re Gallaib. Few scholars still accept these texts as reliable sources, as historians now rely more on archaeology and numismatics, disciplines that have made valuable contributions toward understanding the period.
In 20th-century politics
The romanticised idea of the Vikings constructed in scholarly and popular circles in northwestern Europe in the 19th and early 20th centuries was a potent one, and the figure of the Viking became a familiar and malleable symbol in different contexts in the politics and political ideologies of 20th-century Europe.Hall, pp. 220–21; Fitzhugh and Ward, pp. 362–64 In Normandy, which had been settled by Vikings, the Viking ship became an uncontroversial regional symbol. In Germany, awareness of Viking history in the 19th century had been stimulated by the border dispute with Denmark over Schleswig-Holstein and the use of Scandinavian mythology by Richard Wagner. The idealised view of the Vikings appealed to Germanic supremacists who transformed the figure of the Viking in accordance with the ideology of a Germanic master race.Fitzhugh and Ward, p. 363 Building on the linguistic and cultural connections between Norse-speaking Scandinavians and other Germanic groups in the distant past, Scandinavian Vikings were portrayed in Nazi Germany as a pure Germanic type. The cultural phenomenon of Viking expansion was re-interpreted for use as propaganda to support the extreme militant nationalism of the Third Reich, and ideologically informed interpretations of Viking paganism and the Scandinavian use of runes were employed in the construction of Nazi mysticism. Other political organisations of the same ilk, such as the former Norwegian fascist party Nasjonal Samling, similarly appropriated elements of the modern Viking cultural myth in their symbolism and propaganda.
Soviet and earlier Slavophile historians emphasised a Slavic-rooted foundation in contrast to the Normanist theory of the Vikings conquering the Slavs and founding the Kievan Rus'. They accused Normanist theory proponents of distorting history by depicting the Slavs as undeveloped primitives. In contrast, Soviet historians stated that the Slavs laid the foundations of their statehood long before the Norman/Viking raids, while the Norman/Viking invasions only served to hinder the historical development of the Slavs. They argued that Rus' composition was Slavic and that Rurik and Oleg's success was rooted in their support from within the local Slavic aristocracy. After the dissolution of the USSR, Novgorod acknowledged its Viking history by incorporating a Viking ship into its logo.Hall, p. 221
In modern popular culture
Led by the operas of German composer Richard Wagner, such as Der Ring des Nibelungen, Vikings and the Romanticist Viking Revival have inspired many creative works. These have included novels directly based on historical events, such as Frans Gunnar Bengtsson's The Long Ships (which was also released as a 1963 film), and historical fantasies such as the film The Vikings, Michael Crichton's Eaters of the Dead (movie version called The 13th Warrior), and the comedy film Erik the Viking. The vampire Eric Northman, in the HBO TV series True Blood, was a Viking prince before being turned into a vampire. Vikings appear in several books by the Danish American writer Poul Anderson, while British explorer, historian, and writer Tim Severin authored a trilogy of novels in 2005 about a young Viking adventurer Thorgils Leifsson, who travels around the world.
In 1962, American comic book writer Stan Lee and his brother Larry Lieber, together with Jack Kirby, created the Marvel Comics superhero Thor, which they based on the Norse god of the same name. The character is featured in the 2011 Marvel Studios film Thor and its sequels. The character also appears in the 2012 film The Avengers and its associated animated series.
The appearance of Vikings within popular media and television has seen a resurgence in recent decades, especially with the History Channel's series Vikings (2013), directed by Michael Hirst. The show has a loose grounding in historical facts and sources, but bases itself more on literary sources, such as the fornaldarsaga Ragnars saga loðbrókar, itself more legend than fact, and Old Norse Eddic and Skaldic poetry.Gareth Lloyd Evans, "Michael Hirst's Vikings and Old Norse Poetry", Translating Early Medieval Poetry: Transformation, Reception, Interpretation. Edited by Tom Birkett and Kirsty March-Lyons. Boydell and Brewer (2017), p. 200. The events of the show frequently make references to the Völuspá, an Eddic poem describing the creation of the world, often directly referencing specific lines of the poem in the dialogue.Ibid, pp. 201–202. The show portrays some of the social realities of the medieval Scandinavian world, such as slaveryClare Downham, "The Viking Slave Trade: Entrepreneurs or Heathen Slavers?" History Ireland, Vol. 17, No. 3 (May–June 2009), pp. 15–17. Wordwell Ltd. and the greater role of women within Viking society.Carol Clover, "Regardless of Sex: Men, Women, and Power in Early Northern Europe", Representations, No. 44, pp. 1–28. University of California Press The show also addresses the topic of gender equity in Viking society with the inclusion of shield maidens through the character Lagertha, also based on a legendary figure.Carol Clover, "Maiden Warriors and Other Sons" The Journal of English and Germanic Philology, Vol. 85, No. 1 (Jan. 1986), pp. 35–49. University of Illinois Press. Recent archaeological interpretations and osteological analyses of previous excavations of Viking burials have given support to the idea of the Viking woman warrior, namely the excavation and DNA study of the Birka female Viking warrior. However, the conclusions remain contentious.
Vikings have served as an inspiration for numerous video games, such as The Lost Vikings (1993), Age of Mythology (2002), and For Honor (2017). All three Vikings from The Lost Vikings series—Erik the Swift, Baleog the Fierce, and Olaf the Stout—appeared as a playable hero in the crossover title Heroes of the Storm (2015). The Elder Scrolls V: Skyrim (2011) is an action role-playing video game heavily inspired by Viking culture. Vikings are the lead focus of the 2020 video game Assassin's Creed Valhalla, which is set in 873 AD, and recounts an alternative history of the Viking invasion of Britain.
Modern reconstructions of Viking mythology have shown a persistent influence in late 20th- and early 21st-century popular culture in some countries, inspiring comics, movies, television series, role-playing games, computer games, and music, including Viking metal, a subgenre of heavy metal music.
Since the 1960s, there has been rising enthusiasm for historical reenactment. While the earliest groups had little claim to historical accuracy, the seriousness and accuracy of reenactors have increased. The largest such groups include The Vikings and Regia Anglorum, though many smaller groups exist in Europe, North America, New Zealand, and Australia. Many reenactor groups participate in live-steel combat, and a few have Viking-style ships or boats.
The Minnesota Vikings of the National Football League are so-named owing to the large Scandinavian population in the US state of Minnesota.
During the banking boom of the first decade of the twenty-first century, Icelandic financiers came to be styled as útrásarvíkingar (roughly 'raiding Vikings').Ann-Sofie Nielsen Gremaud, 'The Vikings are coming! A modern Icelandic self-image in the light of the economic crisis ', NORDEUROPAforum 20 (2010), pp. 87–106.Katla Kjartansdóttir, 'The new Viking wave: Cultural heritage and capitalism', Iceland and images of the North, ed. Sumarliði R. Ísleifsson (Québec, 2011), pp. 461–80.Kristinn Schram, 'Banking on borealism: Eating, smelling, and performing the North', Iceland and images of the North, ed. Sumarliði R. Ísleifsson (Québec, 2011), pp. 305–27.
Experimental archaeology
Experimental archaeology of the Viking Age is a flourishing branch, and several places have been dedicated to this technique, such as the Jorvik Viking Centre in the United Kingdom, Sagnlandet Lejre in Denmark, the Foteviken Museum in Sweden and the Lofotr Viking Museum in Norway. Viking-age reenactors have undertaken experimental activities such as iron smelting and forging using Norse techniques, for example at Norstead in Newfoundland.
On 1 July 2007, the reconstructed Viking ship Skuldelev 2, renamed Sea Stallion,Return of Dublin's Viking Warship . Retrieved 14 November 2007. began a journey from Roskilde to Dublin. The remains of that ship and four others were discovered during a 1962 excavation in the Roskilde Fjord. Tree-ring analysis has shown the ship was built of oak in the vicinity of Dublin in about 1042. Seventy multinational crew members sailed the ship back to its home, and Sea Stallion arrived outside Dublin's Custom House on 14 August 2007. The purpose of the voyage was to test and document the seaworthiness, speed, and manoeuvrability of the ship on the rough open sea and in coastal waters with treacherous currents. The crew tested how the long, narrow, flexible hull withstood the tough ocean waves. The expedition also provided valuable new information on Viking longships and society. The ship was built using Viking tools, materials, and much of the same methods as the original ship.
Other vessels, often replicas of the Gokstad ship (full- or half-scale) or of the Skuldelev ships, have been built and tested as well. The Snorri (a Skuldelev I knarr) was sailed from Greenland to Newfoundland in 1998.
Common misconceptions
Horned helmets
Apart from two or three representations of (ritual) helmets—with protrusions that may be either stylised ravens, snakes, or horns—no depiction of the helmets of Viking warriors, and no preserved helmet, has horns. The formal, close-quarters style of Viking combat (either in shield walls or aboard "ship islands") would have made horned helmets cumbersome and hazardous to the warrior's own side.
Historians therefore believe that Viking warriors did not wear horned helmets; whether such helmets were used in Scandinavian culture for other, ritual purposes, remains unproven. The general misconception that Viking warriors wore horned helmets was partly promulgated by the 19th-century enthusiasts of Götiska Förbundet, founded in 1811 in Stockholm. They promoted the use of Norse mythology as the subject of high art and other ethnological and moral aims.
The Vikings were often depicted with winged helmets and in other clothing taken from Classical antiquity, especially in depictions of Norse gods. This was done to legitimise the Vikings and their mythology by associating it with the Classical world, which had long been idealised in European culture.
The latter-day mythos created by national romantic ideas blended the Viking Age with aspects of the Nordic Bronze Age some 2,000 years earlier. Horned helmets from the Bronze Age were shown in petroglyphs and appeared in archaeological finds (see Bohuslän and Vikso helmets). They were probably used for ceremonial purposes.Did Vikings really wear horns on their helmets? , The Straight Dope, 7 December 2004. Retrieved 14 November 2007.
Cartoons like Hägar the Horrible and Vicky the Viking, and sports kits such as those of the Minnesota Vikings and Canberra Raiders have perpetuated the myth of the horned helmet.
Viking helmets were conical, made from hard leather with wood and metallic reinforcements for regular troops. The iron helmet with mask and mail was for the chieftains, based on the previous Vendel-age helmets from central Sweden. The only original Viking helmet discovered is the Gjermundbu helmet, found in Norway. This helmet is made of iron and has been dated to the 10th century.
Barbarity
The image of wild-haired, dirty savages sometimes associated with the Vikings in popular culture is a distorted picture of reality. Viking tendencies were often misreported, and the work of Adam of Bremen, among others, told largely disputable tales of Viking savagery and uncleanliness.Williams, G. (2001) How do we know about the Vikings? BBC.co.uk. Retrieved 14 November 2007.
Use of skulls as drinking vessels
There is no evidence that Vikings drank out of the skulls of vanquished enemies. This was a misconception based on a passage in the skaldic poem Krákumál speaking of heroes drinking from ór bjúgviðum hausa (branches of skulls). This was a reference to drinking horns, but was mistranslated in the 17th centuryBy Magnús Óláfsson, in Ole Worm, Runar seu Danica Litteratura antiquissima, vulgo Gothica dicta (Copenhagen 1636). as referring to the skulls of the slain.E. V. Gordon, An Introduction to Old Norse (2nd edition, Oxford 1962) pp. lxix–lxx.
Genetic legacy
Margaryan et al. 2020 analysed 442 individuals from the Viking world, drawn from various archaeological sites in Europe. They were found to be closely related to modern Scandinavians. The Y-DNA composition of the individuals in the study was also similar to that of modern Scandinavians. The most common Y-DNA haplogroup was I1 (95 samples), followed by R1b (84 samples) and R1a, especially (but not exclusively) of the Scandinavian R1a-Z284 subclade (61 samples). The study showed what many historians have hypothesised: that it was common for Norse settler men to marry foreign women. Some individuals from the study, such as those found in Foggia, displayed typical Scandinavian Y-DNA haplogroups but also southern European autosomal ancestry, suggesting that they were the descendants of Viking settler males and local women. The five individual samples from Foggia were likely Normans. The same pattern of a combination of Scandinavian Y-DNA and local autosomal ancestry is seen in other samples from the study, for example Varangians buried near Lake Ladoga and Vikings in England, suggesting that Viking men had married into local families in those places too.
The study found evidence of a Swedish influx into Estonia and Finland; and Norwegian influx into Ireland, Iceland and Greenland during the Viking Age. However, the authors commented "Viking Age Danish-like ancestry in the British Isles cannot be distinguished from that of the Angles and Saxons, who migrated in the fifth to sixth centuries AD from Jutland and northern Germany".
Margaryan et al. 2020 examined the skeletal remains of 42 individuals from the Salme ship burials in Estonia. The skeletal remains belonged to warriors killed in battle who were later buried together with numerous valuable weapons and armour. DNA testing and isotope analysis revealed that the men came from central Sweden.
Female descent studies show evidence of Norse descent in areas closest to Scandinavia, such as the Shetland and Orkney islands. Inhabitants of lands farther away show most Norse descent in the male Y-chromosome lines.Roger Highfield, "Vikings who chose a home in Shetland before a life of pillage" , Telegraph, 7 April 2005. Retrieved 16 November 2008
A specialised genetic and surname study in Liverpool showed marked Norse heritage: up to 50% of males of families that lived there before the years of industrialisation and population expansion. High percentages of Norse inheritance—tracked through the R-M420 haplotype—were also found among males in the Wirral and West Lancashire. This was similar to the percentage of Norse inheritance found among males in the Orkney Islands.James Randerson, "Proof of Liverpool's Viking past" , The Guardian, 3 December 2007. Retrieved 16 November 2008
Recent research suggests that the Celtic warrior Somerled, who drove the Vikings out of western Scotland and was the progenitor of Clan Donald, may have been of Viking descent, a member of haplogroup R-M420.
Margaryan et al. 2020 examined an elite warrior burial from Bodzia (Poland) dated to 1010–1020 AD. The cemetery in Bodzia is exceptional in terms of its Scandinavian and Kievan Rus' links. The Bodzia man (sample VK157, or burial E864/I) was not a simple warrior from the princely retinue, but belonged to the princely family himself. His burial is the richest one in the whole cemetery. Moreover, strontium analysis of his tooth enamel shows he was not local. It is assumed that he came to Poland with the Prince of Kiev, Sviatopolk the Accursed, and met a violent death in combat. This corresponds to the events of 1018 AD, when Sviatopolk himself disappeared after having retreated from Kiev to Poland. It cannot be excluded that the Bodzia man was Sviatopolk himself, as the genealogy of the Rurikids in this period is extremely sketchy and the dates of birth of many princes of this dynasty may be quite approximate. The Bodzia man carried haplogroup I1-S2077 and had both Scandinavian ancestry and Russian admixture.
See also
Faroese people
Geats
Gotlander
Gutasaga
Oeselians
Proto-Norse language
Swedes (Germanic tribe)
Ushkuiniks, Novgorod's privateers
Viking raid warfare and tactics
Wokou
References
Bibliography
Further reading
Blanck, Dag. "The Transnational Viking: The Role of the Viking in Sweden, the United States, and Swedish America." Journal of Transnational American Studies 7.1 (2016). online
External links
Vikings – View videos at The History Channel
Copenhagen-Portal – The Danish Vikings
BBC: History of Vikings
Borg Viking museum, Norway
Ibn Fadlan and the Rusiyyah, by James E. Montgomery, with full translation of Ibn Fadlan
Reassessing what we collect website – Viking and Danish London History of Viking and Danish London with objects and images
Wawn, Andrew, The Viking Revival – BBC Online, Ancient History in Depth (updated 17 February 2011)
1948 Arab–Israeli War
https://en.wikipedia.org/wiki/1948_Arab–Israeli_War
The 1948 Arab–Israeli War, also known as the First Arab–Israeli War, followed the civil war in Mandatory Palestine as the second and final stage of the 1948 Palestine war. The civil war became a war of separate states with the Israeli Declaration of Independence on 14 May 1948, the end of the British Mandate for Palestine at midnight, and the entry of a military coalition of Arab states into the territory of Mandatory Palestine the following morning. The war formally ended with the 1949 Armistice Agreements which established the Green Line.
Since the 1917 Balfour Declaration and the 1920 creation of the British Mandate of Palestine, and in the context of Zionism and the mass migration of European Jews to Palestine, there had been tension and conflict between Arabs, Jews, and the British in Palestine. The conflict escalated into a civil war on 30 November 1947, the day after the United Nations adopted the Partition Plan for Palestine proposing to divide the territory into an Arab state, a Jewish state, and an internationally administered corpus separatum for the cities of Jerusalem and Bethlehem.
At the end of a campaign beginning April 1948 called Plan Dalet, in which Zionist forces attacked, conquered, and depopulated cities, villages, and territories in Mandatory Palestine in preparation for the establishment of a Jewish state, and just before the expiration of the British Mandate for Palestine, Zionist leaders announced the Israeli Declaration of Independence on 14 May 1948. The following morning, Egypt, Transjordan, Syria, and expeditionary forces from Iraq entered Palestine, taking control of the Arab areas and attacking Israeli forces and settlements.*David Tal, War in Palestine, 1948: Israeli and Arab Strategy and Diplomacy, p. 153.
Book: What Happened Where , p. 307, by Chris Cook and Diccon Bewes, published by Routledge, section from book: Arab-Israeli War 1948–9: Israel was invaded by the armies of its Arab neighbours on the day the British Mandate ended, 15 May 1948. After initial Arab gains, Israel counter-attacked successfully, enlarging its national territory...Benny Morris (2008), p. 401.Zeev Maoz, Defending the Holy Land, University of Michigan Press, 2009 p. 4: 'A combined invasion of a Jordanian and Egyptian army started ... The Syrian and the Lebanese armies engaged in a token effort but did not stage a major attack on the Jewish state.' The 10 months of fighting took place mostly on the territory of the British Mandate and in the Sinai Peninsula and southern Lebanon, interrupted by several truce periods.Rogan and Shlaim 2007 p. 99.
By the end of the war, the State of Israel controlled all of the area that the UN had proposed for a Jewish state, as well as almost 60% of the area proposed for an Arab state,Cragg 1997 pp. 57, 116. including Jaffa, Lydda and Ramle area, Upper Galilee, some parts of the Negev, the west coast as far as Gaza City, and a wide strip along the Tel Aviv–Jerusalem road. Israel also took control of West Jerusalem, which was meant to be part of an international zone for Jerusalem and its environs. Transjordan took control of East Jerusalem and what became known as the West Bank, annexing it the following year. The territory known today as the Gaza Strip was occupied by Egypt.
Expulsions of Palestinians, which had begun during the civil war, continued during the Arab-Israeli war. Hundreds of Palestinians were killed in multiple massacres, such as occurred in the expulsions from Lydda and Ramle. These events are known today as the Nakba (Arabic for "the catastrophe") and were the beginning of the Palestinian refugee problem. A similar number of Jews moved to Israel during the three years following the war, including 260,000 who migrated, fled, or were expelled from the surrounding Arab states.Morris, 2001, pp. 259–260.Fischbach, Michael R. Jewish Property Claims Against Arab Countries. Columbia University Press, 2008, p. 27
Background
Since the 1917 Balfour Declaration and the 1920 creation of the British Mandate of Palestine, and in the context of Zionism and the mass migration of European Jews to Palestine, there had been tension and conflict between Arabs, Jews, and the British. British policies dissatisfied both Arabs and Jews. In 1920, the Arab leaders were very disappointed with Britain. In 1916, the British commander-in-chief in Cairo had made an agreement with the Emir of Mecca: if the Arabs rebelled against the Ottoman Empire, the British would provide them with arms and money and support the formation of an independent Arab state. Around 30,000 older rifles and a smaller number of modern weapons were supplied by the British, and a very large area from the Red Sea to Damascus was conquered.T. E. Lawrence: Revolt in the Desert, 1927
Britain backtracked from its promise that an independent Arab state would be formed. In 1920, Britain let French troops attack the Arab Kingdom of Syria, crushing its army and overthrowing its government. Arab opposition developed into the 1936–1939 Arab revolt in Palestine, while the Jewish opposition developed into the 1944–1947 Jewish insurgency in Palestine. On 29 November 1947, the United Nations General Assembly adopted a resolution recommending the adoption and implementation of a plan to partition the British Mandate of Palestine into two states, one Arab and one Jewish, and the City of Jerusalem.United Nations: General Assembly: A/RES/181(II): 29 November 1947: Resolution 181 (II). Future government of Palestine.
The General Assembly resolution on Partition was greeted with overwhelming joy in Jewish communities and widespread outrage in the Arab world. In Palestine, violence erupted almost immediately, feeding into a spiral of reprisals and counter-reprisals. The British refrained from intervening as tensions boiled over into a low-level conflict that quickly escalated into a full-scale civil war.Greg Cashman, Leonard C. Robinson, An Introduction to the Causes of War: Patterns of Interstate Conflict from World War 1 to Iraq, Rowman & Littlefield 2007 p. 165.
Benjamin Grob-Fitzgibbon,Imperial Endgame: Britain's Dirty Wars and the End of Empire, Palgrave/Macmillan 2011 p. 57
Ilan Pappé (2000), p. 111
Efraïm Karsh (2002), p. 30
Benny Morris (2003), p. 101
From January onwards, operations became increasingly militarised, with the intervention of a number of Arab Liberation Army regiments inside Palestine, each active in a variety of distinct sectors around the different coastal towns. They consolidated their presence in Galilee and Samaria.Yoav Gelber (2006), pp. 51–56 Abd al-Qadir al-Husayni came from Egypt with several hundred men of the Army of the Holy War. Having recruited a few thousand volunteers, al-Husayni organised the blockade of the 100,000 Jewish residents of Jerusalem.Dominique Lapierre et Larry Collins (1971), chap. 7, pp. 131–153
To counter this, the Yishuv authorities tried to supply the city with convoys of up to 100 armoured vehicles, but the operation became more and more impractical as the number of casualties in the relief convoys surged. By March, al-Husayni's tactic had paid off. Almost all of Haganah's armoured vehicles had been destroyed, the blockade was in full operation, and hundreds of Haganah members who had tried to bring supplies into the city were killed.Benny Morris (2003), p. 163 The situation for those who dwelt in the Jewish settlements in the highly isolated Negev and north of Galilee was even more critical.
While the Jewish population had received strict orders requiring them to hold their ground everywhere at all costs,Dominique Lapierre et Larry Collins (1971), p. 163 the Arab population was more affected by the general conditions of insecurity to which the country was exposed. Up to 100,000 Arabs, from the urban upper and middle classes in Haifa, Jaffa and Jerusalem, or Jewish-dominated areas, evacuated abroad or to Arab centres eastwards.Benny Morris (2003), p. 67
This situation caused the United States to withdraw its support for the Partition Plan, encouraging the Arab League to believe that the Palestinian Arabs, reinforced by the Arab Liberation Army, could put an end to the plan. However, the British decided on 7 February 1948 to support the annexation of the Arab part of Palestine by Transjordan.Henry Laurens (2005), p. 83
Although doubt took hold among Yishuv supporters, their apparent defeats were due more to their wait-and-see policy than to weakness. David Ben-Gurion reorganised Haganah and made conscription obligatory. Every Jewish man and woman in the country had to receive military training. Thanks to funds raised by Golda Meir from sympathisers in the United States, and Stalin's decision to support the Zionist cause, the Jewish representatives of Palestine were able to sign very important armament contracts in the East. Other Haganah agents recovered stockpiles from the Second World War, which helped improve the army's equipment and logistics. Operation Balak allowed arms and other equipment to be transported for the first time by the end of March.Arnold Krammer (1974), p. 89
Ben-Gurion invested Yigael Yadin with the responsibility to come up with a plan of offence whose timing was related to the foreseeable evacuation of British forces. This strategy, called Plan Dalet, was readied by March and implemented towards the end of April.David Tal, War in Palestine, 1948: Israeli and Arab Strategy and Diplomacy, Routledge 2004 p. 89. A separate plan, Operation Nachshon, was devised to lift the siege of Jerusalem. 1500 men from Haganah's Givati brigade and Palmach's Harel brigade conducted sorties to free up the route to the city between 5 and 20 April. Both sides acted offensively in defiance of the Partition Plan, which foresaw Jerusalem as a corpus separatum, under neither Jewish nor Arab jurisdiction. The Arabs did not accept the Plan, while the Jews were determined to oppose the internationalisation of the city, and secure it as part of the Jewish state.David Tal, pp. 89–90. The operation was successful, and enough foodstuffs to last two months were trucked into Jerusalem for distribution to the Jewish population.Dominique Lapierre et Larry Collins (1971), pp. 369–381 The success of the operation was assisted by the death of al-Husayni in combat.
During this time, fighters from Irgun and Lehi massacred a substantial number of Palestinians at Deir Yassin. The attack was widely publicised, had a deep impact on the morale of the Palestinian population, and contributed to the exodus of the Arab population.
At the same time, the Arab Liberation Army was roundly defeated at Mishmar HaEmek in its first large-scale operation,Benny Morris (2003), pp. 242–243 coinciding with the loss of their Druze allies through defection.Benny Morris (2003), p. 242
With the implementation of Plan Dalet, the Haganah, Palmach and Irgun forces began conquering mixed zones. The Palestinian Arab society was shaken as Tiberias, Haifa, Safed, Beisan, Jaffa and Acre were all captured and more than 250,000 Palestinian Arabs fled or were expelled.Henry Laurens (2005), pp. 85–86
The British had essentially withdrawn their troops. This pushed the leaders of the neighbouring Arab states to intervene, but they were not fully prepared, and could not assemble sufficient forces to turn the tide. The majority of Palestinian Arab hopes lay with the Arab Legion of Transjordan's monarch, King Abdullah I, but he had no intention of creating a Palestinian Arab-run state, since he hoped to annex as much of the territory of the British Mandate for Palestine as he could. He was playing a double game, being just as much in contact with the Jewish authorities as with the Arab League.
In preparation for the offensive, Haganah successfully launched Operations YiftahBenny Morris (2003), pp. 248–252 and Ben-'AmiBenny Morris (2003), pp. 252–254 to secure the Jewish settlements of Galilee, and Operation Kilshon, which created a united front around Jerusalem. The inconclusive meeting between Golda Meir and Abdullah I, followed by the Kfar Etzion massacre on 13 May by the Arab Legion led to predictions that the battle for Jerusalem would be merciless.
On 14 May 1948, David Ben-Gurion declared the establishment of the State of Israel and the 1948 Palestine war entered its second phase with the intervention of the Arab state armies and the beginning of the 1948 Arab–Israeli War.
Armed forces
By September 1947, the Haganah had "10,489 rifles, 702 light machine-guns, 2,666 submachine guns, 186 medium machine-guns, 672 two-inch mortars and 92 three-inch (76 mm) mortars".
Importing arms
In 1946, Ben-Gurion decided that the Yishuv would probably have to defend itself against both the Palestinian Arabs and neighbouring Arab states and accordingly began a "massive, covert arms acquisition campaign in the West", and acquired many more during the first few months of hostilities.Leonard Slater (1970), p. 31
The Yishuv managed clandestinely to amass arms and military equipment abroad for transfer to Palestine once the British blockade was lifted. In the United States, Yishuv agents purchased three Boeing B-17 Flying Fortress bombers, one of which bombed Cairo in July 1948, some Curtiss C-46 Commando transport planes, and dozens of half-tracks, which were repainted and defined as "agricultural equipment". In Western Europe, Haganah agents amassed fifty 65mm French mountain guns, twelve 120mm mortars, ten H-35 light tanks, and a large number of half-tracks. By mid-May or thereabouts the Yishuv had purchased from Czechoslovakia 25 Avia S-199 fighters (an inferior version of the Messerschmitt Bf 109), 200 heavy machine guns, 5,021 light machine guns, 24,500 rifles, and 52 million rounds of ammunition, enough to equip all units, but short of heavy arms.Martin Van Creveld, Sword and the Olive: A Critical History of the Israeli Defense Force, Public Affairs (1998) 2002 p. 78 The airborne arms smuggling missions from Czechoslovakia were codenamed Operation Balak.
The airborne smuggling missions were carried out by mostly American aviators – Jews and non-Jews – led by ex-U.S. Air Transport Command flight engineer Al Schwimmer.Leonard Slater (1970), p. 100 Schwimmer's operation also included recruiting and training fighter pilots such as Lou Lenart, commander of the first Israeli air assault against the Arabs. Several Americans, including Schwimmer, were later prosecuted by the U.S. government for violating the Neutrality Act of 1939.
Arms production
The Yishuv also had "a relatively advanced arms producing capacity", that between October 1947 and July 1948 "produced 3 million 9 mm bullets, 150,000 Mills grenades, 16,000 submachine guns (Sten Guns) and 210 three-inch (76 mm) mortars", along with a few "Davidka" mortars, which had been indigenously designed and produced. They were inaccurate but had a loud explosion that demoralised the enemy. Much of the munitions used by the Israelis came from the Ayalon Institute, a clandestine bullet factory beneath kibbutz Ayalon, which produced about 2.5 million bullets for Sten guns. The munitions produced by the Ayalon Institute were said to have been the only supply that was not in shortage during the war. Locally produced explosives were also plentiful. After Israel's independence, these clandestine arms manufacturing operations were moved above ground. All of the Haganah's weapons-manufacturing was centralised and later became Israel Military Industries.
Manpower
In November 1947, the Haganah was an underground paramilitary force that had existed as a highly organised, national force, since the Arab riots of 1920–21, and throughout the riots of 1929, Great Uprising of 1936–39,Morris, 2003, p. 16. and World War II. It had a mobile force, the HISH, which had 2,000 full-time fighters (men and women) and 10,000 reservists (all aged between 18 and 25) and an elite unit, the Palmach composed of 2,100 fighters and 1,000 reservists. The reservists trained three or four days a month and went back to civilian life the rest of the time. These mobile forces could rely on a garrison force, the HIM (Heil Mishmar, lit. Guard Corps), composed of people aged over 25. The Yishuv's total strength was around 35,000 with 15,000 to 18,000 fighters and a garrison force of roughly 20,000.Gelber, p. 73; Karsh 2002, p. 25.
There were also several thousand men and women who had served in the British Army in World War II who did not serve in any of the underground militias but would provide valuable military experience during the war. Walid Khalidi says the Yishuv had the additional forces of the Jewish Settlement Police, numbering some 12,000, the Gadna Youth Battalions, and the armed settlers.W. Khalidi, 'Plan Dalet: Master Plan for the Conquest of Palestine', J. Palestine Studies 18(1), pp. 4–33, 1988 (reprint of a 1961 article) Few of the units had been trained by December 1947. On 5 December 1947, conscription was instituted for all men and women aged between 17 and 25 and by the end of March, 21,000 had been conscripted. On 30 March, the call-up was extended to men and single women aged between 26 and 35. Five days later, a General Mobilization order was issued for all men under 40.Levin, Harry. "Jerusalem Embattled – A Diary of the City under Siege." Cassels, 1997. . pp. 32, 117. Pay £P2 per month. c.f. would buy 2lb of meat in Jerusalem, April 1948. p. 91.
By March 1948, the Yishuv had a numerical superiority, with 35,780 mobilised and deployed fighters for the Haganah,Benny Morris (2004), p. 16Gelber (2006), p. 73 3,000 men under Lehi and Irgun, and a few thousand armed settlers.D. Kurzman, "Genesis 1948", 1970, p. 282. Irgun was eventually absorbed into the Jewish Defence Army. The activities of Irgun were monitored by MI5, which found that Irgun was "involved or implicated in numerous acts of terrorism" during the final years of the British mandate in Palestine, such as the attacks on trains and the kidnapping of British servicemen.
Arab forces
According to Benny Morris, by the end of 1947, the Palestinians already "had a healthy and demoralising respect for the Yishuv's military power" and, if it came to battle, the Palestinians expected to lose. When the first violent incidents broke out in Jerusalem on 29 November, the Arab Higher Committee, well aware of its lack of armaments, had called for a three-day strike: the most militant Palestinian group in the city, consisting of 44 fighters, was furnished with 12 rifles, some handguns and a few kilograms of explosives.Henry Laurens, La Question de Palestine, vol.3, Fayard 2007 p. 41
The effective number of Arab combatants was listed as growing to 12,000 by some historiansHenry Laurens, La Question de Palestine, vol. 3, Fayard 2007 p. 70 while others calculate an eventual total Arab strength of approximately 23,500 troops, roughly equal to that of the Yishuv. However, as Israel mobilised most of its able-bodied citizens during the war while the Arab troops represented only a small percentage of their far greater populations, the strength of the Yishuv grew steadily and dramatically during the war.
Political objectives
Yishuv
Yishuv's aims evolved during the war.Morris, 2008, pp. 397–98. Mobilisation for a total war was organised.Moshe Naor,Social Mobilization in the Arab/Israeli War of 1948: On the Israeli Home Front, Routledge 2013 p. 15. Initially, the aim was "simple and modest": to survive the assaults of the Palestinian Arabs and the Arab states. "The Zionist leaders deeply, genuinely, feared a Middle Eastern reenactment of the Holocaust, which had just ended; the Arabs' public rhetoric reinforced these fears". As the war progressed, the aim of expanding the Jewish state beyond the UN partition borders appeared: first to incorporate clusters of isolated Jewish settlements and later to add more territories to the state and give it defensible borders. A third and further aim that emerged among the political and military leaders after four or five months was to "reduce the size of Israel's prospective large and hostile Arab minority, seen as a potential powerful fifth column, by belligerency and expulsion".
According to research by Shay Hazkani, Ben-Gurion and segments of the religious Zionist leadership drew parallels between the war and the biblical wars of extermination, and Hazkani states that this was not a fringe position. IDF indoctrination pamphlets were distributed to recruits instructing them that God "demands a revenge of extermination without mercy to whoever tries to hurt us for no reason".Shay Hazkani, Dear Palestine: A Social History of the 1948 War, Stanford University Press 2021.Josh Ruebner, 'Unsettling 1948: A Review of Shay Hazkani's "Dear Palestine",' Mondoweiss 24 June 2021
Plan Dalet, or Plan D, (, Tokhnit dalet) was a plan worked out by the Haganah, a Jewish paramilitary group and the forerunner of the Israel Defense Forces, in autumn 1947 to spring 1948, which was sent to Haganah units in early March 1948. The intent of Plan Dalet is subject to much controversy, with historians on the one extreme asserting that it was entirely defensive, and historians on the other extreme asserting that the plan aimed at maximum conquest and expulsion of the Palestinians. According to Walid Khalidi and Ilan Pappé, its purpose was to conquer as much of Palestine and to expel as many Palestinians as possible,Pappe, Ilan. The Ethnic Cleansing of Palestine. though according to Benny Morris there was no such intent. In his book The Ethnic Cleansing of Palestine, Pappé asserts that Plan Dalet was a "blueprint for ethnic cleansing" with the aim of reducing both rural and urban areas of Palestine.Pappé, 2006, pp. xii, 86–126
According to Yoav Gelber, the plan specified that in case of resistance, the population of conquered villages was to be expelled outside the borders of the Jewish state. If no resistance was met, the residents could stay put, under military rule.Gelber 2006 p. 306 According to Morris, Plan D called for occupying the areas within the UN sponsored Jewish state, several concentrations of Jewish population outside those areas (West Jerusalem and Western Galilee), and areas along the roads where the invading Arab armies were expected to attack.Morris 2008 p. 119
The Yishuv perceived the peril of an Arab invasion as threatening its very existence. Having no real knowledge of the Arabs' true military capabilities, the Jews took Arab propaganda literally, preparing for the worst and reacting accordingly.
Arab League as a whole
The Arab League had unanimously rejected the UN partition plan and were officially opposed to the establishment of a Jewish state alongside an Arab one.
The Arab League before partition affirmed the right to the independence of Palestine, while blocking the creation of a Palestinian government. Towards the end of 1947, the League established a military committee commanded by the retired Iraqi general Isma'il Safwat, whose mission was to assess the Palestinians' chances of victory against the Jews.Gelber (2006), p. 11 His conclusions were that they had no chance of victory and that an invasion by the regular Arab armies was necessary. The political committee nevertheless rejected these conclusions and decided to support armed opposition to the Partition Plan without the participation of their regular armed forces.Henry Laurens, La Question de Palestine, Fayard, 2007 p. 32.
In April, following the Palestinian defeat, the arrival of refugees from Palestine and the pressure of their own public opinion, the Arab leaders decided to invade Palestine.
The Arab League gave reasons for its invasion in Palestine in the cablegram:
the Arab states find themselves compelled to intervene in order to restore law and order and to check further bloodshed.
the Mandate over Palestine has come to an end, leaving no legally constituted authority.
the only solution of the Palestine problem is the establishment of a unitary Palestinian state.
British diplomat Alec Kirkbride wrote in his 1976 memoirs about a conversation with the Arab League's secretary-general Azzam Pasha a week before the armies marched: "...when I asked him for his estimate of the size of the Jewish forces, [he] waved his hands and said: 'It does not matter how many there are. We will sweep them into the sea.'"Morris 2008 p. 187; quoting p. 24 of Kirkbride's memoirs
According to Gelber, the Arab countries were "drawn into the war by the collapse of the Palestinian Arabs and the Arab Liberation Army [and] the Arab governments' primary goal was preventing the Palestinian Arabs' total ruin and the flooding of their own countries by more refugees. According to their own perception, had the invasion not taken place, there was no Arab force in Palestine capable of checking the Haganah's offensive".Yoav Gelber, 2006, p. 137.
King Abdullah I of Transjordan
King Abdullah was the commander of the Arab Legion, the strongest Arab army involved in the war according to Eugene Rogan and Avi Shlaim in 2007.Rogan and Shlaim 2007 p. 110. (In contrast, Morris wrote in 2008 that the Egyptian army was the most powerful and threatening army.Morris, 2008, p. 310) The Arab Legion had about 10,000 soldiers, trained and commanded by British officers.
In 1946–47, Abdullah said that he had no intention to "resist or impede the partition of Palestine and creation of a Jewish state."Sela, 2002, p. 14. Ideally, Abdullah would have liked to annex all of Palestine, but he was prepared to compromise. He supported the partition, intending that the West Bank area of the British Mandate allocated for the Arab state be annexed to Jordan.Morris (2008), pp. 190–192 Abdullah held secret meetings with the Jewish Agency (at which the future Israeli Prime Minister Golda Meir was among the delegates) that reached an agreement of Jewish non-interference with Jordanian annexation of the West Bank (although Abdullah failed in his goal of acquiring an outlet to the Mediterranean Sea through the Negev desert) and of Jordanian agreement not to attack the area of the Jewish state contained in the United Nations partition resolution (in which Jerusalem was given neither to the Arab nor the Jewish state, but was to be an internationally administered area). In order to keep British support for his plan to annex the Arab state, Abdullah promised the British that he would not attack the Jewish state.
The neighbouring Arab states pressured Abdullah into joining them in an "all-Arab military invasion" against the newly created State of Israel, that he used to restore his prestige in the Arab world, which had grown suspicious of his relatively good relationship with Western and Jewish leaders. Jordan's undertakings not to cross partition lines were not taken at face value. While repeating assurances that Jordan would only take areas allocated to a future Arab state, on the eve of war Tawfik Abu al-Huda told the British that were other Arab armies to advance against Israel, Jordan would follow suit.Tal,War in Palestine, 1948: Israeli and Arab Strategy and Diplomacy, p. 154. On 23 May Abdullah told the French consul in Amman that he "was determined to fight Zionism and prevent the establishment of an Israeli state on the border of his kingdom".Zamir, 2010, p. 34
Abdullah's role in this war became substantial. He saw himself as the "supreme commander of the Arab forces" and "persuaded the Arab League to appoint him" to this position.Tripp, 2001, p. 137. Through his leadership, the Arabs fought the 1948 war to meet Abdullah's political goals.
Other Arab states
King Farouk of Egypt was anxious to prevent Abdullah from being seen as the main champion of the Arab world in Palestine, which he feared might damage his own aspirations to leadership of the Arab world. In addition, Farouk wished to annex all of southern Palestine to Egypt. According to Gamal Abdel Nasser, the Egyptian Ministry of Defence's first communiqué described the Palestine operations as merely a punitive expedition against the Zionist "gangs", using a term frequent in Haganah reports of Palestinian fighters. According to a 2019 study, "senior British intelligence, military officers and diplomats in Cairo were deeply involved in a covert scheme to drive the King to participate in the Arab states' war coalition against Israel." These intelligence officers acted without the approval or knowledge of the British government.
Nuri as-Said, the strongman of Iraq, had ambitions for bringing the entire Fertile Crescent under Iraqi leadership. Both Syria and Lebanon wished to take certain areas of northern Palestine.
One result of the ambitions of the various Arab leaders was a distrust of all the Palestinian leaders who wished to set up a Palestinian state, and a mutual distrust of each other. Co-operation was to be very poor during the war between the various Palestinian factions and the Arab armies.
Arab Higher Committee of Amin al-Husayni
Following rumours that King Abdullah was re-opening the bilateral negotiations with Israel that he had previously conducted in secret with the Jewish Agency, the Arab League, led by Egypt, decided to set up the All-Palestine Government in Gaza on 8 September under the nominal leadership of the Mufti.Shlaim, 2001, p. 97. Abdullah regarded the attempt to revive al-Husayni's Holy War Army as a challenge to his authority and all armed bodies operating in the areas controlled by the Arab Legion were disbanded. Glubb Pasha carried out the order ruthlessly and efficiently.Shlaim, 2001, p. 99.Benny Morris (2003), p. 189.
Initial line-up of forces
Military assessments
Though the State of Israel faced the formidable armies of neighbouring Arab countries, due to previous battles the Palestinians themselves hardly existed as a military force by the middle of May.Martin Van Creveld,Sword and the Olive: A Critical History of the Israeli Defense Force,, Public Affairs (1998) 2002 p. 75 The British Intelligence and Arab League military reached similar conclusions.Morris (2003), pp. 32–33.
The British Foreign Ministry and the CIA believed that the Arab states would finally win in case of war.Morris (2008), p. 81.Benny (2008), p. 174. Martin Van Creveld says that in terms of manpower, the sides were fairly evenly matched.Martin Van Creveld,Sword and the Olive: A Critical History of the Israeli Defense Force, , Public Affairs (1998) 2002 p. 78
In May, Egyptian generals told their government that the invasion would be "a parade without any risks" and Tel Aviv would be taken "in two weeks."Morris 2008 p. 185 Egypt, Iraq, and Syria all possessed air forces, Egypt and Syria had tanks, and all had some modern artillery.Morris, 2003, p. 35. Initially, the Haganah had no heavy machine guns, artillery, armoured vehicles, anti-tank or anti-aircraft weapons, nor military aircraft or tanks. The four Arab armies that invaded on 15 May were far stronger than the Haganah formations they initially encountered.Morris, 2008, p. 401
On 12 May, three days before the invasion, David Ben-Gurion was told by his chief military advisers (who over-estimated the size of the Arab armies and the numbers and efficiency of the troops who would be committed – much as the Arab generals tended to exaggerate Jewish fighters' strength) that Israel's chances of winning a war against the Arab states were only about even.
Yishuv/Israeli forces
Sources disagree about the quantity of arms at the Yishuv's disposal at the end of the Mandate. According to Efraim Karsh, before the arrival of shipments from Czechoslovakia as part of Operation Balak, there was roughly one weapon for every three fighters, and even the Palmach could arm only two out of every three of its active members. According to Larry Collins and Dominique Lapierre, by April 1948, the Haganah had accumulated only about 20,000 rifles and Sten guns for the 35,000 soldiers who existed on paper.Collins and LaPierre, 1973 p. 355 According to Walid Khalidi, "the arms at the disposal of these forces were plentiful". France authorised Air France to transport cargo to Tel Aviv on 13 May.
Yishuv forces were organised in nine brigades, and their numbers grew following Israeli independence, eventually expanding to twelve brigades. Although both sides increased their manpower over the first few months of the war, the Israeli forces grew steadily as a result of the progressive mobilisation of Israeli society and the influx of an average of 10,300 immigrants each month. By the end of 1948, the Israel Defense Forces had 88,033 soldiers, including 60,000 combat soldiers.Morgan, Michael L.:The Philosopher as Witness: Fackenheim and Responses to the Holocaust, p. 182
Brigade | Commander | Size | Operations (brigade sizes from Ben Gurion, David, War Diaries, 1947–1949, Arabic edition translated by Samir Jabbour, Institute of Palestine Studies, Beirut, 1994, p. 303)
Golani | Moshe Mann | 4,500 | Dekel, Hiram
Carmeli | Moshe Carmel | 2,000 | Hiram
Alexandroni | (not listed) | 5,200 | Latrun, Hametz
Kiryati | Michael Ben-Gal | 1,400 | Dani, Hametz
Givati | Shimon Avidan | 5,000 | Hametz, Barak, Pleshet
Etzioni | David Shaltiel | (not listed) | Battle of Jerusalem, Shfifon, Yevusi, Battle of Ramat Rachel
7th Armoured | Shlomo Shamir | (not listed) | Battles of Latrun
8th Armoured | Yitzhak Sadeh | (not listed) | Danny, Yoav, Horev
Oded | Avraham Yoffe | (not listed) | Yoav, Hiram
Harel | Yitzhak Rabin (later succeeded, in the midst of the war, by Joseph Tebenkin, who led Operation Ha-Har) | 1,400 | Nachshon, Danny
Yiftach | Yigal Allon | 4,500, incl. some Golani | Yiftah, Danny, Yoav, Battles of Latrun
Negev | Nahum Sarig | 2,400 | Yoav
After the invasion: France allowed aircraft carrying arms from Czechoslovakia to land on French territory in transit to Israel, and permitted two arms shipments to ‘Nicaragua’, which were actually intended for Israel.
Czechoslovakia supplied vast quantities of arms to Israel during the war, including thousands of vz. 24 rifles and MG 34 and ZB 37 machine guns, and millions of rounds of ammunition. Czechoslovakia supplied fighter aircraft, including at first ten Avia S-199 fighter planes.
Haganah agents in Western Europe had amassed fifty 65mm French mountain guns, twelve 120mm mortars, ten H-35 light tanks, and a large number of half-tracks. The Haganah readied twelve cargo ships throughout European ports to transfer the equipment, which would set sail as soon as the British blockade lifted at the end of the Mandate.Morris, 2008: pp. 176–177
Following Israeli independence, the Israelis managed to build three Sherman tanks from scrap-heap material found in abandoned British ordnance depots.Laffin, John: The Israeli Army in the Middle East Wars 1948–73, p. 8
(Image: Sherman tanks of the Israeli 8th Armoured Brigade, 1948)
The Haganah also managed to obtain stocks of British weapons due to the logistical complexity of the British withdrawal, and the corruption of a number of officials.Laurens, vol. 3 p. 69.
On 29 June 1948, the day before the last British troops left Haifa, two British soldiers sympathetic to the Israelis stole two Cromwell tanks from an arms depot in the Haifa port area, smashing them through the unguarded gates, and joined the IDF. These two tanks would form the basis of the Israeli Armoured Corps.
After the first truce, by July 1948, the Israelis had established an air force, a navy, and a tank battalion.
After the second truce, Czechoslovakia supplied Supermarine Spitfire fighter planes, which were smuggled to Israel via an abandoned Luftwaffe runway in Yugoslavia, with the agreement of the Yugoslav government.Arnold Krammer (1974), p. 103 The airborne arms smuggling missions from Czechoslovakia were codenamed Operation Balak.
Arab forces
At the invasion, in addition to the irregular Palestinian militia groups, the five Arab states that joined the war were Egypt, Transjordan, Syria, Lebanon and Iraq, each sending an expeditionary force from its regular army. Additional contingents came from Saudi Arabia and Yemen. On the eve of war, the available number of Arab troops likely to be committed was between 23,500 and 26,500 (10,000 Egyptians, 4,500 Jordanians, 3,000 Iraqis, 3,000–6,000 Syrians, 2,000 ALA volunteers, 1,000 Lebanese, and several hundred Saudis), in addition to the irregular Palestinians already present. These Arab forces had been trained by British and French instructors; this was particularly true of Jordan's Arab Legion, under the command of Lt Gen Sir John Glubb (known as Glubb Pasha).
Syria bought a quantity of small arms for the Arab Liberation Army from Czechoslovakia, but the shipment never arrived due to Haganah force intervention.Gelber (2006), p. 50.
Arab states
Jordan's Arab Legion was considered the most effective Arab force. Armed, trained and commanded by British officers, this 8,000–12,000 strong force was organised in four infantry/mechanised regiments supported by some forty artillery pieces and seventy-five armoured cars. Until January 1948, it was reinforced by the 3,000-strong Transjordan Frontier Force. As many as 48 British officers served in the Arab Legion. Glubb Pasha, the commander of the Legion, organised his forces into four brigades as follows:
Arab Legion brigades, their commanders, ranks and zones of operation (Ma'an Abu Nawar, The Jordanian-Israeli War, 1948–1951: A History of the Hashemite Kingdom of Jordan, p. 393; Benny Morris, Victimes: histoire revisitée du conflit arabo-sioniste, 2003, pp. 241, 247–255):
Military Division | Commander | Rank | Military Zone of operations
First Brigade (1st and 3rd Regiments) | Desmond Goldie | Colonel | Nablus Military Zone
Second Brigade (Fifth and Sixth Regiments) | Sam Sidney Arthur Cooke | Brigadier | Support force
Third Brigade (Second and Fourth Regiments) | Teel Ashton | Colonel | Ramallah Military Zone
Fourth Brigade | Ahmad Sudqi al-Jundi | Colonel | Support: Ramallah, Hebron, and Ramla
The Arab Legion joined the war in May 1948, but fought only in the area that King Abdullah wanted to secure for Jordan: the West Bank, including East Jerusalem.
France prevented a large sale of arms to Ethiopia by a Swiss company, brokered by the UK Foreign Office, which was actually destined for Egypt and Jordan; denied a British request at the end of April to land a squadron of British aircraft on its way to Transjordan; and applied diplomatic pressure on Belgium to suspend arms sales to the Arab states.
The Jordanian forces were probably the best trained of all the combatants. Unlike the other Arab forces, they were able to make strategic decisions and tactical manoeuvres, as shown by the positioning of the Fourth Regiment at Latrun, which had been abandoned by ALA combatants before the Jordanians arrived and whose importance was not fully understood by the Haganah. In the later stages of the war, Latrun proved to be of high strategic importance for its access to roads and proximity to Jerusalem, and some of the most successful Arab and Jordanian fighting occurred there. "It was the only Arab army in 1948 that covered itself in glory, defeating Israel in two sets of battles in and around Jerusalem's old city and at Latrun (15 May–18 July)." "One of the fiercest and most important battles of the War of Independence was the fight for the fortress of Latrun, which commanded the main road between Tel Aviv and Jerusalem."
In 1948, Iraq's army had 21,000 men in twelve brigades and the Iraqi Air Force had 100 planes, mostly British. Initially the Iraqis committed around 3,000 menD. Kurzman, 'Genesis 1948', 1972, p. 382. to the war effort, including four infantry brigades, one armoured battalion and support personnel. These were to operate under Jordanian guidance.I. Pappe, "The ethnic cleansing of Palestine", 2006, p. 129. The first Iraqi forces to be deployed reached Jordan in April 1948 under the command of Gen. Nur ad-Din Mahmud.Pollack, 2002, pp. 149–155.
In 1948, Egypt's army was able to put a maximum of around 40,000 men into the field: 80% of its military-age male population was unfit for military service, and its embryonic logistics system was limited in its ability to support ground forces beyond its borders. Initially, an expeditionary force of 10,000 men was sent to Palestine under the command of Maj. Gen. Ahmed Ali al-Mwawi. This consisted of five infantry battalions, one armoured battalion equipped with British Light Tank Mk VI and Matilda tanks, one battalion of sixteen 25-pounder guns, a battalion of eight 6-pounder guns and one medium-machine-gun battalion with supporting troops.
The Egyptian Air Force had over thirty Spitfires, four Hawker Hurricanes and twenty C47s modified into crude bombers.
Syria had 12,000 soldiers at the beginning of the 1948 War, grouped into three infantry brigades and an armoured force of approximately battalion size. The Syrian Air Force had forty-three planes, thirty-seven operational, of which approximately the ten newest were World War II–generation models.
France suspended arms sales to Syria, notwithstanding already-signed contracts.
Lebanon's army was the smallest of the Arab states, consisting of 3,500 soldiers, of whom only 1,000 were deployed during the war. According to Gelber, in June 1947, Ben-Gurion "arrived at an agreement with the Maronite religious leadership in Lebanon that cost a few thousand pounds and kept Lebanon's army out of the War of Independence and the military Arab coalition". A token force of 436 soldiers crossed into northern Galilee, seized two villages after a small skirmish, and withdrew. Israel then invaded and occupied southern Lebanon until the end of the war.Rogan and Shlaim 2001, p. 8.
By the time of the second truce, the Egyptians had 20,000 men in the field in thirteen battalions equipped with 135 tanks and 90 artillery pieces.Pollack, 2002, pp. 15–27.
During the first truce, the Iraqis increased their force to about 10,000.D. Kurzman, "Genesis 1948", 1972, p. 556. Ultimately, the Iraqi expeditionary force numbered around 18,000 men.Pollack, 2002, p. 150.
Saudi Arabia sent hundreds of volunteers to join the Arab forces. In February 1948, around 800 tribesmen had gathered near Aqaba to invade the Negev, but crossed to Egypt after Saudi rival King Abdallah denied them permission to pass through Jordanian territory.Gelber, p. 55 The Saudi troops were attached to the Egyptian command throughout the war,Morris, 2008, pp. 322, 326. and estimates of their total strength ranged up to 1,200.Uthman Hasan Salih. Dawr Al-Mamlaka Al-'Arabiyya Al-Sa'udiyya Fi Harb Filastin 1367H/1948 (The role of Saudi Arabia in the Palestine war of 1948), Revue d'Histoire Maghrébine [Tunisia] 1986 13(43–44): 201–221. .Morris, 2008, p. 205; cites British diplomatic communications. By July 1948, the Saudis comprised three brigades within the Egyptian expeditionary force, and were stationed as guards between Gaza city and Rafah.Gelber, p. 200 This area came under heavy aerial bombardment during Operation Yoav in October,Gelber, p. 203 and faced a land assault beginning in late December which culminated in the Battle of Rafah in early January of the new year. With the subsequent armistice of 24 February 1949 and evacuation of almost 4,000 Arab soldiers and civilians from Gaza, the Saudi contingent withdrew through Arish and returned to Saudi Arabia.Gelber, p. 239
During the first truce, Sudan sent six companies of regular troops to fight alongside the Egyptians.Morris, 2008, p. 269. Yemen also committed a small expeditionary force to the war effort, and contingents from Morocco joined the Arab armies as well.
Course of the war
At the last moment, several Arab leaders, seeking to avert catastrophe, secretly appealed to the British to hold on in Palestine for at least another year.
First phase: 15 May – 11 June 1948
The civil war in Mandatory Palestine became a war between separate states with the declaration of the establishment of the State of Israel on 14 May 1948, a few hours before the termination of the British Mandate of Palestine at midnight. The following morning, the regular armies of the neighbouring Arab states (Egypt, Transjordan and Syria) invaded territories of the former Palestinian mandate allocated for a future Arab state according to the United Nations Partition Plan for Palestine.Yoav Gelber, Palestine 1948, 2006 – Chap. 8 "The Arab Regular Armies' Invasion of Palestine".
Through Plan Dalet, Zionist forces had already, from 1 April to 14 May, conducted 8 of their 13 full-scale military operations outside the area allotted to a Jewish state by partition, and the operational commander Yigal Allon later stated that had it not been for the Arab invasion, Haganah's forces would have reached 'the natural borders of western Israel.'Sean F. McMahon, The Discourse of Palestinian-Israeli Relations: Persistent Analytics and Practices, Routledge 2010, p. 37: "If it wasn't for the Arab invasion there would have been no stop to the expansion of the forces of Haganah who could have, with the same drive, reached the natural borders of western Israel". Walid Khalidi, "Plan Dalet: Master Plan for the Conquest of Palestine," Journal of Palestine Studies, Vol. 18, No. 1, Special Issue: Palestine 1948 (Autumn 1988), pp. 4–33, p. 19. Although the Arab invasion was denounced by the United States, the Soviet Union, and UN secretary-general Trygve Lie, it found support from the Republic of China and other UN member states.
The initial Arab plans called for Syrian and Lebanese forces to invade from the north while Jordanian and Iraqi forces invaded from the east, with the aim of meeting at Nazareth and then pushing forward together to Haifa. In the south, the Egyptians were to advance and take Tel Aviv.Yoav Gelber (2006), p. 130. At the Arab League meeting in Damascus on 11–13 May, Abdullah rejected the plan, which served Syrian interests, exploiting the fact that his allies were afraid to go to war without his army. He proposed instead that the Iraqis attack the Jezreel valley and that the Arab Legion enter Ramallah and Nablus and link up with the Egyptian army at Hebron, which was more in keeping with his political objective of occupying the territory allocated to the Arab State by the partition plan, and with his promise not to invade the territory allocated to the Jewish State. In addition, Lebanon decided at the last minute not to take part in the war, owing to the opposition of the still-influential Christians, encouraged by Jewish bribes.Gelber (2006), p. 11 "Lebanon was active in forming the political coalition, but ultimately abstained from military participation, presumably owing to the still influential Christians' opposition, encouraged by Jewish bribes."
Intelligence provided by the French consulate in Jerusalem on 12 May 1948 on the Arab armies' invading forces and their revised plan to invade the new state contributed to Israel's success in withstanding the Arab invasion.
The first mission of the Jewish forces was to hold on against the Arab armies and stop them, although the Arabs enjoyed major advantages (the initiative and vastly superior firepower).Morris, 2008, p. 263 As the British stopped blocking the incoming Jewish immigrants and arms supply, the Israeli forces grew steadily, with large numbers of immigrants and weapons allowing the Haganah to transform itself from a paramilitary force into a real army. Initially, the fighting was handled mainly by the Haganah, along with the smaller Jewish militant groups Irgun and Lehi. On 26 May 1948, Israel established the Israel Defense Forces (IDF), incorporating these forces into one military under a central command.
Southern front – Negev
The Egyptian force, the largest among the Arab armies, invaded from the south.
On 15 May 1948, the Egyptians attacked two settlements: Nirim, using artillery, armoured cars carrying cannons, and Bren carriers; and Kfar Darom, using artillery, tanks and aircraft. The attacks met fierce resistance from the few and lightly armed defenders of both settlements, and failed. On 19 May the Egyptians attacked Yad Mordechai, where an inferior force of 100 Israelis, armed with nothing more than rifles, a medium machine gun and a PIAT anti-tank weapon, held up a column of 2,500 Egyptians, well supported by armour, artillery and air units, for five days. The Egyptians took heavy losses, while the losses sustained by the defenders were comparatively light.
One of the Egyptian force's two main columns made its way northwards along the shoreline, through what is today the Gaza Strip, while the other column advanced eastwards toward Beersheba.Wallach et al. (Volume 2, 1978), p. 29 To secure their flanks, the Egyptians attacked and laid siege to a number of kibbutzim in the Negev, among them Kfar Darom, Nirim, Yad Mordechai, and Negba.Tal, 2004, p. 179 The Israeli defenders held out fiercely for days against vastly superior forces, and managed to buy valuable time for the IDF's Givati Brigade to prepare to stop the Egyptian drive on Tel Aviv.
On 28 May the Egyptians renewed their northern advance, and stopped at a destroyed bridge north of Isdud. The Givati Brigade reported this advance, but no fighters were sent to confront the Egyptians. Had the Egyptians wished to continue their advance northward towards Tel Aviv, there would have been no Israeli force to block them.Morris, 2008, p. 239Tal, 2004 p. 182
From 29 May to 3 June, Israeli forces stopped the Egyptian drive north in Operation Pleshet. In the first combat mission performed by Israel's fledgling air force, four Avia S-199s attacked an Egyptian armoured column of 500 vehicles on its way to Isdud. The Israeli planes dropped 70 kilogram bombs and strafed the column, although their machine guns jammed quickly. Two of the planes crashed, killing a pilot. The attack caused the Egyptians to scatter, and they had lost the initiative by the time they had regrouped. Following the air attack, Israeli forces constantly bombarded Egyptian forces in Isdud with Napoleonchik cannons, and IDF patrols engaged in small-scale harassment of Egyptian lines. Following another air attack, the Givati Brigade launched a counterattack. Although the counterattack was repulsed, the Egyptian offensive was halted as Egypt changed its strategy from offensive to defensive, and the initiative shifted to Israel.
On 6 June, in the Battle of Nitzanim, Egyptian forces attacked the kibbutz of Nitzanim, located between Majdal and Isdud, and the Israeli defenders surrendered after resisting for five days.
Battles of Latrun
The heaviest fighting occurred in Jerusalem and on the Jerusalem – Tel Aviv road, between Jordan's Arab Legion and Israeli forces. As part of the redeployment to deal with the Egyptian advance, the Israelis abandoned the Latrun fortress overlooking the main highway to Jerusalem, which the Arab Legion immediately seized. The Arab Legion also occupied the Latrun Monastery. From these positions, the Jordanians were able to cut off supplies to Israeli fighters and civilians in Jerusalem.
The Israelis attempted to take the Latrun fortress in a series of battles lasting from 24 May to 18 July. The Arab Legion held Latrun and managed to repulse the attacks. During the attempts to take Latrun, Israeli forces suffered some 586 casualties, among them Mickey Marcus, Israel's first general, who was killed by friendly fire. The Arab Legion also took losses, losing 90 dead and some 200 wounded up to 29 May.War in Palestine, 1948: Israeli and Arab Strategy and Diplomacy. David Tal.
The besieged Israeli Jerusalem was only saved via the opening of the so-called "Burma Road", a makeshift bypass road built by Israeli forces that allowed Israeli supply convoys to pass into Jerusalem. Parts of the area where the road was built were cleared of Jordanian snipers in May and the road was completed on 14 June. Supplies had already begun passing through before the road was completed, with the first convoy passing through on the night of 1–2 June. The Jordanians spotted the activity and attempted to shell the road, but were ineffective, as it could not be seen. However, Jordanian sharpshooters killed several road workers, and an attack on 9 June left eight Israelis dead. On 18 July, elements of the Harel Brigade took about 10 villages to the south of Latrun to enlarge and secure the area of the Burma Road.
The Arab Legion was able to repel an Israeli attack on Latrun. The Jordanians launched two counterattacks, temporarily taking Beit Susin before being forced back, and capturing Gezer after a fierce battle, which was retaken by two Palmach squads the same evening.Morris, 2008, pp. 229–230
Battle for Jerusalem
The Jordanians in Latrun cut off supplies to western Jerusalem. Though some supplies, mostly munitions, were airdropped into the city, the shortage of food, water, fuel and medicine was acute. The Israeli forces were seriously short of food, water and ammunition.
King Abdullah ordered Glubb Pasha, the commander of the Arab Legion, to enter Jerusalem on 17 May. The Arab Legion fired 10,000 artillery and mortar shells a day, and also attacked West Jerusalem with sniper fire.
Heavy house-to-house fighting occurred between 19 and 28 May, with the Arab Legion eventually succeeding in pushing Israeli forces from the Arab neighbourhoods of Jerusalem as well as the Jewish Quarter of the Old City. The 1,500 Jewish inhabitants of the Old City's Jewish Quarter were expelled, and several hundred were detained. The Jews had to be escorted out by the Arab Legion to protect them against Palestinian Arab mobs that intended to massacre them.Morris (2008), "1948: The First Arab-Israeli War", Yale University Press, New Haven, p. 219
On 22 May, Arab forces attacked kibbutz Ramat Rachel south of Jerusalem. After a fierce battle in which 31 Jordanians and 13 Israelis were killed, the defenders of Ramat Rachel withdrew, only to partially retake the kibbutz the following day. Fighting continued until 26 May, when the entire kibbutz was recaptured. Radar Hill was also taken from the Arab Legion and held until 26 May, when the Jordanians retook it in a battle that left 19 Israelis and 2 Jordanians dead.Netanel Lorch (1961), p. 192 A total of 23 attempts by the Harel Brigade to capture Radar Hill during the war failed.
The same day, Thomas C. Wasson, the US Consul-General in Jerusalem and a member of the UN Truce Commission, was shot dead in West Jerusalem. It was disputed whether Wasson was killed by the Arabs or the Israelis.
In mid to late October 1948, the Harel Brigade began its offensive in what was known as Operation Ha-Har, to secure the Jerusalem Corridor.
Northern Samaria
An Iraqi force consisting of two infantry brigades and one armoured brigade crossed the Jordan River from northern Jordan, attacking the Israeli settlement of Gesher with little success. Following this defeat, Iraqi forces moved into the strategic triangle bounded by the Arab towns of Nablus, Jenin and Tulkarm. On 25 May, they were making their way towards Netanya when they were stopped. On 29 May, an Israeli attack against the Iraqis led to three days of heavy fighting over Jenin, but Iraqi forces managed to hold their positions. After these battles, the Iraqi forces became stationary and their involvement in the war effectively ended.
Iraqi forces failed in their attacks on Israeli settlements, with the most notable battle taking place at Gesher, and instead took defensive positions around Jenin, Nablus, and Tulkarm, from where they could put pressure on the Israeli centre.The Palestine Post: State of Israel is Born (1948) On 25 May, Iraqi forces advanced from Tulkarm, taking Geulim and reaching Kfar Yona and Ein Vered on the Tulkarm-Netanya road. The Alexandroni Brigade then stopped the Iraqi advance and retook Geulim. The IDF Carmeli and Golani Brigades attempted to capture Jenin during an offensive launched on 31 May, but were defeated in the course of the subsequent battle by an Iraqi counterattack.
Northern front – Lake of Galilee
On 14 May, Syria entered Palestine with the 1st Infantry Brigade, supported by a battalion of armoured cars, a company of French R 35 and R 37 tanks, an artillery battalion and other units.Pollack 2002, pp. 448–457 The Syrian president, Shukri al-Quwwatli, instructed his troops at the front "to destroy the Zionists". "The situation was very grave. There aren't enough rifles. There are no heavy weapons," Ben-Gurion told the Israeli Cabinet.Morris, 2008, pp. 253–254Tal, 2004, p. 251 On 15 May, the Syrian forces turned to the eastern and southern shores of the Sea of Galilee and attacked Samakh, the neighbouring Tegart fort and the settlements of Sha'ar HaGolan and Ein Gev, but were bogged down by resistance. Later, they attacked Samakh using tanks and aircraft, and on 18 May they succeeded in conquering Samakh and occupied the abandoned Sha'ar HaGolan.
On 21 May, the Syrian army was stopped at kibbutz Degania Alef in the north, where local militia reinforced by elements of the Carmeli Brigade halted Syrian armoured forces with Molotov cocktails, hand grenades and a single PIAT. One tank disabled by Molotov cocktails and hand grenades still remains at the kibbutz. Forced to besiege the kibbutz rather than advance, the remaining Syrian forces were driven off the next day by four Napoleonchik mountain guns, Israel's first use of artillery during the war. A few days later, following their defeat at the Deganias, the Syrians abandoned Samakh. One author claims that the main reason for the Syrian defeat was the Syrian soldiers' low regard for the Israelis, whom they believed would not stand and fight against an Arab army.
On 6 June, the 3rd battalion of the Lebanese Army took Al-Malkiyya and Qadas in what became the only intervention of the Lebanese army during the war, handing the towns over to the Arab Liberation Army and withdrawing on 8 July.
On 6 June, Syrian forces attacked Mishmar HaYarden, but they were repulsed. On 10 June, the Syrians overran Mishmar HaYarden and advanced to the main road, where they were stopped by units of the Oded Brigade. Subsequently, the Syrians reverted to a defensive posture, conducting only a few minor attacks on small, exposed Israeli settlements.
Palestinian forces
In the continuity of the civil war between Jewish and Arab forces that had begun in 1947, battles between Israeli forces and Palestinian Arab militias took place, particularly in the Lydda, al-Ramla, Jerusalem, and Haifa areas. On 23 May, the Alexandroni Brigade captured Tantura, south of Haifa, from Arab forces. On 2 June, Holy War Army commander Hasan Salama was killed in a battle with Haganah at Ras al-Ein.
Air operations
All Jewish aviation assets were placed under the control of the Sherut Avir (Air Service, known as the SA) in November 1947, and flying operations began the following month from a small civil airport on the outskirts of Tel Aviv called Sde Dov, with the first ground support operation (in an RWD-13) taking place on 17 December. The Galilee Squadron was formed at Yavne'el in March 1948, and the Negev Squadron was formed at Nir-Am in April. By 10 May, when the SA suffered its first combat loss, there were three flying units, an air staff, maintenance facilities and logistics support. At the outbreak of the war on 15 May, the SA became the Israeli Air Force. With its fleet of light planes, it was no match during the first few weeks of the war for the Arab air forces, with their T-6s, Spitfires, C-47s, and Avro Ansons.
On 15 May, with the beginning of the war, four Royal Egyptian Air Force (REAF) Spitfires attacked Tel Aviv, bombing Sde Dov Airfield, where the bulk of Sherut Avir's aircraft were concentrated, as well as the Reading Power Station. Several aircraft were destroyed, some others were damaged, and five Israelis were killed. Throughout the following hours, additional waves of Egyptian aircraft bombed and strafed targets around Tel Aviv, although these raids had little effect. One Spitfire was shot down by anti-aircraft fire, and its pilot was taken prisoner.Morris (2008), p. 261
Throughout the next six days, the REAF continued to attack Tel Aviv, causing civilian casualties. On 18 May, Egyptian warplanes attacked the Tel Aviv Central Bus Station, killing 42 people and wounding 100. In addition to their attacks on Tel Aviv, the Egyptians bombed rural settlements and airfields, though few casualties were caused in these raids.
At the outset of the war, the REAF was able to attack Israel with near impunity, due to the lack of Israeli fighter aircraft to intercept them,Morris, 2008, p. 235 and met only ground fire.
As more effective air defences were transferred to Tel Aviv, the Egyptians began taking significant aircraft losses. As a result of these losses, as well as the loss of five Spitfires downed by the British when the Egyptians mistakenly attacked RAF Ramat David, the Egyptian air attacks became less frequent. By the end of May 1948, almost the entire REAF Spitfire squadron based in El Arish had been lost, including many of its best pilots.
Although lacking fighter or bomber aircraft, in the first few days of the war, Israel's embryonic air force still attacked Arab targets, with light aircraft being utilised as makeshift bombers, striking Arab encampments and columns. The raids were mostly carried out at night to avoid interception by Arab fighter aircraft. These attacks usually had little effect, except on morale.
The balance of air power soon began to swing in favour of the Israeli Air Force following the arrival of 25 Avia S-199s from Czechoslovakia, the first of which arrived in Israel on 20 May. Ironically, Israel was using the Avia S-199, an inferior derivative of the Bf 109 designed in Nazi Germany to counter British-designed Spitfires flown by Egypt. Throughout the rest of the war, Israel would acquire more Avia fighters, as well as 62 Spitfires from Czechoslovakia. On 28 May 1948, Sherut Avir became the Israeli Air Force.
Many of the pilots who fought for the Israeli Air Force were foreign volunteers or mercenaries, including many World War II veterans.David Bercuson (1984), p. 74
On 3 June, Israel scored its first victory in aerial combat when Israeli pilot Modi Alon shot down a pair of Egyptian DC-3s that had just bombed Tel Aviv. Although Tel Aviv would see additional raids by fighter aircraft, there would be no more raids by bombers for the rest of the war. From then on, the Israeli Air Force began engaging the Arab air forces in air-to-air combat. The first dogfight took place on 8 June, when an Israeli fighter plane flown by Gideon Lichtman shot down an Egyptian Spitfire. By the fall of 1948, the IAF had achieved air superiority and had superior firepower and more knowledgeable personnel, many of whom had seen action in World War II.Morris, 2001, pp. 217–218. Israeli planes then began intercepting and engaging Arab aircraft on bombing missions.
Following Israeli air attacks on Egyptian and Iraqi columns, the Egyptians repeatedly bombed Ekron Airfield, where IAF fighters were based. During a 30 May raid, bombs aimed at Ekron hit central Rehovot, killing 7 civilians and wounding 30. In response to this, and probably also to the Jordanian victories at Latrun, Israel began bombing targets in Arab cities. On the night of 31 May/1 June, the first Israeli raid on an Arab capital took place when three IAF planes flew to Amman and dropped several dozen 55- and 110-pound bombs, hitting the King's Palace and an adjacent British airfield. Some 12 people were killed and 30 wounded. During the attack, an RAF hangar was damaged, as were some British aircraft. The British threatened that in the event of another such attack, they would shoot down the attacking aircraft and bomb Israeli airfields, and as a result, Israeli aircraft did not attack Amman again for the rest of the war. Israel also bombed Arish, Gaza, Damascus, and Cairo. Israeli Boeing B-17 Flying Fortress bombers flying to Israel from Czechoslovakia bombed Egypt on their way.Morris, 2008, p. 262.Aloni, 2001, pp. 7–11.
Sea battles
At the outset of the war, the Israeli Navy consisted of three former Aliyah Bet ships that had been seized by the British and impounded in Haifa harbour, where they were tied up at the breakwater. Work on establishing a navy had begun shortly before Israeli independence, and the three ships were selected because they had a military background: one, the INS Eilat, was an ex-US Coast Guard icebreaker, and the other two, the INS Haganah and INS Wedgwood, had been Royal Canadian Navy corvettes.Gershoni, pp. 46–47
The ships were put into minimum running condition by contractors dressed as stevedores and port personnel, who were able to work in the engine rooms and below deck. The work had to be clandestine to avoid arousing British suspicion. On 21 May 1948, the three ships set sail for Tel Aviv, and were made to look like ships that had been purchased by foreign owners for commercial use. In Tel Aviv, the ships were fitted with small field guns dating to the late 19th century and anti-aircraft guns.
After the British left Haifa port on 30 June, Haifa became the main base of the Israeli Navy. In October 1948, a submarine chaser was purchased from the United States. The warships were manned by former merchant seamen, former crewmembers of Aliyah Bet ships, Israelis who had served in the Royal Navy during World War II, and foreign volunteers. The newly refurbished and crewed warships served on coastal patrol duties and bombarded Egyptian coastal installations in and around the Gaza area all the way to Port Said.
Israeli use of biological warfare
Research by Israeli historians Benny Morris and Benjamin Kedar shows that during the 1948 war, Israel conducted a biological warfare operation codenamed Cast Thy Bread. According to Morris and Kedar, the Haganah initially used typhoid bacteria to contaminate water wells in newly cleared Arab villages, to prevent the population, including militiamen, from returning. Later, the biological warfare campaign expanded to include Jewish settlements that were in imminent danger of being captured by Arab troops, as well as inhabited Arab towns not slated for capture. There were also plans to expand the campaign into other Arab states, including Egypt, Lebanon and Syria, but they were not carried out.
End of the first phase
Throughout the following days, the Arabs were only able to make limited gains due to fierce Israeli resistance, and were quickly driven off their new holdings by Israeli counterattacks.
As the war progressed, the IDF managed to field more troops than the Arab forces. In July 1948, the IDF had 63,000 troops; by early spring 1949, they had 115,000. The Arab armies had an estimated 40,000 troops in July 1948, rising to 55,000 in October 1948, and slightly more by the spring of 1949.
Upon the implementation of the truce, the IDF had control over nine Arab cities and towns or mixed cities and towns: New Jerusalem, Jaffa, Haifa, Acre, Safed, Tiberias, Baysan (Beit She'an), Samakh and Yibna (Yavne). Another city, Jenin, was not occupied but its residents fled. The combined Arab forces captured 14 Jewish settlement points, but only one of them, Mishmar HaYarden, was in the territory of the proposed Jewish State according to Resolution 181.
Within the boundaries of the proposed Jewish state there were twelve Arab villages which opposed Jewish control or had been captured by the invading Arab armies; in addition, Lod Airport and the pumping station near Antipatris, both within the boundaries of the proposed Jewish state, were under Arab control. The IDF captured about 50 large Arab villages outside the boundaries of the proposed Jewish State and a larger number of hamlets and Bedouin encampments. 350 square kilometres of the proposed Jewish State were under the control of the Arab forces, while 700 square kilometres of the proposed Arab State were under the control of the IDF. These figures exclude the Negev desert, which was not under the absolute control of either side.Gelber, 2004; Kinneret, p. 220
In the period between the invasion and the first truce the Syrian army had 315 of its men killed and 400–500 injured; the Iraqi expeditionary force had 200 of its men killed and 500 injured; the Jordanian Arab Legion had 300 of its men killed and 400–500 injured (including irregulars and Palestinian volunteers fighting under the Jordanians); the Egyptian army had 600 of its men killed and 1,400 injured (including irregulars from the Muslim Brotherhood); the ALA, which returned to fight in early June, had 100 of its men killed or injured. 800 Jews were taken hostage by the Arabs and 1,300 Arabs were taken hostage by the Jews, mostly Palestinians.
First truce: 11 June – 8 July 1948
The UN declared a truce on 29 May, which came into effect on 11 June and lasted 28 days.
The truce was designed to last 28 days and an arms embargo was declared with the intention that neither side would make any gains from the truce. Neither side respected the truce; both found ways around the restrictions placed on them. Both the Israelis and the Arabs used this time to improve their positions, a direct violation of the terms of the ceasefire.Morris, 2008, pp. 269–71
Reinforcements
Israeli forces, 1948 (Bregman, 2002, p. 24, citing Ben-Gurion's war diary):
Initial strength: 29,677
4 June: 40,825
17 July: 63,586
7 October: 88,033
28 October: 92,275
2 December: 106,900
23 December: 107,652
30 December: 108,300
At the time of the truce, the British view was that "the Jews are too weak in armament to achieve spectacular success". As the truce commenced, a British officer stationed in Haifa stated that the four-week-long truce "would certainly be exploited by the Jews to continue military training and reorganization while the Arabs would waste [them] feuding over the future divisions of the spoils". During the truce, the Israelis sought to bolster their forces by massive import of arms. The IDF was able to acquire weapons from Czechoslovakia as well as improve training of forces and reorganisation of the army during this time. Yitzhak Rabin, an IDF commander at the time of the war and later Israel's fifth Prime Minister, stated "[w]ithout the arms from Czechoslovakia... it is very doubtful whether we would have been able to conduct the war".
The Israeli army increased its manpower from approximately 30,000–35,000 men to almost 65,000 during the truce due to mobilisation and the constant immigration into Israel. It was also able to increase its arms supply to more than 25,000 rifles, 5,000 machine guns, and fifty million bullets. As well as violating the arms and personnel embargo, they also sent fresh units to the front lines, much as their Arab enemies did.
During the truce, Irgun attempted to bring in a private arms shipment aboard a ship called Altalena. Fearing a coup by the Irgun (at the time the IDF was in the process of integrating various pre-independence political factions), Ben-Gurion ordered that the arms be confiscated by force. After some miscommunication, the army was ordered by Ben-Gurion to sink the ship. Sixteen Irgun members and three IDF soldiers were killed in the fighting.Dov Joseph (1960), p. 239
UN mediator Bernadotte
The ceasefire was overseen by UN mediator Folke Bernadotte and a team of UN observers made up of army officers from Belgium, the United States, Sweden and France.
Bernadotte was voted in by the General Assembly to "assure the safety of the holy places, to safeguard the well-being of the population, and to promote 'a peaceful adjustment of the future situation of Palestine'". Bernadotte reported:
During the period of the truce, three violations occurred ... of such a serious nature:
the attempt by ...the Irgun Zvai Leumi to bring war materials and immigrants, including men of military age, into Palestine aboard the ship Altalena on 21 June...
Another truce violation occurred through the refusal of Egyptian forces to permit the passage of relief convoys to Jewish settlements in the Negeb...
The third violation of the truce arose as a result of the failure of the Transjordan and Iraqi forces to permit the flow of water to Jerusalem.Security Council, S/1025, 5 October 1948, Report by the United Nations Mediator on the Observation of the Truce in Palestine in the Period from 11 June to 9 July 1948
After the truce was in place, Bernadotte began to address the issue of achieving a political settlement. The main obstacles in his opinion were "the Arab world's continued rejection of the existence of a Jewish state, whatever its borders; Israel's new 'philosophy', based on its increasing military strength, of ignoring the partition boundaries and conquering what additional territory it could; and the emerging Palestinian Arab refugee problem".
Taking all the issues into account, Bernadotte presented a new partition plan. He proposed that a Palestinian Arab state be established alongside Israel and that a "Union" "be established between the two sovereign states of Israel and Jordan (which now included the West Bank); that the Negev, or part of it, be included in the Arab state and that Western Galilee, or part of it, be included in Israel; that the whole of Jerusalem be part of the Arab state, with the Jewish areas enjoying municipal autonomy and that Lydda Airport and Haifa be 'free ports' – presumably free of Israeli or Arab sovereignty". Israel rejected the proposal, in particular the aspect of losing control of Jerusalem, but agreed to extend the truce for another month. The Arabs rejected both the extension of the truce and the proposal.
Second phase: 8–18 July 1948 ("Ten Day Battles")
On 8 July, the day before the expiration of the truce, Egyptian forces under General Muhammad Naguib renewed the war by attacking Negba.Alfred A. Knopf. A History of Israel from the Rise of Zionism to Our Time. New York. 1976. p. 330. The following day, the Israeli air force launched a simultaneous offensive on all three fronts, ranging from Quneitra to Arish, and the Egyptian air force bombed the city of Tel Aviv.Gelber, 2006; Kinneret, p. 226 During the fighting, the Israelis were able to open a lifeline to a number of besieged kibbutzim.
The fighting continued for ten days until the UN Security Council issued the Second Truce on 18 July. During those 10 days, the fighting was dominated by large-scale Israeli offensives and a defensive posture from the Arab side.
Southern front
In the south, the IDF carried out several offensives, including Operation An-Far and Operation Death to the Invader. The task of the 11th Brigade's 1st Battalion on the southern flank was to capture villages, and its operation ran smoothly, with little resistance from local irregulars. According to Amnon Neumann, a Palmach veteran of the Southern front, hardly any Arab villages in the south fought back, owing to the miserable poverty of their means and their lack of weapons, and their inhabitants suffered expulsion.Gideon Levy and Alex Levac, 'Drafting the blueprint for Palestinian refugees' right of return,' Haaretz, 4 October 2013: 'In all the Arab villages in the south almost nobody fought. The villagers were so poor, so miserable, that they didn't even have weapons ... The flight of these residents began when we started to clean up the routes used by those accompanying the convoys. Then we began to expel them, and in the end they fled on their own.' What slight resistance was offered was quelled by an artillery barrage, followed by the storming of the village, whose residents were expelled and houses destroyed.David Tal, War in Palestine, 1948: Israeli and Arab Strategy and Diplomacy, Routledge 2004, p. 307.
On 12 July, the Egyptians launched an offensive action, and again attacked Negba, which they had previously failed to capture, using three infantry battalions, an armoured battalion, and an artillery regiment. In the battle that followed, the Egyptians were repulsed, suffering 200–300 casualties, while the Israelis lost 5 dead and 16 wounded.Herzog and Gazit, 2005, p. 86
After failing to take Negba, the Egyptians turned their attention to more isolated settlements and positions. On 14 July, an Egyptian attack on Gal On was driven off by a minefield and by resistance from Gal On's residents.Lorch, Netanel (1998). History of the War of Independence
The Egyptians then assaulted the lightly defended village of Be'erot Yitzhak. The Egyptians managed to penetrate the village perimeter, but the defenders concentrated in an inner position in the village and fought off the Egyptian advance until IDF reinforcements arrived and drove out the attackers. The Egyptians suffered an estimated 200 casualties, while the Israelis had 17 dead and 15 wounded. The battle was one of Egypt's last offensive actions during the war, and the Egyptians did not attack any Israeli villages following this battle.
Operation Dani
Israeli Operation Dani was the most important Israeli offensive, aimed at securing and enlarging the corridor between Jerusalem and Tel Aviv by capturing the roadside cities Lod (Lydda) and Ramle. In a second planned stage of the operation the fortified positions of Latrun – overlooking the Tel Aviv-Jerusalem highway – and the city of Ramallah were also to be captured. Hadita, near Latrun, was captured by the Israelis at a cost of nine dead.
The objectives of Operation Dani were to capture territory east of Tel Aviv and then to push inland and relieve the Jewish population and forces in Jerusalem. Lydda had become an important military center in the region, lending support to Arab military activities elsewhere, and Ramle was one of the main obstacles blocking Jewish transportation. Lydda was defended by a local militia of around 1,000 residents, with an Arab Legion contingent of 125–300.Kadish, Alon, and Sela, Avraham. (2005) "Myths and historiography of the 1948 Palestine War revisited: the case of Lydda," The Middle East Journal, 22 September 2005; and Khalidi, Walid. (1998) Introduction to Munayyer, Spiro. The fall of Lydda. Journal of Palestine Studies, Vol. 27, No. 4, pp. 80–98.
On 10 July, Glubb Pasha ordered the defending Arab Legion troops to "make arrangements...for a phony war".
Lydda and al-Ramla
The IDF forces gathered to attack the city numbered around 8,000. It was the first operation where several brigades were involved. The city was attacked from the north via Majdal al-Sadiq and al-Muzayri'a, and from the east via Khulda, al-Qubab, Jimzu and Daniyal. Bombers were also used for the first time in the conflict to bombard the city. The IDF captured the city on 11 July.
The civilian populations of Lydda and Ramle (50,000-70,000 people) were violently expelled. Hundreds of Palestinians were killed in multiple mass killing events in Lydda, and many were expelled without the provision of transport vehicles, as had been done in Ramle; many of the evictees died on the long walk under the hot July sun.
Battle of Latrun
On 15–16 July, an attack on Latrun took place but did not manage to occupy the fort. A desperate second attempt occurred on 18 July by units from the Yiftach Brigade equipped with armoured vehicles, including two Cromwell tanks, but that attack also failed. Despite the second truce, which began on 18 July, the Israeli efforts to conquer Latrun continued until 20 July.
Jerusalem
Operation Kedem's aim was to secure the Old City of Jerusalem, but relatively few resources were allocated to it, and the operation failed. Originally the operation was to begin on 8 July, immediately after the first truce, and to be carried out by Irgun and Lehi forces. However, it was delayed by David Shaltiel, possibly because he did not trust their ability after their failure to capture Deir Yassin without Haganah assistance.
Irgun forces commanded by Yehuda Lapidot were to break through at the New Gate, Lehi was to break through the wall stretching from the New Gate to the Jaffa Gate, and the Beit Horon Battalion was to strike from Mount Zion.
The battle was planned to begin on the Shabbat, at 20:00 on 16 July, two days before the second ceasefire of the war. The plan went wrong from the beginning and was postponed first to 23:00 and then to midnight. It was not until 02:30 that the battle actually began. The Irgun managed to break through at the New Gate, but the other forces failed in their missions. At 05:45 on 17 July, Shaltiel ordered a retreat and to cease hostilities.
On 14 July 1948, Irgun occupied the Arab village of Malha after a fierce battle. Several hours later, the Arabs launched a counterattack, but Israeli reinforcements arrived, and the village was retaken at a cost of 17 dead.
Southern Galilee
The second plan was Operation Dekel, which was aimed at capturing the Lower Galilee including Nazareth. Nazareth was captured on 16 July, and by the time the second truce took effect at 19:00 18 July, the whole Lower Galilee from Haifa Bay to the Sea of Galilee was captured by Israel.
Eastern Galilee
Operation Brosh was launched in a failed attempt to dislodge Syrian forces from the Eastern Galilee and the Benot Yaakov Bridge. During the operation, 200 Syrians and 100 Israelis were killed.
Second truce: 18 July – 15 October 1948
At 19:00 on 18 July, the second truce of the conflict went into effect after intense diplomatic efforts by the UN.
On 16 September, Count Folke Bernadotte proposed a new partition for Palestine in which the Negev would be divided between Jordan and Egypt, and Jordan would annex Lydda and Ramla. There would be a Jewish state in the whole of Galilee, with the frontier running from Faluja northeast towards Ramla and Lydda. Jerusalem would be internationalised, with municipal autonomy for the city's Jewish and Arab inhabitants; the Port of Haifa would be a free port, and Lydda Airport would be a free airport. All Palestinian refugees would be granted the right of return, and those who chose not to return would be compensated for lost property. The UN would control and regulate Jewish immigration.
The plan was once again rejected by both sides. On the next day, 17 September, Bernadotte was assassinated in Jerusalem by the militant Zionist group Lehi. A four-man team ambushed Bernadotte's motorcade in Jerusalem, killing him and a French UN observer sitting next to him. Lehi saw Bernadotte as a British and Arab puppet, and thus a serious threat to the emerging State of Israel, and feared that the provisional Israeli government would accept the plan, which it considered disastrous. Unbeknownst to Lehi, the government had already decided to reject it and resume combat in a month. Bernadotte's deputy, American Ralph Bunche, replaced him.A. Ilan, Bernadotte in Palestine, 1948 (Macmillan, 1989) p. 194
J. Bowyer Bell, Assassination in International Politics, International Studies Quarterly, vol. 16, March 1972, pp. 59–82.
On 22 September 1948, the Provisional State Council of Israel passed the Area of Jurisdiction and Powers Ordinance, 5708–1948. The law officially added to Israel's size by annexing all land it had captured since the war began. It also declared that from then on, any part of Palestine captured by the Israeli army would automatically become part of Israel.
Little triangle pocket
The Arab villagers of the area known as the "Little Triangle" south of Haifa repeatedly fired at Israeli traffic along the main road from Tel Aviv to Haifa and were supplied by the Iraqis from northern Samaria. The sniping at traffic continued during the Second Truce. The poorly planned assaults on 18 June and 8 July had failed to dislodge Arab militia from their superior positions. The Israelis launched Operation Shoter on 24 July in order to gain control of the main road to Haifa and to destroy all the enemy in the area.
Israeli assaults on 24 and 25 July were beaten back with stiff resistance. The Israelis then broke the Arab defences with a combined infantry and armour assault backed by heavy shelling and bombing. Three Arab villages surrendered, and most of the inhabitants fled before and during the attack, reaching northern Samaria; hundreds were forcibly expelled during the following days. Israeli soldiers and aircraft struck one of the Arab retreat routes, killing 60 Arab soldiers. At least a hundred militiamen and civilians were killed.
The Arabs claimed that the Israelis had massacred Arab civilians, but the Israelis rejected the claims. A United Nations investigation found no evidence of a massacre. Following the operation, the Tel Aviv-Haifa road was opened to Israeli military and civilian traffic, and Arab roadblocks along the route were removed. Traffic along the Haifa-Hadera coastal railway was also restored.
Third phase: 15 October 1948 – 10 March 1949
Israel launched a series of military operations to drive out the Arab armies and secure the northern and southern borders of Israel.
Northern front – Galilee
On 22 October, the third truce went into effect.Shapira, Anita, Yigal Allon, Native Son: A Biography, translated by Evelyn Abel, University of Pennsylvania Press, p. 247 Irregular Arab forces refused to recognise the truce, and continued to harass Israeli forces and settlements in the north. On the same day that the truce came into effect, the Arab Liberation Army violated it by attacking Manara, capturing the strongpoint of Sheikh Abed, repulsing counterattacks by local Israeli units, and ambushing Israeli forces attempting to relieve Manara. The IDF's Carmeli Brigade lost 33 dead and 40 wounded.Gelber, 2006, p. 33 Manara and Misgav Am were totally cut off, and Israel's protests at the UN failed to change the situation.
On 24 October, the IDF launched Operation Hiram and captured the entire upper Galilee area, driving the ALA back to Lebanon, and ambushing and destroying an entire Syrian battalion. The Israeli force of four infantry brigades was commanded by Moshe Carmel. The entire operation lasted just 60 hours, during which numerous villages were captured, often after locals or Arab forces put up resistance. Arab losses were estimated at 400 dead and 550 taken prisoner, with low Israeli casualties.
Some prisoners were reportedly executed by the Israeli forces. An estimated 50,000 Palestinian refugees fled into Lebanon, some of them fleeing ahead of the advancing forces, and some expelled from villages which had resisted, while the Arab inhabitants of those villages which had remained at peace were allowed to remain and became Israeli citizens. The villagers of Iqrit and Birim were persuaded to leave their homes by Israeli authorities, who promised them that they would be allowed to return. Israel eventually decided not to allow them to return, and offered them financial compensation, which they refused to accept.
At the end of the month, the IDF had captured the whole of Galilee, driven all ALA forces out of Israel, and had advanced into Lebanon to the Litani River, occupying thirteen Lebanese villages. In the village of Hula, two Israeli officers killed between 35 and 58 prisoners as retaliation for the Haifa Oil Refinery massacre. Both officers were later put on trial for their actions.
Negev
Invading the West Bank, however, might have brought into the borders of the expanding State of Israel a massive Arab population it could not absorb, whereas the Negev desert was an empty space for expansion, so the main war effort shifted to the Negev from early October.Shlomo Ben-Ami (2006), pp. 41–42 Israel decided to destroy, or at least drive out, the Egyptian expeditionary force, since the Egyptian front lines were too vulnerable to serve as permanent borders.
On 15 October, the IDF launched Operation Yoav in the northern Negev. Its goal was to drive a wedge between the Egyptian forces along the coast and the Beersheba-Hebron-Jerusalem road, and ultimately to conquer the whole Negev. This was a special concern for the Israelis because of a British diplomatic campaign to have the entire Negev handed over to Egypt and Jordan, which made Ben-Gurion anxious to have Israeli forces in control of the Negev as soon as possible.
Operation Yoav was headed by the Southern Front commander Yigal Allon. Committed to Yoav were three infantry brigades and one armoured brigade, which were given the task of breaking through the Egyptian lines. The Egyptian positions were badly weakened by the lack of a defence in depth, which meant that once the IDF had broken through the Egyptian lines, there was little to stop them. The operation was a huge success, shattering the Egyptian ranks and forcing the Egyptian Army from the northern Negev, Beersheba and Ashdod.
In the so-called "Faluja Pocket", an encircled Egyptian force was able to hold out for four months until the 1949 Armistice Agreements, when the village was peacefully transferred to Israel and the Egyptian troops left. Four warships of the Israeli Navy provided support by bombarding Egyptian shore installations in the Ashkelon area, and preventing the Egyptian Navy from evacuating retreating Egyptian troops by sea.
On 19 October, Operation Ha-Har began in the Jerusalem Corridor, while a naval battle also took place near Majdal, with three Israeli corvettes facing an Egyptian corvette with air support. An Israeli sailor was killed and four wounded, and two of the ships were damaged. One Egyptian plane was shot down, but the corvette escaped. Israeli naval vessels also shelled Majdal on 17 October, and Gaza on 21 October, with air support from the Israeli Air Force. The same day, the IDF captured Beersheba, and took 120 Egyptian soldiers prisoner. On 22 October, Israeli naval commandos using explosive boats sank the Egyptian flagship Emir Farouk, and damaged an Egyptian minesweeper.
On 9 November 1948, the IDF launched Operation Shmone to capture the Tegart fort in the village of Iraq Suwaydan. The fort's Egyptian defenders had previously repulsed eight attempts to take it, including two during Operation Yoav. Israeli forces bombarded the fort before an assault with artillery and airstrikes by B-17 bombers. After breaching the outlying fences without resistance, the Israelis blew a hole in the fort's outer wall, prompting the 180 Egyptian soldiers manning the fort to surrender without a fight. The defeat prompted the Egyptians to evacuate several nearby positions, including hills the IDF had failed to take by force. Meanwhile, IDF forces took Iraq Suwaydan itself after a fierce battle, losing 6 dead and 14 wounded.
From 5 to 7 December, the IDF conducted Operation Assaf to take control of the Western Negev. The main assaults were spearheaded by mechanised forces, while Golani Brigade infantry covered the rear. An Egyptian counterattack was repulsed. The Egyptians planned another counterattack, but it failed after Israeli aerial reconnaissance revealed Egyptian preparations, and the Israelis launched a preemptive strike. About 100 Egyptians were killed, and 5 tanks were destroyed, with the Israelis losing 5 killed and 30 wounded.
On 22 December, the IDF launched Operation Horev, also called Operation Ayin. The goal of the operation was to drive all remaining Egyptian forces from the Negev, destroying the Egyptian threat on Israel's southern communities and forcing the Egyptians into a ceasefire. During five days of fighting, the Israelis secured the Western Negev, expelling all Egyptian forces from the area.
Israeli forces subsequently launched raids into the Nitzana area, and entered the Sinai Peninsula on 28 December. The IDF captured Umm Katef and Abu Ageila, and advanced north towards Al Arish, with the goal of encircling the entire Egyptian expeditionary force. Israeli forces pulled out of the Sinai on 2 January 1949 following joint British-American pressure and a British threat of military action. IDF forces regrouped at the border with the Gaza Strip. Israeli forces attacked Rafah the following day, and after several days of fighting, Egyptian forces in the Gaza Strip were surrounded. The Egyptians agreed to negotiate a ceasefire on 7 January, and the IDF subsequently pulled out of Gaza. According to Morris, this reflected "the inequitable and unfair rules of engagement: the Arabs could launch offensives with impunity, but international interventions always hampered and restrained Israel's counterattacks."Morris 2008, p. 404
On 28 December, the Alexandroni Brigade failed to take the Faluja Pocket, but managed to seize Iraq el-Manshiyeh and temporarily hold it. When the Egyptians counterattacked, the Israelis mistook the advancing force for friendly troops and allowed it to approach, trapping a large number of Israeli soldiers. The Israelis lost 87 soldiers.
On 5 March, Operation Uvda was launched following nearly a month of reconnaissance, with the goal of securing the Southern Negev from Jordan. The IDF entered and secured the territory without meeting significant resistance; the area had already been designated part of the Jewish state in the UN Partition Plan, and the operation was meant to establish Israeli sovereignty over the territory rather than to conquer it. The Golani, Negev, and Alexandroni brigades participated in the operation, together with some smaller units and with naval support.
On 10 March, Israeli forces secured the Southern Negev, reaching the southern tip of Palestine: Umm Rashrash on the Red Sea (where Eilat was built later) and taking it without a battle. Israeli soldiers raised a hand-made Israeli flag ("The Ink Flag") at 16:00 on 10 March, claiming Umm Rashrash for Israel. The raising of the Ink Flag is considered to be the end of the war.
Anglo-Israeli air clashes
As the fighting progressed and Israel mounted an incursion into the Sinai, the Royal Air Force began conducting almost daily reconnaissance missions over Israel and the Sinai. RAF reconnaissance aircraft took off from Egyptian airbases and sometimes flew alongside Royal Egyptian Air Force planes. High-flying British aircraft frequently flew over Haifa and Ramat David Airbase, and became known to the Israelis as the "shuftykeit."
On 20 November 1948, an unarmed RAF photo-reconnaissance De Havilland Mosquito of No. 13 Squadron RAF was shot down by an Israeli Air Force P-51 Mustang flown by American volunteer Wayne Peake as it flew over the Galilee towards Hatzor Airbase. Peake opened fire with his cannons, causing a fire to break out in the port engine. The aircraft turned to sea and lowered its altitude, then exploded and crashed off Ashdod. The pilot and navigator were both killed.Aloni, 2001, p. 18.
Just before noon on 7 January 1949, four Spitfire FR18s from No. 208 Squadron RAF on a reconnaissance mission in the Deir al-Balah area flew over an Israeli convoy that had been attacked by five Egyptian Spitfires fifteen minutes earlier. The pilots had spotted smoking vehicles and were drawn to the scene out of curiosity. Two planes dived to below 500 feet altitude to take pictures of the convoy, while the remaining two covered them from 1,500 feet.Aloni, 2001, p. 22.
Israeli soldiers on the ground, alerted by the sound of the approaching Spitfires and fearing another Egyptian air attack, opened fire with machine guns. One Spitfire was shot down by a tank-mounted machine gun, while the other was lightly damaged and rapidly pulled up. The remaining three Spitfires were then attacked by patrolling IAF Spitfires flown by Chalmers Goodlin and John McElroy, volunteers from the United States and Canada respectively. All three Spitfires were shot down, and one pilot was killed.
Two pilots were captured by Israeli soldiers and taken to Tel Aviv for interrogation, and were later released. Another was rescued by Bedouins and handed over to the Egyptian Army, which turned him over to the RAF. Later that day, four RAF Spitfires from the same squadron escorted by seven Hawker Tempests from No. 213 Squadron RAF and eight from No. 6 Squadron RAF went searching for the lost planes, and were attacked by four IAF Spitfires. The Israeli formation was led by Ezer Weizman. The remaining three were manned by Weizman's wingman Alex Jacobs and American volunteers Bill Schroeder and Caesar Dangott.
The Tempests found they could not jettison their external fuel tanks, and some had non-operational guns. Schroeder shot down a British Tempest, killing pilot David Tattersfield, and Weizman severely damaged a British plane flown by Douglas Liquorish. Weizman's own plane and two more British aircraft suffered light damage during the engagement. During the battle, British Tempest pilots treated the British Spitfires as potential Israeli aircraft until the Spitfire pilots were told by radio to waggle their wings to make themselves more clearly identifiable. The engagement ended when the Israelis realised the danger of their situation and disengaged, returning to Hatzor Airbase.
Israeli prime minister David Ben-Gurion personally ordered the wrecks of the RAF fighters that had been shot down to be dragged into Israeli territory. Israeli troops subsequently visited the crash sites, removed various parts, and buried what remained of the aircraft. However, the Israelis did not manage to conceal the wrecks in time to prevent British reconnaissance planes from photographing them. An RAF salvage team was deployed to recover the wrecks, entering Israeli territory during its search. Two of the wrecks were discovered inside Egypt, while Tattersfield's Tempest was found north of Nirim, inside Israel. Interviews with local Arabs confirmed that the Israelis had visited the crash sites to remove and bury the wrecks. Tattersfield was initially buried near the wreckage, but his body was later removed and reburied at the British War Cemetery in Ramla.Cohen, Michael Joseph: Truman and Israel (1990)
In response, the RAF readied all Tempests and Spitfires to attack any IAF aircraft they encountered and bomb IAF airfields. British troops in the Middle East were placed on high alert with all leave cancelled, and British citizens were advised to leave Israel. The Royal Navy was placed on high alert. At Hatzor Airbase, the general consensus among the pilots, most of whom had flown with or alongside the RAF during World War II, was that the RAF would not allow the loss of five aircraft and two pilots to go without retaliation, and would probably attack the base at dawn the next day. That night, in anticipation of an impending British attack, some pilots decided not to offer any resistance and left the base, while others prepared their Spitfires and were strapped into the cockpits at dawn, preparing to repel a retaliatory airstrike. However, despite pressure from the squadrons involved in the incidents, British commanders refused to authorise any retaliatory strikes.Adrian, p. 7
The day following the incident, British pilots were issued a directive to regard any Israeli aircraft infiltrating Egyptian or Jordanian airspace as hostile and to shoot them down, but were also ordered to avoid activity close to Israel's borders. Later in January 1949, the British managed to prevent the delivery of aviation spirit and other essential fuels to Israel in retaliation for the incident. The British Foreign Office presented the Israeli government with a demand for compensation over the loss of personnel and equipment.Adrian, p. 59
UN Resolution 194
In December 1948, the UN General Assembly passed Resolution 194. It called for the establishment of a UN Conciliation Commission to facilitate peace between Israel and the Arab states. However, many of the resolution's articles were not fulfilled, since they were opposed by Israel, rejected by the Arab states, or overshadowed by war as the 1948 conflict continued.
Weapons
Both sides used weapons that had been employed by British and French forces in World War II. Egypt's arsenal included leftover British equipment, while the Syrian arsenal included leftover French weaponry. The Israel Defense Forces used an array of British, American, French and Czechoslovak military equipment. In addition, the IDF fielded several Davidka mortars, a domestically produced Israeli weapon.
According to Amitzur Ilan, "Israel's ability to cope better with the embargo situation was, by far, her greatest strategic asset."
Tanks – Arab armies: Matilda tanks, R-39s, FT-17s, R35s, Panzer IVs (dug in and used as stationary gun emplacements by Egypt), Fiat M13/40, Sherman M4, M-22, Vickers MK-6. IDF: Cromwell tanks, H39s, M4 Shermans.
APCs/IFVs – Arab armies: British World War II era trucks, Humber Mk III & IV, Automitrailleuses Dodge/Bich type, improvised armored cars/trucks, Marmon-Herrington Armoured Cars, Universal Carriers, Lloyd Towing Carriers. IDF: British World War II era trucks, improvised armored cars/trucks, White M3A1 Scout Cars, Daimler Armoured Cars, Universal Carriers, M3 Half-tracks, IHC M14 Half-tracks, M5 Half-tracks.
Artillery – Arab armies: Mortars, 15 cm sIG33 auf Pz IIs, 25 mm anti-tank guns on Bren carriers, improvised self-propelled guns used by Syrians in 1948–49, 65 mm mountain guns on Lorraine 38L chenillettes, 2-pounder anti-tank guns, 6-pounder anti-tank guns. IDF: Mortars, British mortars, 65 mm French howitzers (Napoleonchiks), 120 mm French mortars, Davidka mortars.
Aircraft – Arab armies: Spitfires, T-6 Texans, C-47 Dakotas, Hawker Hurricanes, Avro Ansons. IDF: Spitfires, Avia S-199s, B-17 Flying Fortresses, P-51 Mustangs, C-47 Dakotas.
Small arms – Arab armies: Lee–Enfield rifles, Bren Guns, Sten guns, MAS 36s. IDF: Sten guns, Mills grenades, Karabiner 98k (Czech copies), Bren Guns, MP 40s, MG-34 machine guns, Thompson submachine guns, Lee–Enfield rifles, Molotov cocktails, PIAT anti-tank infantry weapons.
Aftermath
1949 Armistice Agreements
In 1949, Israel signed separate armistices with Egypt on 24 February, Lebanon on 23 March, Transjordan on 3 April, and Syria on 20 July. The Armistice Demarcation Lines, as set by the agreements, left the territory under Israeli control encompassing approximately three-quarters of the previously British-administered Mandate as it stood after Transjordan's independence in 1946. Israel controlled about one-third more territory than had been allocated to the Jewish State under the UN partition proposal.L. Carl Brown (2013), p. 126. After the armistices, Israel had control over 78% of the territory comprising former Mandatory Palestine, including the entire Galilee and Jezreel Valley in the north, the whole Negev in the south, West Jerusalem and the coastal plain in the center.
The armistice lines were known afterwards as the "Green Line". The Gaza Strip and the West Bank (including East Jerusalem) were occupied by Egypt and Transjordan respectively. The United Nations Truce Supervision Organization and Mixed Armistice Commissions were set up to monitor ceasefires, supervise the armistice agreements, prevent isolated incidents from escalating, and assist other UN peacekeeping operations in the region.
Just before the signing of the Israel-Transjordan armistice agreement, General Yigal Allon proposed a military offensive to conquer the West Bank up to the Jordan River, which he regarded as the natural, defensible border of the state. Ben-Gurion refused, although he was aware that the IDF was militarily strong enough to carry out the conquest. He feared the reaction of the Western powers, wanted to maintain good relations with the United States, and did not wish to provoke the British. Moreover, the results of the war were already satisfactory, and Israeli leaders now had a state to build.
Casualties
Israel lost 6,373 of its people, about 1% of its population at the time, in the war. About 4,000 were soldiers and the rest were civilians.
The exact number of Arab casualties is unknown. One estimate places the Arab death toll at 7,000, including 3,000 Palestinians, 2,000 Egyptians, 1,000 Jordanians, and 1,000 Syrians. In 1958, Palestinian historian Aref al-Aref calculated that the Arab armies' combined losses amounted to 3,700, with Egypt losing 961 regular and 200 irregular soldiers and Transjordan losing 362 regulars and 200 irregulars. According to Henry Laurens, the Palestinians suffered double the Jewish losses, with 13,000 dead, 1,953 of whom are known to have died in combat situations. Of the remainder, 4,004 remain nameless, though the place, tally and date of their deaths are known; for a further 7,043, only the place of death is known, not their identities or the dates of their deaths. According to Laurens, the largest part of the Palestinian casualties were non-combatants, and their deaths corresponded to the Israelis' successful operations.Laurens 2007 p. 194
Demographic outcomes
Arabs
During the 1947–1948 Civil War in Mandatory Palestine and the 1948 Arab–Israeli War that followed, around 750,000 Palestinian Arabs fled or were expelled from their homes, out of approximately 1,200,000 Arabs living in former British Mandate of Palestine, a displacement known to Palestinians as the Nakba. In 1951, the UN Conciliation Commission for Palestine estimated that the number of Palestinian refugees displaced from Israel was 711,000.General Progress Report and Supplementary Report of the United Nations Conciliation Commission for Palestine, Covering the Period from 11 December 1949 to 23 October 1950 , published by the United Nations Conciliation Commission, 23 October 1950. (U.N. General Assembly Official Records, 5th Session, Supplement No. 18, Document A/1367/Rev. 1)
This number did not include displaced Palestinians inside Israeli-held territory. More than 400 Arab villages, and about ten Jewish villages and neighbourhoods, were depopulated during the Arab–Israeli conflict, most of them during 1948. According to an estimate based on an earlier census, the total Muslim population in Palestine was 1,143,336 in 1947.Government of Palestine, A Survey of Palestine, Supplement, p. 10 (1946) The causes of the 1948 Palestinian exodus are a controversial topic among historians. After the war, around 156,000 Arabs remained in Israel and became Israeli citizens.
Displaced Palestinian Arabs, known as Palestinian refugees, were settled in Palestinian refugee camps throughout the Arab world. The United Nations established UNRWA as a relief and human development agency tasked with providing humanitarian assistance to Palestinian refugees. Arab nations refused to absorb Palestinian refugees, instead keeping them in refugee camps while insisting that they be allowed to return.
Refugee status was also passed on to their descendants, who were also largely denied citizenship in Arab states, except in Transjordan. The Arab League instructed its members to deny Palestinians citizenship "to avoid dissolution of their identity and protect their right of return to their homeland." More than 1.4 million Palestinians still live in 58 recognised refugee camps, while more than 5 million Palestinians live outside Israel and the Palestinian territories.
Palestinian refugees and displaced persons and the lack of a Palestinian right of return remain major issues in the Arab–Israeli conflict.
Jews
In the three years from May 1948 to the end of 1951, 700,000 Jews settled in Israel, mainly along the borders and in former Arab lands, doubling the Jewish population there. Of these, upwards of 300,000 arrived from Asian and North African states.Sachar, pp. 395–403.Devorah Hakohen, Immigrants in Turmoil: Mass Immigration to Israel and Its Repercussions in the 1950s and after, Syracuse University Press 2003 p. 267 Among them, the largest group, over 100,000, was from Iraq. The remainder came mostly from Europe, including 136,000 of the 250,000 displaced Jews of World War II living in refugee camps and urban centers in Germany, Austria, and Italy,Displaced Persons retrieved on 29 October 2007 from the U.S. Holocaust Museum. and more than 270,000 from Eastern Europe,Tom Segev, 1949. The First Israelis, Owl Books, 1986, p. 96. mainly Romania and Poland, which contributed over 100,000 each.
On the establishment of the state, a top priority was given to a policy for the "ingathering of exiles", and the Mossad LeAliyah Bet gave key assistance to the Jewish Agency to organise immigrants from Europe and the Middle East, and arrange for their transport to Israel. For Ben-Gurion, a fundamental defect of the State was that "it lacked Jews".Devorah Hakohen, Immigrants in Turmoil: Mass Immigration to Israel and Its Repercussions in the 1950s and after, Syracuse University Press 2003 pp. 24, 31, 42, 45.
Jewish immigrants from Arab and Muslim countries left for numerous reasons. The war's outcome had exacerbated Arab hostility toward local Jewish communities; news of the victory aroused messianic expectations in Libya and Yemen; Zionism had taken root in many countries; active incentives for making aliyah formed a key part of Israeli policy; and better economic prospects and security were expected from a Jewish state.
Some Arab governments, Egypt, for example, held their Jewish communities hostage at times. Persecution, political instability, and news of a number of violent pogroms also played a role. Some 800,000–1,000,000 Jews eventually left the Arab world over the next three decades as a result of these various factors. An estimated 650,000 of the departees settled in Israel.
Historiography
Since the war, different historiographical traditions have interpreted the events of 1948 differently; in the words of the New Historian Avi Shlaim, "each side subscribes to a different version of events." In the Israeli narrative, the war is Israel's War of Independence. In the Palestinian narrative, the War of 1948 is inextricable from the Nakba, the Zionist movement is one of settler colonialism, and the Israelis are seen as conquerors and the Palestinians as victims. The different narratives of 1948 reflect these different perceptions.
An issue affecting the historiography of 1948 is access to sources and archives, which may have been destroyed, appropriated, censored, or otherwise made unavailable to some or all researchers. Linguistic barriers represent another hurdle, as most research is published exclusively in the author's native language and is not translated.
The historiography of 1948 is tied to political legitimacy in the present and has implications for the Israeli–Palestinian conflict. According to Avraham Sela and Neil Caplan:
"A major reason for this grip of the past over the present is the unfulfilled quest of both Israelis and Palestinians for legitimacy, in one or more of the following three senses: (a) each party's sense of its own legitimacy as a national community entitled to its own sovereign state; (b) each party's willingness to grant legitimacy to at least part of the competing national narrative of the other; and (c) the international community's extension of legitimacy to the competing rights and claims of Israelis and Palestinians."
The narratives of 1948 have also had implications for Palestinian refugees.
Israeli narratives
The Israelis, whether or not they were conquerors, were irrefutably the victors of the war, and for this reason among others, "they were able to propagate more effectively than their opponents their version of this fateful war." Only in 1987 was that narrative effectively challenged outside the Arab world.
Zionist narrative
Avi Shlaim gives the conventional Zionist narrative or the "old history" of the 1948 war as follows:
"The conflict between Jews and Arabs in Palestine came to a head following the passage, on 29 November 1947, of the United Nations partition resolution that called for the establishment of two states, one Jewish and one Arab. The Jews accepted the U.N. plan despite the painful sacrifices it entailed, but the Palestinians, the neighboring Arab states, and the Arab League rejected it. Great Britain did everything in its power toward the end of the Palestine Mandate to frustrate the establishment of the Jewish state envisaged in the UN plan. With the expiry of the Mandate and the proclamation of the State of Israel, seven Arab states sent their armies into Palestine with the firm intention of strangling the Jewish state at birth. The subsequent struggle was an unequal one between a Jewish David and an Arab Goliath. The infant Jewish state fought a desperate, heroic, and ultimately successful battle for survival against overwhelming odds. During the war, hundreds of thousands of Palestinians fled to the neighboring Arab states, mainly in response to orders from their leaders and despite Jewish pleas to stay and demonstrate that peaceful coexistence was possible. After the war, the story continues, Israeli leaders sought peace with all their heart and all their might but there was no one to talk to on the other side. Arab intransigence was alone responsible for the political deadlock, which was not broken until President Anwar Sadat's visit to Jerusalem thirty years later."
According to Shlaim, this narrative is "not history in the proper sense of the word," as most of the literature on the war was produced not by professional academic historians but rather by participants in the war, politicians, soldiers, and state-sponsored historians, as well as by sympathetic journalists, chroniclers, and biographers. It also portrays Israelis as morally superior, lacks political analysis, and gives undue weight to "the heroic feats of the Israeli fighters." This nationalist narrative was taught in Israeli schools and used for gaining legitimacy internationally.
New Historians
The standard Zionist narrative of the war remained unchallenged outside the Arab world until the war's fortieth anniversary, when a number of critical books came out, including Simha Flapan's The Birth of Israel: Myths and Realities (1987), Benny Morris's The Birth of the Palestinian Refugee Problem (1987), Ilan Pappé's Britain and the Arab-Israeli Conflict, 1948–51 (1988), and Shlaim's Collusion Across the Jordan: King Abdullah, the Zionist Movement and the Partition of Palestine (1988). These writers came to be known as New Historians or "post-Zionists."
According to Shlaim, the new historians disagreed with the Zionist narrative on six main points: British policy with regard to the Yishuv at the end of the Palestine Mandate, the military balance in 1948, the origins of the Palestinian refugee problem, the nature of relations between Israelis and Jordanians during the war, Arab aims in the war, and the reasons peace remained elusive after the war.
Among their most vitriolic critics was Shabtai Teveth, biographer of David Ben-Gurion, who published "The New Historians," a series of four weekly full-page articles attacking the new historians, in Haaretz in May 1989. Teveth claimed that the new historiography was flawed in its practice and politically motivated, that it was pro-Palestinian and aimed to delegitimize the State of Israel.
Neo-Zionist narratives
Ilan Pappé identifies a turn in predominant Israeli narratives about 1948 in September 2000. In the climate of the Second Intifada and in the Post-9/11 period, "not only were Israel's brutal military operations against the Palestinians during the new intifada seen as justified, but so was their systematic expulsion in 1948." Evidence of the expulsions, massacres, and war crimes of 1948 brought to light by the New Historians could no longer be ignored, but writers of what Pappé calls a "neo-Zionist" narrative justified these as necessary or unavoidable.
In this period, the focus of Israeli historical writing on 1948 shifted largely from its human impact back to its military aspects. Neo-Zionist writers were given selective access to top-secret material, to which writers critical of Zionism would not have been given access, and much of their work was published by the Israeli Ministry of Defense.
Among those Pappé associated with the neo-Zionist perspective were Benny Morris (who had become more outspokenly defensive of Zionism by this time), Mordechai Bar-On, Yoav Gelber, Tamir Goren, and Alon Kadish, as well as the journal Techelet.
Palestinian narratives
Unlike Israeli narratives that shifted over the decades, Palestinian narratives of 1948 have been more or less constant, focusing on Palestinians' indigenous rights to Palestine, Palestinian victimhood, dispossession, displacement, exile, statelessness, and more "unrequited grievances against colonialism and Zionism." The term 'Nakba' to describe the Palestinian catastrophe in the war of 1948 was coined in Constantin Zureiq's 1948 book Ma'na an-Nakba. Aref al-Aref wrote a six-volume work on the subject that was published in Arabic in the 1950s.
Palestinian narratives have focused on countering the dominant Zionist narrative; the preeminent Palestinian historian of 1948 Walid Khalidi has dedicated much of his career to disproving the official Israeli narrative that the 1948 Palestinian expulsion and flight was voluntary.
Rashid Khalidi and other historians hold that "there is no established, authoritative Palestinian master narrative." They attribute this to, among other reasons, the dispersed and fragmented state of the Palestinian community and the loss, destruction, or appropriation by Israel of relevant documents and libraries. Without access to much in the way of archival materials, Palestinian historians have made use of oral history.
Arab narratives
In the narratives of the wider Arab-Muslim world, 1948 is seen as an "Arab debacle," representative of the region's social and political decline from its "glorious distant past." The official narratives of Arab states on 1948 tended to be apologetic, with the goal of defending their political legitimacy, while Arab nationalists wrote with a focus on distilling and extracting historical lessons to galvanize Arab society, politics, and ideology in preparation for the next conflict with Israel; neither approach bridled itself too much with historical accuracy.
Western narratives
In the United States
The American journalist Joan Peters' 1984 book From Time Immemorial had a massive impact on how 1948 was understood in popular and political narratives in the United States.Finkelstein, N. G. 1988. "Disinformation and the Palestine Question: The Not-So-Strange Case of Joan Peters's From Time Immemorial". In Blaming the Victims: Spurious Scholarship and the Palestinian Question, ed. E. W. Said and C. Hitchens, pp. 33–69. London: Verso.
Ilan Pappé asserts that the neo-Zionist narrative was pushed in the United States most passionately by Michael Walzer, and by Anita Shapira and Derek Penslar with their 2003 book Israeli Historical Revisionism: From Left to Right.
In popular culture
The 1948 Egyptian film A Girl from Palestine tells the story of an Egyptian fighter pilot.
A 2015 PBS documentary, A Wing and a Prayer, depicts the Al Schwimmer–led airborne smuggling missions to arm Israel.
See also
List of battles and operations in the 1948 Palestine war
List of modern conflicts in the Middle East
List of wars involving Israel
Notes
References
Morris, 2008, pp. 236, 237, 247, 253, 254
Bibliography
Works by involved parties
Dunkelman, Ben (1976) Dual Allegiance: An Autobiography. Macmillan Company of Canada, Toronto.
Kagan, Benjamin (1966) The Secret Battle for Israel. World Publishing, Cleveland.
Lorch, Netanel (1961) The Edge of the Sword: Israel's War of Independence, 1947–1949. New York, London: G. P. Putnam's Sons
Secondary sources
Adrian, Nathan (2004). Britain, Israel and Anglo-Jewry 1949–57. Routledge
Aloni, Shlomo (2001). Arab-Israeli Air Wars 1947–82. Osprey Publishing.
Beckman, Morris (1999). The Jewish Brigade: An Army With Two Masters, 1944–45. Sarpedon Publishers.
Ben-Ami, Shlomo (2006). Scars of War, Wounds of Peace: The Israeli-Arab Tragedy. Oxford University Press.
Benvenisti, Meron (2002). Sacred Landscape. University of California Press.
Bercuson, David (1983). The Secret Army. Stein and Day, New York.
Bickerton, Ian and Hill, Maria (2003). Contested Spaces: The Arab-Israeli Conflict. McGraw-Hill.
Black, Ian (1992). Israel's Secret Wars: A History of Israel's Intelligence Services. Grove Press.
Bowyer Bell, John (1996). Terror Out of Zion: The Fight For Israeli Independence. Transaction Publishers.
Bregman, Ahron (2002). Israel's Wars: A History Since 1947. London: Routledge.
Cragg, Kenneth. Palestine. The Prize and Price of Zion. Cassel, 1997.
Van Creveld, Martin (2004). Moshe Dayan. Weidenfeld & Nicolson.
Collins, Larry and Lapierre, Dominique (1973). O Jerusalem!, Pan Books.
El-Nawawy, Mohammed (2002). The Israeli-Egyptian Peace Process in the Reporting of Western Journalists. Ablex/Greenwood.
Flapan, Simha (1987), The Birth of Israel: Myths and Realities, Pantheon Books, New York.
Geddes, Charles L. (1991). A Documentary History of the Arab-Israeli Conflict. Praeger.
Gelber, Yoav (1997). Jewish-Transjordanian Relations 1921–48: Alliance of Bars Sinister. London: Routledge.
Gelber, Yoav (2004). Israeli-Jordanian Dialogue, 1948–1953: Cooperation, Conspiracy, or Collusion?. Sussex Academic Press.
Gelber, Yoav (2004) "Independence Versus Nakba"; Kinneret Zmora-Bitan Dvir Publishing,
Gelber, Yoav (2006). Palestine 1948. War, Escape and the Emergence of the Palestinian Refugee Problem. Sussex Academic Press.
Gershoni, Haim (1989). Israel: The Way it was. Associated University Presses.
Gilbert, Martin (1998). Israel: A History. Black Swan.
Gilbert, Martin (1976). The Arab-Israeli Conflict: Its History in Maps Weidenfeld & Nicolson.
Gold, Dore (2007). The Fight for Jerusalem: Radical Islam, the West, and the Future of the Holy City. Regnery Publishing.
Israel Foreign Ministry, Foreign Ministry of the Russian Federation, Israel State Archives, Russian Federal Archives, Cummings Center for Russian Studies Tel Aviv University, Oriental Institute (2000). Documents on Israeli Soviet Relations, 1941–53. London: Routledge.
Kaniuk, Yoram (2001). Commander of the Exodus. Grove Press.
Fischbach, Michael R. 'Land'. In Philip Mattar (ed.) Encyclopedia of the Palestinians, Infobase Publishing. 2005. pp. 291–298
Heller, Joseph. The Birth of Israel, 1945–1949: Ben-Gurion and His Critics, University Press of Florida, 2001
Katz, Sam (1988). Israeli Units Since 1948. Osprey Publishing.
Khalaf, Issa Politics in Palestine: Arab Factionalism and Social Disintegration, 1939–1948. SUNY Press, 1991
Khalidi, Rashid (2001). "The Palestinians and 1948: the underlying causes of failure." In Eugene Rogan and Avi Shlaim (eds.). The War for Palestine (pp. 12–36). Cambridge: Cambridge University Press.
Khalidi, Rashid (2006). The Iron Cage:The Story of the Palestinian Struggle for Statehood. Boston: Beacon Press.
Khalidi, Walid (1987). From Haven to Conquest: Readings in Zionism and the Palestine Problem Until 1948. Institute for Palestine Studies.
Khalidi, Walid (ed.) (1992). All that remains. Institute for Palestine Studies.
Krämer, Gudrun, A History of Palestine: From the Ottoman Conquest to the Founding of the State of Israel, Princeton UP 2011.
Krammer, Arnold. (1974). The Forgotten Friendship: Israel and the Soviet Bloc 1947–53. University of Illinois Press, Urbana.
Landis, Joshua. "Syria and the Palestine War: fighting King 'Abdullah's 'Greater Syria plan.'" Rogan and Shlaim. The War for Palestine. 178–205.
Levenberg, Haim (1993). Military Preparations of the Arab Community in Palestine: 1945–1948. London: Routledge.
Levin, Harry. Jerusalem Embattled – A Diary of the City under Siege. Cassels, 1997.
Lockman, Zachary. Comrades and Enemies: Arab and Jewish Workers in Palestine, 1906–1948. University of California Press, 1996
Makdisi Saree, Palestine Inside Out: An Everyday Occupation, W.W. Norton & Company 2010
Masalha, Nur (1992). Expulsion of the Palestinians: The Concept of 'Transfer' in Zionist Political Thought, 1882–1948. Institute for Palestine Studies.
Morris, Benny (1988), The Birth of the Palestinian Refugee Problem, 1947–1949, Cambridge Middle East Library
Morris, Benny (1994), 1948 and after; Israel and the Palestinians
Morris, Benny (2001). Righteous Victims: A History of the Zionist-Arab Conflict, 1881–2001. Vintage Books.
Morris, Benny (2004). The Birth of the Palestinian Refugee Problem Revisited. Cambridge University Press, Cambridge, UK.
Morris, Benny (2008). 1948: The First Arab-Israeli War. Yale University Press, New Haven.
Oring, Elliott (1981). Israeli Humor – The Content: The Content and Structure of the Chizbat of the Palmah. SUNY Press.
Pappe, Ilan (2006). The Ethnic Cleansing of Palestine. Oneworld Publications, Oxford, England.
Reiter, Yitzhak, "National Minority, Regional Majority: Palestinian Arabs Versus Jews in Israel" (Syracuse Studies on Peace and Conflict Resolution), (2009) Syracuse Univ Press (Sd).
Rogan, Eugene L. and Avi Shlaim, eds. The War for Palestine: Rewriting the History of 1948. Cambridge: Cambridge UP, 2001
Rogan, Eugene L. and Avi Shlaim, eds. The War for Palestine: Rewriting the History of 1948. 2nd ed. Cambridge: Cambridge UP, 2007
Rogan, Eugene L. "Jordan and 1948: the persistence of an official history." Rogan and Shlaim. The War for Palestine. pp. 104–124
Sadeh, Eligar (1997). Militarization and State Power in the Arab-Israeli Conflict: Case Study of Israel, 1948–1982. Universal Publishers.
Sachar, Howard M. (1979). A History of Israel, New York: Knopf.
Sayigh, Yezid (2000). Armed Struggle and the Search for State: The Palestinian National Movement, 1949–1993. Oxford: Oxford University Press.
Sela, Avraham. "Abdallah Ibn Hussein." The Continuum Political Encyclopedia of the Middle East. Ed. Avraham Sela. New York: Continuum, 2002. pp. 13–14.
Shapira, Anita (1992). Land and Power: Zionist Resort to Force, 1881–1948. Oxford University Press.
Sheleg, Yair (2001). "A Short History of Terror" Haaretz.
Shlaim, Avi (2001). "Israel and the Arab Coalition." In Eugene Rogan and Avi Shlaim (eds.). The War for Palestine (pp. 79–103). Cambridge: Cambridge University Press.
Slater, Leonard (1970). The Pledge. Simon and Schuster, New York.
Stearns, Peter N. Citation from The Encyclopedia of World History 6th ed., Peter N. Stearns (general editor), 2001 Houghton Mifflin Company, at Bartleby.com.
Tripp, Charles. "Iraq and the 1948 War: mirror of Iraq's disorder." in Rogan and Shlaim. The War for Palestine. pp. 125–150.
Zertal, Idith (2005). Israel's Holocaust and the Politics of Nationhood. Cambridge: Cambridge University Press.
Ancillary works
Brown, Judith M. and Louis, Wm. Roger (1999). The Oxford History of the British Empire. Oxford: Oxford University Press.
Flint, Colin. Introduction to Geopolitics, Routledge 2012
Karsh, Inari & Karsh, Efraim (1999). Empires of the Sand: The Struggle for Mastery in the Middle East, 1789–1923. Harvard University Press.
Penkower, Monty Noam (2002). Decision on Palestine Deferred: America, Britain and Wartime Diplomacy, 1939–1945. London: Routledge.
Oren, Michael (2003). Six Days of War. Random House Ballantine Publishing Group, New York.
Richelson, Jeffrey T. (1997). A Century of Spies: Intelligence in the Twentieth Century. Oxford: Oxford University Press.
Sicker, Martin (1999). Reshaping Palestine: From Muhammad Ali to the British Mandate, 1831–1922. Praeger/Greenwood.
External links
The Israeli Knesset: About the War of Independence
United Nations: System on the Question of Palestine
MidEastWeb: History of Palestine, Israel and the Israeli-Palestinian Conflict
The BBC on the UN Partition Plan
The BBC on the Formation of Israel
Avi Shlaim: Israel and the Arab Coalition in 1948
Second Boer War
https://en.wikipedia.org/wiki/Second_Boer_War
The Second Boer War (11 October 1899 – 31 May 1902), also known as the Boer War, Transvaal War, Anglo–Boer War, or South African War, was a conflict fought between the British Empire and the Boer republics (the South African Republic and the Orange Free State) over Britain's influence in Southern Africa.
The Witwatersrand Gold Rush caused an influx of "foreigners" (Uitlanders) into the South African Republic (SAR), mostly British subjects from the Cape Colony. As they were permitted to vote only after 14 years' residence, they protested to the British authorities in the Cape. Negotiations failed at the botched Bloemfontein Conference in June 1899, and the conflict broke out in October after the British government decided to send 10,000 troops to South Africa.
The war had three phases. In the first, the Boers mounted preemptive strikes into British-held territory in Natal and the Cape Colony, besieging British garrisons at Ladysmith, Mafeking, and Kimberley. The Boers won victories at Stormberg, Magersfontein, Colenso and Spion Kop. In the second phase, British fortunes changed when their commanding officer, General Redvers Buller, was replaced by Lord Roberts and Lord Kitchener, who relieved the besieged cities and invaded the Boer republics at the head of a 180,000-strong expeditionary force. The Boers, aware they were unable to resist such a force, refrained from fighting pitched battles, allowing the British to occupy both republics and their capitals. Boer politicians fled or went into hiding; the British annexed the two republics in 1900. In Britain, the Conservative ministry attempted to capitalise by calling an early general election, dubbed a "khaki election". In the third phase, Boer fighters launched a guerrilla campaign. They used hit-and-run attacks and ambushes against the British for two years.
The guerrilla campaign proved difficult for the British to defeat, owing to their unfamiliarity with guerrilla tactics and the support the Boer fighters enjoyed among civilians. British high command ordered scorched earth policies as part of a counterinsurgency campaign. Over 100,000 Boer civilians were forcibly relocated into concentration camps, where 26,000 died of starvation and disease. Black Africans were also interned to prevent them from supplying the Boers; 20,000 died. British mounted infantry were deployed to track down the guerrillas; relatively few combatants were killed in action, with most deaths caused by disease. Kitchener offered terms to the remaining Boer leaders to end the conflict. Eager to ensure that Boers were released from the camps, most Boer commanders accepted the terms of the Treaty of Vereeniging, surrendering in May 1902. The former republics were transformed into the British colonies of the Transvaal and the Orange River Colony, and in 1910 were merged with the Natal and Cape Colonies to form the Union of South Africa, a self-governing dominion within the British Empire.
British expeditionary efforts were aided significantly by colonial forces from the Cape Colony, the Natal, Rhodesia, and many volunteers from the British Empire. Black African recruits contributed increasingly to the British effort. International public opinion was sympathetic to the Boers and hostile to the British. Even within the UK, there existed significant opposition to the war. As a result, the Boer cause attracted volunteers from neutral countries, including the German Empire, US, Russia and parts of the British Empire such as Australia and Ireland. Some consider the war the beginning of questioning the British Empire's global dominance, due to the war's surprising duration and unforeseen losses suffered by the British. A trial for British war crimes, including the killings of civilians and prisoners, was opened in January 1902. The war had a lasting effect on the region and on British domestic politics.
Name
The conflict is commonly referred to simply as "the Boer War" because the First Boer War (1880–1881) was much smaller. Boer (meaning "farmer") is the common name for Afrikaans-speaking white South Africans descended from the Dutch East India Company's settlers at the Cape of Good Hope. Among some South Africans, it is known as the (Second) Anglo–Boer War. In Afrikaans it is known by names meaning "Second Freedom War", "Second Boer War", "Anglo–Boer War" or "English War".
In South Africa, it is officially called the South African War. According to a 2011 BBC report, "most scholars prefer to call the war of 1899–1902 the South African War, thereby acknowledging that all South Africans, white and black, were affected by the war and that many were participants".
Origins
The war's origins were complex and stemmed from a century of conflict between the Boers and Britain. Of immediate importance, however, was the question of who would control and benefit most from the lucrative Witwatersrand gold mines discovered in 1884.
European settlement
The first European settlement in South Africa was founded at the Cape of Good Hope in 1652, and administered as part of the Dutch Cape Colony. As a result of political turmoil in the Netherlands, the British occupied the Cape three times during the Napoleonic Wars, and the occupation became permanent after the Battle of Blaauwberg in 1806. The colony was then home to about 26,000 colonists settled under Dutch rule.Entry: Cape Colony. Encyclopædia Britannica Volume 4 Part 2: Brain to Casting. Encyclopædia Britannica, Inc. 1933. James Louis Garvin, editor. Most represented old Dutch families brought to the Cape during the late 17th and early 18th centuries. Broadly speaking, the colonists included distinct subgroups, including the Boers. The Boers were itinerant farmers who lived on the colony's frontiers, seeking better pastures for their livestock. Many were dissatisfied with aspects of British administration, in particular with Britain's abolition of slavery in 1834. Boers who used forced labor were unable to collect compensation for their slaves.
Between 1836 and 1852, many elected to migrate away from British rule in what became known as the Great Trek. Around 15,000 trekking Boers departed the Cape Colony and followed the eastern coast towards Natal. After Britain annexed Natal in 1843, they journeyed farther north into South Africa's eastern interior. There, they established two independent Boer republics: the South African Republic (1852; also known as the Transvaal Republic) and the Orange Free State (1854).
Scramble for Africa
The southern part of Africa was dominated in the 19th century by a set of struggles to create within it a single unified state. In 1868, Britain annexed Basutoland in the Drakensberg Mountains, following an appeal from Moshoeshoe I, the king of the Sotho people, who sought British protection against the Boers. While the Berlin Conference of 1884–1885 sought to draw boundaries between the European powers' African possessions, it also set the stage for further scrambles. Britain attempted to annex first the South African Republic in 1880, and then, in 1899, both the South African Republic and the Orange Free State.
In the 1880s, Bechuanaland (modern Botswana) became the object of a dispute between the Germans to the west, the Boers to the east, and Britain's Cape Colony to the south. Although Bechuanaland had no economic value, the "Missionaries Road" passed through it towards territory farther north. After the Germans annexed Damaraland and Namaqualand (modern Namibia) in 1884, Britain annexed Bechuanaland in 1885.
In the First Boer War of 1880–1881 the Boers of the Transvaal Republic proved skilful fighters in resisting Britain's attempt at annexation, causing a series of British defeats. The British government of William Ewart Gladstone was unwilling to become mired in a distant war, requiring substantial troop reinforcement and expense, for what was perceived at the time to be a minimal return. An armistice ended the war, and subsequently a peace treaty was signed with the Transvaal President Paul Kruger.
Witwatersrand Gold Rush
In June 1884, British imperial interests were ignited by the discovery by Jan Gerrit Bantjes of what would prove to be the world's largest deposit of gold ore, at an outcrop on a ridge south of the Boer capital at Pretoria. The ridge was known locally as the "Witwatersrand" (white water ridge, a watershed). A gold rush to the Transvaal brought thousands of British and other prospectors from around the globe and over the border from the Cape Colony, which had been under British control since 1806.
Gold production on the Witwatersrand, 1898 to 1910 (relative 2010 values based on average earnings; Relative Value of a UK Pound Amount, retrieved on 27 January 2011). Year – No. of mines – Gold output (fine ounces) – Value (£) – Relative 2010 value (£):
1898 – 77 – 4,295,608 – £15,141,376 – £6,910,000,000
1899 (Jan–Oct) – 85 – 3,946,545 – £14,046,686 – £6,300,000,000
1899 (Nov)–1901 (Apr) – 12 – 574,043 – £2,024,278 – £908,000,000
1901 (May–Dec) – 12 – 238,994 – £1,014,687 – £441,000,000
1902 – 45 – 1,690,100 – £7,179,074 – £3,090,000,000
1903 – 56 – 2,859,482 – £12,146,307 – £5,220,000,000
1904 – 62 – 3,658,241 – £15,539,219 – £6,640,000,000
1905 – 68 – 4,706,433 – £19,991,658 – £8,490,000,000
The city of Johannesburg sprang up nearly overnight as a shanty town. Uitlanders (foreigners, white outsiders) poured in and settled around the mines. The influx was so rapid that uitlanders quickly outnumbered the Boers in Johannesburg and along the Rand, although they remained a minority in the Transvaal as a whole. The Boers, nervous and resentful of the uitlanders' growing presence, sought to contain their influence by requiring lengthy residential qualifying periods before voting rights could be obtained, by imposing taxes on the gold industry, and by introducing controls through licensing, tariffs and administrative requirements. Among the issues giving rise to tension between the Transvaal government on the one hand and the uitlanders and British interests on the other were:
Established uitlanders, including the mining magnates, wanted political, social, and economic control over their lives. These rights included a stable constitution, a fair franchise law, an independent judiciary and a better educational system. The Boers recognised that the more concessions they made to the uitlanders the greater the likelihood—with approximately 30,000 white male Boer voters and potentially 60,000 white male uitlanders—that their independent control of the Transvaal would be lost, and the territory absorbed into the British Empire.
The uitlanders resented the taxes levied by the Transvaal government, particularly when the money was not spent on Johannesburg or uitlander interests but diverted to projects elsewhere in the Transvaal. For example, as the gold-bearing ore sloped away from the outcrop underground to the south, more and more blasting was necessary to extract it, and the mines consumed vast quantities of explosives. A box of dynamite costing five pounds included five shillings of tax. The tax was perceived as exorbitant, and British interests were further offended when President Paul Kruger gave monopoly rights for the manufacture of the explosive to a non-British branch of the Nobel company. The so-called "dynamite monopoly" became a casus belli.
British imperial interests were alarmed when in 1894–95 Kruger proposed building a railway through Portuguese East Africa to Delagoa Bay, bypassing British-controlled ports in Natal and Cape Town and avoiding British tariffs. The Prime Minister of the Cape Colony was Cecil Rhodes, a man driven by a vision of a British-controlled Africa extending from the Cape to Cairo. Uitlander representatives and British mine owners became increasingly frustrated and angered by their dealings with the Transvaal government. A Reform Committee (Transvaal) was formed to represent the uitlanders.
Jameson Raid
In 1895, a plan to take Johannesburg, and end the control of the Transvaal government, was hatched with the connivance of Cape Prime Minister Rhodes and Johannesburg gold magnate Alfred Beit. A column of 600 armed men was led over the border from Bechuanaland towards Johannesburg by Leander Starr Jameson, the Administrator in Rhodesia of the British South Africa Company, of which Rhodes was the chairman. The column, mainly made up of Rhodesian and Bechuanaland British South Africa Policemen, was equipped with Maxim machine guns and artillery pieces.
The plan was to make a three-day dash to Johannesburg and trigger an uprising by the primarily British expatriate uitlanders, organised by the Johannesburg Reform Committee, before the Boer commandos could mobilise. However, the Transvaal authorities had warning of the raid and tracked it from when it crossed the border. Four days later, the dispirited column was surrounded near Krugersdorp, within sight of Johannesburg. After a skirmish in which the column lost 65 killed and wounded—while the Boers lost one man—Jameson's men surrendered and were arrested.
The botched raid had repercussions throughout southern Africa and Europe. In Rhodesia, the departure of so many policemen enabled the Matabele and Mashona peoples' rising against the British South Africa Company. The rebellion, known as the Second Matabele War, was suppressed only at a great cost.
A few days after the raid, the German Kaiser sent the "Kruger telegram", congratulating President Kruger and the government of the South African Republic on their success. When the text was disclosed in the British press, it generated a storm of anti-German feeling. In the baggage of the raiding column, to the embarrassment of Britain, the Boers found telegrams from Rhodes and other plotters in Johannesburg. Colonial Secretary Joseph Chamberlain had approved Rhodes' plans to send armed assistance in the case of a Johannesburg uprising, but he quickly moved to condemn the raid. Rhodes was censured at the Cape and London parliamentary inquiries, and forced to resign as Cape Prime Minister and as Chairman of the British South Africa Company.
The Boer government handed their prisoners over to the British for trial. Jameson was tried in England, where the press and London society, inflamed by anti-Boer and anti-German feeling and in a frenzy of jingoism, treated him as a hero. Although sentenced to 15 months imprisonment, Jameson was rewarded by being named Prime Minister of the Cape Colony (1904–08) and ultimately anointed as one of the founders of the Union of South Africa. For conspiring with Jameson, the uitlander members of the Reform Committee (Transvaal) were tried in the Transvaal courts and found guilty of treason. The four leaders were sentenced to death, but this was commuted to 15 years' imprisonment. In 1896, the other members of the committee were released on payment of £2,000 in fines, all paid by Rhodes. One Reform Committee member, Frederick Gray, committed suicide while in Pretoria jail. His death was a factor in softening the Transvaal government's attitude to the surviving prisoners.
Jan C. Smuts wrote, in 1906:
The raid alienated many Cape Afrikaners from Britain and united the Transvaal Boers behind President Kruger and his government. It drew the Transvaal and Orange Free State together in opposition to British imperialism. In 1897, the two republics concluded a military pact.
Arming the Boers
Kruger re-equipped the Transvaal army, importing 37,000 of the latest 7x57 mm Mauser Model 1895 rifles supplied by Germany, and 40 to 50 million rounds of ammunition. Some commandos used the Martini-Henry Mark III, because thousands of these had been purchased, although the rifle's large puff of white smoke after firing gave away the shooter's position. Roughly 7,000 Guedes 1885 rifles had also been purchased a few years earlier, and these too were used during the hostilities.
As the war went on, some commandos relied on captured British rifles, such as the Lee-Metford and Enfield. When the ammunition for the Mausers ran out, the Boers relied primarily on the captured Lee-Metfords. Few Boers used bayonets.
The Boers also purchased some of the best European artillery, much of it from the German firm Krupp. By October 1899, the Transvaal State Artillery had 73 heavy guns, including four 155 mm Creusot fortress guns and 25 of the 37 mm Maxim Nordenfeldt guns.
The Boers' Maxim, larger than the British Maxims, was a large calibre, belt-fed, water-cooled "auto cannon" that fired explosive rounds at 450 rounds per minute. It became known as the "Pom Pom".
The Transvaal army was transformed: approximately 25,000 men equipped with modern rifles and artillery could mobilise within two weeks. However, Kruger's victory in the Jameson Raid did nothing to resolve the fundamental problem of finding a formula to conciliate the uitlanders, without surrendering the independence of the Transvaal.
British case for war
The failure to gain improved rights for uitlanders (notably over the dynamite tax) became a pretext for war and a justification for a military build-up in the Cape Colony. The case for war was developed and espoused as far away as the Australian colonies. Cape Colony Governor Sir Alfred Milner; Rhodes; Chamberlain; and mining syndicate owners such as Beit, Barney Barnato, and Lionel Phillips favoured annexation of the Boer republics. Confident that the Boers would be quickly defeated, they planned and organised a short war, citing the uitlanders' grievances as the motivation. In contrast, the influence of the war party within the British government was limited. The Prime Minister, Lord Salisbury, despised jingoism and was uncertain of the abilities of the British Army. Despite his moral and practical reservations, Salisbury led the UK to war to preserve the Empire's prestige and out of a feeling of obligation to British South Africans. Salisbury detested the Boers' treatment of native Africans, referring to the London Convention of 1884, which followed Britain's defeat in the first war, as an agreement "really in the interest of slavery". Salisbury was not alone in this: Roger Casement, already on the way to becoming an Irish Nationalist, was nevertheless happy to gather intelligence for the British against the Boers because of their cruelty to Africans.
The British government went against the advice of its generals and declined to send substantial reinforcements to South Africa before war broke out. The Secretary of State for War, Lord Lansdowne, did not believe the Boers were preparing for war, and feared that sending large numbers of troops would strike too aggressive a posture, possibly derailing a negotiated settlement or even encouraging a Boer attack.
Negotiations fail
President Marthinus Steyn of the Orange Free State invited Milner and Kruger to attend a conference in Bloemfontein. The conference started on 30 May 1899, but negotiations quickly broke down, as Kruger had no intention of granting meaningful concessions, and Milner had no intention of accepting Kruger's usual delaying tactics.Walker, A History of Southern Africa, p. 480
On 9 October 1899, after convincing the Orange Free State to join him and mobilising their forces, Kruger issued an ultimatum giving Britain 48 hours to withdraw its troops from the border of the Transvaal, even though the only regular British troops near the border of either republic were four companies deployed to defend Kimberley; otherwise the Transvaal, allied with the Orange Free State, would declare war. News of the ultimatum reached London on the day it expired. The editor of the Times purportedly laughed out loud when he read it, saying 'an official document is seldom amusing and useful yet this was both'. The Times denounced the ultimatum as an 'extravagant farce' and The Globe denounced this 'trumpery little state'. Most editorials were similar to the Daily Telegraph's, which declared: 'of course there can only be one answer to this grotesque challenge. Kruger has asked for war and war he must have!'
Such views were far from those of the British government and the army. Army reform had been a matter of pressing concern since the 1870s, put off because the public did not want the expense of a larger, more professional army and because a large home army was not politically welcome. The Prime Minister had to tell a surprised Queen Victoria that 'We have no army capable of meeting even a second-class Continental Power'.
First phase: The Boer offensive, October–December 1899
British Army deployed
When war with the Boers was imminent in September 1899, a Field Force, referred to as the Army Corps, was mobilised and sent to Cape Town. It was "about the equivalent of the I Army Corps of the existing mobilization scheme" and was placed under the command of Gen Sir Redvers Buller, general officer commanding-in-chief of Aldershot Command.Dunlop, Colonel John K., The Development of the British Army 1899–1914, London, Methuen (1938) p. 72. In South Africa the corps never operated as such, and the 1st, 2nd and 3rd divisions were widely dispersed.
Boer organization and skills
War was declared on 11 October with a Boer offensive into the British-held Natal and Cape Colony areas. The Boers had about 33,000 soldiers and outnumbered the British, who could move only 13,000 troops to the front line. Mobilisation posed no problem for the Boers, since apart from the Staatsartillerie (Dutch for 'State Artillery') they had no regular army units to assemble. As in the First Boer War, most Boers were members of civilian militias, and none had adopted uniforms or insignia; only the members of the Staatsartillerie wore light green uniforms.
When danger loomed, all the burghers (citizens) in a district would form a military unit called a commando and elect officers. A full-time official called a Veldkornet maintained muster rolls but had no disciplinary powers. Each man brought his own weapon, usually a hunting rifle, and his own horse; those who could not afford a gun were given one by the authorities. The presidents of the Transvaal and Orange Free State simply signed decrees ordering the commandos to concentrate, and within a week they could muster between 30,000 and 40,000 men. Many did not look forward to fighting against fellow Christians and, by and large, fellow Protestants. Many had an overly optimistic sense of what the war would involve, imagining victory could be achieved as quickly and easily as in the First Anglo-Boer War. Many, including many generals, had a sense that their cause was holy and just, and blessed by God.
It rapidly became clear that the Boers presented the British forces with a severe tactical challenge. They brought a mobile and innovative approach to warfare, drawing on their experience from the First Boer War. The men who made up the commandos were farmers who had spent their working lives in the saddle as farmers and hunters, dependent on the pot, the horse and the rifle; they were skilled stalkers and marksmen. As hunters they had learned to fire from cover, from a prone position, and to make the first shot count, knowing that if they missed, the game would be long gone or might charge and kill them. At community gatherings, target shooting was a major sport, practised on targets such as hens' eggs perched on distant posts. They made expert mounted infantry, using cover, from which they could pour in destructive fire with their modern, smokeless Mauser rifles. In preparation for hostilities, the Boers had acquired around 100 of the latest Krupp field guns, all horse-drawn and dispersed among the commando groups, as well as several Le Creusot "Long Tom" siege guns. The Boers' skill in adapting themselves to become first-rate artillerymen shows that they were a versatile adversary. The Transvaal also had an intelligence service that stretched across South Africa, of whose extent and efficiency the British were as yet unaware.
Boers besiege Ladysmith, Mafeking and Kimberley
The Boers struck first on 12 October at the Battle of Kraaipan, an attack that heralded the invasion of the Cape Colony and Natal between October 1899 and January 1900. With speed and surprise, the Boers drove quickly towards the British garrison at Ladysmith and the smaller ones at Mafeking and Kimberley. The quick Boer mobilisation resulted in military successes against scattered British forces. Sir George Stuart White, commanding the British division at Ladysmith, unwisely allowed Major-General Penn Symons to throw a brigade forward to the coal-mining town of Dundee (also reported as Glencoe), which was surrounded by hills. This became the site of the war's first major clash, the Battle of Talana Hill. Boer guns began shelling the British camp from the summit of Talana Hill at dawn on 20 October. Penn Symons immediately counter-attacked: his infantry drove the Boers from the hill, at a cost of 446 British casualties, including Penn Symons himself.
Another Boer force occupied Elandslaagte, which lay between Ladysmith and Dundee. The British under Major General John French and Colonel Ian Hamilton attacked to clear the line of communications to Dundee. The resulting Battle of Elandslaagte was a clear-cut British tactical victory, but White feared more Boers were about to attack his main position and ordered a chaotic retreat from Elandslaagte, throwing away the advantage gained. The detachment from Dundee was compelled to make an exhausting cross-country retreat to rejoin White's main force. As Boers surrounded Ladysmith and opened fire with siege guns, White ordered a major sortie against them. The result was a disaster, with 140 men killed and over 1,000 captured. The siege of Ladysmith lasted months.
Meanwhile, to the north-west at Mafeking, on the border with the Transvaal, Colonel Robert Baden-Powell had raised two regiments of local forces, amounting to about 1,200 men, in order to mount attacks and create diversions if things went wrong further south. As a railway junction, Mafeking provided good supply facilities and was the obvious place for Baden-Powell to fortify in readiness for such attacks. However, instead of being the aggressor, Baden-Powell was forced to defend Mafeking when 6,000 Boers, commanded by Piet Cronjé, attempted a determined assault. This quickly subsided into a desultory affair, with the Boers prepared to starve the stronghold into submission. So, on 13 October, the 217-day siege of Mafeking began.
Lastly, to the south of Mafeking lay the diamond-mining city of Kimberley, which was also subjected to a siege. Although not militarily significant, it represented an enclave of British imperialism on the borders of the Orange Free State and was hence an important Boer objective. In early November, about 7,500 Boers began their siege, again content to starve the town into submission. Despite Boer shelling, the 40,000 inhabitants, of whom only 5,000 were armed, were under little threat, because the town was well stocked with provisions. The garrison was commanded by Lieutenant Colonel Robert Kekewich, although Rhodes was also a prominent figure in the town's defences.
Siege life took its toll on the defending soldiers and civilians, as food began to grow scarce after a few weeks. In Mafeking, Sol Plaatje wrote, "I saw horseflesh for the first time being treated as a human foodstuff." The cities also dealt with constant artillery bombardment, making the streets dangerous. Near the end of the siege of Kimberley, it was expected that the Boers would intensify their bombardment, so Rhodes displayed a notice encouraging people to go down into shafts of the Kimberley Mine for protection. The townspeople panicked, and people surged into the mineshafts constantly for a 12-hour period. Although the bombardment never came, this did nothing to diminish the anxious civilians' distress. The most well-heeled of the townspeople, including Rhodes, sheltered in the Sanatorium, site of the present-day McGregor Museum; the poorer residents, notably the black population, did not have any shelter from shelling.
In retrospect, the Boers' decision to commit themselves to sieges (Sitzkrieg) was a mistake and an illustration of their lack of strategic vision. Of the seven sieges in the First Boer War, the Boers had prevailed in none. More importantly, it handed the initiative back to the British and allowed them to recover. Generally throughout the campaign, the Boers were too defensive and passive, wasting the opportunities they had for victory. Yet that passivity testified to the fact they had no desire to conquer British territory, but only to preserve their ability to rule in their own territory.
First British relief attempts
On 31 October 1899, General Sir Redvers Henry Buller, a much-respected commander, arrived in South Africa with the Army Corps, made up of the 1st, 2nd and 3rd divisions. Buller originally intended an offensive straight up the railway leading from Cape Town through Bloemfontein to Pretoria. Finding on arrival that British troops were under siege, he split his army corps into detachments to relieve the besieged garrisons. One division, led by Lieutenant General Lord Methuen, was to follow the Western Railway to the north and relieve Kimberley and Mafeking. A smaller force of 3,000, led by Major General William Gatacre, was to push north towards the railway junction at Stormberg and secure the Cape Midlands District from Boer raids and rebellions by Boer inhabitants. Buller led the major part of the army corps to relieve Ladysmith to the east.
The initial results of this offensive were mixed, with Methuen winning bloody skirmishes at the Battle of Belmont on 23 November, the Battle of Graspan on 25 November, and a larger engagement, the Battle of Modder River, on 28 November, which resulted in British losses of 71 dead and over 400 wounded. British commanders had been trained on the lessons of the Crimean War and were adept at battalion and regimental set pieces, with columns manoeuvring in jungles, deserts and mountainous regions. What British generals failed to comprehend was the impact of destructive fire from trench positions and the mobility of cavalry raids. The British troops had antiquated tactics, and in some cases antiquated weapons, against the mobile Boer forces with the destructive fire of their modern Mausers, the latest Krupp field guns and their novel tactics.Field Marshal Lord Carver, The Boer War, pp. 259–262 On 7 December, a raid at Enslin Station further highlighted British weaknesses, notably a supply line vulnerable to guerrilla attacks.
The middle of December was disastrous for the British. In a period known as Black Week (10–15 December 1899), the British suffered defeats on three fronts. On 10 December, General Gatacre tried to recapture the Stormberg railway junction south of the Orange River. Gatacre's attack was marked by administrative and tactical blunders, and the Battle of Stormberg ended in a British defeat, with 135 killed and wounded and two guns and over 600 troops captured. At the Battle of Magersfontein on 11 December, Methuen's 14,000 British troops attempted to capture a Boer position in a dawn attack to relieve Kimberley. This too turned into a disaster when the Highland Brigade became pinned down by accurate Boer fire. After suffering from intense heat and thirst for nine hours, they eventually broke in an ill-disciplined retreat. The Boer commanders, Koos de la Rey and Cronjé, had ordered trenches to be dug in an unconventional place to fool the British and give their riflemen a greater firing range. The plan worked, and this tactic helped to write the doctrine of the supremacy of the defensive position, using modern small arms and trench fortifications.'Historical Overview' in Antony O'Brien, Bye-Bye Dolly Gray The British lost 120 killed and 690 wounded and were prevented from relieving Kimberley and Mafeking.
The nadir of Black Week was the Second Battle of Colenso on 15 December, where 21,000 British troops, commanded by Buller, attempted to cross the Tugela River to relieve Ladysmith, where 8,000 Transvaal Boers under the command of Louis Botha were waiting. Through a combination of artillery and accurate rifle fire and better use of the ground, the Boers repelled British attempts to cross the river. After his first attacks failed, Buller broke off the battle and ordered a retreat, abandoning many wounded men, several isolated units and ten field guns to be captured by Botha's men. Buller's forces lost 145 men killed and 1,200 missing or wounded and the Boers suffered only 40 casualties, including 8 killed.
Second phase: British offensive, January–September 1900
The British government took these defeats badly and with the sieges continuing was compelled to send two more divisions plus large numbers of colonial volunteers. By January 1900 this would become the largest force Britain had ever sent overseas, amounting to 180,000 men with further reinforcements being sought.
While watching for these reinforcements, Buller made another bid to relieve Ladysmith by crossing the Tugela west of Colenso. Buller's subordinate, Major General Charles Warren, successfully crossed the river, but was faced with a fresh defensive position centred on a prominent hill known as Spion Kop. In the resulting Battle of Spion Kop, British troops captured the summit by surprise during the early hours of 24 January 1900, but as the fog lifted, they realised too late that they were overlooked by Boer gun emplacements on the surrounding hills. The rest of the day resulted in a disaster caused by poor communication between Buller and his commanders. Between them they issued contradictory orders, on the one hand ordering men off the hill, while other officers ordered fresh reinforcements to defend it. The result was 350 men killed and nearly 1,000 wounded and a retreat across the Tugela River into British territory. There were nearly 300 Boer casualties.
Buller attacked Louis Botha again on 5 February at Vaal Krantz and was again defeated. Buller withdrew early when it appeared that the British would be isolated in an exposed bridgehead across the Tugela, for which he was nicknamed "Sir Reverse" by some of his officers.
Buller replaced
By taking command in person, Buller had allowed the overall direction of the war to drift. Because of concerns about his performance and negative reports from the field, he was replaced as Commander in Chief by Lord Roberts. Roberts assembled a new team for headquarters staff from far and wide: Lord Kitchener (Chief of Staff) from the Sudan; Frederick Russell Burnham (Chief of Scouts), the American scout, from the Klondike; George Henderson from the Staff College; Neville Bowles Chamberlain from Afghanistan; and William Nicholson (Military Secretary) from Calcutta. Like Buller, Roberts first intended to attack directly along the Cape Town–Pretoria railway but, again like Buller, was forced to relieve the beleaguered garrisons. Leaving Buller in command in Natal, Roberts massed his main force near the Orange River and along the Western Railway behind Methuen's force at the Modder River and prepared to make a wide outflanking move to relieve Kimberley.
Except in Natal, the war had stagnated. Other than a single attempt to storm Ladysmith, the Boers made no attempt to capture the besieged towns. In the Cape Midlands, the Boers did not exploit the British defeat at Stormberg and were prevented from capturing the railway junction at Colesberg. In the dry summer, the grazing on the veld became parched, weakening the Boers' horses and draught oxen, and many Boer families joined their menfolk in the siege lines and laagers (encampments), fatally encumbering Cronjé's army.
Roberts relieves the sieges
Roberts launched his main attack on 10 February 1900 and although hampered by a long supply route, managed to outflank the Boers defending Magersfontein. On 14 February, a cavalry division under French launched a major attack to relieve Kimberley. Although encountering severe fire, a massed cavalry charge split the Boer defences on 15 February, opening the way for French to enter Kimberley that evening, ending its 124 days' siege.
Meanwhile, Roberts pursued Piet Cronjé's 7,000-strong force, which had abandoned Magersfontein to head for Bloemfontein. French's cavalry was ordered to assist in the pursuit by embarking on an epic drive towards Paardeberg, where Cronjé was attempting to cross the Modder River. Roberts then surrounded Cronjé's retreating Boer army at the Battle of Paardeberg, fought from 18 to 27 February. On 17 February, a pincer movement involving French's cavalry and the main British force attempted to take the entrenched position, but the frontal attacks were uncoordinated and were repulsed by the Boers. Finally, Roberts resorted to bombarding Cronjé into submission. It took ten days, during which British troops drawing water from the polluted Modder River suffered a typhoid outbreak that killed many of them. General Cronjé was finally forced to surrender at the Battle of Paardeberg with 4,000 men.
In Natal, the Battle of the Tugela Heights, which started on 14 February was Buller's fourth attempt to relieve Ladysmith. The losses Buller's troops had sustained convinced Buller to adopt Boer tactics "in the firing line—to advance in small rushes, covered by rifle fire from behind; to use the tactical support of artillery; and above all, to use the ground, making rock and earth work for them as it did for the enemy." Despite reinforcements his progress was painfully slow against stiff opposition. However, on 26 February, after much deliberation, Buller used all his forces in one all-out attack for the first time and succeeded in forcing a crossing of the Tugela to defeat Botha's outnumbered forces north of Colenso. After a siege lasting 118 days, the Relief of Ladysmith was effected, the day after Cronjé surrendered, but at a total cost of 7,000 British casualties. Buller's troops marched into Ladysmith on 28 February.
After a succession of defeats, the Boers realised that against such overwhelming numbers of troops, they had little chance and became demoralised. Roberts then advanced into the Orange Free State from the west, putting the Boers to flight at the Battle of Poplar Grove and capturing Bloemfontein, the capital, unopposed on 13 March with the Boer defenders escaping and scattering. Meanwhile, he detached a small force to relieve Baden-Powell. The Relief of Mafeking on 18 May 1900 provoked riotous celebrations in Britain, the origin of the Edwardian slang word "mafficking". On 28 May, the Orange Free State was annexed and renamed the Orange River Colony.
Capture of Pretoria
After being forced to delay for several weeks at Bloemfontein by a shortage of supplies, an outbreak of typhoid at Paardeberg, and poor medical care, Roberts finally resumed his advance. He was forced to halt again at Kroonstad for 10 days, due once again to the collapse of his medical and supply systems, but captured Johannesburg on 31 May and the capital of the Transvaal, Pretoria, on 5 June. Before the war, the Boers had constructed forts south of Pretoria, but the artillery had been removed from the forts for use in the field, and in the event they abandoned Pretoria without a fight. Having won the principal cities, Roberts declared the war over on 3 September 1900, and the South African Republic was formally annexed.
British observers believed the war to be all but over after the capture of the two capitals. However, the Boers had earlier met at the temporary new capital of the Orange Free State, Kroonstad, and planned a guerrilla campaign to hit the British supply and communication lines. The first engagement of this new form of warfare was at Sanna's Post on 31 March, where 1,500 Boers under the command of Christiaan de Wet attacked Bloemfontein's waterworks east of the city and ambushed a heavily escorted convoy, causing 155 British casualties and capturing seven guns, 117 wagons, and 428 British troops.
After the fall of Pretoria, one of the last formal battles was at Diamond Hill on 11–12 June, where Roberts attempted to drive the remnants of the Boer field army under Botha beyond striking distance of Pretoria. Although Roberts drove the Boers from the hill, Botha did not regard it as a defeat, for he inflicted 162 casualties on the British while suffering only around 50 casualties.
Boers retreat
The set-piece period of the war now largely gave way to a guerrilla war, but one final operation remained. President Kruger and what remained of the Transvaal government had retreated to eastern Transvaal. Roberts, joined by troops from Natal under Buller, advanced against them, and broke their last defensive position at Bergendal on 26 August. As Roberts and Buller followed up along the railway line to Komatipoort, Kruger sought asylum in Portuguese East Africa (modern Mozambique). Some dispirited Boers did likewise, and the British gathered up much war material. However, the core of the Boer fighters under Botha easily broke back through the Drakensberg Mountains into the Transvaal highveld after riding north through the bushveld.
As Roberts's army occupied Pretoria, the Boer fighters in the Orange Free State retreated into the Brandwater Basin, a fertile area in the south-east of the Republic. This offered only temporary sanctuary, as the mountain passes leading to it could be occupied by the British, trapping the Boers. A force under General Archibald Hunter set out from Bloemfontein to achieve this in July 1900. The hard core of the Free State Boers under De Wet, accompanied by President Steyn, left the basin early. Those remaining fell into confusion and most failed to break out before Hunter trapped them. 4,500 Boers surrendered and much equipment was captured, but as with Roberts's drive against Kruger, these losses were of relatively little consequence, as the hard core of the Boer armies and their most determined and active leaders remained at large.
From the Basin, Christiaan de Wet headed west. Although hounded by British columns, he succeeded in crossing the Vaal into the western Transvaal, allowing Steyn to travel to meet the Transvaal leaders. There was much sympathy for the Boers in Europe. In October, President Kruger and members of the Transvaal government left Portuguese East Africa on the Dutch warship De Gelderland, sent by Queen Wilhelmina of the Netherlands. Paul Kruger's wife, however, was too ill to travel and remained in South Africa, where she died on 20 July 1901 without seeing her husband again. President Kruger went first to Marseille and then to the Netherlands, where he stayed before moving to Clarens, Switzerland, where he died in exile in 1904.
Prisoners of war sent overseas
The first sizeable batch of Boer prisoners taken by the British consisted of those captured at the Battle of Elandslaagte on 21 October 1899. Initially, these POWs were held on troopships in Simon's Bay until POW camps in Cape Town and Simonstown were completed. In total, six prisoner of war camps would be set up in South Africa. As numbers grew, the British decided they did not want the prisoners kept locally. The capture of 4,000 POWs in February 1900 was a key event, which made the British realise they could not accommodate all POWs in South Africa: they feared the prisoners could be freed by sympathetic locals, and they already had trouble supplying their own troops and did not want the added burden of supplying the POWs as well. Britain therefore sent many POWs overseas.
Around 31 prisoner of war camps were consequently set up in British colonies overseas during the war. The first camps off the African mainland were opened on Saint Helena, which ultimately received about 5,000 POWs. About 5,000 POWs were sent to Ceylon, and others were sent to Bermuda and India.
Oath of neutrality
On 15 March 1900, Lord Roberts proclaimed an amnesty for all burghers, except leaders, who took an oath of neutrality and returned quietly to their homes. It is estimated that between 12,000 and 14,000 burghers took this oath between March and June 1900.
Third phase: Guerrilla war, September 1900–May 1902
By September 1900, the British were nominally in control of both Republics, with the exception of north Transvaal. However, they discovered they only controlled the territory their columns physically occupied. Despite the loss of their capitals and half their army, the Boer commanders adopted guerrilla warfare, conducting raids against railways, resource and supply targets, aimed at disrupting the operational capacity of the British Army. They avoided pitched battles and casualties were light.
Boer commando units were sent to the districts from which their members had been recruited, which meant they could rely on local support and knowledge of the terrain and towns, enabling them to live off the land. Their orders were simply to act against the British whenever possible. Their tactics were to strike fast, causing as much damage as possible, then withdraw before enemy reinforcements could arrive. The vast distances of the republics allowed Boer commandos freedom of movement and made it nearly impossible for the 250,000 British troops to control the territory effectively using columns alone. As soon as a British column left a town or district, British control of that area faded away. The Boer commandos were especially effective during the initial guerrilla phase because Roberts had assumed the war would end with the capture of the capitals and the dispersal of the Boer armies. Many British troops had therefore been redeployed out of the area and replaced by lower-quality Imperial Yeomanry and locally raised irregular corps.
From late May 1900, the first successes of the Boer guerrilla strategy came at Lindley (where 500 Yeomanry surrendered) and at Heilbron (where a large convoy and its escort were captured), along with other skirmishes that resulted in 1,500 British casualties in less than ten days. In December 1900, De la Rey and Christiaan Beyers attacked and mauled a British brigade at Nooitgedacht, inflicting 650 casualties. As a result, the British, led by Lord Kitchener, mounted extensive searches for Christiaan de Wet, but without success. However, Boer raids on British army camps and other targets were sporadic and poorly planned, and the Boer guerrilla war itself had no long-term objective other than to harass the British. This led to a disorganised pattern of scattered engagements between the British and the Boers.
Use of blockhouses
The British were forced to revise their tactics. They concentrated on restricting the freedom of movement of the Boer commandos and depriving them of local support. The railway lines had provided vital lines of communication and supply, and as the British had advanced across South Africa, they had used armoured trains and established fortified blockhouses at key points along most lines. They built additional blockhouses (each housing six to eight soldiers under a non-commissioned officer) at bridges and beside major roads connecting rural towns, and fortified these to protect supply routes against Boer raiders. Eventually over 8,000 such blockhouses were built across the republics, radiating from the larger towns along principal roads and railways. Each blockhouse cost £800 to £1,000 and took three months to build. Despite the expense, they proved effective: not one bridge or section of railway line at which a blockhouse was manned was blown up.
The blockhouse system required a large number of troops to garrison it. Well over 50,000 British troops, or 50 battalions, were involved in blockhouse duty, more than the approximately 30,000 Boers in the field during the guerrilla phase. In addition, up to 16,000 local Africans were used as armed guards and to patrol the line at night. The Army linked the blockhouses with barbed-wire fences to parcel up the wide veld into smaller areas. "New Model" drives were then mounted, in which a line of troops could sweep an area of veld bounded by blockhouse lines, unlike the earlier inefficient scouring of the countryside by scattered columns.
Scorched earth campaign against civilians
The British implemented a scorched earth policy under which they targeted everything within the controlled areas that could give sustenance to the guerrillas, making it harder for them to survive. As British troops swept the countryside, they systematically destroyed crops, poisoned wells, burned homesteads and farms, and interned Boer and African men, women, children and workers in concentration camps. The British established mounted raiding columns in support of sweeper columns. These were used to rapidly follow and relentlessly harass the Boers to delay them and cut off escape, while the sweeper units caught up. Many of the 90 or so mobile columns formed by the British to participate in such drives were a mixture of British and colonial troops, but they also had a large minority of armed Africans. The number of armed Africans serving with these columns has been estimated at 20,000. The British Army made use of Boer auxiliaries who had been persuaded to change sides and enlist as "National Scouts". Serving under General Andries Cronjé (1849–1923), the National Scouts were despised as joiners but numbered a fifth of the fighting Afrikaners by the end of the War.
The British utilised armoured trains to deliver rapid reaction forces much more quickly to incidents (such as Boer attacks on blockhouses and columns) or drop them off ahead of retreating Boer columns.
Peace committees
Among those burghers who had stopped fighting, it was decided to form peace committees to persuade those still fighting to desist. In December 1900, Lord Kitchener gave permission for a central Burgher Peace Committee to be inaugurated in Pretoria. By the end of 1900, thirty envoys had been sent out to the districts to form peace committees and persuade burghers to give up the fight. Former Boer leaders, such as Generals Piet de Wet and Andries Cronjé, were involved in the organisation. Meyer de Kock, an emissary of one such peace committee, was arrested, convicted of high treason, and executed by firing squad.
Joiners
Some burghers joined the British in their fight against the Boers. By the end of hostilities in May 1902, there were 5,464 burghers working for the British.
Orange Free State
After having conferred with the Transvaal leaders, de Wet returned to the Orange Free State, where he inspired successful attacks and raids in the western part of the country, though he suffered a defeat at Bothaville in November 1900. Many Boers who had returned to their farms and towns, sometimes after being given parole by the British, took up arms again. In late January 1901, De Wet led a renewed invasion of Cape Colony. This was less successful, because there was no general uprising among the Cape Boers, and De Wet's men were hampered by bad weather and pursued by British forces. They narrowly escaped across the Orange River.
From then until the final days of the war, De Wet remained comparatively quiet, rarely attacking British army camps and columns partly because the Orange Free State was effectively left desolate by British sweeps. In December 1901, De Wet attacked and overran an isolated British detachment at Groenkop, inflicting heavy casualties and capturing over 200 British soldiers. This prompted Kitchener to launch the first of the "New Model" drives against him. De Wet escaped the first such drive but lost 300 fighters. This was a severe loss, and a portent of further attrition, although sweep attempts to round up De Wet were badly handled, and De Wet's forces avoided capture.
Western Transvaal
The Boer commandos in the Western Transvaal were very active after September 1901. Several battles were fought there between September 1901 and March 1902. At Moedwil on 30 September 1901 and again at Driefontein on 24 October, General Koos De La Rey's forces attacked British camps and outposts but were forced to withdraw after the British offered strong resistance.
From late 1901 to early 1902, a time of relative quiet descended on the western Transvaal. February 1902 saw the next major battle in that region. On 25 February, De La Rey attacked a British column under Lieutenant-Colonel S. B. von Donop at Ysterspruit near Wolmaransstad. De La Rey succeeded in capturing many men and ammunition. The Boer attacks prompted Lord Methuen, the British second-in-command after Kitchener, to move his column from Vryburg to Klerksdorp to deal with De La Rey. On the morning of 7 March 1902, the Boers attacked the rear guard of Methuen's moving column at Tweebosch. Confusion reigned in British ranks and Methuen was wounded and captured by the Boers.
The Boer victories in the west led to stronger action by the British. In the second half of March 1902, British reinforcements were sent to the Western Transvaal under the direction of Ian Hamilton. The opportunity the British were waiting for arose on 11 April 1902 at Rooiwal, where a commando led by General Jan Kemp and Commandant Potgieter attacked a superior force under Kekewich. The British soldiers were well positioned on the hillside and inflicted casualties on the Boers charging on horseback over a large distance, beating them back. This was the end of the war in the Western Transvaal and the last major battle of the war.
Eastern Transvaal
Two Boer forces fought in this area, under Botha in the south east and under Ben Viljoen in the north east around Lydenburg. Botha's forces were particularly active, raiding railways and British supply convoys, and mounting a renewed invasion of Natal in September 1901. After defeating British mounted infantry in the Battle of Blood River Poort near Dundee, Botha was forced to withdraw by heavy rain that made movement difficult and crippled his horses. Back on the Transvaal territory around his home district of Vryheid, Botha attacked a British raiding column at Bakenlaagte, using an effective mounted charge. One of the most active British units was effectively destroyed. This made Botha's forces the target of increasingly large scorched earth drives by British forces, in which the British made particular use of native scouts and informers. Eventually, Botha had to abandon the high veld and retreat to a narrow enclave bordering Swaziland.
To the north, Ben Viljoen grew steadily less active. His forces mounted comparatively few attacks and as a result, the Boer enclave around Lydenburg was largely unmolested. Viljoen was eventually captured.
Cape Colony
In parts of the Cape Colony, particularly the Cape Midlands District, where Boers formed a majority of the white inhabitants, the British had always feared a general uprising against them. In fact, no such uprising took place, even in the early days of the war when Boer armies had advanced across the Orange. The cautious conduct of some elderly Orange Free State generals was one factor that discouraged the Cape Boers from siding with the Boer republics. Nevertheless, there was widespread pro-Boer sympathy. Some Cape Dutch volunteered to help the British, but a larger number volunteered to help the other side. Politics mattered more than the military factor: the Cape Dutch, 90 per cent of whom, according to Milner, favoured the rebels, controlled the provincial legislature, and its authorities forbade the British Army to burn farms or to force Boer civilians into concentration camps. As a result, the British had more limited options for suppressing the insurgency in the Cape Colony.
After he escaped across the Orange in March 1901, de Wet had left forces under the Cape rebels Kritzinger and Gideon Scheepers to maintain a guerrilla campaign in the Cape Midlands. The campaign here was one of the least chivalrous of the war, with intimidation by both sides of each other's civilian sympathisers. In one of many skirmishes, Commandant Johannes Lötter's small commando was tracked down by a much superior British column and wiped out at Groenkloof. Several captured Boers, including Lötter and Scheepers (who was captured when he fell ill with appendicitis), were executed by the British for treason or for capital crimes such as the murder of British prisoners or unarmed civilians. Some of the executions took place in public, to deter further disaffection.
Fresh Boer forces under Jan Christiaan Smuts, joined by the surviving rebels under Kritzinger, made another attack on the Cape in September 1901. They suffered severe hardships and were hard pressed by British columns, but eventually rescued themselves by routing some of their pursuers at the Battle of Elands River and capturing their equipment. From then until the end of the war, Smuts increased his forces from among Cape rebels until they numbered 3,000. However, no general uprising took place, and the situation in the Cape remained stalemated.
In January 1902, Boer leader Manie Maritz was implicated in the Leliefontein massacre in the far Northern Cape.
Boer foreign volunteers
While no other government actively supported the Boer cause, individuals from several countries volunteered and formed Foreign Volunteer Units. These primarily came from Europe, particularly the Netherlands, Germany and Sweden-Norway. Other countries such as France, Italy, Ireland (then part of the United Kingdom), and restive areas of the Russian Empire, including Congress Poland and Georgia, also formed smaller volunteer corps. Finns fought in the Scandinavian Corps. Two volunteers, George Henri Anne-Marie Victor de Villebois-Mareuil of France and Yevgeny Maximov of Russia, became veggeneraals (fighting generals) of the South African Republic.
Conclusion
In early 1902, British tactics of containment, denial, and harassment finally began to yield results against the guerrillas. The sourcing and co-ordination of intelligence became increasingly efficient with regular reporting from observers in the blockhouses, from units patrolling the fences and conducting "sweeper" operations, and from native Africans in rural areas who increasingly supplied intelligence, as the Scorched Earth policy took effect and they found themselves competing with the Boers for food supplies. Kitchener's forces at last began to affect the Boers' fighting strength and freedom of manoeuvre, and made it harder for the Boers and their families to survive. Despite this success, almost half the Boer fighting strength, around 15,000 men, were still in the field fighting by May 1902. However, Kitchener's tactics were costly: Britain was running out of time, patience, and money needed for the war.
The British offered terms of peace on various occasions, notably in March 1901, but were rejected by Botha and the "Bitter-enders" among the Boers, who pledged to fight until the bitter end and rejected the demand for compromise made by the "Hands-uppers". Their reasons included hatred of the British, loyalty to their dead comrades, solidarity with fellow commandos, a desire for independence, religious arguments, and fear of captivity or punishment. On the other hand, their women and children were dying in prison camps and independence seemed more and more impossible.
The last of the Boers finally surrendered in May 1902 and the war ended with the Treaty of Vereeniging signed on 31 May 1902. After a period of obstinacy, the British offered the Boers generous terms of conditional surrender in order to bring the war to a conclusion. The Boers were given £3,000,000 for reconstruction and promised eventual limited self-government, which was granted in 1906 and 1907. The treaty ended the existence of the Transvaal and Orange Free State as independent Boer republics and placed them within the British Empire. The Union of South Africa was established as a dominion of the British Empire in 1910.
Nonwhite roles
The policy on both sides was to minimise the role of nonwhites, but the need for manpower continually stretched those resolves. At the Battle of Spion Kop, near Ladysmith, Mohandas K. Gandhi, with 300 free burgher Indians and 800 indentured Indian labourers, started the Ambulance Corps serving the British side. As the war raged across African farms and their homes were destroyed, many Africans became refugees and they, like the Boers, moved to the towns, where the British hastily created internment camps. Subsequently, the British scorched earth policies were applied to both Boers and Africans. Although most black Africans were not considered by the British to be hostile, many tens of thousands were also forcibly removed from Boer areas and placed in concentration camps. Africans were held separately from Boer internees; eventually there were a total of 64 tented camps for Africans. Conditions there were as bad as in the camps for the Boers, and although conditions improved in the Boer camps after the Fawcett Commission report, "improvements were much slower in coming to the black camps"; 20,000 died there.
The Boers and the British both feared the consequences of arming Africans. The memories of the Zulu and other tribal conflicts were still fresh, and they recognised that whoever won would have to deal with the consequences of a mass militarisation of the tribes. There was therefore an unwritten agreement that this war would be a "white man's war." At the outset, British officials instructed all white magistrates in the Natal Colony to appeal to Zulu amakhosi (chiefs) to remain neutral, and President Kruger sent emissaries asking them to stay out of it. However, in some cases there were old scores to be settled, and some Africans, such as the Swazis, were eager to enter the war with the specific aim of reclaiming land won by the Boers. As the war went on there was greater involvement of Africans, and in particular large numbers became embroiled in the conflict on the British side, either voluntarily or involuntarily. By the end of the war, many Africans had been armed and had shown conspicuous gallantry in roles such as scouts, messengers, watchmen in blockhouses, and auxiliaries.
There were further flashpoints beyond the fighting between British and Boer forces. On 6 May 1902, at Holkrantz in the south-eastern Transvaal, a Zulu faction had their cattle stolen and their women and children tortured by the Boers as a punishment for assisting the British. The local Boer officer then sent an insulting message to the tribe, challenging them to take back their cattle. The Zulus attacked at night, and in a mutual bloodbath the Boers lost 56 killed and 3 wounded, while the Africans suffered 52 killed and 48 wounded.
About 10,000 black men were attached to Boer units where they performed camp duties; a handful unofficially fought in combat. The British Army employed over 14,000 Africans as wagon drivers. Even more had combatant roles as spies, guides, and eventually as soldiers. By 1902 there were about 30,000 armed Africans in the British Army. Sol Plaatje was the only black person to keep a diary during the war, which later proved to be a valuable source about the black participation in the war.
Concentration camps
The term "concentration camp" was used to describe camps operated by the British in South Africa during this conflict in the years 1900–02, and the term grew in prominence during this period.
The camps had originally been set up by the British Army as "refugee camps" to provide refuge for civilian families who had been forced to abandon their homes for whatever reason related to the war. However, when Kitchener took over in late 1900, he introduced new tactics in an attempt to break the guerrilla campaign, and the influx of civilians grew dramatically as a result. Disease and starvation killed thousands.
As Boer farms were destroyed by the British under their "Scorched Earth" policy—including the systematic destruction of crops and slaughtering of livestock, the burning down of homesteads and farms—to prevent the Boers from resupplying from a home base, many tens of thousands of women and children were forcibly moved into the concentration camps. This was not the first appearance of internment camps, as the Spanish had used internment in Cuba in the Ten Years' War, and the Americans in the Philippine–American War, but the Boer War concentration camp system was the first time that a whole nation had been systematically targeted, and the first in which whole regions had been depopulated.
Eventually, there were a total of 45 tented camps built for Boer internees and 64 for black Africans. Of the 28,000 Boer men captured as prisoners of war, 25,630 were sent overseas to prisoner-of-war camps throughout the British Empire. The vast majority of Boers remaining in the local camps were women and children. Around 26,370 Boer women and children were to perish in these concentration camps. Of the more than 120,000 Blacks (and Coloureds) imprisoned too, around 20,000 died.
The camps were poorly administered from the outset and became increasingly overcrowded when Kitchener's troops implemented the internment strategy on a vast scale. Conditions were terrible for the health of the internees, mainly due to neglect, poor hygiene and bad sanitation. The supply of all items was unreliable, partly because of the constant disruption of communication lines by the Boers. The food rations were meagre and there was a two-tier allocation policy, whereby families of men who were still fighting were routinely given smaller rations than others. The inadequate shelter, poor diet, bad hygiene and overcrowding led to malnutrition and endemic contagious diseases such as measles, typhoid, and dysentery, to which the children were particularly vulnerable. Coupled with a shortage of modern medical facilities, many of the internees died. While much of the British press, including The Times, played down the problems in the camps, Emily Hobhouse helped raise public awareness in Britain of the atrocious conditions, as well as being instrumental in bringing relief to the concentration camps.
War crimes trial
The Boer War saw the first war crimes prosecutions in British history. They centred on the Bushveldt Carbineers (BVC), a British Army irregular regiment of mounted rifles active in the Northern Transvaal. Originally raised in February 1901, the BVC was composed of British and Commonwealth servicemen with an admixture of defectors from the Boer commandos. On 4 October 1901, a letter signed by 15 members of the BVC garrison at Fort Edward was secretly dispatched to Col. F.H. Hall, the British Army officer commanding at Pietersburg. Written by BVC Trooper Robert Mitchell Cochrane, a former justice of the peace from Western Australia,Arthur Davey (1987), Breaker Morant and the Bushveldt Carbineers, Second Series No. 18. Van Riebeeck Society, Cape Town. Pages 78–82. the letter accused members of the Fort Edward garrison of six "disgraceful incidents":
The shooting of six surrendered Afrikaner men and boys and theft of their money and livestock at Valdezia on 2 July 1901. The orders were given by Captains Alfred Taylor and James Huntley Robertson, and relayed by Sgt. Maj. K.C.B. Morrison to Sgt. D.C. Oldham. The actual killing was alleged to have been carried out by Sgt. Oldham and BVC Troopers Eden, Arnold, Brown, Heath, and Dale.
The shooting of BVC Trooper B.J. van Buuren by BVC Lt. Peter Handcock on 4 July. Trooper van Buuren, an Afrikaner, had "disapproved" of the killings at Valdezia, and informed the victims' wives and children, imprisoned at Fort Edward, of what had happened.
The revenge killing of Floris Visser, a wounded prisoner of war, near the Koedoes River on 11 August. Visser had been captured by a BVC patrol led by Lieut. Harry Morant two days before his death. After Visser had been exhaustively interrogated and conveyed for 15 miles by the patrol, Lt. Morant had ordered his men to form a firing squad and shoot him. The squad consisted of BVC Troopers A.J. Petrie, J.J. Gill, Wild, and T.J. Botha. A coup de grâce was delivered by BVC Lt. Harry Picton. The slaying of Visser was in retaliation for the combat death of Morant's friend, BVC Captain Percy Frederik Hunt, at Duivelskloof on 6 August.
The shooting, ordered by Capt. Taylor and Lt. Morant, of four surrendered Afrikaners and four Dutch schoolteachers, who had been captured at the Elim Hospital in Valdezia, on the morning of 23 August. The firing squad consisted of BVC Lt. George Witton, Sgt. D.C. Oldham, and Troopers J.T. Arnold, Edward Brown, T. Dale, and A. Heath. Although Trooper Cochrane's letter made no mention of the fact, three Native South African witnesses were also shot dead. The ambush and fatal shooting of the Reverend Carl August Daniel Heese of the Berlin Missionary Society near Bandolierkop on the afternoon of 23 August. Rev. Heese had spiritually counselled the Dutch and Afrikaner victims that morning and had angrily protested to Morant at Fort Edward upon learning of their deaths. Trooper Cochrane alleged that the killer of Heese was BVC Lt. Handcock, though he made no mention that Heese's driver, a member of the Southern Ndebele people, was also killed.
The orders, given by BVC Lt. Charles H.G. Hannam, to open fire on a wagon train containing Afrikaner women and children who were coming in to surrender at Fort Edward, on 5 September. The ensuing gunfire led to the deaths of two boys, aged 5 and 13, and the wounding of a 9-year-old girl.
The shooting of Roelf van Staden and his sons Roelf and Christiaan, near Fort Edward on 7 September. All were coming to surrender in the hope of gaining medical treatment for Christiaan, who was suffering from fever. Instead, they were met at the Sweetwaters Farm near Fort Edward by a party consisting of Lts. Morant and Handcock, joined by BVC Sgt. Maj. Hammet, Corp. MacMahon, and Troopers Hodds, Botha, and Thompson. Roelf van Staden and both his sons were shot, allegedly after being forced to dig their graves.
The letter then accused Field Commander of the BVC, Major Robert William Lenehan, of being "privy to these misdemeanours. It is for this reason that we have taken the liberty of addressing this communication direct to you." After listing civilian witnesses who could confirm their allegations, Trooper Cochrane concluded, "Sir, many of us are Australians who have fought throughout nearly the whole war while others are Africaners who have fought from Colenso till now. We cannot return home with the stigma of these crimes attached to our names. Therefore we humbly pray that a full and exhaustive inquiry be made by Imperial officers in order that the truth be elicited and justice done. Also we beg that all witnesses may be kept in camp at Pietersburg till the inquiry is finished. So deeply do we deplore the opprobrium which must be inseparably attached to these crimes that scarcely a man once his time is up can be prevailed to re-enlist in this corps. Trusting for the credit of thinking you will grant the inquiry we seek." In response to the letter, Col. Hall summoned all Fort Edward officers to Pietersburg on 21 October. All were met by mounted infantry five miles outside Pietersburg on the morning of 23 October and "brought into town like criminals". Lt. Morant was arrested after returning from leave in Pretoria, where he had gone to settle the affairs of his deceased friend Captain Hunt.
Although the trial transcripts, like most others from between 1850 and 1914, were later destroyed by the Civil Service, it is known that a Court of Inquiry, the British military's equivalent of a grand jury, was convened on 16 October. The President of the Court was Col. H.M. Carter, who was assisted by Captain E. Evans and Major Wilfred N. Bolton, the Provost Marshal of Pietersburg. Its first session took place on 6 November and continued for four weeks. Deliberations continued for a further two weeks, at which time it became clear the indictments would be as follows:
In what became known as "The Six Boers Case", Captains Robertson and Taylor, as well as Sgt. Maj. Morrison, were charged with committing the offense of murder while on active service.
In relation to what was dubbed "The Van Buuren Incident", Maj. Lenehan was charged with, "When on active service by culpable neglect failing to make a report which it was his duty to make."
In relation to "The Visser Incident", Lts. Morant, Handcock, Witton, and Picton were charged with "While on active service committing the offense of murder".
In relation to what was incorrectly dubbed "The Eight Boers Case", Lts. Morant, Handcock, and Witton were charged with, "While on active service committing the offense of murder". In relation to the slaying of Heese, Lts. Morant and Handcock were charged with, "While on active service committing the offense of murder".
No charges were filed for the three children who had been shot by the Bushveldt Carbineers near Fort Edward.
In relation to what became known as "The Three Boers Case", Lts. Morant and Handcock were charged with, "While on active service committing the offense of murder".
Following the indictments, Maj. R. Whigham and Col. James St. Clair ordered Bolton to appear for the prosecution, as he was less expensive than a barrister.Davey (1987), page 123. Bolton vainly requested to be excused, writing, "My knowledge of law is insufficient for so intricate a matter."Davey (1987), page 122. The first court martial opened on 16 January 1902, with Lieut.-Col. H.C. Denny presiding over a panel of six judges. Maj. J.F. Thomas, a solicitor from Tenterfield, New South Wales, had been retained to defend Maj. Lenehan; the night before, however, he agreed to represent all six defendants. The "Visser Incident" was the first case to go to trial. Lt. Morant's former orderly and interpreter, BVC Trooper Theunis J. Botha, testified that Visser, who had been promised his life would be spared, was cooperative during two days of interrogation and that his information was found to have been true. Despite this, Morant ordered him shot. In response, Morant testified that he had only followed orders to take no prisoners, as relayed to the late Captain Hunt by Col. Hubert Hamilton. He alleged that Visser had been captured wearing a British Army jacket and that Hunt's body had been mutilated. The court then moved to Pretoria, where Col. Hamilton testified that he had "never spoken to Captain Hunt with reference to his duties in the Northern Transvaal". Though stunned, Maj. Thomas argued that his clients were not guilty because they believed that they "acted under orders". Bolton countered that these were "illegal orders", saying, "The right of killing an armed man exists only so long as he resists; as soon as he submits he is entitled to be treated as a prisoner of war." The Court ruled in Bolton's favour. Morant was found guilty of murder. Handcock, Witton, and Picton were convicted of the lesser charge of manslaughter.
On 27 February, Morant and Handcock were executed by firing squad after being convicted of murdering eight Afrikaner POWs. Although Morant left a confession in his cell, he went on to become a folk hero in modern Australia. Believed by many Australians to have been the victim of a kangaroo court, Morant has been the subject of appeals for a retrial or a pardon. His court-martial and death have been the subject of books, a stage play, and an Australian New Wave film adaptation. Witton was also sentenced to death but was reprieved; due to immense political pressure, he was released after serving 32 months of a life sentence. Picton was cashiered.Australian Town and Country Journal (Sydney, NSW), 12 April 1902, https://trove.nla.gov.au/newspaper/article/71522700
Imperial involvement
Most troops fighting for the British Army came from Britain, but a significant number came from other parts of its Empire. These countries had internal disputes over whether they should remain tied to London or seek independence, which carried over into the debate around sending forces to assist the war. Though not independent in foreign affairs, these countries did have local say over how much support to provide, and how it was provided. Australia, Canada, New Zealand, and Rhodesia all sent volunteers to aid the UK. Troops were also raised to fight for the British from the Cape Colony and Natal. Some Boer fighters, such as Smuts and Botha, were British subjects, as they came from the Cape Colony and the Colony of Natal, respectively.
There were many volunteers from the Empire who were not selected for the official contingents and travelled privately to form private units, such as the Canadian Scouts and Doyle's Australian Scouts. There were European volunteer units from British India and British Ceylon, though the British refused offers of non-white troops from the Empire. Some Cape Coloureds volunteered early in the war, but later some were effectively conscripted and kept in segregated units. As a community, they received little reward for their services. The war set the pattern for the Empire's involvement in the two World Wars. Specially raised units, consisting of volunteers, were dispatched overseas to serve with forces from elsewhere in the Empire.
Australia
From 1899 to 1901, the six separate self-governing colonies in Australia sent contingents to serve in the war. That much of the population had originated in Britain helps explain the widespread desire to support it. After the colonies formed the Commonwealth of Australia in 1901, the new Government of Australia sent "Commonwealth" contingents to the war. The Boer War was thus the first war in which the Commonwealth of Australia fought. A few Australians fought on the Boer side; the most famous and colourful character was Colonel Arthur Alfred Lynch, formerly of Ballarat, Victoria, who raised the Second Irish Brigade.
The Australian climate and geography were far closer to those of South Africa than were most other parts of the Empire, so Australians adapted quickly, with troops serving mostly among the army's "mounted rifles". Enlistment in official Australian contingents totalled 16,463. Another five to seven thousand Australians served in "irregular" regiments raised in South Africa, and perhaps 500 of these irregulars were killed. In total, about 20,000 Australians served and about 1,000 were killed. Of these, 267 died from disease, 251 were killed in action or died of wounds sustained in battle, and 43 men were reported missing.
When the war began, some Australians, like some Britons, opposed it. As the war dragged on, some Australians became disenchanted, in part because of the sufferings of Boer civilians reported in the press. When the British failed to capture President Paul Kruger as he escaped Pretoria during its fall in June 1900, a Melbourne Punch cartoon depicted how the war could be won, using the Kelly Gang.
The convictions and executions of two Australian lieutenants, Harry Harbord Morant and Peter Handcock, in 1902, and the imprisonment of a third, George Witton, had minimal impact on the Australian public at the time. The controversial court-martial saw the three convicted of executing prisoners under their authority. After the war, however, Australians joined an empire-wide campaign that saw Witton released from jail. Much later, some Australians came to regard Morant and Handcock as having been wrongfully executed, as illustrated in the 1980 Australian film Breaker Morant.
Up to 50 Aboriginal Australians served in the Boer War as trackers. So little information is available that it is uncertain whether they returned to Australia after the war; when the Australian contingents returned, the trackers may not have been allowed back into Australia because of the White Australia Policy.
Canada
Around 8,000 Canadians arrived in South Africa to fight for Britain. They arrived in contingents: the first on 30 October 1899, the second on 21 January 1900, and a third contingent of cavalry (Strathcona's Horse) that embarked on 16–17 March 1900 (Chronicle of the 20th Century by John S. Bowman). They remained until May 1902. With approximately 7,368 soldiers and 12 nurses in a combat zone, the conflict was the largest engagement involving Canadian soldiers between Confederation and the Great War. It was also the first time since the Mahdist War that Canadians had been deployed overseas. 270 Canadian soldiers died during the war.
The arrival and movement of troops was widely documented by war photographers. English-born, and later Canadian, Inglis Sheldon-Williams was one of the most notable, documenting movement of hundreds of troops to Africa.
The Canadian public was initially divided on the decision to go to war, as some did not want Canada to become Britain's 'tool' for engaging in armed conflicts. Many Anglophone citizens were pro-Empire and wanted Prime Minister Sir Wilfrid Laurier to support the British, while many Francophone citizens saw the continuation of British imperialism as a threat to their national sovereignty. In the end, to appease citizens who wanted war while avoiding the anger of those against it, Laurier sent 1,000 volunteers under the command of Lieutenant Colonel William Otter to aid the confederation in its war to 'liberate' the peoples of the Boer-controlled states in South Africa. The volunteers were provided to the British on the condition that the latter pay the costs of the battalion after it arrived in South Africa.
The supporters of the war claimed that it "pitted British Freedom, justice and civilization against Boer backwardness". The French Canadians' opposition to the Canadian involvement in a British 'colonial venture' eventually led to a three-day riot in Quebec. Many Canadian soldiers did not actually see combat since many arrived around the time of the signing of the Treaty of Vereeniging on 31 May 1902.
Notable Canadian engagements

Paardeberg: A British-led attack trapped a Boer army in central South Africa on the banks of the Modder River from 18 to 27 February 1900. Over 800 Canadian soldiers from Otter's 2nd Special Service Battalion were attached to the British attack force. This was the first major attack involving the Canadians in the Boer War, as well as the first major victory for Commonwealth soldiers.

Zand River: By 6 May 1900, the Commonwealth's northwards advance on the capital of Pretoria was well under way, but on 10 May the British encountered a position of Boer soldiers on the Zand River. The British commander decided to use cavalry to envelop the Boers on their left flank while infantry marched on the Boer right flank to secure a crossing. The Canadian 2nd Battalion was the lead unit advancing on the right flank, though disease and casualties from earlier encounters had reduced it to approximately half of its initial strength. The battalion came under fire from Boers occupying protected positions, and the battle continued for several hours until the British cavalry was able to flank the Boers and force a retreat. Canadian casualties were two killed and two wounded. Skirmishing around the Zand River continued, and more soldiers from various Commonwealth countries became involved.

Doornkop: From 28 to 30 May 1900, the Canadian 2nd Battalion and the 1st Mounted Infantry Brigade fought together on the same battlefield for the first, and only, time. The Mounted Brigade, which encompassed units such as the Canadian Mounted Rifles and the Royal Canadian Dragoons, was tasked with establishing a beachhead across a river that the Boers had fortified in an attempt to halt the advancing Commonwealth forces before they could reach the city of Johannesburg. Since the Boers mounted heavy resistance against the advancing mounted units, the Commonwealth infantry were tasked with holding the Boer units in place while the mounted units found another route across the river with less resistance. Even after the cavalry made it across further down the line, the infantry still had to advance on the town of Doornkop, which they had been tasked with capturing. The Canadians suffered very few casualties and achieved their objective after the Boer soldiers retreated from their positions, although the lead British unit in the infantry advance, the Gordon Highlanders, sustained heavy casualties from the riflemen of the Boer force.

Witpoort: On 16 July 1900, British, Canadian, New Zealand and Queensland forces under the command of Lieutenant General Sir Edward Hutton held off a three-pronged Boer attack from daybreak until 2:00 pm. Canadian forces mounted a counter-attack to recapture positions which had been lost by the New Zealand Mounted Rifles. Despite heavy losses, including the death of Lieutenant Harold Lothrop Borden (the only son of the then Minister of Militia and Defence, Sir Frederick William Borden), the Canadians successfully recaptured all positions.

Leliefontein: On 7 November 1900, a British-Canadian force was searching for a unit of Boer commandos known to be operating around the town of Belfast, South Africa. After the British commander reached the farm of Leliefontein, he began to fear that his line had extended too far and ordered a withdrawal of the front-line troops. The rearguard, consisting of the Royal Canadian Dragoons and two 12-pounder guns from D section of the Canadian artillery, was tasked with covering the retreat. The Boers, who outnumbered the Canadians almost three to one, mounted a heavy assault with the intention of capturing the two guns. A small group of the Dragoons interposed themselves between the Boers and the artillery to allow the guns and their crews time to escape. The Dragoons won three Victoria Crosses for their actions at Leliefontein, the most in any battle with the exception of the Battle of Vimy Ridge in the First World War.

Boschbult: On 31 March 1902, a British-Canadian force was dispatched by General Sir Frederick Walter Kitchener to pursue a Boer force some 2,500 strong. At 1:30 pm, the main body of the column encountered the main Boer force and became encircled. The 2nd Canadian Mounted Rifles, who had been guarding the baggage train, made a series of charges to relieve pressure on the encircled British force. Twenty-one Canadians from 3 and 4 Troops of 'E' Squadron, under the command of Wallace Bruce Matthews Carruthers, were cut off from the main force during a charge; rather than surrender, they fought to the last, eventually running out of ammunition and being overrun. By 5:00 pm, the Boers had withdrawn.
India
British garrisons in India contributed 18,534 British officers and men, as well as an estimated 10,000 Indian auxiliaries deployed to assist them. India also sent 7,000 horses, ponies and mules. Indian auxiliaries were only employed in non-combatant roles.
The Natal Indian Ambulance Corps, created by Gandhi and financed by the local Indian community, served at the battles of Colenso and Spion Kop.
New Zealand
When war seemed imminent, New Zealand offered its support. On 28 September 1899, Prime Minister Richard Seddon asked Parliament to approve the offer to the imperial government of a contingent of mounted rifles, thus becoming the first British Colony to send troops to the war. The British position in the dispute with the Transvaal was "moderate and righteous", he maintained. He stressed the "crimson tie" of Empire that bound New Zealand to the mother-country and the importance of a strong British Empire for the colony's security.
Ten contingents of volunteers, totalling nearly 6,500 men from New Zealand, together with 8,000 horses, fought in the conflict, along with doctors, nurses, veterinary surgeons and school teachers. 70 New Zealanders died from enemy action, and another 158 were killed accidentally or died of disease (D.O.W. Hall, War History Branch, Wellington, 1949). The first New Zealander killed was Farrier Bradford, at Jasfontein Farm on 18 December 1899. The war was greeted with enthusiasm when it began, and its end was greeted with patriotism and national pride. This is best shown by the fact that the Third, Fourth and Fifth contingents from New Zealand were funded by public subscription.
Rhodesia
Rhodesian military units such as the British South Africa Police, Rhodesia Regiment and Southern Rhodesian Volunteers served in the war.
South Africa
During the war, the British army included substantial contingents from South Africa itself. There were large communities of English-speaking immigrants and settlers in Natal and Cape Colony, which formed volunteer units that took the field, or local "town guards". At one stage of the war, a "Colonial Division", consisting of five light horse and infantry units under Brigadier General Edward Brabant, took part in the invasion of the Orange Free State. Part of it withstood a siege by Christiaan de Wet at Wepener on the borders of Basutoland. Another large source of volunteers was the uitlander community, many of whom hastily left Johannesburg in the days immediately preceding the war.
Later during the war, Kitchener attempted to form a Boer Police Force, as part of his efforts to pacify the occupied areas and effect a reconciliation with the Boer community. The members of this force were despised as traitors by the Boers still in the field. Boers who attempted to remain neutral after giving their parole to British forces were derided as "hensoppers" (hands-uppers) and often coerced into giving support to the Boer guerrillas (which was one reason for British scorched earth campaigns throughout the countryside and detention of Boers in concentration camps, to deny anything of use to the guerrillas).
Like the Canadian, and particularly the Australian and New Zealand contingents, many volunteer units formed by South Africans were "light horse" or mounted infantry, well-suited to the countryside and manner of warfare. Some regular British officers scorned their comparative lack of formal discipline, but the light horse units were hardier and more suited to campaigning than the overloaded British cavalry, who were still obsessed with the charge by lance or sabre. At their peak, 24,000 South Africans served in the field in "colonial" units. Notable units (in addition to the Imperial Light Horse) were the South African Light Horse, Rimington's Guides, Kitchener's Horse and the Imperial Light Infantry.
Other nations
The United States of America stayed neutral, but some Americans were eager to participate. Early in the war Lord Roberts cabled Major Frederick Russell Burnham, a veteran of both Matabele wars but then prospecting in the Klondike, to serve on his personal staff as Chief of Scouts. Burnham went on to receive the highest awards of any American who served in the war. American mercenaries participated on both sides.
Aftermath and analysis
The war cast long shadows over the history of the South African region. The predominantly agrarian society of the former Boer republics was profoundly and fundamentally affected by the scorched earth policy. The devastation of the Boer and black African populations in the concentration camps, and through war and exile, was to have a lasting effect on the demography and quality of life in the region.
Many exiles and prisoners were unable to return to their farms; others attempted to but were forced to abandon them as unworkable given the damage caused by farm burning during the scorched earth policy. Destitute Boers and black Africans swelled the ranks of the unskilled urban poor competing with the "uitlanders" in the mines.
The postwar reconstruction administration was presided over by Lord Milner and his Oxford-educated "Milner's Kindergarten". This group of civil servants had a profound effect on the region, eventually helping to bring about the Union of South Africa.
Some scholars identify the new identities forged during the war and its aftermath as partly underpinning the act of union that followed in 1910. Although challenged by a Boer rebellion only four years later, they did much to shape South African politics between the two world wars and to the present day.
Many Boers referred to the war as the second of the Freedom Wars. The most resistant Boers, known as "Bittereinders" (or irreconcilables), wanted to continue the fight, and at the end of the war some Boer fighters, such as Deneys Reitz, chose exile rather than sign an oath of allegiance to Britain.
Over the following decade, many returned to South Africa and never signed the pledge. Some, like Reitz, eventually reconciled themselves to the new status quo, but others did not.
Union of South Africa
One of the most important events in the decade after the war was the creation of the Union of South Africa (later the Republic of South Africa). It proved a key ally to Britain as a Dominion of the British Empire during the World Wars. At the start of the First World War a crisis ensued when the South African government led by Louis Botha and other former Boer fighters, such as Jan Smuts, declared support for Britain and agreed to send troops to take over the German colony of German South-West Africa (Namibia).
Many Boers were opposed to fighting for Britain, especially against Germany, which had been sympathetic to their struggle. Some bittereinders and their allies took part in a revolt known as the Maritz rebellion. The rebellion was quickly suppressed, and the leading Boer rebels escaped lightly (especially compared with leading Irish rebels of the Easter Rising), with imprisonment of 6-7 years and heavy fines. Two years later, they were released from prison, as Louis Botha recognised the value of reconciliation.
Military legacy
The war was the harbinger of a new type of combat that would persist throughout the twentieth century: guerrilla warfare. The counterinsurgency techniques and lessons learned (the restriction of movement, the containment of space, the targeting of anything that could give sustenance to guerrillas, harassment by sweeper groups coupled with rapid-reaction forces, the sourcing and co-ordination of intelligence, and the nurturing of native allies) were used by the British, and other forces, in later guerrilla campaigns, including against Malayan communist rebels during the Malayan Emergency. In World War II the British adopted concepts of raiding from the Boer commandos when they set up special raiding forces, and in acknowledgement chose the name British Commandos.
After the Boer War, the British army underwent reforms focused on lessening the emphasis placed on mounted units. It was judged that the traditional role of cavalry was antiquated and had been improperly employed on the Boer War battlefield, and the First World War seemed to confirm that massed mounted attacks had no place in twentieth-century combat. Cavalry was put to better use after the reforms in the Middle East and other theatres of the First World War, and mounted infantry proved useful during the war's more mobile phases. One example was the Battle of Mons, in which British cavalry helped hold the Belgian town against the German assault.
The Boer War also foreshadowed conflicts involving machine guns, shrapnel and observation balloons, all of which were used extensively in the First World War. Both sides used a scorched earth policy to deprive the marching enemy of food, and both 'concentrated' civilians into camps of makeshift huts. At Buffelspoort, for example, British soldiers were held in captivity in Boer encampments after surrendering their arms, and civilians were often mixed in with service personnel because the Boers did not have the resources to do otherwise. Some 116,000 women, children and Boer soldiers were confined to the Commonwealth concentration camps, of whom at least 28,000 died.
The British saw their tactics of scorched earth and concentration camps as a legitimate way of depriving the Boer guerrillas of supplies and safe havens. The Boers saw them as a British attempt to coerce the Boers into surrender, with the camp inmates, mainly families of Boer fighters, seen as deliberately kept in poor conditions to encourage high death rates. Even in the 21st century, the controversy around the British tactics has continued to make headlines.
Effect on British and international politics
Many Irish nationalists sympathised with the Boers, viewing them as a people oppressed by British imperialism, much as they viewed themselves. Irish miners already in the Transvaal at the start of the war formed the nucleus of two Irish commandos, and the Second Irish Brigade was headed by an Australian of Irish parents, Colonel Arthur Lynch. Groups of Irish volunteers went to fight with the Boers, despite the fact that many Irish troops were fighting in the British army, including the Royal Dublin Fusiliers. In Britain, the "Pro-Boer" campaign expanded, with writers often idealising Boer society.
The war highlighted the dangers of Britain's policy of non-alignment and deepened her isolation. The 1900 UK general election, also known as the "Khaki election", was called by the Prime Minister, Lord Salisbury, on the back of British victories. There was much enthusiasm for the war at this point, resulting in a victory for the Conservative government. However, support waned as it became apparent that the war would not be easy, and its prolongation contributed in part to the Conservatives' spectacular defeat in 1906. There was outrage at the scorched earth tactics and at conditions in the concentration camps. It also became apparent that there were serious problems with public health in Britain: up to 40% of recruits were unfit for military service, suffering from medical problems such as rickets and other poverty-related illnesses. This came at a time of increasing concern for the poor in Britain.
Some 22,000 Empire troops were killed. Britain, which possessed the world's most technologically advanced military, had expected a swift victory over a largely unmilitarised, predominantly agrarian opponent. The result led many, both domestically and internationally, to question the dominance of the British Empire, especially as nations such as the US, Germany, and Japan had become major powers.
Cost
It is estimated that the cost of the war to the British government was £211,156,000 (equivalent to £19.9 billion in 2022).
Cost of War over its entire course (cost at the time / relative value in 2022):
1899–1900: £23,000,000 / £2,180,000,000
1900–1901: £63,737,000 / £6,000,000,000
1901–1902: £67,670,000 / £6,410,000,000
1902–1903: £47,500,000 / £4,450,000,000
Sub-total: £201,907,000 / £19,040,000,000
Interest: £9,249,000 / £866,000,000
Grand total: £211,156,000 / £19,906,000,000
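As an illustrative arithmetic check (not part of the original accounts), the yearly costs in the table above can be summed and compared against the stated sub-total and grand total; a minimal Python sketch, using only the figures given in the table:

```python
# Minimal check that the yearly costs in the table sum to the stated totals.
yearly_costs = {
    "1899-1900": 23_000_000,
    "1900-1901": 63_737_000,
    "1901-1902": 67_670_000,
    "1902-1903": 47_500_000,
}
interest = 9_249_000

sub_total = sum(yearly_costs.values())   # expected: 201,907,000
grand_total = sub_total + interest       # expected: 211,156,000

assert sub_total == 201_907_000
assert grand_total == 211_156_000
print(f"Sub-total: £{sub_total:,}; Grand total: £{grand_total:,}")
```

The yearly figures are internally consistent with the stated sub-total and grand total.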
Horses
The number of horses killed was unprecedented in modern warfare. The wastage was particularly heavy among British forces for several reasons: overloading of horses with unnecessary equipment and saddlery, failure to rest and acclimatise horses after long sea voyages, poor management by inexperienced mounted troops, and distant control by unsympathetic staffs (Sydney Frederick Galvayne, War horses present & future: or, Remount life in South Africa, 1902). The average life expectancy of a British horse, from the time of its arrival in Port Elizabeth, was around six weeks.
Most horses and mules brought to South Africa came from the US. In total, 109,878 horses and 81,524 mules were shipped from New Orleans to South Africa in 166 voyages between October 1899 and June 1902. The cost of these animals and their transport averaged US$597,978 per month. A significant number of horses and mules died in transit; for example, during the SS Manchester City's 36-day passage, 187 of her 2,090 mules died.
Horses were slaughtered for their meat when needed. During the sieges of Kimberley and Ladysmith, horses were consumed as food once regular sources were depleted. The besieged British forces in Ladysmith also produced chevril, a Bovril-like paste, by boiling down the horse meat to a jelly paste and serving it like beef tea.
The Horse Memorial in Port Elizabeth is a tribute to the 300,000 horses that died during the conflict.
Commemorations
The Australian National Boer War Memorial Committee organises events to mark the war on 31 May each year. In Canberra, a commemorative service is usually held at the St John the Baptist Church in Reid. Floral tributes are laid for the dead.
See also
Artillery in the Second Boer War
British logistics in the Second Boer War
First Italo-Ethiopian War
History of South Africa
List of Second Boer War battles
List of Second Boer War Victoria Cross recipients
London to Ladysmith via Pretoria
Military history of South Africa
Volkstaat
Notes
References
Citations
Historiography
Further reading
– detailed official British history
volume 1, maps volume 1 (1906)
volume 2, maps volume 2 (1907)
volume 3, maps volume 3 (1908)
volume 4, maps volume 4 (1910)
Miller, Stephen M. "Politics, the Press, and the Royal Commission on the War in South Africa" International Journal of Military History and Historiography (2022) 44#1 pp. 42–70
External links
Anglo Boer War - Home
The Boer War, a two-part documentary series shown on British television (1999).
Scrapbook of Boer War, MSS P 456 at L. Tom Perry Special Collections, Harold B. Lee Library, Brigham Young University
The Concentration Camps 1899–1902 by Hennie Barnard
British Commanders of the Boer War
Category:1890s in the South African Republic
Category:1890s in Transvaal
Category:1899 beginnings
Category:1899 in South Africa
Category:1900 in South Africa
Category:1900s in the South African Republic
Category:1900s in Transvaal
Category:1901 in South Africa
Category:1902 endings
Category:1902 in South Africa
Category:Canadian Army
Category:Canadian Militia
Category:Robert Gascoyne-Cecil, 3rd Marquess of Salisbury
Ming dynasty
https://en.wikipedia.org/wiki/Ming_dynasty
The Ming dynasty, officially the Great Ming, was an imperial dynasty of China that ruled from 1368 to 1644, following the collapse of the Mongol-led Yuan dynasty. The Ming was the last imperial dynasty of China ruled by the Han people, the majority ethnic group in China. Although the primary capital of Beijing fell in 1644 to a rebellion led by Li Zicheng (who established the short-lived Shun dynasty), numerous rump regimes ruled by remnants of the Ming imperial family, collectively called the Southern Ming, survived until 1662.
The Ming dynasty's founder, the Hongwu Emperor (1368–1398), attempted to create a society of self-sufficient rural communities ordered in a rigid, immobile system that would guarantee and support a permanent class of soldiers for his dynasty: the empire's standing army exceeded one million troops and the navy's dockyards in Nanjing were the largest in the world. He also took great care breaking the power of the court eunuchs and unrelated magnates, enfeoffing his many sons throughout China and attempting to guide these princes through the Huang-Ming Zuxun, a set of published dynastic instructions. This failed when his teenage successor, the Jianwen Emperor, attempted to curtail his uncle's power, prompting the Jingnan campaign, an uprising that placed the Prince of Yan upon the throne as the Yongle Emperor in 1402. The Yongle Emperor established Yan as a secondary capital and renamed it Beijing, constructed the Forbidden City, and restored the Grand Canal and the primacy of the imperial examinations in official appointments. He rewarded his eunuch supporters and employed them as a counterweight against the Confucian scholar-bureaucrats. One eunuch, Zheng He, led seven enormous voyages of exploration into the Indian Ocean as far as Arabia and the eastern coasts of Africa. Hongwu and Yongle emperors had also expanded the empire's rule into Inner Asia.
The rise of new emperors and new factions diminished such extravagances; the capture of the Emperor Yingzong of Ming during the 1449 Tumu Crisis ended them completely. The imperial navy was allowed to fall into disrepair while forced labor constructed the Liaodong palisade and connected and fortified the Great Wall into its modern form. Wide-ranging censuses of the entire empire were conducted decennially, but the desire to avoid labor and taxes and the difficulty of storing and reviewing the enormous archives at Nanjing hampered accurate figures. Estimates for the late-Ming population vary from 160 to 200 million, but necessary revenues were squeezed out of smaller and smaller numbers of farmers as more disappeared from the official records or "donated" their lands to tax-exempt eunuchs or temples. Haijin laws intended to protect the coasts from Japanese pirates instead turned many into smugglers and pirates themselves.
By the 16th century, the expansion of European trade—though restricted to islands near Guangzhou such as Macau—spread the Columbian exchange of crops, plants, and animals into China, introducing chili peppers to Sichuan cuisine and highly productive maize and potatoes, which diminished famines and spurred population growth. The growth of Portuguese, Spanish, and Dutch trade created new demand for Chinese products and produced a massive influx of South American silver. This abundance of specie re-monetized the Ming economy, whose paper money had suffered repeated hyperinflation and was no longer trusted. While traditional Confucians opposed such a prominent role for commerce and the newly rich it created, the heterodoxy introduced by Wang Yangming permitted a more accommodating attitude. Zhang Juzheng's initially successful reforms proved devastating when a slowdown in agriculture was produced by the Little Ice Age. The value of silver rapidly increased because of a disruption in the supply of imported silver from Spanish and Portuguese sources, making it impossible for Chinese farmers to pay their taxes. Combined with crop failure, floods, and an epidemic, the dynasty collapsed in 1644 as Li Zicheng's rebel forces entered Beijing. Li then established the Shun dynasty, but it was defeated shortly afterwards by the Manchu-led Eight Banner armies of the Qing dynasty, with the help of the defecting Ming general Wu Sangui.
History
Founding
Revolt and rebel rivalry
The Mongol-led Yuan dynasty (1271–1368) ruled before the establishment of the Ming. Explanations for the demise of the Yuan include institutionalized ethnic discrimination against the Han people that stirred resentment and rebellion, overtaxation of areas hard-hit by inflation, and massive flooding of the Yellow River as a result of the abandonment of irrigation projects. Consequently, agriculture and the economy were in shambles, and rebellion broke out among the hundreds of thousands of peasants called upon to work on repairing the levees of the Yellow River. A number of Han groups revolted, including the Red Turbans in 1351. The Red Turbans were affiliated with the White Lotus, a Buddhist secret society. Zhu Yuanzhang was a penniless peasant and Buddhist monk who joined the Red Turbans in 1352; he soon gained a reputation after marrying the foster daughter of a rebel commander. In 1356, Zhu's rebel force captured the city of Nanjing, which he would later establish as the capital of the Ming dynasty.
With the Yuan dynasty crumbling, competing rebel groups began fighting for control of the country and thus the right to establish a new dynasty. In 1363, Zhu Yuanzhang eliminated his archrival and leader of the rebel Han faction, Chen Youliang, in the Battle of Lake Poyang, arguably the largest naval battle in history. Known for its ambitious use of fire ships, Zhu's force of 200,000 Ming sailors were able to defeat a Han rebel force over triple their size, claimed to be 650,000-strong. The victory destroyed the last opposing rebel faction, leaving Zhu Yuanzhang in uncontested control of the bountiful Yangtze valley and cementing his power in the south. After the dynastic head of the Red Turbans suspiciously died in 1367 while a guest of Zhu, there was no one left who was remotely capable of contesting his march to the throne, and he made his imperial ambitions known by sending an army toward the Yuan capital Dadu (present-day Beijing) in 1368. The last Yuan emperor fled north to the upper capital Shangdu, and Zhu declared the founding of the Ming dynasty after razing the Yuan palaces in Dadu to the ground; the city was renamed Beiping in the same year. Zhu Yuanzhang took Hongwu, or "Vastly Martial", as his era name.
Reign of the Hongwu Emperor
Hongwu made an immediate effort to rebuild state infrastructure. He built a wall around Nanjing, as well as new palaces and government halls. The History of Ming states that as early as 1364 Zhu Yuanzhang had begun drafting a new Confucian law code, the Great Ming Code, which was completed by 1397 and repeated certain clauses found in the old Tang Code of 653. Hongwu organized a military system known as the weisuo, which was similar to the fubing system of the Tang dynasty (618–907).
In 1380 Hongwu had the Chancellor Hu Weiyong executed upon suspicion of a conspiracy plot to overthrow him; after that Hongwu abolished the Chancellery and assumed this role as chief executive and emperor, a precedent mostly followed throughout the Ming period. With a growing suspicion of his ministers and subjects, Hongwu established the Embroidered Uniform Guard, a network of secret police drawn from his own palace guard. Some 100,000 people were executed in a series of purges during his rule.
The Hongwu Emperor issued many edicts forbidding Mongol practices and proclaiming his intention to purify China of barbarian influence. However, he also sought to use the Yuan legacy to legitimize his authority in China and other areas ruled by the Yuan. He continued policies of the Yuan dynasty such as continued request for Korean concubines and eunuchs, Mongol-style hereditary military institutions, Mongol-style clothing and hats, promoting archery and horseback riding, and having large numbers of Mongols serve in the Ming military. Until the late 16th century, Mongols still constituted one-third of officers serving in capital forces like the Embroidered Uniform Guard, and other peoples such as Jurchens were also prominent. He frequently wrote to Mongol, Japanese, Korean, Jurchen, Tibetan, and Southwest frontier rulers offering advice on their governmental and dynastic policy, and insisted on leaders from these regions visiting the Ming capital for audiences. He resettled 100,000 Mongols into his territory, with many serving as guards in the capital. The emperor also strongly advertised the hospitality and role granted to Chinggisid nobles in his court.
Hongwu insisted that he was not a rebel, and he attempted to justify his conquest of the other rebel warlords by claiming that he was a Yuan subject and had been divinely-appointed to restore order by crushing rebels. Most Chinese elites did not view the Yuan's Mongol ethnicity as grounds to resist or reject it. Hongwu emphasised that he was not conquering territory from the Yuan dynasty but rather from the rebel warlords. He used this line of argument to attempt to persuade Yuan loyalists to join his cause. The Ming used the tribute they received from former Yuan vassals as proof that the Ming had taken over the Yuan's legitimacy. Tribute missions were regularly celebrated with music and dance in the Ming court.
South-Western frontier
Hui Muslim troops settled in Changde, Hunan, after serving the Ming in campaigns against aboriginal tribes. In 1381, the Ming dynasty annexed the areas of the southwest that had once been part of the Kingdom of Dali following the successful effort by Hui Muslim Ming armies to defeat Mongol and Hui Muslim troops loyal to the Yuan holding out in Yunnan. The Hui troops under General Mu Ying, who was appointed Governor of Yunnan, were resettled in the region as part of a colonization effort. By the end of the 14th century, some 200,000 military colonists settled some 2,000,000 mu (350,000 acres) of land in what is now Yunnan and Guizhou. Roughly half a million more Chinese settlers came in later periods; these migrations caused a major shift in the ethnic make-up of the region, since formerly more than half of the population were non-Han peoples. Resentment over such massive changes in population and the resulting government presence and policies sparked more Miao and Yao revolts in 1464 to 1466, which were crushed by an army of 30,000 Ming troops (including 1,000 Mongols) joining the 160,000 local Guangxi. After the scholar and philosopher Wang Yangming (1472–1529) suppressed another rebellion in the region, he advocated single, unitary administration of Chinese and indigenous ethnic groups in order to bring about sinicisation of the local peoples.
Campaign in the North-East
After the overthrow of the Yuan dynasty in 1368, Manchuria remained under control of the Northern Yuan based in Mongolia. Naghachu, a former Yuan official and a Uriankhai general of the Northern Yuan, won hegemony over the Mongol tribes in Manchuria (the former Yuan province of Liaoyang). He grew strong in the northeast, with forces large enough (numbering hundreds of thousands) to threaten invasion of the newly founded Ming dynasty in order to restore the Mongols to power in China. The Ming decided to defeat him instead of waiting for the Mongols to attack. In 1387 the Ming sent a military campaign to attack Naghachu, which concluded with the surrender of Naghachu and Ming conquest of Manchuria.
The early Ming court could not, and did not, aspire to the control imposed upon the Jurchens in Manchuria by the Mongols, yet it created a norm of organization that would ultimately serve as the main instrument for the relations with peoples along the northeast frontiers. By the end of the Hongwu reign, the essentials of a policy toward the Jurchens had taken shape. Most of the inhabitants of Manchuria, except for the Wild Jurchens, were at peace with China. In 1409, under the Yongle Emperor, the Ming established the Nurgan Regional Military Commission on the banks of the Amur River, and Yishiha, a eunuch of Haixi Jurchen origin, was ordered to lead an expedition to the mouth of the Amur to pacify the Wild Jurchens. After the death of Yongle Emperor, the Nurgan Regional Military Commission was abolished in 1435, and the Ming court ceased to have substantial activities there, although the guards continued to exist in Manchuria. Throughout its existence, the Ming established a total of 384 guards (, wei) and 24 battalions (, suo) in Manchuria, but these were probably only nominal offices and did not necessarily imply political control. By the late Ming period, Ming's political presence in Manchuria has declined significantly.
Relations with Tibet
A 17th-century Tibetan thangka of Guhyasamaja Akshobhyavajra; the Ming court gathered various tribute items that were native products of Tibet (such as thangkas), and in return granted gifts to Tibetan tribute-bearers.
The History of Ming—the official dynastic history compiled in 1739 by the subsequent Qing dynasty (1644–1912)—states that the Ming established itinerant commanderies overseeing Tibetan administration while also renewing titles of ex-Yuan dynasty officials from Tibet and conferring new princely titles on leaders of Tibetan Buddhist sects. However, Turrell V. Wylie states that censorship in the History of Ming in favor of bolstering the Ming emperor's prestige and reputation at all costs obfuscates the nuanced history of Sino-Tibetan relations during the Ming era.
Modern scholars debate whether the Ming had sovereignty over Tibet. Some believe it was a relationship of loose suzerainty that was largely cut off when the Jiajing Emperor () persecuted Buddhism in favor of Taoism at court. Others argue that the significant religious nature of the relationship with Tibetan lamas is underrepresented in modern scholarship. Others note the Ming need for Central Asian horses and the need to maintain the tea-horse trade.
The Ming sporadically sent armed forays into Tibet during the 14th century, which the Tibetans successfully resisted. Several scholars point out that unlike the preceding Mongols, the Ming did not garrison permanent troops in Tibet. The Wanli Emperor () attempted to reestablish Sino-Tibetan relations in the wake of a Mongol–Tibetan alliance initiated in 1578, an alliance which affected the foreign policy of the subsequent Qing dynasty in their support for the Dalai Lama of the Yellow Hat sect. By the late 16th century, the Mongols proved to be successful armed protectors of the Yellow Hat Dalai Lama after their increasing presence in the Amdo region, culminating in the conquest of Tibet by Güshi Khan (1582–1655) in 1642, establishing the Khoshut Khanate.
Reign of the Yongle Emperor
Rise to power
The Hongwu Emperor specified his grandson Zhu Yunwen as his successor, and he assumed the throne as the Jianwen Emperor after Hongwu's death in 1398. The most powerful of Hongwu's sons, Zhu Di, then the militarily mighty Prince of Yan, disagreed with this, and soon a political showdown erupted between him and his nephew Jianwen. After Jianwen arrested many of Zhu Di's associates, Zhu Di plotted a rebellion that sparked a three-year civil war. Under the pretext of rescuing the young Jianwen from corrupting officials, Zhu Di personally led forces in the revolt; the palace in Nanjing was burned to the ground, along with Jianwen himself, his wife, mother, and courtiers. Zhu Di assumed the throne as the Yongle Emperor; his reign is universally viewed by scholars as a "second founding" of the Ming dynasty since he reversed many of his father's policies.
New capital and foreign engagement
Yongle demoted Nanjing to a secondary capital and in 1403 announced the new capital of China was to be at his power base in Beijing. Construction of a new city there lasted from 1407 to 1420, employing hundreds of thousands of workers daily. At the center was the political node of the Imperial City, and at the center of this was the Forbidden City, the palatial residence of the emperor and his family. By 1553, the Outer City was added to the south, which enlarged the overall extent of Beijing.
Beginning in 1405, the Yongle Emperor entrusted his favored eunuch commander Zheng He (1371–1433) as the admiral for a gigantic new fleet of ships designated for international tributary missions. Among the kingdoms visited by Zheng He, Yongle proclaimed the Kingdom of Cochin to be its protectorate. The Chinese had sent diplomatic missions over land since the Han dynasty (202 BCE – 220 CE) and engaged in private overseas trade, but these missions were unprecedented in grandeur and scale. To service seven different tributary voyages, the Nanjing shipyards constructed two thousand vessels from 1403 to 1419, including treasure ships.
Yongle used woodblock printing to spread Chinese culture. He also used the military to expand China's borders. This included the brief occupation of Vietnam, from the initial invasion in 1406 until the Ming withdrawal in 1427 as a result of protracted guerrilla warfare led by Lê Lợi, the founder of the Vietnamese Lê dynasty.
Tumu Crisis and the Ming Mongols
The Oirat leader Esen Tayisi launched an invasion into Ming China in July 1449. The chief eunuch Wang Zhen encouraged the Zhengtong Emperor () to lead a force personally to face the Oirats after a recent Ming defeat; the emperor left the capital and put his half-brother Zhu Qiyu in charge of affairs as temporary regent. On 8 September, Esen routed Zhengtong's army, and Zhengtong was captured—an event known as the Tumu Crisis. The Oirats held the Zhengtong Emperor for ransom. However, this scheme was foiled once the emperor's younger brother assumed the throne under the era name Jingtai (); the Oirats were also repelled once the Jingtai Emperor's confidant and defense minister Yu Qian (1398–1457) gained control of the Ming armed forces. Holding the Zhengtong Emperor in captivity was a useless bargaining chip for the Oirats as long as another sat on his throne, so they released him back into Ming China. The former emperor was placed under house arrest in the palace until the coup against the Jingtai Emperor in 1457 known as the "Wresting the Gate Incident". The former emperor retook the throne under the new era name Tianshun ().
Tianshun proved to be a troubled time and Mongol forces within the Ming military structure continued to be problematic. On 7 August 1461, the Chinese general Cao Qin and his Ming troops of Mongol descent staged a coup against the Tianshun Emperor out of fear of being next on his purge-list of those who aided him in the Wresting the Gate Incident. Cao's rebel force managed to set fire to the western and eastern gates of the Imperial City (doused by rain during the battle) and killed several leading ministers before his forces were finally cornered and he was forced to commit suicide.
While the Yongle Emperor had staged five major offensives north of the Great Wall against the Mongols and the Oirats, the constant threat of Oirat incursions prompted the Ming authorities to fortify the Great Wall from the late 15th century to the 16th century; nevertheless, John Fairbank notes that "it proved to be a futile military gesture but vividly expressed China's siege mentality." Yet the Great Wall was not meant to be a purely defensive fortification; its towers functioned rather as a series of lit beacons and signalling stations to allow rapid warning to friendly units of advancing enemy troops.
Decline
Reign of the Wanli Emperor
The reign of the Wanli Emperor (1572–1620) featured many problems, some of them fiscal in nature. In the beginning of his reign, Wanli surrounded himself with able advisors and made a conscientious effort to handle state affairs. His Grand Secretary Zhang Juzheng (1572–1582) built up an effective network of alliances with senior officials. However, there was no one after him skilled enough to maintain the stability of these alliances; officials soon banded together in opposing political factions. Over time Wanli grew tired of court affairs and frequent political quarreling amongst his ministers, preferring to stay behind the walls of the Forbidden City and out of his officials' sight. Scholar-officials lost prominence in administration as eunuchs became intermediaries between the aloof emperor and his officials; any senior official who wanted to discuss state matters had to persuade powerful eunuchs with a bribe simply to have his demands or message relayed to the emperor. There were several military campaigns during the Wanli Emperor's reign: the Ordos campaign, the response to the Bozhou rebellion, and the Imjin War.
Role of eunuchs
The Hongwu Emperor forbade eunuchs to learn how to read or engage in politics. Whether or not these restrictions were carried out with absolute success in his reign, eunuchs during the Yongle Emperor's reign (1402–1424) and afterwards managed huge imperial workshops, commanded armies, and participated in matters of appointment and promotion of officials. Yongle put 75 eunuchs in charge of foreign policy; they traveled frequently to vassal states including Annam, Mongolia, the Ryukyu Islands, and Tibet and less frequently to farther-flung places like Japan and Nepal. In the later 15th century, however, eunuch envoys generally only traveled to Korea.
The eunuchs developed their own bureaucracy that was organized parallel to but was not subject to the civil service bureaucracy. Although there were several dictatorial eunuchs throughout the Ming, such as Wang Zhen, Wang Zhi, and Liu Jin, excessive tyrannical eunuch power did not become evident until the 1590s when the Wanli Emperor increased their rights over the civil bureaucracy and granted them power to collect provincial taxes.
The eunuch Wei Zhongxian (1568–1627) dominated the court of the Tianqi Emperor and had his political rivals tortured to death, mostly the vocal critics from the faction of the Donglin Society. He ordered temples built in his honor throughout the Ming Empire, and built personal palaces with funds allocated for building the previous emperor's tombs. His friends and family gained important positions without qualifications. Wei also published a historical work lambasting and belittling his political opponents. The instability at court came right as natural calamity, pestilence, rebellion, and foreign invasion came to a peak. The Chongzhen Emperor had Wei dismissed from court, which led to Wei's suicide shortly after.
The eunuchs built their own social structure, providing and gaining support to their birth clans. Instead of fathers promoting sons, it was a matter of uncles promoting nephews. The Heishanhui Society in Peking sponsored the temple that conducted rituals for worshiping the memory of Gang Tie, a powerful eunuch of the Yuan dynasty. The Temple became an influential base for highly placed eunuchs, and continued in a somewhat diminished role during the Qing dynasty.
Economic breakdown and natural disasters
During the last years of the Wanli era and those of his two successors, an economic crisis developed that was centered on a sudden widespread lack of the empire's chief medium of exchange: silver. The Portuguese first established trade with China in 1516. Following the Ming Emperor's decision to ban direct trade with Japan, Portuguese traders acted as an intermediary between China and Japan by buying Chinese silks from China and selling it to Japan for silver. After some initial hostilities the Portuguese gained consent from the Ming court in 1557 to settle Macau as their permanent trade base in China. Their role in providing silver was gradually surpassed by the Spanish, while even the Dutch challenged them for control of this trade. Philip IV of Spain () began cracking down on illegal smuggling of silver from New Spain and Peru across the Pacific through the Philippines towards China, in favor of shipping silver mined in the Spanish Latin American colonies through Spanish ports. People began hoarding precious silver as there was progressively less of it, forcing the ratio of the value of copper to silver into a steep decline. In the 1630s a string of one thousand copper coins equaled an ounce of silver; by 1640 that sum could fetch half an ounce; and, by 1643 only one-third of an ounce. For peasants this meant economic disaster, since they paid taxes in silver while conducting local trade and crop sales in copper. Historians have debated the validity of the theory that silver shortages caused the downfall of the Ming dynasty.
Famines became common in northern China in the early 17th century because of unusually dry and cold weather that shortened the growing season—effects of a larger ecological event now known as the Little Ice Age. Famine, alongside tax increases, widespread military desertions, a declining relief system, natural disasters such as flooding, and the inability of the government to properly manage irrigation and flood-control projects, caused widespread loss of life and a breakdown of normal civility. The central government, starved of resources, could do very little to mitigate the effects of these calamities. Making matters worse, a widespread epidemic, the Great Plague of 1633–1644, spread across China from Zhejiang to Henan, killing an unknown but large number of people. One of the deadliest earthquakes of all time, the Shaanxi earthquake of 1556, occurred during the Jiajing Emperor's reign, killing approximately 830,000 people.
Fall of the Ming
Rise of the Manchus
Nurhaci, leader of the Jianzhou Jurchens, was originally a Ming vassal who officially considered himself a guardian of the Ming border and a local representative of imperial Ming power (The Cambridge History of China: Volume 9, The Ch'ing Empire to 1800, Part 1, by Denis C. Twitchett and John K. Fairbank, p. 29). He unified other Jurchen clans to create a new Manchu ethnic identity. He offered to lead his armies to support Ming and Joseon armies against the Japanese invasions of Korea in the 1590s. Ming officials declined the offer, but granted him the title of dragon-tiger general for his gesture. Recognizing the weakness of Ming authority in Manchuria at the time, he consolidated power by co-opting or conquering surrounding territories. In 1616 he declared himself Khan and established the Later Jin dynasty, in reference to the previous Jurchen-ruled Jin dynasty. In 1618, he openly renounced Ming overlordship and effectively declared war against the Ming with the "Seven Grievances".
In 1636, Nurhaci's son Hong Taiji renamed his dynasty the "Great Qing" at Mukden (modern Shenyang), which had been made their capital in 1625. Hong Taiji also adopted the Chinese imperial title huangdi, declared the Chongde ("Revering Virtue") era, and changed the ethnic name of his people from "Jurchen" to "Manchu". In 1636, Banner Armies defeated Joseon during the Second Manchu invasion of Korea and forced Joseon to become a Qing tributary. Shortly after, the Koreans renounced their long-held loyalty to the Ming dynasty.
Rebellion, invasion, collapse
A peasant soldier named Li Zicheng mutinied with his fellow soldiers in western Shaanxi in the early 1630s after the Ming government failed to ship much-needed supplies there. In 1634 he was captured by a Ming general and released only on the terms that he return to service. The agreement soon broke down when a local magistrate had thirty-six of his fellow rebels executed; Li's troops retaliated by killing the officials and continued to lead a rebellion based in Rongyang, Henan by 1635. By the 1640s, an ex-soldier and rival to Li—Zhang Xianzhong (1606–1647)—had created a firm rebel base in Chengdu, Sichuan, with the establishment of the Xi dynasty, while Li's center of power was in Hubei with extended influence over Shaanxi and Henan.
In 1640, masses of Chinese peasants who were starving, unable to pay their taxes, and no longer in fear of the frequently defeated Chinese army, began to form into huge bands of rebels. The Chinese military, caught between fruitless efforts to defeat the Manchu raiders from the north and huge peasant revolts in the provinces, essentially fell apart. Unpaid and unfed, the army was defeated by Li Zicheng—now self-styled as the Prince of Shun—and deserted the capital without much of a fight. On 25 April 1644, Beijing fell to a rebel army led by Li Zicheng when the city gates were opened by rebel allies from within. During the turmoil, Chongzhen, the last Ming emperor, accompanied only by a eunuch servant, hanged himself on a tree in the imperial garden right outside the Forbidden City.
Seizing opportunity, the Eight Banners crossed the Great Wall after the Ming border general Wu Sangui (1612–1678) opened the gates at Shanhai Pass. This occurred shortly after he learned about the fate of the capital and an army of Li Zicheng marching towards him; weighing his options of alliance, he decided to side with the Manchus. The Eight Banners under the Manchu Prince Dorgon (1612–1650) and Wu Sangui approached Beijing after the army sent by Li was destroyed at Shanhaiguan; the Prince of Shun's army fled the capital on the fourth of June. On 6 June, the Manchus and Wu entered the capital and proclaimed the young Shunzhi Emperor ruler of China. After being forced out of Xi'an by the Qing, chased along the Han River to Wuchang, and finally along the northern border of Jiangxi, Li Zicheng died there in the summer of 1645, thus ending the Shun dynasty. One report says his death was a suicide; another states that he was beaten to death by peasants after he was caught stealing their food.
Despite the loss of Beijing and the death of the emperor, the Ming were not yet totally destroyed. Nanjing, Fujian, Guangdong, Shanxi, and Yunnan were all strongholds of Ming resistance. However, there were several pretenders for the Ming throne, and their forces were divided. These scattered Ming remnants in southern China after 1644 were collectively designated by 19th-century historians as the Southern Ming. Each bastion of resistance was individually defeated by the Qing until 1662, when the last Southern Ming emperor, Zhu Youlang, the Yongli Emperor, was captured and executed. In 1683, the Qing forces conquered Taiwan and dismantled the Kingdom of Tungning, which had been established by Zheng Chenggong and was the final stronghold of forces loyal to the Ming dynasty.
Government
Province, prefecture, sub-prefecture and county
Described as "one of the greatest eras of orderly government and social stability in human history" by Edwin O. Reischauer, John K. Fairbank and Albert M. Craig, the Ming emperors took over the provincial administration system of the Yuan dynasty, and the thirteen Ming provinces are the precursors of the modern provinces. Throughout the Song dynasty, the largest political division was the circuit. However, after the Jurchen invasion in 1127, the Song court established four semi-autonomous regional command systems based on territorial and military units, with a detached service secretariat that would become the provincial administrations of the Yuan, Ming, and Qing dynasties. Copied on the Yuan model, the Ming provincial bureaucracy contained three commissions: one civil, one military, and one for surveillance. Below the level of the province were prefectures operating under a prefect (zhifu 知府), followed by subprefectures under a subprefect. The lowest unit was the county, overseen by a magistrate. Besides the provinces, there were also two large areas that belonged to no province, but were metropolitan areas attached to Nanjing and Beijing.
Institutions and bureaus
Institutional trends
Departing from the main central administrative system generally known as the Three Departments and Six Ministries system, which was instituted by various dynasties since the late Han (202 BCE – 220 CE), the Ming administration had only one department, the Secretariat, that controlled the six ministries. Following the execution of the Chancellor Hu Weiyong in 1380, the Hongwu Emperor abolished the Secretariat, the Censorate, and the Chief Military Commission and personally took charge of the Six Ministries and the regional Five Military Commissions. Thus a whole level of administration was cut out and only partially rebuilt by subsequent rulers. The Grand Secretariat, at the beginning a secretarial institution that assisted the emperor with administrative paperwork, was instituted, but without employing grand counselors, or chancellors.
The Hongwu Emperor sent his heir apparent to Shaanxi in 1391 to 'tour and soothe' (xunfu) the region; in 1421 the Yongle Emperor commissioned 26 officials to travel the empire and uphold similar investigatory and patrimonial duties. By 1430 these xunfu assignments became institutionalized as "grand coordinators". Hence, the Censorate was reinstalled and first staffed with investigating censors, later with censors-in-chief. By 1453, the grand coordinators were granted the title vice censor-in-chief or assistant censor-in-chief and were allowed direct access to the emperor. As in prior dynasties, the provincial administrations were monitored by a travelling inspector from the Censorate. Censors had the power to impeach officials on an irregular basis, unlike the senior officials who were to do so only in triennial evaluations of junior officials.
Although decentralization of state power within the provinces occurred in the early Ming, the trend of central government officials delegated to the provinces as virtual provincial governors began in the 1420s. By the late Ming dynasty, there were central government officials delegated to two or more provinces as supreme commanders and viceroys, a system which reined in the power and influence of the military by the civil establishment.
Grand Secretariat and Six Ministries
Governmental institutions in China conformed to a similar pattern for some two thousand years, but each dynasty installed special offices and bureaus, reflecting its own particular interests. The Ming administration utilized Grand Secretaries to assist the emperor: under the Yongle Emperor they handled his paperwork, and under the Hongxi Emperor they were later appointed as top officials of agencies and as Grand Preceptor, a top-ranking, non-functional civil service post. The Grand Secretariat drew its members from the Hanlin Academy, and they were considered part of the imperial authority, not the ministerial one (hence being at odds with both the emperor and ministers at times). The Secretariat operated as a coordinating agency, whereas the Six Ministries—Personnel, Revenue, Rites, War, Justice, and Public Works—were direct administrative organs of the state:
The Ministry of Personnel was in charge of appointments, merit ratings, promotions, and demotions of officials, as well as granting of honorific titles.
The Ministry of Revenue was in charge of gathering census data, collecting taxes, and handling state revenues, while there were two offices of currency that were subordinate to it.
The Ministry of Rites was in charge of state ceremonies, rituals, and sacrifices; it also oversaw registers for Buddhist and Daoist priesthoods and even the reception of envoys from tributary states.
The Ministry of War was in charge of the appointments, promotions, and demotions of military officers, the maintenance of military installations, equipment, and weapons, as well as the courier system.
The Ministry of Justice was in charge of judicial and penal processes, but had no supervisory role over the Censorate or the Grand Court of Revision.
The Ministry of Public Works had charge of government construction projects, hiring of artisans and laborers for temporary service, manufacturing government equipment, the maintenance of roads and canals, standardization of weights and measures, and the gathering of resources from the countryside.
Bureaus and offices for the imperial household
The imperial household was staffed almost entirely by eunuchs and ladies with their own bureaus. Female servants were organized into the Bureau of Palace Attendance, Bureau of Ceremonies, Bureau of Apparel, Bureau of Foodstuffs, Bureau of the Bedchamber, Bureau of Handicrafts, and Office of Staff Surveillance. Starting in the 1420s, eunuchs began taking over these ladies' positions until only the Bureau of Apparel with its four subsidiary offices remained. Hongwu had his eunuchs organized into the Directorate of Palace Attendants, but as eunuch power at court increased, so did their administrative offices, eventually reaching twelve directorates, four offices, and eight bureaus. The dynasty had a vast imperial household, staffed with thousands of eunuchs, who were headed by the Directorate of Palace Attendants. The eunuchs were divided into different directorates in charge of staff surveillance, ceremonial rites, food, utensils, documents, stables, seals, apparel, and so on. The offices were in charge of providing fuel, music, paper, and baths. The bureaus were in charge of weapons, silverwork, laundering, headgear, bronze work, textile manufacture, wineries, and gardens. At times, the most influential eunuch in the Directorate of Ceremonial acted as de facto dictator over the state.
Although the imperial household was staffed mostly by eunuchs and palace ladies, there was a civil service office called the Seal Office, which cooperated with eunuch agencies in maintaining imperial seals, tallies, and stamps. There were also civil service offices to oversee the affairs of imperial princes.
Personnel
Scholar-officials
From 1373 to 1384, the Hongwu Emperor staffed his bureaus with officials gathered through recommendations only. After that, the scholar-officials who populated the many ranks of bureaucracy were recruited through a rigorous examination system that had initially been established by the Sui dynasty (581–618). Theoretically the system of exams allowed anyone to join the ranks of imperial officials (although it was frowned upon for merchants to join); in reality the time and funding needed to support the study in preparation for the exam generally limited participants to those already coming from the landholding class. However, the government did enforce provincial quotas while drafting officials. This was an effort to curb monopolization of power by landholding gentry who came from the most prosperous regions, where education was the most advanced. The expansion of the printing industry since Song times enhanced the spread of knowledge and the number of potential exam candidates throughout the provinces. For young schoolchildren there were printed multiplication tables and primers for elementary vocabulary; for adult examination candidates there were mass-produced, inexpensive volumes of Confucian classics and successful examination answers.
As in earlier periods, the focus of the examination was classical Confucian texts, while the bulk of test material centered on the Four Books outlined by Zhu Xi in the 12th century. Ming era examinations were perhaps more difficult to pass after 1487, when candidates were required to complete the "eight-legged essay", a departure from essays grounded in evolving literary trends. The exams increased in difficulty as the student progressed from the local level, and appropriate titles were accordingly awarded to successful applicants. Officials were classified in nine hierarchic grades, each grade divided into two degrees, with salaries (nominally paid in piculs of rice) that varied according to their rank. While provincial graduates who were appointed to office were immediately assigned to low-ranking posts like the county graduates, those who passed the palace examination were awarded a jinshi ('presented scholar') degree and assured a high-level position. Over the 276 years of Ming rule and ninety palace examinations, 24,874 doctoral degrees were granted. Ebrey states that "there were only two to four thousand of these jinshi at any given time, on the order of one out of 10,000 adult males." This was in comparison to the 100,000 shengyuan ('government students'), the lowest tier of graduates, by the 16th century.
The maximum tenure in office was nine years, but every three years officials were graded on their performance by senior officials. If they were graded as superior then they were promoted, if graded adequate then they retained their ranks, and if graded inadequate they were demoted one rank. In extreme cases, officials would be dismissed or punished. Only capital officials of grade 4 and above were exempt from the scrutiny of recorded evaluation, although they were expected to confess any of their faults. There were over 4,000 school instructors in county and prefectural schools who were subject to evaluations every nine years. The Chief Instructor on the prefectural level was classified as equal to a second-grade county graduate. The Supervisorate of Imperial Instruction oversaw the education of the heir apparent to the throne; this office was headed by a Grand Supervisor of Instruction, who was ranked as first class of grade three.
Historians debate whether the examination system expanded or contracted upward social mobility. On the one hand, the exams were graded without regard to a candidate's social background, and were theoretically open to everyone. In actual practice, the successful candidates had years of very expensive, sophisticated tutoring of the sort that wealthy gentry families specialized in providing their talented sons. In practice, 90 percent of the population was ineligible due to lack of education, but the upper 10 percent had equal chances for moving to the top. To be successful, young men had to have extensive, expensive training in classical Chinese, the use of Mandarin in spoken conversation, and calligraphy, and had to master the intricate poetic requirements of the eight-legged essay. Not only did the traditional gentry dominate the system, they also learned that conservatism and resistance to new ideas was the path to success. For centuries critics had pointed out these problems, but the examination system only became more abstract and less relevant to the needs of China. The consensus of scholars is that the eight-legged essay can be blamed as a major cause of "China's cultural stagnation and economic backwardness." However, Benjamin Elman argues there were some positive features, since the essay form was capable of fostering "abstract thinking, persuasiveness, and prosodic form" and its elaborate structure discouraged "a wandering, unfocused narrative".
Lesser functionaries
Scholar-officials who entered civil service through examinations acted as executive officials to a much larger body of non-ranked personnel called lesser functionaries. They outnumbered officials by four to one; Charles Hucker estimates that they were perhaps as many as 100,000 throughout the empire. These lesser functionaries performed clerical and technical tasks for government agencies. Yet they should not be confused with lowly lictors, runners, and bearers; lesser functionaries were given periodic merit evaluations like officials and after nine years of service might be accepted into a low civil service rank. The one great advantage of the lesser functionaries over officials was that officials were periodically rotated and assigned to different regional posts and had to rely on the good service and cooperation of the local lesser functionaries.
Eunuchs, princes, and generals
Eunuchs gained unprecedented power over state affairs during the Ming dynasty. One of their most effective means of control was the secret service, stationed in what was called the Eastern Depot at the beginning of the dynasty and later also in the Western Depot. This secret service was overseen by the Directorate of Ceremonial, which is why that organ of state is often associated with despotic control. Eunuchs had ranks that were equivalent to civil service ranks, only theirs had four grades instead of nine.
Descendants of the first Ming emperor were made princes and given (typically nominal) military commands, annual stipends, and large estates. The title used was "king" (wáng) but—unlike the princes in the Han and Jin dynasties—these estates were not feudatories, the princes did not serve any administrative function, and they partook in military affairs only during the reigns of the first two emperors. The rebellion of the Prince of Yan was justified in part as upholding the rights of the princes, but once the Yongle Emperor was enthroned, he continued his nephew's policy of disarming his brothers and moved their fiefs away from the militarized northern border. Although princes served no organ of state administration, the princes, consorts of the imperial princesses, and ennobled relatives did staff the Imperial Clan Court, which supervised the imperial genealogy.
Like scholar-officials, military generals were ranked in a hierarchic grading system and were given merit evaluations every five years (as opposed to three years for officials). However, military officers had less prestige than officials. This was due to their hereditary service (instead of solely merit-based appointment) and to Confucian values that looked down on those who chose the profession of violence (wu) over the cultured pursuits of knowledge (wen). Although seen as less prestigious, military officers were not excluded from taking civil service examinations, and after 1478 the military even held their own examinations to test military skills. In addition to taking over the established bureaucratic structure from the Yuan period, the Ming emperors established the new post of the travelling military inspector. In the early half of the dynasty, men of noble lineage dominated the higher ranks of military office; this trend was reversed during the latter half of the dynasty as men from more humble origins eventually displaced them.
Society and culture
Literature and arts
Literature, painting, poetry, music, and Chinese opera of various types flourished during the Ming dynasty, especially in the economically prosperous lower Yangzi valley. Although short fiction had been popular as far back as the Tang dynasty (618–907), and the works of contemporaneous authors such as Xu Guangqi, Xu Xiake, and Song Yingxing were often technical and encyclopedic, the most striking literary development was the vernacular novel. While the gentry elite were educated enough to fully comprehend the language of Classical Chinese, those with rudimentary education—such as women in educated families, merchants, and shop clerks—became a large potential audience for literature and performing arts that employed Vernacular Chinese. Literati scholars edited or developed major Chinese novels into mature form in this period, such as Water Margin and Journey to the West. Jin Ping Mei, published in 1610, although incorporating earlier material, marks the trend toward independent composition and concern with psychology. In the later years of the dynasty, Feng Menglong and Ling Mengchu innovated with vernacular short fiction. Theater scripts were equally imaginative. The most famous, The Peony Pavilion, was written by Tang Xianzu (1550–1616), with its first performance at the Pavilion of Prince Teng in 1598.
Informal essay and travel writing was another highlight. Xu Xiake (1587–1641), a travel literature author, published his Travel Diaries in 404,000 written characters, with information on everything from local geography to mineralogy. The first reference to the publishing of private newspapers in Beijing was in 1582; by 1638 the Peking Gazette switched from using woodblock print to movable type printing. The new literary field of the moral guide to business ethics was developed during the late Ming period, for the readership of the merchant class.
In contrast to Xu Xiake, who focused on technical aspects in his travel literature, the Chinese poet and official Yuan Hongdao (1568–1610) used travel literature to express his desires for individualism as well as autonomy from and frustration with Confucian court politics. Yuan desired to free himself from the ethical compromises that were inseparable from the career of a scholar-official. This anti-official sentiment in Yuan's travel literature and poetry was actually following in the tradition of the Song dynasty poet and official Su Shi (1037–1101). Yuan Hongdao and his two brothers, Yuan Zongdao (1560–1600) and Yuan Zhongdao (1570–1623), were the founders of the Gong'an School of letters. This highly individualistic school of poetry and prose was criticized by the Confucian establishment for its association with intense sensual lyricism, which was also apparent in Ming vernacular novels such as the Jin Ping Mei. Yet even gentry and scholar-officials were affected by the new popular romantic literature, seeking gejis as soulmates to re-enact the heroic love stories that arranged marriages often could not provide or accommodate. During the Ming, some gentry dated well-educated gejis outside of marriage and the concubine system. Gējì culture reshaped the purely sexual relationship with prostitutes into a cultural relationship, and men could even become friends with like-minded gejis.
Famous painters included Ni Zan and Dong Qichang, as well as the Four Masters of the Ming dynasty, Shen Zhou, Tang Yin, Wen Zhengming, and Qiu Ying. They drew upon the techniques, styles, and complexity in painting achieved by their Song and Yuan predecessors, while adding new techniques and styles of their own. Well-known Ming artists could make a living simply by painting, owing to the high prices their works commanded and the eagerness of the highly cultured community to collect precious works of art. The artist Qiu Ying was once paid 2.8 kg (100 oz) of silver to paint a long handscroll for the eightieth birthday celebration of the mother of a wealthy patron. Renowned artists often gathered an entourage of followers, some of whom were amateurs who painted while pursuing an official career and others who were full-time painters.
The period was also renowned for ceramics and porcelains. The major production center for porcelain was the imperial kilns at Jingdezhen in Jiangxi, most famous in the period for blue and white porcelain, but also producing other styles. The Dehua porcelain factories in Fujian catered to European tastes by creating Chinese export porcelain by the late 16th century. Individual potters also became known, such as He Chaozong, who became famous in the early 17th century for his style of white porcelain sculpture. In The Ceramic Trade in Asia, Chuimei Ho estimates that about 16% of late Ming era Chinese ceramic exports were sent to Europe, while the rest were destined for Japan and South East Asia.
Carved designs in lacquerware and designs glazed onto porcelain wares displayed intricate scenes similar in complexity to those in painting. These items could be found in the homes of the wealthy, alongside embroidered silks and wares in jade, ivory, and cloisonné. The houses of the rich were also furnished with rosewood furniture and feathery latticework. The writing materials in a scholar's private study, including elaborately carved brush holders made of stone or wood, were designed and arranged ritually to give an aesthetic appeal.
Connoisseurship in the late Ming period centered on these items of refined artistic taste, which provided work for art dealers and even for underground scammers who made imitations and false attributions. The Jesuit Matteo Ricci, while staying in Nanjing, wrote that Chinese scam artists were ingenious at making forgeries and reaping huge profits. However, there were guides to help the wary new connoisseur; Liu Tong (died 1637) wrote a book printed in 1635 that told his readers how to spot fake and authentic pieces of art. He revealed that a Xuande era (1426–1435) bronze work could be authenticated by judging its sheen, and that porcelain wares from the Yongle era (1402–1424) could be judged authentic by their thickness.
Religion
The dominant religious beliefs during the Ming dynasty were the various forms of Chinese folk religion and the Three Teachings—Confucianism, Taoism, and Buddhism. The Yuan-supported Tibetan lamas fell from favor, and the early Ming emperors particularly favored Taoism, granting its practitioners many positions in the state's ritual offices. The Hongwu Emperor curtailed the cosmopolitan culture of the Mongol Yuan dynasty, and the prolific Prince of Ning Zhu Quan even composed one encyclopedia attacking Buddhism as a foreign "mourning cult", deleterious to the state, and another encyclopedia that subsequently joined the Taoist canon.
The Yongle Emperor and his successors strongly patronised Tibetan Buddhism by supporting construction projects, the printing of sutras, and ceremonies in order to seek legitimacy among foreign audiences. Yongle tried to portray himself as a Buddhist ideal king, a cakravartin. There is evidence that this portrayal was successful in persuading foreign audiences.
Islam was also well-established throughout China, with a history said to have begun with Sa'd ibn Abi Waqqas during the Tang and strong official support during the Yuan. Although the Ming sharply curtailed this support, there were still several prominent Muslim figures early on, including the powerful eunuch Zheng He. The Hongwu Emperor's generals Chang Yuqun, Lan Yu, Ding Dexing, and Mu Ying have also been identified as Muslim by Hui scholars, though this is doubted by non-Muslim sources. Regardless, the presence of Muslims in the armies that drove the Mongols northwards caused a gradual shift in the Chinese perception of Muslims, transitioning from "foreigners" to "familiar strangers". The Hongwu Emperor wrote The Hundred-word Eulogy praising Islam and Muhammad. Ming emperors strongly sponsored the construction of mosques and granted generous liberties for the practice of Islam.
The advent of the Ming was initially devastating to Christianity: in his first year, the Hongwu Emperor declared the eighty-year-old Franciscan missions among the Yuan heterodox and illegal. The centuries-old Church of the East in China also disappeared. During the later Ming, a new wave of Christian missionaries arrived—particularly Jesuits—who employed new western science and technology in their arguments for conversion. They were educated in Chinese language and culture at St. Paul's College, Macau after its founding in 1579. The most influential was Matteo Ricci, whose "Map of the Myriad Countries of the World" upended traditional geography throughout East Asia, and whose work with the convert Xu Guangqi led to the first Chinese translation of Euclid's Elements in 1607. The discovery of the Xi'an Stele in 1625 also facilitated the treatment of Christianity as a long-established faith in China, rather than as a new and dangerous cult. However, there were strong disagreements about the extent to which converts could continue to perform rituals to the emperor, Confucius, or their ancestors: Ricci had been very accommodating and an attempt by his successors to backtrack from this policy led to the Nanjing Incident of 1616, which exiled four Jesuits to Macau and forced the others out of public life for six years. A series of spectacular failures by the Chinese astronomers—including missing an eclipse easily computed by Xu Guangqi and Sabatino de Ursis—and a return by the Jesuits to presenting themselves as educated scholars in the Confucian mold restored their fortunes. However, by the end of the Ming the Dominicans had begun the Chinese Rites controversy in Rome that would eventually lead to a full ban of Christianity under the Qing.
During his mission, Ricci was also contacted in Beijing by one of the approximately 5,000 Kaifeng Jews and introduced them and their long history in China to Europe. However, the 1642 flood caused by Kaifeng's Ming governor devastated the community, which lost five of its twelve families, its synagogue, and most of its Torah.
Philosophy
Wang Yangming's Confucianism
During the Ming dynasty, the Neo-Confucian doctrines of the Song scholar Zhu Xi were embraced by the court and the Chinese literati at large, although the direct line of his school was destroyed by the Yongle Emperor's extermination of the ten degrees of kinship of Fang Xiaoru in 1402. The Ming scholar most influential upon subsequent generations, however, was Wang Yangming (1472–1529), whose teachings were attacked in his own time for their similarity to Chan Buddhism. Building upon Zhu Xi's concept of the "extension of knowledge", gaining understanding through careful and rational investigation of things and events, Wang argued that universal concepts would appear in the minds of anyone. Therefore, he claimed that anyone—no matter their pedigree or education—could become as wise as Confucius and Mencius had been and that their writings were not sources of truth but merely guides that might have flaws when carefully examined. A peasant with a great deal of experience and intelligence would then be wiser than an official who had memorized the Classics but not experienced the real world.
Conservative reaction
Other scholar-bureaucrats were wary of Wang's heterodoxy, the increasing number of his disciples while he was still in office, and his overall socially rebellious message. To curb his influence, he was often sent out to deal with military affairs and rebellions far away from the capital. Yet his ideas penetrated mainstream Chinese thought and spurred new interest in Taoism and Buddhism. Furthermore, people began to question the validity of the social hierarchy and the idea that the scholar should be above the farmer. Wang Yangming's disciple and salt-mine worker Wang Gen gave lectures to commoners about pursuing education to improve their lives, while his follower He Xinyin challenged the elevation and emphasis of the family in Chinese society. His contemporary Li Zhi even taught that women were the intellectual equals of men and should be given a better education; both Li and He eventually died in prison, jailed on charges of spreading "dangerous ideas". Yet these "dangerous ideas" of educating women had long been embraced by some mothers and by courtesans who were as literate and skillful in calligraphy, painting, and poetry as their male guests.
The liberal views of Wang Yangming were opposed by the Censorate and by the Donglin Academy, re-established in 1604. These conservatives wanted a revival of orthodox Confucian ethics. Conservatives such as Gu Xiancheng (1550–1612) argued against Wang's idea of innate moral knowledge, stating that this was simply a legitimization for unscrupulous behavior such as greedy pursuits and personal gain. These two strands of Confucian thought, hardened by Chinese scholars' notions of obligation towards their mentors, developed into pervasive factionalism among the ministers of state, who used any opportunity to impeach members of the other faction from court.
Urban and rural life
[Image: A Ming dynasty red "seal paste box" in carved lacquer]
Wang Gen was able to give philosophical lectures to many commoners from different regions because—following the trend already apparent in the Song dynasty—communities in Ming society were becoming less isolated as the distance between market towns was shrinking. Schools, descent groups, religious associations, and other local voluntary organizations were increasing in number and allowing more contact between educated men and local villagers. Jonathan Spence writes that the distinction between what was town and country was blurred, since suburban areas with farms were located just outside and in some cases within the walls of a city. Not only was the blurring of town and country evident, but also of socioeconomic class in the traditional four occupations, since artisans sometimes worked on farms in peak periods, and farmers often traveled into the city to find work during times of dearth.
A variety of occupations could be chosen or inherited from a father's line of work. These included coffin makers, ironworkers and blacksmiths, tailors, cooks and noodle-makers, retail merchants, tavern, teahouse, or winehouse managers, shoemakers, seal cutters, pawnshop owners, brothel heads, and merchant bankers engaging in a proto-banking system involving notes of exchange. Virtually every town had a brothel where female and male prostitutes could be had. Male catamites fetched a higher price than female concubines, since pederasty with a teenage boy was seen as a mark of elite status, even though sodomy was regarded as repugnant to sexual norms. Public bathing became much more common than in earlier periods. Urban shops and retailers sold a variety of goods such as special paper money to burn at ancestral sacrifices, specialized luxury goods, headgear, fine cloth, teas, and others. Smaller communities and townships too poor or scattered to support shops and artisans obtained their goods from periodic market fairs and traveling peddlers. A small township also provided a place for simple schooling, news and gossip, matchmaking, religious festivals, traveling theater groups, tax collection, and bases of famine relief distribution.
Farming villagers in the north spent their days harvesting crops like wheat and millet, while farmers south of the Huai River engaged in intensive rice cultivation and had lakes and ponds where ducks and fish could be raised. The cultivation of mulberry trees for silkworms and tea bushes could be found mostly south of the Yangtze; even further south, sugarcane and citrus were grown as basic crops. Some people in the mountainous southwest made a living by selling lumber from hard bamboo. Besides cutting down trees to sell wood, the poor also made a living by turning wood into charcoal, by burning oyster shells to make lime, by firing pots, and by weaving mats and baskets. In the north, traveling by horse and carriage was most common, while in the south the myriad of rivers, canals, and lakes provided cheap and easy water transport. Although the south was characterized by wealthy landlords and tenant farmers, north of the Huai River the harsher climate meant that there were on average many more owner-cultivators, who lived not far above subsistence level.
The early Ming dynasty saw the strictest sumptuary laws in Chinese history. It was illegal for commoners to wear fine silk or to dress in bright red, dark green, or yellow colors; nor could they wear boots or guan hats. Women could not use ornaments made from gold, jade, pearl, or emerald. Merchants and their families were further banned from using silk. However, these laws were no longer enforced from the mid-Ming period onwards.
Male homosexual marriages were institutionalized in several areas, such as Fujian. Homosexuality was practiced frequently by monks, and spread to Japan with Kukai, a Japanese monk who trained in China.
Science and technology
After the flourishing of science and technology in the Song dynasty, the Ming perhaps saw fewer advancements in science and technology compared to the pace of discovery in the Western world. In fact, key advances in Chinese science in the late Ming were spurred by contact with Europe. In 1626 Johann Adam Schall von Bell wrote the first Chinese treatise on the telescope, the Yuanjingshuo (Far Seeing Optic Glass); in 1634 the Chongzhen Emperor acquired the telescope of the late Johann Schreck (1576–1630). The heliocentric model of the Solar System was rejected by the Catholic missionaries in China, but Johannes Kepler and Galileo Galilei's ideas slowly trickled into China starting with the Polish Jesuit Michael Boym (1612–1659) in 1627, Adam Schall von Bell's treatise in 1640, and finally Joseph Edkins, Alex Wylie, and John Fryer in the 19th century. Catholic Jesuits in China would promote Copernican theory at court, yet at the same time embrace the Ptolemaic system in their writing; it was not until 1865 that Catholic missionaries in China sponsored the heliocentric model as their Protestant peers did. Although Shen Kuo (1031–1095) and Guo Shoujing (1231–1316) had laid the basis for trigonometry in China, another important work in Chinese trigonometry would not be published again until 1607 with the efforts of Xu Guangqi and Matteo Ricci. Some inventions which had their origins in ancient China were reintroduced to China from Europe during the late Ming; for example, the field mill.
By the 16th century the Chinese calendar was in need of reform. Although the Ming had adopted Guo Shoujing's Shoushi calendar of 1281, which was just as accurate as the Gregorian calendar, the Ming Directorate of Astronomy failed to periodically readjust it; this was perhaps due to their lack of expertise since their offices had become hereditary in the Ming and the Statutes of the Ming prohibited private involvement in astronomy. A sixth-generation descendant of the Hongxi Emperor, the "Prince" Zhu Zaiyu (1536–1611), submitted a proposal to fix the calendar in 1595, but the ultra-conservative astronomical commission rejected it. This was the same Zhu Zaiyu who discovered the system of tuning known as equal temperament, a discovery made simultaneously by Simon Stevin (1548–1620) in Europe. In addition to publishing his works on music, he was able to publish his findings on the calendar in 1597. A year earlier, the memorial of Xing Yunlu suggesting a calendar improvement was rejected by the Supervisor of the Astronomical Bureau due to the law banning private practice of astronomy; Xing would later serve with Xu Guangqi to reform the calendar according to Western standards in 1629.
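For readers unfamiliar with the term, twelve-tone equal temperament divides the octave into twelve semitones of identical frequency ratio; the relation below is the standard modern statement of the system rather than a quotation from Zhu Zaiyu's treatise, and the symbols f_0 (a reference pitch) and n (the number of semitone steps) are generic placeholders.

\[
  r = \sqrt[12]{2} \approx 1.059463, \qquad f_n = f_0 \cdot 2^{\,n/12}, \qquad f_{12} = 2 f_0,
\]

so that twelve equal steps exactly double the frequency, which is what allows music to be transposed into any key without retuning.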
When the Ming founder Hongwu came upon the mechanical devices housed in the Yuan palace at Khanbaliq—such as fountains with balls dancing on their jets, tiger automata, dragon-headed devices that spouted mists of perfume, and mechanical clocks in the tradition of Yi Xing (683–727) and Su Song (1020–1101)—he associated all of them with the decadence of Mongol rule and had them destroyed. This was described in full length by the Divisional Director of the Ministry of Works, Xiao Xun, who also carefully preserved details on the architecture and layout of the Yuan dynasty palace. Later, European Jesuits such as Matteo Ricci and Nicolas Trigault would briefly mention indigenous Chinese clockworks that featured drive wheels. However, both Ricci and Trigault were quick to point out that 16th-century European clockworks were far more advanced than the common time keeping devices in China, which they listed as water clocks, incense clocks, and "other instruments ... with wheels rotated by sand as if by water". Chinese records—namely the Yuan Shi—describe the 'five-wheeled sand clock', a mechanism pioneered by Zhan Xiyuan (fl. 1360–1380) which featured the scoop wheel of Su Song's earlier astronomical clock and a stationary dial face over which a pointer circulated, similar to European models of the time. This sand-driven wheel clock was improved upon by Zhou Shuxue (fl. 1530–1558) who added a fourth large gear wheel, changed gear ratios, and widened the orifice for collecting sand grains since he criticized the earlier model for clogging up too often.
The Chinese were intrigued by European technology, and visiting Europeans were equally intrigued by Chinese technology. In 1584, Abraham Ortelius (1527–1598) featured in his atlas Theatrum Orbis Terrarum the peculiar Chinese innovation of mounting masts and sails onto carriages, just as on Chinese ships. Gonzales de Mendoza also mentioned this a year later—noting even the designs of them on Chinese silken robes—while Gerardus Mercator (1512–1594) featured them in his atlas, John Milton (1608–1674) in one of his famous poems, and Andreas Everardus van Braam Houckgeest (1739–1801) in the writings of his travel diary in China.
The encyclopedist Song Yingxing (1587–1666) documented a wide array of technologies, metallurgic and industrial processes in his Tiangong Kaiwu encyclopedia of 1637. This includes mechanical and hydraulic powered devices for agriculture and irrigation, nautical technology such as vessel types and snorkeling gear for pearl divers, the annual processes of sericulture and weaving with the loom, metallurgic processes such as the crucible technique and quenching, manufacturing processes such as for roasting iron pyrite in converting sulphide to oxide in sulfur used in gunpowder compositions—illustrating how ore was piled up with coal briquettes in an earthen furnace with a still-head that sent over sulfur as vapor that would solidify and crystallize—and the use of gunpowder weapons such as a naval mine ignited by use of a rip-cord and steel flint wheel.
Focusing on agriculture in his Nongzheng Quanshu, the agronomist Xu Guangqi (1562–1633) took an interest in irrigation, fertilizers, famine relief, economic and textile crops, and empirical observation of the elements that gave insight into early understandings of chemistry.
There were many advances and new designs in gunpowder weapons during the beginning of the dynasty, but by the mid to late Ming the Chinese began to frequently employ European-style artillery and firearms. The Huolongjing, compiled by Jiao Yu and Liu Bowen sometime before the latter's death on 16 May 1375 (with a preface added by Jiao in 1412), featured many types of cutting-edge gunpowder weaponry for the time. This includes hollow, gunpowder-filled exploding cannonballs, land mines that used a complex trigger mechanism of falling weights, pins, and a steel wheellock to ignite the train of fuses, naval mines, fin-mounted winged rockets for aerodynamic control, multistage rockets propelled by booster rockets before igniting a swarm of smaller rockets issuing forth from the end of the missile (shaped like a dragon's head), and hand cannons that had up to ten barrels.
Li Shizhen (1518–1593)—one of the most renowned pharmacologists and physicians in Chinese history—belonged to the late Ming period. His Bencao Gangmu is a medical text with 1,892 entries, each entry with its own name called a gang. The mu in the title refers to the synonyms of each name. Inoculation, although it can be traced to earlier Chinese folk medicine, was detailed in Chinese texts by the sixteenth century. Throughout the Ming, around fifty texts were published on the treatment of smallpox. In regard to oral hygiene, the ancient Egyptians had a primitive toothbrush of a twig frayed at the end, but the Chinese were the first to invent the modern bristle toothbrush in 1498, although it used stiff pig hair.
Population
Sinologist historians debate the population figures for each era in the Ming dynasty. The historian Timothy Brook notes that the Ming government census figures are dubious since fiscal obligations prompted many families to underreport the number of people in their households and many county officials to underreport the number of households in their jurisdiction. Children were often underreported, especially female children, as shown by skewed population statistics throughout the Ming. Even adult women were underreported; for example, the Daming Prefecture in North Zhili reported a population of 378,167 males and 226,982 females in 1502. The government attempted to revise the census figures using estimates of the expected average number of people in each household, but this did not solve the widespread problem of tax registration. Some part of the gender imbalance may be attributed to the practice of female infanticide. The practice is well documented in China, going back over two thousand years, and it was described as "rampant" and "practiced by almost every family" by contemporary authors. However, the dramatically skewed sex ratios, which many counties reported exceeding 2:1 by 1586, likely cannot be explained by infanticide alone.
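To make the scale of the reported imbalance concrete, the Daming Prefecture figures quoted above imply the ratio shown below; this is simple arithmetic on the numbers already given in the text, not an independent demographic estimate.

\[
  \frac{378{,}167\ \text{registered males}}{226{,}982\ \text{registered females}} \approx 1.67,
\]

that is, roughly five registered males for every three registered females, already far from a natural sex ratio even before the more extreme figures of over 2:1 reported by some counties in 1586.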
The number of people counted in the 1381 census was 59,873,305; however, this number dropped significantly when the government found that some 3 million people were missing from the tax census of 1391. Even though underreporting figures was made a capital crime in 1381, the need for survival pushed many to abandon the tax registration and wander from their home regions, despite Hongwu's attempt to impose rigid immobility on the populace. The government tried to mitigate this by creating its own conservative estimate of 60,545,812 people in 1393. In his Studies on the Population of China, Ping-ti Ho suggests revising the 1393 census to 65 million people, noting that large areas of North China and frontier areas were not counted in that census. Brook states that the population figures gathered in the official censuses after 1393 ranged between 51 and 62 million, while the population was in fact increasing. Even the Hongzhi Emperor remarked that the daily increase in subjects coincided with the daily dwindling number of registered civilians and soldiers. William Atwell estimates the population of China around 1400 at 90 million people, citing Heijdra and Mote.
Historians are now turning to local gazetteers of Ming China for clues that would show consistent growth in population. Using the gazetteers, Brook estimates that the overall population under the Chenghua Emperor was roughly 75 million, despite mid-Ming census figures hovering around 62 million. While prefectures across the empire in the mid-Ming period were reporting either a drop in or stagnant population size, local gazetteers reported large numbers of incoming vagrant workers with not enough good cultivated land for them to till, so that many would become drifters, con-men, or wood-cutters who contributed to deforestation. The Hongzhi and Zhengde emperors lessened the penalties against those who had fled their home region, while the Jiajing Emperor finally had officials register migrants wherever they had moved or fled in order to bring in more revenues.
Even with the Jiajing reforms to document migrant workers and merchants, by the late Ming era the government census still did not accurately reflect the enormous growth in population. Gazetteers across the empire noted this and made their own estimations of the overall population in the Ming, some guessing that it had doubled, tripled, or even grown five-fold since 1368. Fairbank estimates a population of 160 million during the late Ming, while Brook estimates 175 million, and Ebrey estimates 200 million. However, a great epidemic that started in Shanxi in 1633 ravaged the densely populated areas along the Grand Canal; a gazetteer in northern Zhejiang noted that more than half the population fell ill that year and that 90% of the local populace in one area was dead by 1642.
See also
1642 Yellow River flood
Economy of the Ming dynasty
Taxation in premodern China
Kingdom of Tungning
List of tributary states of China
Luchuan–Pingmian campaigns
Manchuria under Ming rule
Military conquests of the Ming dynasty
Ming ceramics
Ming dynasty in Inner Asia
Ming dynasty family tree
Ming official headwear
Ming poetry
Transition from Ming to Qing
Ye Chunji (for further information on rural economics in the Ming)
Zheng Zhilong
Further reading
Reference works and primary sources
Farmer, Edward L. ed. Ming History: An Introductory Guide to Research (1994).
The Ming History English Translation Project, a collaborative translation project for portions of the History of Ming.
Lynn Struve, The Ming-Qing Conflict, 1619–1683: A Historiography and Source Guide, Indiana University, available online.
External links
Notable Ming dynasty painters and galleries at China Online Museum
Ming dynasty art at the Metropolitan Museum of Art
Highlights from the British Museum exhibition
Tang dynasty
https://en.wikipedia.org/wiki/Tang_dynasty
The Tang dynasty, or the Tang Empire, was an imperial dynasty of China that ruled from 618 to 907, with an interregnum between 690 and 705. It was preceded by the Sui dynasty and followed by the Five Dynasties and Ten Kingdoms period. Historians generally regard the Tang as a high point in Chinese civilisation, and a golden age of cosmopolitan culture. Tang territory, acquired through the military campaigns of its early rulers, rivalled that of the Han dynasty.
The Li family founded the dynasty after taking advantage of a period of Sui decline and precipitating their final collapse, in turn inaugurating a period of progress and stability in the first half of the dynasty's rule. The dynasty was formally interrupted during 690–705 when Empress Wu Zetian seized the throne, proclaiming the Wu Zhou dynasty and becoming the only legitimate Chinese empress regnant. The An Lushan rebellion (755–763) led to devastation and the decline of central authority during the latter half of the dynasty.
Like the previous Sui dynasty, the Tang maintained a civil-service system by recruiting scholar-officials through standardised examinations and recommendations to office. The rise of regional military governors known as jiedushi during the 9th century undermined this civil order. The dynasty and central government went into decline by the latter half of the 9th century; agrarian rebellions resulted in mass population loss and displacement, widespread poverty, and further government dysfunction that ultimately ended the dynasty in 907.
The Tang capital at Chang'an (present-day Xi'an) was the world's most populous city for much of the dynasty's existence. Two censuses of the 7th and 8th centuries estimated the empire's population at about 50 million people, which grew to an estimated 80 million by the dynasty's end. From its numerous subjects, the dynasty raised professional and conscripted armies of hundreds of thousands of troops to contend with nomadic powers for control of Inner Asia and the lucrative trade-routes along the Silk Road. Far-flung kingdoms and states paid tribute to the Tang court, while the Tang also indirectly controlled several regions through a protectorate system. In addition to its political hegemony, the Tang exerted a powerful cultural influence over neighbouring East Asian nations such as Japan and Korea.
Chinese culture flourished and further matured during the Tang era. It is traditionally considered the greatest age for Chinese poetry. Two of China's most famous poets, Li Bai and Du Fu, belonged to this age, contributing with poets such as Wang Wei to the monumental Three Hundred Tang Poems. Many famous painters such as Han Gan, Zhang Xuan, and Zhou Fang were active, while Chinese court music flourished with instruments such as the popular pipa. Tang scholars compiled a rich variety of historical literature, as well as encyclopaedias and geographical works. Notable innovations included the development of woodblock printing. Buddhism became a major influence in Chinese culture, with native Chinese sects gaining prominence. However, in the 840s, Emperor Wuzong enacted policies to suppress Buddhism, which subsequently declined in influence.
History
Establishment
The House of Li had ethnic Han origins, and it belonged to the northwest military aristocracy prevalent during the Sui dynasty. According to official Tang records, they were paternally descended from Laozi, the traditional founder of Taoism (whose personal name was Li Dan or Li Er), the Han dynasty general Li Guang, and Li Gao, the founder of the Han-ruled Western Liang kingdom. This family was known as the Longxi Li lineage, which also included the prominent Tang poet Li Bai. The Tang emperors were partially of Xianbei ancestry, as Emperor Gaozu of Tang's mother Duchess Dugu was part-Xianbei. Apart from the traditional historiography, some modern historians have suggested the Tang imperial family might have modified its genealogy to conceal their Xianbei heritage.
Emperor Gaozu (born Li Yuan) was the founder of the Tang. He was previously Duke of Tang and governor of Taiyuan, the capital of modern Shanxi, during the collapse of the Sui dynasty (581–618). Li had prestige and military experience, and was a first cousin of Emperor Yang of Sui (their mothers were both one of the Dugu sisters). Li Yuan rose in rebellion in 617, along with his son and his equally militant daughter Princess Pingyang (), who raised and commanded her own troops. In winter 617, Li Yuan occupied Chang'an, relegated Emperor Yang to the position of Taishang Huang ('retired emperor'), and acted as regent to the puppet child-emperor Yang You. On the news of Emperor Yang's murder by General Yuwen Huaji on June 18, 618, Li Yuan declared himself emperor of the newly founded Tang dynasty.
Emperor Gaozu ruled until 626, when he was forcefully deposed by his son Li Shimin, the Prince of Qin. Li Shimin had commanded troops since the age of 18, had prowess with bow and arrow, sword, and lance, and was known for his effective cavalry charges. Fighting a numerically superior army, he defeated Dou Jiande (573–621) at Luoyang in the Battle of Hulao on May 28, 621. Fearing assassination, Li Shimin ambushed and killed two of his brothers, Li Yuanji and crown prince Li Jiancheng, in the Xuanwu Gate Incident on July 2, 626. Shortly thereafter, his father abdicated in his favour, and Li Shimin ascended the throne. He is conventionally known by his temple name Taizong.
Although killing two brothers and deposing his father contradicted the Confucian value of filial piety, Taizong showed himself to be a capable leader who listened to the advice of the wisest members of his council. In 628, Emperor Taizong held a Buddhist memorial service for the casualties of war; in 629, he had Buddhist monasteries erected at the sites of major battles so that monks could pray for the fallen on both sides of the fight.
During the Tang campaign against the Eastern Turks, the Eastern Turkic Khaganate was destroyed after the capture of its ruler, Illig Qaghan by the famed Tang military officer Li Jing (571–649), who later became a Chancellor of the Tang dynasty. With this victory, the Turks accepted Taizong as their khagan, a title rendered as Tian Kehan in addition to his rule as emperor of China under the traditional title "Son of Heaven". Taizong was succeeded by his son Li Zhi (as Emperor Gaozong) in 649.
The Tang engaged in military campaigns against the Western Turks, exploiting the rivalry between Western and Eastern Turks in order to weaken both. Under Emperor Taizong, campaigns were dispatched in the Western Regions against Gaochang in 640, Karasahr in 644 and 648, and Kucha in 648. The wars against the Western Turks continued under Emperor Gaozong, and the Western Turkic Khaganate was finally annexed after General Su Dingfang's defeat of Khagan Ashina Helu in 657. Around this time, the Tang court enjoyed visits by numerous dignitaries from foreign lands. These were depicted in the Portraits of Periodical Offering, probably painted by Yan Liben (601–673).
Wu Zetian's usurpation
Having entered Emperor Gaozong's court as a lowly consort, Wu Zetian ultimately acceded to the highest position of power in 690, establishing the short-lived Wu Zhou. Emperor Gaozong suffered a stroke in 655, and Wu began to make many of his court decisions for him, discussing affairs of state with his councillors, who took orders from her while she sat behind a screen. When Empress Wu's eldest son, the crown prince, began to assert his authority and advocate policies opposed by Empress Wu, he suddenly died in 675. Many suspected he was poisoned by Empress Wu. Although the next heir apparent kept a lower profile, Wu accused him of plotting a rebellion in 680; he was banished and later obliged to commit suicide.
In 683, Emperor Gaozong died and was succeeded by Emperor Zhongzong, his eldest surviving son by Wu. Zhongzong tried to appoint his wife's father as chancellor: after only six weeks on the throne, he was deposed by Empress Wu in favour of his younger brother, Emperor Ruizong. This provoked a group of Tang princes to rebel in 684, but Wu's armies suppressed them within two months. She proclaimed the Tianshou era of Wu Zhou on October 16, 690, and three days later demoted Emperor Ruizong to crown prince. He was also forced to give up his father's surname, Li, in favour of the empress's surname, Wu. She then ruled as China's only empress regnant in history.
A palace coup on February 20, 705, forced Empress Wu to yield her position on February 22. The next day, her son Zhongzong was restored to power; the Tang was formally restored on March 3. She died soon after. To legitimise her rule, she had circulated a document known as the Great Cloud Sutra, which predicted that a reincarnation of the Maitreya Buddha would be a female monarch who would dispel illness, worry, and disaster from the world. She had even introduced numerous revised written characters for the language, though these reverted to the original forms after her death. Arguably the most important part of her legacy was diminishing the hegemony of the Northwestern aristocracy, allowing people from other clans and regions of China to become more represented in Chinese politics and government.
Emperor Xuanzong's reign
There were many prominent women at court during and after Wu's reign, including Shangguan Wan'er (664–710), a poet, writer, and trusted official in charge of Wu's private office. In 706, the wife of Emperor Zhongzong of Tang, Empress Wei, persuaded her husband to staff government offices with his sister and her daughters, and in 709 requested that he grant women the right to bequeath hereditary privileges to their sons (which before was a male right only). Empress Wei eventually poisoned Zhongzong, whereupon she placed his fifteen-year-old son upon the throne in 710. Two weeks later, Li Longji (the later Emperor Xuanzong) entered the palace with a few followers and slew Empress Wei and her faction. He then installed his father Emperor Ruizong on the throne. Just as Emperor Zhongzong was dominated by Empress Wei, so too was Ruizong dominated by Princess Taiping. This ended when Princess Taiping's coup failed in 712, and Emperor Ruizong abdicated to Emperor Xuanzong.
The Tang reached its height during Emperor Xuanzong's 44-year reign, which has been characterized as a golden age of economic prosperity and pleasant lifestyles within the imperial court. Xuanzong was seen as a progressive and benevolent ruler, having abolished the death penalty in 747. Even before then, all executions had to be approved by the emperor; in 730, there were only 24 executions. Xuanzong bowed to the consensus of his ministers on policy decisions and made efforts to staff government ministries fairly with different political factions. His staunch Confucian chancellor Zhang Jiuling (673–740) worked to reduce deflation and increase the money supply by upholding the use of private coinage, while his aristocratic and technocratic successor Li Linfu favoured government monopoly over the issuance of coinage. After 737, most of Xuanzong's confidence rested in Li Linfu, his long-standing chancellor, who championed a more aggressive foreign policy employing non-Chinese generals. This policy ultimately created the conditions for a massive rebellion against Xuanzong.
An Lushan rebellion and catastrophe
The An Lushan rebellion (755–763), which broke out when the dynasty was at the height of its power, ultimately destroyed the prosperity of the Tang. An Lushan, a half-Sogdian, half-Turkic general, had been a Tang commander since 744; he had experience fighting the Khitans of Manchuria, scoring a victory over them in 744, yet most of his campaigns against the Khitans were unsuccessful. He was given great responsibility in Hebei, which allowed him to rebel with an army of more than 100,000 troops. After capturing Luoyang, he named himself emperor of a new, but short-lived, Yan state. Despite early victories scored by the Tang general Guo Ziyi (697–781), the newly recruited troops of the army at the capital were no match for An Lushan's frontier veterans; the court fled Chang'an. While the heir apparent raised troops in Shanxi and Xuanzong fled to Sichuan, they called upon the help of the Uyghur Khaganate in 756. The Uyghur khan Moyanchur was greatly excited at this prospect, and married his own daughter to the Chinese diplomatic envoy once he arrived, receiving in turn a Chinese princess as his bride. The Uyghurs helped recapture the Tang capital from the rebels, but they refused to leave until the Tang paid them an enormous sum of tribute in silk. Even Abbasid Arabs assisted the Tang in putting down the rebellion. During the rebellion, foreign Arab and Persian Muslim merchants were massacred by Tian Shengong in the Yangzhou massacre of 760. The Tibetans took hold of the opportunity and raided many areas under Chinese control, and even after the Tibetan Empire had fallen apart in 842, followed soon after by the Uyghur Kingdom of Qocho, the Tang were in no position to reconquer Central Asia after 763. So significant was this loss that half a century later jinshi examination candidates were required to write an essay on the causes of the Tang's decline. Although An Lushan was killed by one of his eunuchs in 757, this time of troubles and widespread insurrection continued until rebel Shi Siming was killed by his own son in 763.
After 710, regional military governors called jiedushi gradually came to challenge the power of the central government. After the An Lushan rebellion, the autonomous power and authority accumulated by the jiedushi in Hebei went beyond the central government's control. After a series of rebellions between 781 and 784 in present-day Hebei, Henan, Shandong, and Hubei, the government had to officially acknowledge the jiedushi's hereditary rule without accreditation. The Tang government relied on these governors and their armies for protection and to suppress local revolts. In return, the central government would acknowledge the rights of these governors to maintain their army, collect taxes, and even pass on their title to heirs. As time passed, these military governors gradually eclipsed the prominence of civil officials recruited through examinations, and became more autonomous from central authority. The rule of these powerful military governors lasted until 960, when a new civil order under the Song dynasty was established. The abandonment of the equal-field system also meant that people could buy and sell land freely; many poor fell into debt because of this and were forced to sell their land to the wealthy, which led to the exponential growth of large estates. With the breakdown of the land allocation system after 755, the central Chinese state barely interfered in agricultural management and acted merely as tax collector for roughly a millennium, save a few instances such as the Song's failed land nationalisation during the 13th-century war with the Mongols.
With the central government's authority over the various regions of the empire collapsing, it was recorded in 845 that bandits and river pirates in parties of 100 or more began plundering settlements along the Yangtze River with little resistance. In 858, massive floods along the Grand Canal inundated vast tracts of the North China Plain and drowned tens of thousands of people. The Chinese belief in the Mandate of Heaven granted to the ailing Tang was also challenged when natural disasters led many to believe that the dynasty had lost its right to rule. In 873, a disastrous harvest shook the foundations of the empire; in some areas only half of all agricultural produce was gathered, and tens of thousands faced famine and starvation. In the earlier period of the Tang, the central government had been able to meet such harvest crises: records show that from 714 to 719 it responded effectively to natural disasters by extending the price-regulation granary system throughout the country. The central government was then able to build a large surplus stock of food to ward off the rising danger of famine, and to increase agricultural productivity through land reclamation.
Rebuilding and recovery
Although these natural calamities and rebellions stained the reputation and hampered the effectiveness of the central government, the early 9th century is nonetheless viewed as a period of recovery for the Tang. The government's withdrawal from its role in managing the economy had the unintended effect of stimulating trade, as more markets with fewer bureaucratic restrictions were opened up. By 780, the old grain tax and labour service of the 7th century were replaced by a semi-annual tax paid in cash, signifying the shift to a money economy boosted by the merchant class. Cities in the southern Jiangnan region such as Yangzhou, Suzhou, and Hangzhou prospered the most economically during the late Tang period. The government monopoly on salt production, weakened after the An Lushan rebellion, was placed under the Salt Commission, which became one of the most powerful state agencies, run by capable ministers chosen as specialists. The commission began the practice of selling merchants the rights to buy monopoly salt, which they transported and sold in local markets. In 799, salt accounted for over half of the government's revenues. S. A. M. Adshead writes that this salt tax represents "the first time that an indirect tax, rather than tribute, levies on land or people, or profit from state enterprises such as mines, had been the primary resource of a major state". Even after the power of the central government was in decline after the mid-8th century, it was still able to function and give out imperial orders on a massive scale. The Old Book of Tang (945) recorded that a government decree issued in 828 standardised the use of square-pallet chain pumps for irrigation throughout the country.
The last ambitious ruler of the Tang was Emperor Xianzong, whose reign was aided by the fiscal reforms of the 780s, including a government monopoly on the salt industry. He also had an effective and well-trained imperial army stationed at the capital led by his court eunuchs; this was the Army of Divine Strategy, numbering 240,000 in strength as recorded in 798. Between 806 and 819, Emperor Xianzong conducted seven major military campaigns to quell the rebellious provinces that had claimed autonomy from central authority, managing to subdue all but two of them. Under his reign, there was a brief end to the hereditary jiedushi, as Xianzong appointed his own military officers and staffed the regional bureaucracies once again with civil officials. However, Xianzong's successors proved less capable and more interested in the leisure of hunting, feasting, and playing outdoor sports, allowing eunuchs to amass more power as drafted scholar-officials caused strife in the bureaucracy with factional parties. The eunuchs' power went unchallenged after the Ganlu Incident, in which Emperor Wenzong failed in his plot to have them overthrown; instead, Wenzong's allies were publicly executed in Chang'an's West Market on the eunuchs' orders.
Decades after the An Lushan rebellion, the Tang was able to muster enough power to launch offensive military campaigns, including the destruction of the Uyghur Khaganate in Mongolia from 840 to 847. The Tang managed to restore indirect control over former territories as far west as the Hexi Corridor and Dunhuang in Gansu; in 848, the general Zhang Yichao (799–872) managed to wrest control of the region from the Tibetan Empire during its civil war. Shortly afterwards, Emperor Xuanzong of Tang acknowledged Zhang as the protector of Sha Prefecture and military governor of the new Guiyi Circuit.Zizhi Tongjian, vol. 249.
End of the dynasty
In addition to factors like natural calamity and jiedushi claiming autonomy, a rebellion led by Huang Chao (874–884) devastated both northern and southern China, took an entire decade to suppress, and resulted in the sacking of both Chang'an and Luoyang. In 878–879, Huang's army committed a massacre in the southern port of Guangzhou against foreign Arab and Persian Muslim, Zoroastrian, Jewish, and Christian merchants. A medieval Chinese source claimed that Huang Chao killed eight million people. The Tang never recovered from Huang's rebellion, which paved the way for the later overthrow of the dynasty. Bandit groups the size of small armies ravaged the countryside in the last years of the Tang. They smuggled illicit salt, ambushed merchants and convoys, and even besieged several walled cities.
Although older accounts described the Huang Chao uprising as marking the "destruction" of the Tang aristocracy, recent scholarship shows that the decline of the aristocratic advantage in officeholding had been underway long before the rebellion. Furthermore, the exact extent of the physical damage of the rebellion to the Tang elite has also been overstated. Zhou Ding (2024) argues that numerous clans re-established themselves in the Jiangnan and other southern provinces, maintaining local influence well into the tenth century. Amid the sacking of cities and continuing factional strife among eunuchs and officials, however, the court’s fiscal and administrative collapse left many of the old metropolitan families impoverished or displaced, setting the stage for the emergence of a new regional gentry under the Five Dynasties and Song.
During the last two decades of the Tang dynasty, the gradual collapse of central authority led to the rise of the rival military figures Li Keyong and Zhu Wen in northern China. Tang forces had defeated Huang's rebellion with the aid of allied Shatuo, a Turkic people of what is now Shanxi, led by Li Keyong. He was made a jiedushi, and later Prince of Jin, bestowed with the imperial surname Li by the Tang court. Zhu Wen, originally a salt smuggler who served as a lieutenant under the rebel Huang Chao, surrendered to Tang forces. By helping to defeat Huang, he was renamed Zhu Quanzhong ("Zhu of Perfect Loyalty") and granted a rapid series of promotions to military governor of Xuanwu Circuit.
In 901, from his power base of Kaifeng, Zhu Wen seized control of the Tang capital Chang'an and with it the imperial family. By 903, he forced Emperor Zhaozong of Tang to move the capital to Luoyang, preparing to take the throne for himself. In 904, Zhu assassinated Emperor Zhaozong to replace him with the emperor's young son Emperor Ai of Tang. In 905, Zhu executed the brothers of Emperor Ai as well as many officials and Empress Dowager He. In 907, the Tang dynasty was ended when Zhu deposed Ai and took the throne for himself (known posthumously as Emperor Taizu of Later Liang). He established the Later Liang, which inaugurated the Five Dynasties and Ten Kingdoms period. A year later, Zhu had the deposed Emperor Ai poisoned to death.
Zhu Wen's enemy Li Keyong died in 908, having never claimed the title of emperor out of loyalty to the Tang. His son Li Cunxu (Emperor Zhuangzong) inherited his title Prince of Jin along with his father's rivalry against Zhu. In 923, Li Cunxu declared a "restored" Tang dynasty, the Later Tang, before toppling the Later Liang dynasty the same year. However, southern China remained splintered into various small kingdoms until most of China was reunified under the Song dynasty (960–1279). Control over parts of northeast China and Manchuria by the Liao dynasty of the Khitan people also stemmed from this period. In 905, their leader Abaoji formed a military alliance with Li Keyong against Zhu Wen but the Khitans eventually turned against the Later Tang, helping another Shatuo leader Shi Jingtang of Later Jin to overthrow Later Tang in 936.
Administration and politics
Initial reforms
Taizong set out to solve the internal problems within the government that had constantly plagued past dynasties. Building upon the Sui legal code, he issued a new legal code that subsequent Chinese dynasties would model theirs upon, as would neighbouring polities in Vietnam, Korea, and Japan. The earliest law code to survive was established in 653; it was divided into 500 articles specifying different crimes and penalties ranging from ten blows with a light stick to one hundred blows with a heavy rod, exile, penal servitude, and execution. The legal code distinguished different levels of severity in meted punishments when different members of the social and political hierarchy committed the same crime. For example, the severity of punishment was different when a servant or nephew killed a master or an uncle than when a master or uncle killed a servant or nephew.
The Tang Code was largely retained by later codes such as the early Ming dynasty (1368–1644) code of 1397, yet there were several revisions in later times, such as improved property rights for women during the Song dynasty (960–1279).
The Tang had three departments, which were respectively obliged to draft, review, and implement policies. There were also six ministries under the policy-implementing department, each assigned different tasks: personnel administration, finance, rites, military, justice, and public works. This system of Three Departments and Six Ministries provided an administrative model that lasted until the fall of the Qing dynasty (1644–1912).
Although the founders of the Tang dynasty drew on the glory of the earlier Han dynasty (202 BC – 220 AD), the basis for much of their administrative organisation was very similar to that of the previous Northern and Southern dynasties. The Northern Zhou (6th century) fubing system of divisional militia was continued by the Tang, along with farmer-soldiers serving in rotation from the capital or frontier in order to receive appropriated farmland. The equal-field system of the Northern Wei (4th–6th centuries) was also maintained, though with a few modifications.
Although the central and local governments kept extensive records of land property to assess taxes, it became common practice in the Tang for literate and affluent people to create their own private documents and sign contracts. These bore the signature of the owner along with those of a witness and a scribe, to prove in court (if necessary) that a claim to property was legitimate. The prototype of this practice existed in the ancient Han dynasty, while contractual language became even more common and embedded in Chinese literary culture in later dynasties.
The centre of the political power of the Tang was the capital city of Chang'an (modern Xi'an), where the emperor maintained his large palace quarters and entertained political emissaries with music, sports, acrobats, poetry, paintings, and dramatic theatre performances. The capital was also filled with incredible wealth and resources to spare. When the Chinese prefectural government officials travelled to the capital in 643 to present the annual report on the affairs in their districts, Emperor Taizong discovered that many had no proper quarters to rest in and were renting rooms from merchants. Therefore, Emperor Taizong ordered the government agencies responsible for municipal construction to build each visiting official his own private mansion in the capital.
Imperial examinations
Students of Confucian studies were candidates for the imperial examinations, which qualified their graduates for appointment to the local, provincial, and central government bureaucracies. Two types of exams were given, mingjing ('illuminating the classics') and jinshi ('presented scholar'). The mingjing was based upon the Confucian classics and tested the student's knowledge of a wide variety of texts. The jinshi tested a student's literary abilities in writing essays in response to questions on governance and politics, as well as in composing poetry. Candidates were also judged on proper deportment, appearance, speech, and calligraphy, all subjective criteria that favoured the wealthy over those of more modest means who were unable to pay tutors of rhetoric and writing. Although a disproportionate number of civil officials came from aristocratic families, wealth and noble status were not prerequisites, and the exams were open to all male subjects whose fathers were not of the artisan or merchant classes. To promote widespread Confucian education, the Tang government established state-run schools and issued standard versions of the Five Classics with commentaries.
An open competition was designed to draw the best talent into government. But perhaps an even greater consideration for the Tang rulers was avoiding imperial dependence on powerful aristocratic families and warlords by recruiting a body of career officials with no family or local power base. The Tang law code ensured equal division of inherited property among legitimate heirs, encouraging social mobility by preventing powerful families from becoming landed nobility through primogeniture. Contrary to older accounts that portrayed the imperial examination as marginal in Tang times, recent quantitative research shows that by the late seventh century the jinshi degree had already become the primary route to high office, while aristocratic family pedigree had largely lost its predictive power for bureaucratic appointment.Wang, E. H., & Yang, C. Z. (2025). The Political Economy of China's Imperial Examination System. Cambridge University Press. The Tang examination system thus played a decisive institutional role in displacing hereditary privilege and fostering a bureaucratic elite selected by merit. From Tang times until the end of the Qing dynasty in 1912, scholar-officials served as intermediaries between the people and the government, their authority deriving less from lineage than from examination success.
Religion and politics
From the outset, religion played a role in Tang politics. In his bid for power, Li Yuan had attracted a following by claiming descent from the Taoist sage Laozi. People bidding for office would request the prayers of Buddhist monks, with successful aspirants making donations in return. Before the persecution of Buddhism in the 9th century, Buddhism and Taoism were both accepted. Religion was central in the reign of Emperor Xuanzong. The emperor invited Taoist and Buddhist monks and clerics to his court, exalted Laozi with grand titles, wrote commentaries on Taoist scriptures, and established a school to train candidates for Taoist examinations. In 726, he called upon the Indian monk Vajrabodhi (671–741) to perform tantric rites to avert a drought. In 742, he personally held the incense burner while the patriarch of the Shingon school, Amoghavajra (705–774), recited "mystical incantations to secure the victory of Tang forces".
Emperor Xuanzong closely regulated religious finances. Near the beginning of his reign in 713, he liquidated the Inexhaustible Treasury of a prominent Buddhist monastery in Chang'an, which had collected vast riches as multitudes of anonymous repentants left money, silk, and treasure at its doors. Although the monastery used its funds generously, the emperor condemned it for fraudulent banking practices, and distributed its wealth to other Buddhist and Taoist monasteries, and to repair local statues, halls, and bridges. In 714, he forbade Chang'an shops from selling copied Buddhist sutras, giving a monopoly of this trade to the Buddhist clergy.
Taxes and the census
The Tang government attempted to conduct an accurate census of the empire's population, primarily to support effective taxation and military conscription. The early Tang government established modest grain and cloth taxes per household, encouraging households to register and provide the government with accurate demographic information. In the official census of 609, the population was tallied at 9 million households, about 50 million people, and this number did not increase in the census of 742. Patricia Ebrey writes that, notwithstanding census undercounting, China's population had not grown significantly since the earlier Han dynasty, which recorded 58 million people in 2 AD. Adshead disagrees, estimating about 75 million people by 750.
In the Tang census of 754, there were 1,859 cities, 321 prefectures, and 1,538 counties throughout the empire. Although there were many large and prominent cities, the rural and agrarian areas comprised 80–90% of the population. There was also a dramatic migration from northern to southern China, as the North held 75% of the overall population at the dynasty's inception, which by its end was reduced to 50%. The Chinese population would not dramatically increase until the Song dynasty, when it doubled to 100 million because of extensive rice cultivation in central and southern China, coupled with higher yields of grain sold in a growing market.
Military and foreign policy
Protectorates and tributaries
The 7th and first half of the 8th century are generally considered to be the era in which the Tang reached the zenith of its power. In this period, Tang control extended further west than any previous dynasty, stretching from north Vietnam in the south, to a point north of Kashmir bordering Persia in the west, to northern Korea in the north-east.
Some of the kingdoms paying tribute to the Tang dynasty included Kashmir, Nepal, Khotan, Kucha, Kashgar, Silla, Champa, and kingdoms located in the Amu Darya and Syr Darya valleys. Turkic nomads addressed the Tang emperor as Tian Kehan. After the widespread Göktürk revolt of Shabolüe Khan was put down at Issyk Kul in 657 by Su Dingfang (591–667), Emperor Gaozong established several protectorates governed by a Protectorate General or Grand Protectorate General, which extended the Chinese sphere of influence as far as Herat in western Afghanistan. Protectorate Generals were given a great deal of autonomy to handle local crises without waiting for central permission. After Xuanzong's reign, jiedushi were given enormous power, including the ability to maintain their own armies, collect taxes, and pass their titles on hereditarily. This is commonly recognised as the beginning of the decline of the Tang central government.
Soldiers and conscription
By 737, Emperor Xuanzong had discarded the policy of rotating conscripted soldiers every three years, replacing them with long-service soldiers who were more battle-hardened and efficient. This was also more economically feasible, since training new recruits and sending them out to the frontier every three years drained the treasury. By the late 7th century, the fubing troops had begun abandoning military service and the homes provided to them in the equal-field system. The supposed standard of 100 mu of land allotted to each family was in fact decreasing in size in places where the population expanded and the wealthy bought up most of the land. Hard-pressed peasants and vagrants were then induced into military service with benefits of exemption from both taxation and corvée labour service, as well as provisions for farmland and dwellings for dependents who accompanied soldiers on the frontier. By 742, the total number of enlisted troops in the Tang armies had risen to about 500,000 men.
Eastern regions
In East Asia, Tang military campaigns were less successful than elsewhere, as had been the case for previous imperial Chinese dynasties. Like the emperors of the Sui dynasty before him, Taizong launched a military campaign in 644 against the Korean kingdom of Goguryeo in the Goguryeo–Tang War; however, the Tang withdrew from this first campaign after failing to overcome the successful defence led by General Yŏn Kaesomun. Having entered into the Silla–Tang alliance, the Chinese fought against Baekje and its Yamato Japanese allies in the Battle of Baekgang in August 663, a decisive Tang–Silla victory. The Tang dynasty navy had several different ship types at its disposal to engage in naval warfare; these ships were described by Li Quan in his Taipai Yinjing (Canon of the White and Gloomy Planet of War) of 759. The Battle of Baekgang was actually a restoration movement by remnant forces of Baekje, since their kingdom had been toppled in 660 by a Tang–Silla invasion led by the Chinese general Su Dingfang and the Korean general Kim Yushin (595–673). In another joint invasion with Silla, the Tang army had severely weakened the Goguryeo Kingdom in the north by capturing its outer forts in 645. With joint attacks by Silla and Tang armies under the commander Li Shiji (594–669), the Kingdom of Goguryeo was destroyed by 668.
Although they were formerly enemies, the Tang accepted officials and generals of Goguryeo into their administration and military, such as the brothers Yŏn Namsaeng (634–679) and Yŏn Namsan (639–701). From 668 to 676, the Tang Empire controlled northern Korea. However, Silla broke the alliance in 671 and began the Silla–Tang War to expel the Tang forces. At the same time the Tang faced threats on its western border, where a large Chinese army was defeated by the Tibetans on the Dafei River in 670. By 676, the Tang army had been driven out of Korea by a unified Silla. Following a revolt of the Eastern Turks in 679, the Tang abandoned its Korean campaigns.
Although the Tang had fought the Japanese, they still held cordial relations with Japan. There were numerous Imperial embassies to China from Japan, diplomatic missions that were not halted until 894 by Emperor Uda, upon the persuasion of Sugawara no Michizane (845–903). The Japanese Emperor Tenmu even modelled his conscript army, his state ceremonies, and the architecture of his palace at Fujiwara on Chinese examples.
Many Chinese Buddhist monks came to Japan to help further the spread of Buddhism as well. Two 7th-century monks, Zhi Yu and Zhi You, visited the court of Emperor Tenji, whereupon they presented a gift of a south-pointing chariot that they had crafted. This vehicle, which employed a differential gear, was reproduced in several models for Tenji in 666, as recorded in the Nihon Shoki (720). Japanese monks also visited China; such was the case with Ennin (794–864), who wrote of his travels, including those along the Grand Canal. The Japanese monk Enchin (814–891) stayed in China from 839 to 847, and again from 853 to 858, landing near Fuzhou, Fujian and setting sail for Japan from Taizhou, Zhejiang during his second trip to China.
Western and Northern regions
Map of the Tang dynasty circa 700 CE
The Sui and Tang carried out successful military campaigns against the steppe nomads. Chinese foreign policy to the north and west now had to deal with Turkic nomads, who were becoming the most dominant ethnic group in Central Asia. To handle and avoid any threats posed by the Turks, the Sui government repaired fortifications and received their trade and tribute missions. They sent four royal princesses to form heqin marriage alliances with Turkic clan leaders, in 597, 599, 614, and 617. The Sui also stirred up conflict among the various ethnic groups to turn them against the Turks. As early as the Sui dynasty, the Turks had become a major militarised force employed by the Chinese. When the Khitans began raiding northeast China in 605, a Chinese general led 20,000 Turks against them, distributing Khitan livestock and women to the Turks as a reward. On two occasions between 635 and 636, Tang royal princesses were married to Turk mercenaries or generals in Chinese service. Throughout the Tang dynasty until the end of 755, there were approximately ten Turkic generals serving under the Tang. While most of the Tang army was made up of fubing Chinese conscripts, the majority of the troops led by Turkic generals were of non-Chinese origin, campaigning largely on the western frontier where the presence of fubing troops was low. Some "Turkic" troops were in fact desinicised, tribalised Han Chinese.
Civil war in China had almost completely subsided by 626, along with the defeat in 628 of the Ordos warlord Liang Shidu; after these internal conflicts, the Tang began an offensive against the Turks. In 630, Tang armies captured areas of the Ordos Desert, in modern-day Inner Mongolia, and southern Mongolia from the Turks. After this military victory, on June 11, 631, Emperor Taizong also sent envoys to the Xueyantuo bearing gold and silk in order to negotiate the release of enslaved Chinese prisoners who had been captured on the northern frontier during the transition from Sui to Tang; this embassy succeeded in freeing 80,000 Chinese men and women, who were then returned to China.
While the Turks were settled in the Ordos region (former territory of the Xiongnu), the Tang government took on the military policy of dominating the central steppe. As during the earlier Han dynasty, the Tang and their Turkic allies conquered and subdued Central Asia during the 640s and 650s. During Emperor Taizong's reign alone, large campaigns were launched against not only the Göktürks, but also separate campaigns against the Tuyuhun, the oasis states, and the Xueyantuo. Under Emperor Gaozong, a campaign led by the general Su Dingfang was launched against the Western Turks ruled by Ashina Helu.
The Tang Empire competed with the Tibetan Empire for control of areas in Inner and Central Asia, which was at times settled with marriage alliances such as the marrying of Princess Wencheng to Songtsän Gampo. A Tibetan tradition mentions that Chinese troops captured Lhasa after Songtsän Gampo's death, but no such invasion is mentioned in either Chinese annals or the Tibetan manuscripts of Dunhuang.
There was a string of conflicts with Tibet over territories in the Tarim Basin between 670 and 692; in 763, the Tibetans captured Chang'an for fifteen days during the An Lushan rebellion. In fact, it was during this rebellion that the Tang withdrew its western garrisons stationed in what is now Gansu and Qinghai, which the Tibetans then occupied along with the territory of what is now Xinjiang. Hostilities between the Tang and Tibet continued until they signed the Changqing Treaty in 821. The terms of this treaty, including the fixed borders between the two countries, are recorded in a bilingual inscription on a stone pillar outside the Jokhang temple in Lhasa.
During the Islamic conquest of Persia (633–656), Prince Peroz, son of the last ruler of the Sasanian Empire, and his court moved to Tang China. According to the Old Book of Tang, Peroz was made the head of a Governorate of Persia in present-day Zaranj, Afghanistan. During this conquest of Persia, the Rashidun Caliph Uthman sent an embassy to the Tang court at Chang'an. Arab sources claim the Umayyad commander Qutayba ibn Muslim briefly took Kashgar from China and withdrew after an agreement, but modern historians entirely dismiss this claim. In 715, the Arab Umayyad Caliphate deposed Ikhshid, the king of the Fergana Valley, and installed a new king, Alutar, on the throne. The deposed king fled to Kucha (seat of the Anxi Protectorate) and sought Chinese intervention. The Chinese sent 10,000 troops under Zhang Xiaosong to Ferghana. He defeated Alutar and the Arab occupation force at Namangan and reinstalled Ikhshid on the throne. The Tang defeated the Arab Umayyad invaders at the Battle of Aksu (717). The Arab Umayyad commander Al-Yashkuri and his army fled to Tashkent after they were defeated. The Turgesh then crushed the Umayyads and drove them out. By the 740s, the Arabs under the Abbasid Caliphate in Khorasan had reestablished a presence in the Ferghana basin and in Sogdiana. At the Battle of Talas in 751, Karluk mercenaries under the Chinese defected, helping the Arab armies of the Caliphate to defeat the Tang force under commander Gao Xianzhi. Although the battle itself was not of the greatest military significance, it was a pivotal moment in history, as it marks the spread of Chinese papermaking into regions west of China: captured Chinese soldiers shared the technique of papermaking with the Arabs. These techniques ultimately reached Europe by the 12th century through Arab-controlled Spain. Although they had fought at Talas, on June 11, 758, an Abbasid embassy arrived at Chang'an simultaneously with the Uyghurs bearing gifts for the Tang emperor. In 788–789 the Chinese concluded a military alliance with the Uyghur Turks, who twice defeated the Tibetans: in 789 near the town of Gaochang in Dzungaria, and in 791 near Ningxia on the Yellow River.
Joseph Needham writes that a tributary embassy came to the court of Emperor Taizong in 643 from the Patriarch of Antioch. However, Friedrich Hirth and other sinologists such as S. A. M. Adshead have identified fulin in the Old and New Book of Tang as the Byzantine Empire, which those histories directly associated with Daqin (i.e. the Roman Empire). Boduoli, who sent the embassy of 643, has been identified as the Byzantine ruler Constans II Pogonatos, and further embassies were recorded as being sent into the 8th century. Adshead offers a different transliteration stemming from "patriarch" or "patrician", possibly a reference to one of the acting regents for the young Byzantine monarch. The Old and New Book of Tang also provide a description of the Byzantine capital Constantinople, including how it was besieged by the Da Shi (i.e. the Umayyad Caliphate) forces of Mu'awiya I, who forced the Byzantines to pay tribute to the Arabs.
Economy
Through land trade along the Silk Road and maritime trade by sail at sea, the Tang were able to acquire many new technologies, cultural practices, rare luxuries, and contemporary items. From Europe, the Middle East, and Central and South Asia, the Tang acquired new ideas in fashion, new types of ceramics, and improved silver-smithing techniques. The Tang also gradually adopted the foreign concept of stools and chairs as seating, whereas the Chinese had previously always sat on mats placed on the floor. People of the Middle East coveted and purchased Chinese goods in bulk, including silks, porcelain, and lacquerwares. Songs, dances, and musical instruments from foreign regions became popular in China during the Tang dynasty. These musical instruments included oboes, flutes, and small lacquered drums from Kucha in the Tarim Basin, and percussion instruments from India such as cymbals. At the court there were nine musical ensembles (expanded from seven in the Sui dynasty) that played eclectic Asian music.
There was great interaction with India, a hub of Buddhist knowledge, with famous travellers such as Xuanzang visiting the South Asian state. After a 17-year trip, Xuanzang managed to bring back valuable Sanskrit texts to be translated into Chinese. There was also a Turkic–Chinese dictionary available for serious scholars and students, while Turkic folk songs gave inspiration to some Chinese poetry. In the interior of China, trade was facilitated by the Grand Canal and the Tang government's rationalisation of the greater canal system, which reduced the costs of transporting grain and other commodities. The state also maintained an extensive network of postal service routes served by horse and boat.
Silk Road
Although the Silk Road from China to Europe and the Western world was initially formulated during the reign of Emperor Wu (141–87 BC) during the Han, it was reopened by the Tang in 639, when Hou Junji conquered the West, and remained open for almost four decades. It was closed after the Tibetans captured it in 678, but in 699, the Silk Road reopened when the Tang reconquered the Four Garrisons of Anxi originally installed in 640, once again connecting China directly to the West for land-based trade.
The Tang captured the vital route through the Gilgit valley from Tibet in 722, lost it to the Tibetans in 737, and regained it under the command of the Goguryeo-Korean General Gao Xianzhi. When the An Lushan rebellion ended in 763, the Tang Empire withdrew its troops from its western lands, allowing the Tibetan Empire to largely cut off China's direct access to the Silk Road. An internal rebellion in 848 ousted the Tibetan rulers, and the Tang regained the northwestern prefectures from Tibet in 851. These lands contained crucial grazing areas and pastures for raising horses that the Tang dynasty desperately needed.
Despite the many expatriate travellers coming into China to live and trade, many travellers, mainly religious monks and missionaries, recorded China's stringent immigration laws. As the monk Xuanzang and many other monk travellers attested, there were many government checkpoints along the Silk Road that examined travel permits into the Tang Empire. Furthermore, banditry was a problem along the checkpoints and oasis towns, as Xuanzang also recorded that his group of travellers was assaulted by bandits on multiple occasions.
The Silk Road also affected the art from the period. Horses became a significant symbol of prosperity and power as well as an instrument of military and diplomatic policy. Horses were also revered as a relative of the dragon.
Seaports and maritime trade
Chinese envoys had been sailing through the Indian Ocean to states of India as early as the 2nd century BC, yet it was during the Tang dynasty that a strong Chinese maritime presence could be found in the Persian Gulf and Red Sea, into Persia, Mesopotamia (sailing up the Euphrates River in modern-day Iraq), Arabia, Egypt, Aksum (Ethiopia), and Somalia in the Horn of Africa.
During the Tang dynasty, thousands of foreign expatriate merchants came and lived in numerous Chinese cities to do business with China, including Persians, Arabs, Hindu Indians, Malays, Bengalis, Sinhalese, Khmers, Chams, Jews, and Nestorian Christians of the Near East. In 748, the Buddhist monk Jian Zhen described Guangzhou as a bustling mercantile center where many large and impressive foreign ships came to dock. He wrote that "many large ships came from Borneo, Persia, Qunglun (Java) ... with ... spices, pearls, and jade piled up mountain high", as written in the Yue Jue Shu (Lost Records of the State of Yue). Relations with the Arabs were often strained: when the imperial government was attempting to quell the An Lushan rebellion, Arab and Persian pirates burned and looted Canton on October 30, 758. The Tang government reacted by shutting the port of Canton down for roughly five decades; thus, foreign vessels docked at Hanoi instead. However, when the port reopened, it continued to thrive. In 851, the Arab merchant Sulaiman al-Tajir observed the manufacturing of porcelain in Guangzhou and admired its transparent quality. He also provided a description of Guangzhou's landmarks, granaries, local government administration, some of its written records, and treatment of travellers, along with the use of ceramics, rice, wine, and tea. The foreign merchants' presence ended in the vengeful Guangzhou massacre by the rebel Huang Chao in 878, who purportedly slaughtered thousands regardless of ethnicity. Huang's rebellion was eventually suppressed in 884.
Vessels from other East Asian states such as Silla, Bohai, and the Hizen Province of Japan were all involved in the Yellow Sea trade, which Silla of Korea dominated. After Silla and Japan renewed hostilities in the late 7th century, most Japanese maritime merchants chose to set sail from Nagasaki towards the mouth of the Huai River, the Yangtze River, and even as far south as Hangzhou Bay in order to avoid Korean ships in the Yellow Sea. In order to sail back to Japan in 838, the Japanese embassy to China procured nine ships and sixty Korean sailors from the Korean wards of the cities of Chuzhou and Lianshui along the Huai River. It is also known that Chinese trade ships travelling to Japan set sail from various ports along the coasts of Zhejiang and Fujian provinces.
The Chinese engaged in large-scale production for overseas export by at least the time of the Tang. This was proven by the discovery of the Belitung shipwreck, a silt-preserved shipwrecked Arabian dhow in the Gaspar Strait near Belitung, which had 63,000 pieces of Tang ceramics, silver, and gold (including a Changsha bowl inscribed with a date corresponding to 826, roughly confirmed by radiocarbon dating of star anise at the wreck). From 785, the Chinese called regularly at Sufala on the East African coast in order to cut out Arab middlemen, with various contemporary Chinese sources giving detailed descriptions of trade in Africa. The official and geographer Jia Dan (730–805) wrote of two common sea trade routes in his day: one from the coast of the Bohai Sea towards Korea and another from Guangzhou through Malacca towards the Nicobar Islands, Sri Lanka and India, the eastern and northern shores of the Arabian Sea to the Euphrates River. In 863, the Chinese author Duan Chengshi provided a detailed description of the slave trade, ivory trade, and ambergris trade in a country called Bobali, which historians suggest was Berbera in Somalia. In Fustat, Egypt, the fame of Chinese ceramics led to an enormous demand for Chinese goods; hence Chinese often travelled there. From this time period, the Arab merchant Shulama once wrote of his admiration for Chinese seafaring junks, but noted that their draft was too deep for them to enter the Euphrates River, which forced them to ferry passengers and cargo in small boats. Shulama also noted that Chinese ships were often very large, with capacities up to 700 passengers.
Culture and society
Both the Sui and Tang dynasties had turned away from the more feudal culture of the preceding Northern Dynasties, in favour of staunch civil Confucianism. The governmental system was supported by a large class of Confucian intellectuals selected through either civil service examinations or recommendations. In the Tang period, Taoism and Buddhism were commonly practised ideologies that played a large role in people's daily lives. The Tang Chinese enjoyed feasting, drinking, holidays, sports, and all sorts of entertainment, while Chinese literature blossomed and was more widely accessible with new printing methods. Rich commoners and nobles who worshipped spirits wanted them to know "how important and how admirable they were", so they "wrote or commissioned their own obituaries" and buried figures along with their bodies to ward off evil spirits.
Chang'an
Although Chang'an had served as the capital during the earlier Han and Jin dynasties, after subsequent destruction in warfare, it was the Sui dynasty model that comprised the Tang-era capital. The city was laid out on roughly square dimensions, enclosed by several miles of outer walls running east to west and north to south. The royal palace, the Taiji Palace, stood north of the city's central axis. From the large Mingde Gates mid-center on the main southern wall, a wide city avenue stretched all the way north to the central administrative city, behind which was the Chentian Gate of the royal palace, or Imperial City. Intersecting this were fourteen main streets running east to west, while eleven main streets ran north to south. These main intersecting roads formed 108 rectangular wards with walls and four gates each, each filled with multiple city blocks. The city was made famous for this checkerboard pattern of main roads with walled and gated districts, its layout even mentioned in one of Du Fu's poems. During the Heian period, cities like Heian-kyō (present-day Kyoto) were arranged in a checkerboard street grid in accordance with traditional geomancy, following the Chang'an model. Of Chang'an's 108 wards, two were designated as government-supervised markets, with other space reserved for temples, gardens, ponds, and the like. Throughout the entire city, there were 111 Buddhist monasteries, 41 Taoist abbeys, 38 family shrines, 2 official temples, 7 churches of foreign religions, 10 city wards with provincial transmission offices, 12 major inns, and 6 graveyards. Some city wards were literally filled with open public playing fields or the backyards of lavish mansions for playing horse polo and cuju (Chinese soccer). In 662, Emperor Gaozong moved the imperial court to the Daming Palace, which became the political center of the empire and served as the royal residence of the Tang emperors for more than 220 years.
Map of Chang'an during the Tang
The Tang capital was the largest city in the world at its time, with the population of its wards and suburban countryside reaching two million inhabitants. The Tang capital was very cosmopolitan, with people from Persia, Central Asia, Japan, Korea, Vietnam, Tibet, India, and many other places living within it. Naturally, with this plethora of different ethnicities living in Chang'an, there were also many different practised religions, such as Buddhism, Nestorian Christianity, and Zoroastrianism, among others. With the open access to China that the Silk Road to the west facilitated, many foreign settlers were able to move east to China, while the city of Chang'an itself had about 25,000 foreigners living within it. Exotic green-eyed, blond-haired Tocharian ladies serving wine in agate and amber cups, singing, and dancing at taverns attracted customers. If a foreigner in China pursued a Chinese woman for marriage, he was required to stay in China and was unable to take his bride back to his homeland, as stated in a law passed in 628 to protect women from temporary marriages with foreign envoys. Several laws enforcing segregation of foreigners from Chinese were passed during the Tang. In 779, the Tang issued an edict which forced Uyghurs in the capital, Chang'an, to wear their ethnic dress, stopped them from marrying Chinese females, and banned them from passing themselves off as Chinese.
Chang'an was the center of the central government, the home of the imperial family, and was filled with splendor and wealth. However, it was not the economic hub of the Tang dynasty. The city of Yangzhou, along the Grand Canal and close to the Yangtze, was the greatest economic center during the Tang.
Yangzhou was the headquarters for the Tang government's salt monopoly, and was the greatest industrial center of China. It acted as a midpoint in shipping of foreign goods to be distributed to the major cities of the north. Much like the seaport of Guangzhou in the south, Yangzhou had thousands of foreign traders from across Asia.
There was also the secondary capital city of Luoyang, which was the favoured capital of the two by Empress Wu. In 691, she had more than 100,000 families from the region around Chang'an move to Luoyang. With a population of about a million, Luoyang became the second largest city in the empire, and with its closeness to the Luo River it benefited from southern agricultural fertility and trade traffic of the Grand Canal. However, the Tang court eventually demoted its capital status and did not visit Luoyang after 743, when Chang'an's problem of acquiring adequate supplies and stores for the year was solved. As early as 736, granaries were built at critical points along the route from Yangzhou to Chang'an, which eliminated shipment delays, spoilage, and pilfering. An artificial lake used as a transshipment pool was dredged east of Chang'an in 743, where curious northerners could finally see the array of boats found in southern China, delivering tax and tribute items to the imperial court.
Literature
The Tang dynasty was a golden age of Chinese literature and art. Over 48,900 poems penned during the Tang, representing over 2,200 authors, have survived to the present day. Skill in the composition of poetry became a required study for those wishing to pass imperial examinations, while poetry was also heavily competitive; poetry contests among guests at banquets and courtiers were common. Poetry styles that were popular in the Tang included gushi and jintishi, with the poet Li Bai (701–762) famous for the former style, and poets like Wang Wei (701–761) and Cui Hao (704–754) famous for their use of the latter. Jintishi poetry, or regulated verse, employed stanzas of eight lines, each consisting of five or seven characters with a fixed pattern of tones, and required the second and third couplets to be antithetical. Tang poems remained popular, and great emulation of Tang-era poetry began in the Song dynasty; in that period, Yan Yu was the first to confer "canonical status within the classical poetic tradition" on the poetry of the High Tang era. Yan Yu reserved the position of highest esteem among all Tang poets for Du Fu (712–770), who was not viewed as such in his own era, and was branded by his peers as an anti-traditional rebel.
The Classical Prose Movement was spurred in large part by the writings of Tang authors Liu Zongyuan (773–819) and Han Yu (768–824). This new prose style broke away from the poetry tradition of the piantiwen ('parallel prose') style begun in the Han dynasty. Although writers of the Classical Prose Movement imitated piantiwen, they criticised it for its often vague content and lack of colloquial language, focusing more on clarity and precision to make their writing more direct. This guwen (archaic prose) style can be traced back to Han Yu, and would become largely associated with orthodox Neo-Confucianism.
Short story fiction and tales were also popular during the Tang, one of the more famous being Yingying's Biography by Yuan Zhen (779–831), which was widely circulated in his own time and by the Yuan dynasty (1279–1368) became the basis for Chinese opera. Timothy C. Wong places this story within the wider context of Tang love tales, which often share the plot designs of quick passion, inescapable societal pressure leading to the abandonment of romance, followed by a period of melancholy. Wong states that this scheme lacks the undying vows and total self-commitment to love found in Western romances such as Romeo and Juliet, but that underlying traditional Chinese values of indivisibility of self from one's environment, including from society, served to create the necessary fictional device of romantic tension. In addition, Tang literature often discussed gender expression. Literary texts such as "Tiandi yinyang jiaohuan dale fu" and "You xianku" depicted how the Tang nobility emphasized Taoist sexology.Yao, Ping. "Between Topics and Sources: Researching the History of Sexuality in Imperial China." In Sexuality in China: Histories of Power and Pleasure, edited by Howard Chiang, 43. University of Washington Press, 2018. http://www.jstor.org/stable/j.ctvcwnwj4.7. Many male Tang poets and literati conveyed their love for male companions when they perceived their wives, who were often illiterate, to be incapable of understanding their troubles.Hinsch, Bret. Passions of the Cut Sleeve. De Gruyter Brill, 1990, p. 80. https://www.degruyterbrill.com/document/doi/10.1525/9780520912656/html.
Large encyclopaedias were published during the Tang: the Yiwen Leiju was compiled in 624 under the chief editor Ouyang Xun (557–641), together with Linghu Defen (582–666) and Chen Shuda. By 729, a team led by the scholar Gautama Siddha, an ethnic Indian born in Chang'an, had finished compiling the Treatise on Astrology of the Kaiyuan Era, an astrological encyclopaedia.
Chinese geographers, such as Jia Dan, wrote accurate descriptions of places far beyond Tang territory. In his work written between 785 and 805, Jia described the sea route going into the mouth of the Persian Gulf, and that the medieval Iranians had erected 'ornamental pillars' in the sea that acted as lighthouse beacons for ships that might go astray. Arabic authors writing a century after Jia, such as al-Masudi and al-Maqdisi, also mentioned these structures in their accounts, confirming Jia's reports. The Tang diplomat Wang Xuance travelled to Magadha, in present-day northeast India, during the 7th century. Afterwards, he wrote the Zhang Tianzhu Guotu (Illustrated Accounts of Central India), a book which contained a large body of geographical information.
Many histories of previous dynasties were compiled between 636 and 659 by court officials during and shortly after the reign of Emperor Taizong of Tang. These included the Book of Liang, Book of Chen, Book of Northern Qi, Book of Zhou, Book of Sui, Book of Jin, History of Northern Dynasties and the History of Southern Dynasties. Although not included in the official Twenty-Four Histories, the Tongdian and Tang Huiyao were nonetheless valuable written historical works of the Tang period. The Shitong written by Liu Zhiji in 710, was a meta-history that surveyed the tradition of Chinese historiography to date. The Great Tang Records on the Western Regions, compiled by Bianji, recounted the journey of Xuanzang, the Tang era's most renowned Buddhist monk.
Other important literature included Duan Chengshi's Miscellaneous Morsels from Youyang, an entertaining collection of foreign legends and hearsay, reports on natural phenomena, short anecdotes, mythical and mundane tales, as well as notes on various subjects. The exact literary category or classification that Duan's large informal narrative would fit into is still debated among scholars and historians.
Religion and philosophy
Since ancient times, some Chinese had believed in folk religion and Taoism that incorporated many deities. Practitioners believed that the Tao and the afterlife were a reality parallel to the living world, complete with its own bureaucracy and afterlife currency needed by dead ancestors. Funerary practices included providing the deceased with everything they might need in the afterlife, including animals, servants, entertainers, hunters, homes, and officials. This ideal is reflected in Tang art. It is also reflected in many short stories written in the Tang about people accidentally winding up in the realm of the dead, only to come back and report their experiences. Taoist ideologies surrounding the medical and health benefits of heterosexuality were pervasive. Although such ideologies did not necessarily prevent homosexual or bisexual practices, they advocated a blueprint of health and wellness that conformed to heterosexuality.Yao, Ping. "Between Topics and Sources: Researching the History of Sexuality in Imperial China." In Sexuality in China: Histories of Power and Pleasure, edited by Howard Chiang, 37. University of Washington Press, 2018. http://www.jstor.org/stable/j.ctvcwnwj4.7.
Buddhism, originating in India around the time of Confucius, continued its influence during the Tang period and was accepted by some members of the imperial family, becoming thoroughly sinicised and a permanent part of Chinese traditional culture. In an age before Neo-Confucianism and figures such as Zhu Xi (1130–1200), Buddhism had begun to flourish in China during the Northern and Southern dynasties, and became the dominant ideology during the prosperous Tang. Buddhist monasteries played an integral role in Chinese society, offering lodging for travellers in remote areas, schools for children throughout the country, and a place for urban literati to stage social events and gatherings such as going-away parties. Buddhist monasteries were also engaged in the economy, since their land property and serfs gave them enough revenues to set up mills, oil presses, and other enterprises. Although the monasteries retained 'serfs', these monastery dependents could actually own property and employ others to help them in their work, including their own slaves.
The prominent status of Buddhism in Chinese culture began to decline as the dynasty and central government declined as well during the late 8th century to 9th century. Buddhist convents and temples that had previously been exempt from state taxes were targeted by the state for taxation. In 845, Emperor Wuzong finally shut down 4,600 Buddhist monasteries along with 40,000 temples and shrines, forcing 260,000 Buddhist monks and nuns to return to secular life; this episode would later be dubbed one of the Four Buddhist Persecutions in China. Although the ban was lifted just a few years later, Buddhism never regained its once dominant status in Chinese culture. This situation also came about through a revival of interest in native Chinese philosophies such as Confucianism and Taoism. Han Yu (768–824), whom Arthur F. Wright described as a "brilliant polemicist and ardent xenophobe", was one of the first men of the Tang to denounce Buddhism. Although his contemporaries found him crude and obnoxious, he foreshadowed the later persecution of Buddhism in the Tang, as well as the revival of Confucian theory with the rise of Neo-Confucianism of the Song dynasty. Nonetheless, Chán Buddhism gained popularity among the educated elite. There were also many famous Chan monks from the Tang era, such as Mazu Daoyi, Baizhang, and Huangbo Xiyun. The sect of Pure Land Buddhism initiated by the Chinese monk Huiyuan (334–416) was also just as popular as Chan Buddhism during the Tang.
Rivaling Buddhism was Taoism, a native Chinese philosophical and religious belief system that found its roots in the Tao Te Ching and the Zhuangzi. The ruling Li family of the Tang dynasty actually claimed descent from Laozi, traditionally credited as the author of the Tao Te Ching. On numerous occasions when Tang princes became crown prince or Tang princesses took vows as Taoist priestesses, their lavish former mansions were converted into Taoist abbeys and places of worship. Many Taoists were associated with alchemy in their pursuit of an elixir of immortality and a means to create gold from concocted mixtures of many other elements. Although they never achieved their goals in either of these futile pursuits, they did contribute to the discovery of new metal alloys, porcelain products, and new dyes. The historian Joseph Needham labelled the work of the Taoist alchemists as "protoscience rather than pseudoscience". However, the close connection between Taoism and alchemy, which some sinologists have asserted, is refuted by Nathan Sivin, who states that alchemy was just as prominent (if not more so) in the secular sphere and practised more often by laymen.
The Tang dynasty also officially recognised various foreign religions. The Assyrian Church of the East, otherwise known as the Nestorian Church or the Church of the East in China, was given recognition by the Tang court. In 781, the Nestorian Stele was created in order to honour the achievements of their community in China. A Christian monastery was established in Shaanxi province where the Daqin Pagoda still stands, and inside the pagoda there is Christian-themed artwork. Although the religion largely died out after the Tang, it was revived in China following the Mongol invasions of the 13th century.
Although the Sogdians had been responsible for transmitting Buddhism to China from India during the 2nd–4th centuries, soon afterwards they largely converted to Zoroastrianism due to their links to Sasanian Persia. Sogdian merchants and their families living in cities such as Chang'an, Luoyang, and Xiangyang usually built a Zoroastrian temple once their local communities grew larger than 100 households. Sogdians were also responsible for spreading Manicheism in China and the Uyghur Khaganate. The Uyghurs built the first Manichean monastery in China in 768; in 843, the Tang government ordered that the property of all Manichean monasteries be confiscated in response to the outbreak of war with the Uyghurs. With the blanket ban on foreign religions two years later, Manicheism was driven underground and never flourished in China again.
Leisure
More than earlier periods, the Tang era was renowned for the time reserved for leisure activities, especially among the upper classes. Many outdoor sports and activities were enjoyed during the Tang, including archery, hunting, horse polo, cuju (soccer), cockfighting, and even tug of war. Government officials were granted vacations during their tenure in office. Officials were granted 30 days off every three years to visit their parents if they lived far away, or 15 days off if their parents lived a shorter distance away (travel time not included). Officials were granted nine days of vacation time for the wedding of a son or daughter, and five, three, or one day off for the nuptials of close relatives (travel time not included). Officials also received a total of three days off for their son's capping initiation rite into manhood, and one day off for the initiation rite of a close relative's son. Among the royalty, the tradition of male homosexuality of Imperial China continued to exist, with eunuchs often serving as the royalty's male favorites, owing both to their appearance and to their talents.Hinsch, Bret. Passions of the Cut Sleeve. De Gruyter Brill, 1990, p. 78. https://www.degruyterbrill.com/document/doi/10.1525/9780520912656/html.
Traditional Chinese holidays such as Chinese New Year, the Lantern Festival, and the Cold Food Festival, among others, were universal holidays. In Chang'an there was always lively celebration, especially for the Lantern Festival, since the city's nighttime curfew was lifted by the government for three days straight. Between 628 and 758, the imperial throne bestowed a total of sixty-nine grand carnivals nationwide, granted in cases of special circumstance such as important military victories, abundant harvests after a long drought or famine, the granting of amnesties, or the installation of a new crown prince. For special celebrations in the Tang era, lavish and gargantuan feasts were sometimes prepared, as the imperial court had staffed agencies to prepare the meals. These included a feast for 1,100 elders of Chang'an in 664, a feast for 3,500 officers of the Divine Strategy Army in 768, and one in 826 for 1,200 members of the imperial family and women of the palace. Alcohol consumption was a prominent facet of Chinese culture; people during the Tang drank for nearly every social event. An 8th-century court official allegedly had a serpent-shaped structure called the 'ale grotto' built on the ground floor using a total of 50,000 bricks; it featured bowls from which each of his friends could drink.
Status in clothing
In general, garments were made from silk, wool, or linen, depending on one's social status and what one could afford. Furthermore, laws specified which kinds of clothing could be worn by whom, and the colour of the clothing also indicated rank. During this period, China's power, culture, economy, and influence were thriving; as a result, women could afford to wear loose-fitting, wide-sleeved garments. Even lower-class women's robes had sleeves four to five feet wide.
Position of women
Concepts of women's social rights and social status during the Tang era were notably liberal-minded for the period. However, this was largely reserved for urban women of elite status; in the rural countryside, men and women laboured hard at their separate sets of tasks, with wives and daughters responsible for domestic work such as weaving textiles and rearing silkworms, while men tended the fields. There were many women in the Tang era who gained access to religious authority by taking vows as Taoist priestesses.
In Chang'an, ordinary courtesans inhabited the North Hamlet. They were generally knowledgeable in the rules of drinking games and received particular training in table manners. While renowned for their politeness, courtesans were reputed to dominate conversations among elite men and to be unafraid to openly criticise the rudeness of prominent male guests, including for talking too much or too loudly, or for boasting of their accomplishments. Courtesans were sometimes beaten by their procuring madames. Some of these girls had been beggars in their youth, while others came from poor families; unscrupulous dealers secretly sold girls to brothels, and some girls from respectable households were married into such families and then sold on to brothels at high prices. Once trapped there, they had no way to get out on their own. The girls were first taught to sing and were then subjected to harsh discipline: the slightest negligence was punished with whipping.
Gējìs, or professional singing courtesans, were culturally prominent and joined talent agencies called jiaofang. The emperor selected particularly talented women from the outer jiaofang to form the spring court, which was supplemented by courtesans from other troupes. Singing courtesans of the Tang were often also talented in poetry: in addition to singing, some composed their own songs and even popularised a new form of lyrical verse that incorporated quotations of famous historical figures.
It was fashionable for women to have full figures, and men enjoyed the presence of assertive, active women. The foreign horse-riding sport of polo from Persia became a wildly popular trend among the Chinese elite, and women often played the sport (as glazed earthenware figurines from the period portray). The preferred hairstyle for women was to bunch their hair up like "an elaborate edifice above the forehead", while affluent ladies wore extravagant head ornaments, combs, pearl necklaces, face powders, and perfumes. A law of 671 attempted to force women to wear hats with veils again in order to promote decency, but it was widely ignored: some women began wearing caps or even no hats at all, along with men's riding clothes and boots, and tight-sleeved bodices.
There were some prominent court women after the era of Empress Wu, such as Yang Guifei (719–756), who had Emperor Xuanzong appoint many of her relatives and cronies to important ministerial and martial positions.
Cuisine
During the earlier Northern and Southern dynasties (420–589), and perhaps even earlier, the drinking of tea (Camellia sinensis) became popular in southern China. Tea was viewed then as a beverage of tasteful pleasure and with pharmacological purpose as well. During the Tang dynasty, tea became synonymous with everything sophisticated in society. The poet Lu Tong (790–835) devoted most of his poetry to his love of tea. The 8th-century author Lu Yu, known as the Sage of Tea, wrote a treatise on the art of drinking tea called The Classic of Tea. Although wrapping paper had been used in China since the 2nd century BC, during the Tang it was used as folded and sewn square bags to hold and preserve the flavor of tea leaves. This followed many other uses for paper such as the first recorded use of toilet paper in 589 by the scholar-official Yan Zhitui (531–591), confirmed in 851 by an Arab traveller who remarked that the Tang lacked cleanliness because they relied on toilet paper instead of washing themselves with water.
In ancient times, the Chinese had outlined the five most basic foodstuffs, known as the five grains: sesamum, legumes, wheat, panicled millet, and glutinous millet. The Ming dynasty encyclopedist Song Yingxing noted that rice was not counted among the five grains from the time of the legendary and deified Chinese sage Shennong (whose very existence Song described as "an uncertain matter") into the 2nd millennium BC, because southern China, whose wet and humid climate was suited to growing rice, was not yet fully settled or cultivated by the Chinese. Song Yingxing also noted that in the Ming dynasty, seven-tenths of civilians' food was rice. During the Tang dynasty, rice was not only the most important staple in southern China but had also become popular in the north, where central authority resided.
During the Tang dynasty, wheat displaced millet as the main staple crop, and wheat-based cakes accordingly made up a considerable share of the Tang staple diet. There were four main kinds of cake: steamed cake, boiled cake, pancake, and Hu cake. Steamed cake was commonly consumed by both civilians and aristocrats; like rougamo in modern Chinese cuisine, it was usually stuffed with meat and vegetables. Boiled cake had been the staple of the Northern Dynasties and kept its popularity in the Tang dynasty; it included a wide variety of dishes similar to modern wonton and noodles, along with other dishes made by boiling wheat dough in water. While aristocrats favoured wonton, civilians usually consumed noodles and noodle-slice soup, which were easier to produce. The pancake was rare in China before the Tang, when it gained popularity. Hu cake, which was extremely popular during the Tang, was toasted in the oven, covered with sesame seeds, and served at taverns, inns, and shops. The Japanese Buddhist monk Ennin observed that Hu cake was popular among civilians throughout China.
Common foodstuffs and cooking ingredients during the Tang, in addition to those already listed, included barley, garlic, salt, turnips, soybeans, pears, apricots, peaches, apples, pomegranates, jujubes, rhubarb, hazelnuts, pine nuts, chestnuts, walnuts, yams, and taro. The various meats consumed included pork, chicken, lamb (especially preferred in the north), sea otter, bear (which was hard to catch, but there were recipes for steamed, boiled, and marinated bear), and even Bactrian camel. In the south, along the coast, seafood was naturally the most common: the Chinese enjoyed eating cooked jellyfish with cinnamon, Sichuan pepper, cardamom, and ginger, as well as oysters with wine, fried squid with ginger and vinegar, horseshoe crabs and red swimming crabs, shrimp, and pufferfish, which the Chinese called "river piglet".
From trade overseas and overland, the Chinese acquired peaches from Samarkand, date palms, pistachios, and figs from Persia, pine nuts and ginseng roots from Korea, and mangoes from Southeast Asia. There was a great demand for sugar in China; during the reign of Harsha over northern India, Indian envoys to the Tang brought two makers of sugar who successfully taught the Chinese how to cultivate sugarcane. Cotton also came from India as a finished product from Bengal, although it was during the Tang that the Chinese began to grow and process cotton themselves, and by the Yuan dynasty it had become the prime textile fabric in China.
Some foods were off-limits: the Tang court encouraged people not to eat beef, since the bull was a valuable working animal. From 831 to 833, Emperor Wenzong even banned the slaughter of cattle on the grounds of his Buddhist religious convictions.
Methods of food preservation were important and practised throughout China. The common people used simple methods of preservation, such as digging deep ditches and trenches, brining, and salting their foods. The emperor had large ice pits located in the parks in and around Chang'an for preserving food, while the wealthy and elite had their own smaller ice pits. Each year the emperor had labourers carve 1,000 blocks of ice from frozen creeks in mountain valleys, each block cut to prescribed dimensions. Frozen delicacies such as chilled melon were enjoyed during the summer.
Science and technology
Engineering
Technology during the Tang period built upon the precedents of the past. Previous advancements in clockworks and timekeeping included the mechanical gear systems of Zhang Heng (78–139) and Ma Jun, which gave the Tang mathematician, mechanical engineer, astronomer, and monk Yi Xing (683–727) inspiration when he invented the world's first clockwork escapement mechanism in 725. This was used alongside a water clock and waterwheel to power a rotating armillary sphere that represented astronomical observation. Yi Xing's device also had a mechanically timed bell that was struck automatically every hour and a drum that was struck automatically every quarter-hour; it was, in essence, a striking clock. Yi Xing's astronomical clock and water-powered armillary sphere became well known throughout the country, since students attempting to pass the imperial examinations by 730 had to write an essay on the device as an exam requirement. However, the most common type of public and palace timekeeping device was the inflow clepsydra. Its design was improved by the Sui dynasty engineers Geng Xun and Yuwen Kai, who provided a steelyard balance that allowed seasonal adjustment of the pressure head in the compensating tank, and thereby control of the rate of flow for different lengths of day and night.
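The principle behind this seasonal adjustment can be illustrated with a rough sketch: if the compensating tank is to empty over a single night, the required flow rate falls as nights lengthen. The tank volume, night lengths, and function name below are illustrative assumptions only, not figures from Tang sources.

```python
# Rough illustration of why a clepsydra needs seasonal flow adjustment.
# All figures (tank volume, night lengths) are assumed for illustration only.

def required_flow_rate(tank_volume_litres: float, night_hours: float) -> float:
    """Flow rate (litres per hour) that empties the tank over one night."""
    return tank_volume_litres / night_hours

TANK_VOLUME_LITRES = 60.0  # assumed capacity of the compensating tank

for season, night_hours in [("winter solstice", 14.0),
                            ("equinox", 12.0),
                            ("summer solstice", 10.0)]:
    rate = required_flow_rate(TANK_VOLUME_LITRES, night_hours)
    print(f"{season}: {rate:.1f} L/h over a {night_hours:.0f}-hour night")
```

Longer winter nights call for a slower flow, which is the kind of adjustment the steelyard-balanced pressure head made possible.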
There were many other mechanical inventions during the Tang era. These included a tall mechanical wine server created during the early 8th century in the shape of an artificial mountain, carved out of iron and resting on a lacquered-wooden tortoise frame. This intricate device used a hydraulic pump that siphoned wine out of metal dragon-headed faucets, as well as tilting bowls that were timed to dip wine down, by force of gravity when filled, into an artificial lake that had intricate iron leaves popping up as trays for placing party treats. The historian Charles Benn describes the device in further detail.
Yet the use of a teasing mechanical puppet in this wine-serving device was not exactly a novel invention of the Tang, since the use of mechanical puppets in China dates to the Qin dynasty (221–206 BC). In the 3rd century, Ma Jun had an entire mechanical puppet theatre operated by the rotation of a waterwheel. There are many stories of automatons used in the Tang, including the general Yang Wulian's wooden statue of a monk who stretched his hands out to collect contributions; when the number of coins reached a certain weight, the mechanical figure moved his arms to deposit them in a satchel. This weight-and-lever mechanism was exactly like Heron's penny slot machine. Other devices included one by Wang Ju, whose "wooden otter" could allegedly catch fish; Needham suspects a spring trap of some kind was employed here.
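The weight-and-lever trigger attributed to Yang Wulian's alms-collecting figure (and to Heron's coin-operated machine) can be sketched as a simple threshold mechanism; the coin weight and trip weight below are arbitrary assumptions chosen only to illustrate the principle.

```python
# Illustrative weight-and-lever trigger: the figure's arms move once the
# accumulated coin weight exceeds a trip threshold. All numbers are assumed.

TRIP_WEIGHT_G = 100.0  # assumed counterweight setting (grams)
COIN_WEIGHT_G = 4.0    # assumed weight of one coin (grams)

def coins_until_trip(trip_weight_g: float, coin_weight_g: float) -> int:
    """Count how many coins are dropped in before the lever tips."""
    accumulated = 0.0
    coins = 0
    while accumulated < trip_weight_g:
        accumulated += coin_weight_g
        coins += 1
    return coins

print(f"Arms deposit the coins after {coins_until_trip(TRIP_WEIGHT_G, COIN_WEIGHT_G)} coins")
```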
In the realm of structural engineering and technical Chinese architecture, there were also government standard building codes, outlined in the early Tang building code Yingshan Ling. Fragments of this book have survived in the Tang Lü (Tang Code), while the Song dynasty architectural manual of the Yingzao Fashi by Li Jie (1065–1101) in 1103 is the oldest existing technical treatise on Chinese architecture that has survived in full. During the reign of Emperor Xuanzong (712–756), there were 34,850 registered craftsmen serving the state, managed by an agency for palace buildings.
Woodblock printing
Woodblock printing made the written word available to vastly greater audiences. One of the world's oldest surviving printed documents is a miniature Buddhist dharani sutra unearthed at Xi'an in 1974 and dated roughly to 650–670. The Diamond Sutra, precisely dated to 868, is the first full-length book printed at regular size, complete with illustrations embedded in the text. Among the earliest documents to be printed were Buddhist texts and calendars, the latter essential for calculating and marking which days were auspicious and which were not. With so many books coming into circulation for the general public, literacy rates could improve, and the lower classes could obtain cheaper sources of study; as a result, more people of lower-class origin entered and passed the imperial examinations by the later Song dynasty. Although Bi Sheng's later movable-type printing of the 11th century was innovative for its period, the woodblock printing that became widespread in the Tang remained the dominant printing method in China until the more advanced printing press from Europe became widely accepted and used in East Asia. The playing card, first used during the Tang dynasty, was an auxiliary invention of the new age of printing.
Cartography
In cartography, the Tang made further advances beyond the standards of the Han. When the Tang chancellor Pei Ju (547–627) was working for the Sui dynasty as a Commercial Commissioner in 605, he created a well-known gridded map with a graduated scale in the tradition of Pei Xiu (224–271). The Tang chancellor Xu Jingzong (592–672) was also known for his map of China drawn in 658. In 785, Emperor Dezong had the geographer and cartographer Jia Dan (730–805) complete a map of China and her former colonies in Central Asia. Upon its completion in 801, the enormous map was drawn on a grid scale of one inch equalling one hundred li. A Chinese map of 1137 is similar in complexity to the one made by Jia Dan, carved on a stone stele with a grid scale of 100 li. The only Tang-era maps that have survived are star charts, although the earliest extant terrain maps of China come from the earlier state of Qin.
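Jia Dan's stated scale of one inch to one hundred li lends itself to a quick conversion sketch. The metric length assumed for a li below (about 500 m) is an approximation (estimates for the Tang li vary), and the helper functions are purely illustrative.

```python
# Illustrative conversion for a gridded map drawn at one inch = 100 li.
# The metric length of a Tang li is assumed here (~500 m); estimates vary.

LI_PER_INCH = 100        # scale stated for Jia Dan's map
METRES_PER_LI = 500.0    # assumed approximate value
METRES_PER_INCH = 0.0254

def map_inches_to_ground_km(inches: float) -> float:
    """Ground distance (km) represented by a given length on the map."""
    return inches * LI_PER_INCH * METRES_PER_LI / 1000.0

def representative_fraction() -> float:
    """Map scale expressed as a representative fraction (1 : N)."""
    return (LI_PER_INCH * METRES_PER_LI) / METRES_PER_INCH

print(f"10 map inches ≈ {map_inches_to_ground_km(10):,.0f} km on the ground")
print(f"Scale ≈ 1 : {representative_fraction():,.0f}")
```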
Medicine
The Chinese of the Tang era were interested in the benefits of officially classifying the medicines used in pharmacology. In 657, Emperor Gaozong of Tang commissioned an official pharmacopoeia, complete with text and illustrated drawings for 833 different medicinal substances. In addition to compiling pharmacopeias, the Tang fostered learning in medicine by upholding imperial medical colleges, state examinations for doctors, and publishing forensic manuals for physicians. Authors of medicine in the Tang include Zhen Chuan and Sun Simiao (581–682); the former was the first to identify in writing that patients with diabetes had an excess of sugar in their urine, and the latter was the first to recognise that diabetic patients should avoid consuming alcohol and starchy foods. As written by Zhen Chuan and others in the Tang, the thyroid glands of sheep and pigs were successfully used to treat goitres; thyroid extracts were not used to treat patients with goitre in the West until 1890. The use of dental amalgam, manufactured from tin and silver, was first introduced in the medical text Xinxiu bencao written by Su Gong in 659.
Alchemy, gas cylinders, and air conditioning
Chinese scientists of the Tang period employed complex chemical formulas for an array of different purposes, often found through experiments of alchemy. These included a waterproof and dust-repelling cream or varnish for clothes and weapons, fireproof cement for glass and porcelain wares, a waterproof cream applied to silk clothes of underwater divers, a cream designated for polishing bronze mirrors, and many other useful formulas. Porcelain was invented in China during the Tang, although many types of glazed ceramics preceded it.
Since the time of the Han, the Chinese had drilled deep boreholes to transport natural gas from bamboo pipelines to stoves where cast iron evaporation pans boiled brine to extract salt. During the Tang dynasty, a gazetteer of Sichuan province stated that at one of these 'fire wells', men collected natural gas into portable bamboo tubes which could be carried around for dozens of kilometres and still produce a flame. These were essentially the first gas cylinders; Robert Temple assumes some sort of tap was used for this device.
The Han dynasty inventor Ding Huan devised a rotary fan for air conditioning. In 747, Emperor Xuanzong had a "Cool Hall" built in the imperial palace, which the Tang Yulin describes as having water-powered fan wheels for air conditioning as well as rising jets of water from fountains. During the subsequent Song dynasty, written sources mention the rotary fan as being even more widely used.
Historiography
The first classic work about the Tang is the Old Book of Tang by Liu Xu et al. of the Later Jin. This was edited into another history (labeled the New Book of Tang) by the Song historians Ouyang Xiu, Song Qi, et al. Both were based on earlier annals, now lost. Both of them also rank among the Twenty-Four Histories of China. One of the surviving sources of the Old Book of Tang, primarily covering up to 756, is the Tongdian, which Du You presented to the emperor in 801. The Tang period was placed into the enormous 294-volume universal history text of the Zizhi Tongjian, edited, compiled, and completed in 1084 by a team of scholars under the Song dynasty Chancellor Sima Guang. This historical text covered the history of China from the beginning of the Warring States in 403 BC until the beginning of the Song dynasty in 960.
External links
Tang Dynasty (618–907) – Heilbrunn Timeline of Art History, Metropolitan Museum of Art
300 Tang Poems – Chinese Text Initiative, University of Virginia
Guide to Tang art, with video commentary – Art of Asia, Minneapolis Institute of Art
Han dynasty
https://en.wikipedia.org/wiki/Han_dynasty
The Han dynasty was an imperial dynasty of China (202 BC – 9 AD, 25–220 AD) established by Liu Bang and ruled by the House of Liu. The dynasty was preceded by the short-lived Qin dynasty (221–206 BC) and a warring interregnum known as the Chu–Han Contention (206–202 BC), and it was succeeded by the Three Kingdoms period (220–280 AD). The dynasty was briefly interrupted by the Xin dynasty (9–23 AD) established by the usurping regent Wang Mang, and is thus separated into two periods: the Western Han (202 BC – 9 AD) and the Eastern Han (25–220 AD). Spanning over four centuries, the Han dynasty is considered a golden age in Chinese history, and it had a permanent impact on Chinese identity in later periods. The majority ethnic group of modern China refer to themselves as the "Han people" or "Han Chinese", and spoken and written Chinese are referred to, respectively, as the "Han language" and "Han characters".
The Han emperor was at the pinnacle of Han society and culture. He presided over the Han government but shared power with both the nobility and the appointed ministers who came largely from the scholarly gentry class. The Han Empire was divided into areas directly controlled by the central government called commanderies, as well as a number of semi-autonomous kingdoms. These kingdoms gradually lost all vestiges of their independence, particularly following the Rebellion of the Seven States. From the reign of Emperor Wu () onward, the Chinese court officially sponsored Confucianism in education and court politics, synthesized with the cosmology of later scholars such as Dong Zhongshu.
The Han dynasty oversaw periods of economic prosperity as well as significant growth in the money economy that had first been established during the Zhou dynasty (–256 BC). The coinage minted by the central government in 119 BC remained the standard in China until the Tang dynasty (618–907 AD). The period saw a number of modest institutional innovations. To finance its military campaigns and the settlement of newly conquered frontier territories, the Han government nationalised the private salt and iron industries in 117 BC, creating government monopolies that were later repealed during the Eastern Han period. There were significant advances in science and technology during the Han period, including the emergence of papermaking, rudders for steering ships, negative numbers in mathematics, raised-relief maps, hydraulic-powered armillary spheres for astronomy, and seismometers that discerned the cardinal direction of distant earthquakes by use of inverted pendulums.
The Han dynasty had many conflicts with the Xiongnu, a nomadic confederation centred in the eastern Eurasian steppe. The Xiongnu defeated the Han in 200 BC, prompting the Han to appease the Xiongnu with a policy of marriage alliance and payments of tribute, though the Xiongnu continued to raid the Han's northern borders. Han policy changed in 133 BC, under Emperor Wu, when Han forces began a series of military campaigns to quell the Xiongnu. The Xiongnu were eventually defeated and forced to accept a status as Han vassals, and the Xiongnu confederation fragmented. The Han conquered the Hexi Corridor and Inner Asian territory of the Tarim Basin from the Xiongnu, helping to establish the Silk Road. The lands north of the Han's borders were later overrun by the nomadic Xianbei confederation. Emperor Wu also launched successful conquests in the south, annexing Nanyue in 111 BC and Dian in 109 BC. He further expanded Han territory into the northern Korean Peninsula, where Han forces conquered Gojoseon and established the Xuantu and Lelang commanderies in 108 BC.
After 92 AD, palace eunuchs increasingly involved themselves in the dynasty's court politics, engaging in violent power struggles between various consort clans of the empresses and empresses dowager. Imperial authority was also seriously challenged by large Taoist religious societies which instigated the Yellow Turban Rebellion and the Five Pecks of Rice Rebellion. Following the death of Emperor Ling (), the palace eunuchs were massacred by military officers, allowing members of the aristocracy and military governors to become warlords and divide the empire. The Han dynasty came to an end in 220 AD when Cao Pi, king of Wei, usurped the throne from Emperor Xian.
Etymology
According to the Shiji, after the collapse of the Qin dynasty the hegemon Xiang Yu appointed Liu Bang as prince of the small fief of Hanzhong, named after its location on the Han River (in modern southwest Shaanxi). Following Liu Bang's victory in the Chu–Han Contention, the resulting Han dynasty was named after the Hanzhong fief.
History
Western Han (202 BC – 9 AD)
China's first imperial dynasty was the Qin dynasty (221–206 BC). The Qin united the Chinese Warring States by conquest, but their regime became unstable after the death of the first emperor Qin Shi Huang. Within four years, the dynasty's authority had collapsed in a rebellion. Two former rebel leaders, Xiang Yu () of Chu and Liu Bang () of Han, engaged in a war to determine who would have hegemony over China, which had fissured into Eighteen Kingdoms, each claiming allegiance to either Xiang Yu or Liu Bang. Although Xiang Yu proved to be an effective commander, Liu Bang defeated him at the Battle of Gaixia (202 BC) in modern-day Anhui. Liu Bang assumed the title of Emperor at the urging of his followers and is known posthumously as Emperor Gaozu (). Chang'an (modern Xi'an) was chosen as the new capital of the reunified empire under Han.
At the beginning of the Western Han (), also known as the Former Han (), thirteen centrally controlled commanderies—including the capital region—existed in the western third of the empire, while the eastern two-thirds were divided into ten semi-autonomous kingdoms. To placate his prominent commanders from the war with Chu, Emperor Gaozu enfeoffed some of them as kings.
By 196 BC, the Han court had replaced all of these kings with royal Liu family members, with the lone exception of Changsha. The loyalty of non-relatives to the emperor was questioned, and after several insurrections by Han kings (the largest being the Rebellion of the Seven States in 154 BC), the imperial court began enacting a series of reforms in 145 BC that limited the power of these kingdoms, dividing their former territories into new commanderies under central control. Kings were no longer able to appoint their own staff; this duty was assumed by the imperial court. Kings became nominal heads of their fiefs and collected a portion of tax revenues as their personal incomes. The kingdoms were never entirely abolished and existed throughout the remainder of the Western and Eastern Han.
To the north of China proper, the nomadic Xiongnu chieftain Modu Chanyu () conquered various tribes inhabiting the eastern portion of the Eurasian Steppe. By the end of his reign, he controlled the Inner Asian regions of Manchuria, Mongolia, and the Tarim Basin, subjugating over twenty states east of Samarkand. Emperor Gaozu was troubled about the abundant Han-manufactured iron weapons traded to the Xiongnu along the northern borders, and he established a trade embargo against the group.
In retaliation, the Xiongnu invaded what is now Shanxi, where they defeated the Han forces at Baideng in 200 BC. After negotiations, the heqin agreement in 198 BC nominally held the leaders of the Xiongnu and the Han as equal partners in a royal marriage alliance, but the Han were forced to send large amounts of tribute items such as silk clothes, food, and wine to the Xiongnu.
Despite the tribute and negotiation between Laoshang Chanyu () and Emperor Wen () to reopen border markets, many of the Chanyu's subordinates chose not to obey the treaty and periodically raided Han territories south of the Great Wall for additional goods. In a court conference assembled by Emperor Wu () in 135 BC, the majority consensus of the ministers was to retain the heqin agreement. Emperor Wu accepted this, despite continuing Xiongnu raids.
However, a court conference the following year convinced the majority that a limited engagement at Mayi involving the assassination of the Chanyu would throw the Xiongnu realm into chaos and benefit the Han. When this plot failed in 133 BC, Emperor Wu launched a series of massive military invasions into Xiongnu territory. The assault culminated in 119 BC at the Battle of Mobei, when Han commanders Huo Qubing () and Wei Qing () forced the Xiongnu court to flee north of the Gobi Desert, and Han forces reached as far north as Lake Baikal.
After Wu's reign, Han forces continued to fight the Xiongnu. The Xiongnu leader Huhanye () finally submitted to the Han as a tributary vassal in 51 BC. Huhanye's rival claimant to the throne, Zhizhi Chanyu (), was killed by Han forces under Chen Tang and Gan Yanshou at the Battle of Zhizhi, in modern Taraz, Kazakhstan.
In 121 BC, Han forces expelled the Xiongnu from a vast territory spanning the Hexi Corridor to Lop Nur. They repelled a joint Xiongnu-Qiang invasion of this northwestern territory in 111 BC. In that same year, the Han court established four new frontier commanderies in this region to consolidate their control: Jiuquan, Zhangye, Dunhuang, and Wuwei. The majority of people on the frontier were soldiers. On occasion, the court forcibly moved peasant farmers to new frontier settlements, along with government-owned slaves and convicts who performed hard labour. The court also encouraged commoners, such as farmers, merchants, landowners, and hired labourers, to voluntarily migrate to the frontier.
Even before the Han's expansion into Central Asia, diplomat Zhang Qian's travels from 139 to 125 BC had established Chinese contacts with many surrounding civilizations. Zhang encountered Dayuan (Fergana), Kangju (Sogdiana), and Daxia (Bactria, formerly the Greco-Bactrian Kingdom); he also gathered information on Shendu (the Indus River valley) and Anxi (the Parthian Empire). All of these countries eventually received Han embassies. These connections marked the beginning of the Silk Road trade network that extended to the Roman Empire, bringing goods like Chinese silk and Roman glasswares between the two.
For decades, Han forces fought the Xiongnu over control of the oasis city-states in the Tarim Basin. The Han were eventually victorious and established the Protectorate of the Western Regions in 60 BC, which dealt with the region's defence and foreign affairs. The Han also expanded southward: the naval conquest of Nanyue in 111 BC extended the Han realm into what are now modern Guangdong, Guangxi, and northern Vietnam, while Yunnan was brought into the Han realm with the conquest of the Dian Kingdom in 109 BC, followed by parts of the Korean Peninsula with the Han conquest of Gojoseon and the establishment of the Xuantu and Lelang commanderies in 108 BC. The first nationwide census in Chinese history was taken in 2 AD; the Han's total population was registered as comprising 57,671,400 individuals across 12,366,470 households.
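The census figures just quoted imply an average registered household size, a simple division sketched below for illustration; the variable names are purely illustrative.

```python
# Average household size implied by the 2 AD census figures quoted above.
registered_individuals = 57_671_400
registered_households = 12_366_470

average_household_size = registered_individuals / registered_households
print(f"Average registered household size: {average_household_size:.2f} people")
# prints roughly 4.66 people per household
```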
To pay for his military campaigns and colonial expansion, Emperor Wu nationalised several private industries. He created central government monopolies administered largely by former merchants. These monopolies included salt, iron, and liquor production, as well as bronze coinage. The liquor monopoly lasted only from 98 to 81 BC, and the salt and iron monopolies were eventually abolished in the early Eastern Han. The issuing of coinage remained a central government monopoly throughout the rest of the Han dynasty.
The government monopolies were eventually repealed when a political faction known as the Reformists gained greater influence in the court. The Reformists opposed the Modernist faction that had dominated court politics in Emperor Wu's reign and during the subsequent regency of Huo Guang (). The Modernists argued for an aggressive and expansionary foreign policy supported by revenues from heavy government intervention in the private economy. The Reformists, however, overturned these policies, favouring a cautious, non-expansionary approach to foreign policy, frugal budget reform, and lower tax-rates imposed on private entrepreneurs.
Wang Mang's reign and civil war
Wang Zhengjun (71 BC – 13 AD) was first empress, then empress dowager, and finally grand empress dowager during the reigns of the Emperors Yuan, Cheng, and Ai, respectively. During this time, a succession of her male relatives held the title of regent. Following the death of Ai, Wang Zhengjun's nephew Wang Mang (45 BC – 23 AD) was appointed regent as Marshal of State on 16 August under Emperor Ping (1 BC – 6 AD).
When Ping died on 3 February 6 AD, Ruzi Ying () was chosen as the heir and Wang Mang was appointed to serve as acting emperor for the child. Wang promised to relinquish his control to Liu Ying once he came of age. Despite this promise, and against protest and revolts from the nobility, Wang Mang claimed on 10 January that the divine Mandate of Heaven called for the end of the Han dynasty and the beginning of his own: the Xin dynasty (9–23 AD).
Wang Mang initiated a series of major reforms that were ultimately unsuccessful. These reforms included outlawing slavery, nationalising land and distributing it equally between households, and introducing new currencies, a change which debased the value of coinage. Although these reforms provoked considerable opposition, Wang's regime met its ultimate downfall with massive flooding of the Yellow River, which struck in the early years of the 1st century AD and again in 11 AD. Gradual silt build-up in the Yellow River had raised its water level and overwhelmed the flood control works. The river split into two new branches: one emptying to the north and the other to the south of the Shandong Peninsula, though Han engineers managed to dam the southern branch by 70 AD.
The flood dislodged thousands of peasant farmers, many of whom joined roving bandit and rebel groups such as the Red Eyebrows to survive. Wang Mang's armies were incapable of quelling these enlarged rebel groups. Eventually, an insurgent mob forced their way into the Weiyang Palace and killed Wang Mang.
The Gengshi Emperor (), a descendant of Emperor Jing (), attempted to restore the Han dynasty and occupied Chang'an as his capital. However, he was overwhelmed by the Red Eyebrow rebels who deposed, assassinated, and replaced him with the puppet monarch Liu Penzi. Gengshi's distant cousin Liu Xiu, known posthumously as Emperor Guangwu (), after distinguishing himself at the Battle of Kunyang in 23 AD, was urged to succeed Gengshi as emperor.
Under Guangwu's rule, the Han Empire was restored. Guangwu made Luoyang his capital in 25 AD, and by 27 his officers Deng Yu and Feng Yi had forced the Red Eyebrows to surrender and executed their leaders for treason. From 26 until 36 AD, Emperor Guangwu had to wage war against other regional warlords who claimed the title of emperor; when these warlords were defeated, China reunified under the Han.
The period between the foundation of the Han dynasty and Wang Mang's reign is known as the Western Han or Former Han (206 BC – 9 AD). During this period the capital was at Chang'an (modern Xi'an). From the reign of Guangwu the capital was moved eastward to Luoyang. The era from his reign until the fall of Han is known as the Eastern Han or Later Han (25–220 AD).
Eastern Han (25–220 AD)
The Eastern Han (), also known as the Later Han (), formally began on 5 August AD 25, when Liu Xiu became Emperor Guangwu of Han. During the widespread rebellion against Wang Mang, the state of Goguryeo was free to raid Han's Korean commanderies; Han did not reaffirm its control over the region until AD 30.
The Trưng Sisters of Vietnam rebelled against Han in AD 40. Their rebellion was crushed by Han general Ma Yuan () in a campaign from AD 42 to 43. Wang Mang renewed hostilities against the Xiongnu, who were estranged from Han until their leader Bi (), a rival claimant to the throne against his cousin Punu (), submitted to Han as a tributary vassal in AD 50. This created two rival Xiongnu states: the Southern Xiongnu led by Bi, an ally of Han, and the Northern Xiongnu led by Punu, an enemy of Han.
During the turbulent reign of Wang Mang, China lost control over the Tarim Basin, which was conquered by the Northern Xiongnu in AD 63 and used as a base to invade the Hexi Corridor in Gansu. Dou Gu () defeated the Northern Xiongnu at the Battle of Yiwulu in AD 73, evicting them from Turpan and chasing them as far as Lake Barkol before establishing a garrison at Hami. After the new Protector General of the Western Regions Chen Mu () was killed by allies of the Xiongnu in Karasahr and Kucha, the garrison at Hami was withdrawn.
At the Battle of Ikh Bayan in AD 89, Dou Xian () defeated the Northern Xiongnu chanyu who then retreated into the Altai Mountains. After the Northern Xiongnu fled into the Ili River valley in AD 91, the nomadic Xianbei occupied the area from the borders of the Buyeo Kingdom in Manchuria to the Ili River of the Wusun people. The Xianbei reached their apogee under Tanshihuai (), who consistently defeated Chinese armies. However, Tanshihuai's confederation disintegrated after his death.
Ban Chao () enlisted the aid of the Kushan Empire, which controlled territory across South and Central Asia, to subdue Kashgar and its ally Sogdiana. When a request by Kushan ruler Vima Kadphises () for a marriage alliance with the Han was rejected in AD 90, he sent his forces to Wakhan (modern-day Afghanistan) to attack Ban Chao. The conflict ended with the Kushans withdrawing because of lack of supplies. In AD 91, the office of Protector General of the Western Regions was reinstated when it was bestowed on Ban Chao.
Foreign travellers to the Eastern Han empire included Buddhist monks who translated works into Chinese, such as An Shigao from Parthia and Lokaksema from Kushan-era Gandhara. In addition to tributary relations with the Kushans, the Han empire received gifts from sovereigns of the Parthian Empire, as well as from kings in modern Burma and Japan. Ban Chao also initiated an unsuccessful mission to Rome in AD 97, dispatching Gan Ying as emissary.
A Roman embassy of Emperor Marcus Aurelius () is recorded in the Weilüe and Book of Later Han to have reached the court of Emperor Huan of Han () in AD 166, yet Rafe de Crespigny asserts that this was most likely a group of Roman merchants. In addition to Roman glasswares and coins found in China, Roman medallions from the reign of Antoninus Pius and his adopted son Marcus Aurelius have been found at Óc Eo in Vietnam. This was near the commandery of Rinan where Chinese sources claim the Romans first landed, as well as embassies from Tianzhu in northern India in 159 and 161. Óc Eo is also thought to be the port city "Cattigara" described by Ptolemy in his Geography () as lying east of the Golden Chersonese (Malay Peninsula) along the Magnus Sinus (i.e. the Gulf of Thailand and South China Sea), where a Greek sailor had visited.
Emperor Zhang's () reign came to be viewed by later Eastern Han scholars as the high point of the dynastic house. Subsequent reigns were increasingly marked by eunuch intervention in court politics and their involvement in the violent power struggles of the imperial consort clans. In 92 AD, with the aid of the eunuch Zheng Zhong (), Emperor He () had Empress Dowager Dou () put under house arrest and her clan stripped of power. This was in revenge for Dou's purging of the clan of his natural mother—Consort Liang—and then concealing her identity from him. After Emperor He's death, his wife Empress Deng Sui () managed state affairs as the regent empress dowager during a turbulent financial crisis and widespread Qiang rebellion that lasted from 107 to 118 AD.
When Empress Dowager Deng died, Emperor An () was convinced by the accusations of the eunuchs Li Run () and Jiang Jing () that Deng and her family had planned to depose him. An dismissed Deng's clan members from office, exiled them, and forced many to commit suicide. After An's death, his wife, Empress Dowager Yan () placed the child Marquess of Beixiang on the throne in an attempt to retain power within her family. However, palace eunuch Sun Cheng () masterminded a successful overthrow of her regime to enthrone Emperor Shun of Han (). Yan was placed under house arrest, her relatives were either killed or exiled, and her eunuch allies were slaughtered. The regent Liang Ji (), brother of Empress Liang Na (), had the brother-in-law of Consort Deng Mengnü () killed after Deng Mengnü resisted Liang Ji's attempts to control her. Afterward, Emperor Huan employed eunuchs to depose Liang Ji, who was then forced to commit suicide.
Students from the imperial university organized a widespread student protest against the eunuchs of Emperor Huan's court. Huan further alienated the bureaucracy when he initiated grandiose construction projects and hosted thousands of concubines in his harem at a time of economic crisis. Palace eunuchs imprisoned the official Li Ying () and his associates from the Imperial University on a dubious charge of treason. In 167 AD, the Grand Commandant Dou Wu () convinced his son-in-law, Emperor Huan, to release them. However, the emperor permanently barred Li Ying and his associates from serving in office, marking the beginning of the Partisan Prohibitions.
Following Huan's death, Dou Wu and the Grand Tutor Chen Fan () attempted a coup against the eunuchs Hou Lan (), Cao Jie (), and Wang Fu (). When the plot was uncovered, the eunuchs arrested Empress Dowager Dou () and Chen Fan. General Zhang Huan () favoured the eunuchs. He and his troops confronted Dou Wu and his retainers at the palace gate where each side shouted accusations of treason against the other. When the retainers gradually deserted Dou Wu, he was forced to commit suicide.
Under Emperor Ling () the eunuchs had the partisan prohibitions renewed and expanded, while also auctioning off top government offices. Many affairs of state were entrusted to the eunuchs Zhao Zhong () and Zhang Rang () while Emperor Ling spent much of his time roleplaying with concubines and participating in military parades.
End of the Han dynasty
The Partisan Prohibitions were repealed during the Yellow Turban Rebellion and Five Pecks of Rice Rebellion in 184 AD, largely because the court did not want to continue to alienate a significant portion of the gentry class who might otherwise join the rebellions. The Yellow Turbans and Five-Pecks-of-Rice adherents belonged to two different hierarchical Taoist religious societies led by faith healers Zhang Jue () and Zhang Lu (), respectively.
Zhang Lu's rebellion, in what is now northern Sichuan and southern Shaanxi, was not quelled until 215 AD. Zhang Jue's massive rebellion across eight provinces was annihilated by Han forces within a year; however, the following decades saw much smaller recurrent uprisings. Although the Yellow Turbans were defeated, many generals appointed during the crisis never disbanded their assembled militias and used these troops to amass power outside of the collapsing imperial authority.
General-in-chief He Jin (), half-brother to Empress He (), plotted with Yuan Shao () to overthrow the eunuchs by having several generals march to the outskirts of the capital. There, in a written petition to Empress He, they demanded the eunuchs' execution. After a period of hesitation, Empress He consented. When the eunuchs discovered this, however, they had her brother He Miao () rescind the order. The eunuchs assassinated He Jin on 22 September 189.
Yuan Shao then besieged Luoyang's Northern Palace while his brother Yuan Shu () besieged the Southern Palace. On 25 September both palaces were breached and approximately two thousand eunuchs were killed. Zhang Rang had previously fled with Emperor Shao () and his brother Liu Xie—the future Emperor Xian of Han (). While being pursued by the Yuan brothers, Zhang committed suicide by jumping into the Yellow River.
General Dong Zhuo () found the young emperor and his brother wandering in the countryside. He escorted them safely back to the capital and was made Minister of Works, taking control of Luoyang and forcing Yuan Shao to flee. After Dong Zhuo demoted Emperor Shao and promoted his brother Liu Xie as Emperor Xian, Yuan Shao led a coalition of former officials and officers against Dong, who burned Luoyang to the ground and resettled the court at Chang'an in May 191 AD. Dong Zhuo later poisoned Emperor Shao.
Dong was killed by his adopted son Lü Bu () in a plot hatched by Wang Yun (). Emperor Xian fled from Chang'an in 195 AD to the ruins of Luoyang. Xian was persuaded by Cao Cao (155–220 AD), then Governor of Yan Province in modern western Shandong and eastern Henan, to move the capital to Xuchang in 196 AD.
Yuan Shao challenged Cao Cao for control over the emperor. Yuan's power was greatly diminished after Cao defeated him at the Battle of Guandu in 200 AD. After Yuan died, Cao killed Yuan Shao's son Yuan Tan (173–205 AD), who had fought with his brothers over the family inheritance. His brothers Yuan Shang and Yuan Xi were killed in 207 AD by Gongsun Kang (), who sent their heads to Cao Cao.
After Cao's defeat at the naval Battle of Red Cliffs in 208 AD, China was divided into three spheres of influence, with Cao Cao dominating the north, Sun Quan (182–252 AD) dominating the south, and Liu Bei (161–223 AD) dominating the west. Cao Cao died in March 220 AD. By December his son Cao Pi (187–226 AD) had Emperor Xian relinquish the throne to him and is known posthumously as Emperor Wen of Wei. This formally ended the Han dynasty and initiated an age of conflict between the Three Kingdoms: Cao Wei, Eastern Wu, and Shu Han.
Culture and society
Social class
In the hierarchical social order, the emperor was at the apex of Han society and government. However, the emperor was often a minor, ruled over by a regent such as the empress dowager or one of her male relatives. Ranked immediately below the emperor were the kings who were of the same Liu family clan. The rest of society, including nobles lower than kings and all commoners excluding slaves, belonged to one of twenty ranks (èrshí gōngchéng ).
Each successive rank gave its holder greater pensions and legal privileges. The highest rank, of full marquess, came with a state pension and a territorial fiefdom. Holders of the rank immediately below, that of ordinary marquess, received a pension, but had no territorial rule. Scholar-bureaucrats who served in government belonged to the wider commoner social class and were ranked just below nobles in social prestige. The highest government officials could be enfeoffed as marquesses.
By the Eastern Han, local elites of unattached scholars, teachers, students, and government officials began to identify themselves as members of a nationwide gentry class with shared values and a commitment to mainstream scholarship. When the government became noticeably corrupt in mid-to-late Eastern Han, many gentry even considered the cultivation of morally grounded personal relationships more important than serving in public office.
Farmers, namely small landowner–cultivators, were ranked just below scholars and officials in the social hierarchy. Other agricultural cultivators were of a lower status, such as tenants, wage labourers, and slaves. The Han dynasty made adjustments to slavery in China and saw an increase in agricultural slaves. Artisans, technicians, tradespeople, and craftsmen had a legal and socioeconomic status between that of owner-cultivator farmers and common merchants.
State-registered merchants, who were forced by law to wear white-coloured clothes and pay high commercial taxes, were considered by the gentry as social parasites with a contemptible status. These were often petty shopkeepers of urban marketplaces; merchants such as industrialists and itinerant traders working between a network of cities could avoid registering as merchants and were often wealthier and more powerful than the vast majority of government officials.
Wealthy landowners, such as nobles and officials, often provided lodging for retainers who provided valuable work or duties, sometimes including fighting bandits or riding into battle. Unlike slaves, retainers could come and go from their master's home as they pleased. Physicians, pig breeders, and butchers had fairly high social status, while occultist diviners, runners, and messengers had low status.
Marriage, gender, and kinship
The Han-era family was patrilineal and typically had four to five nuclear family members living in one household. Multiple generations of extended family members did not occupy the same house, unlike families of later dynasties. According to Confucian family norms, various family members were treated with different levels of respect and intimacy. For example, there were different accepted time frames for mourning the death of a father versus a paternal uncle.
Marriages were highly ritualized, particularly for the wealthy, and included many important steps. The giving of betrothal gifts, known as bride price and dowry, were especially important. A lack of either was considered dishonourable and the woman would have been seen not as a wife, but as a concubine. Arranged marriages were typical, with the father's input on his offspring's spouse being considered more important than the mother's.
Monogamous marriages were also normal, although nobles and high officials were wealthy enough to afford and support concubines as additional lovers. Under certain conditions dictated by custom, not law, both men and women were able to divorce their spouses and remarry. However, a woman who had been widowed continued to belong to her husband's family after his death. In order to remarry, the widow would have to be returned to her family in exchange for a ransom fee. Her children would not be allowed to go with her.
Among the nobility, bisexuality was the norm, continuing an accepted tradition of sexual expression among nobles since the Zhou dynasty. At the royal court, emperors often favoured eunuchs over non-castrated men for their bodies' "sexual passivity". On the other hand, Han authors did not view homosexual men as effeminate, as occurred in later dynasties. While the non-royal nobility were obligated to enter heterosexual marriages, male concubines were widely accepted. Despite this openness to bisexuality and homosexuality, Han norms around gender and family left most moral questions, including those concerning polygamy, homosexuality, and bisexuality, to be resolved by the patriarch within the household.
Apart from the passing of noble titles or ranks, inheritance practices did not involve primogeniture; each son received an equal share of the family property. Unlike the practice in later dynasties, the father usually sent his adult married sons away with their portions of the family fortune. Daughters received a portion of the family fortune through their dowries, though this was usually much less than the shares of sons. A different distribution of the remainder could be specified in a will, but it is unclear how common this was.
Women were expected to obey the will of their father, then their husband, and then their adult son in old age. However, it is known from contemporary sources that there were many deviations to this rule, especially in regard to mothers over their sons, and empresses who ordered around and openly humiliated their fathers and brothers. Women were exempt from the annual corvée labour duties, but often engaged in a range of income-earning occupations aside from their domestic chores of cooking and cleaning.
The most common occupation for women was weaving clothes for the family, for sale at market, or for large textile enterprises that employed hundreds of women. Other women helped on their brothers' farms or became singers, dancers, sorceresses, respected medical physicians, and successful merchants who could afford their own silk clothes. Some women formed spinning collectives, aggregating the resources of several different families.
Education, literature, and philosophy
The early Western Han court simultaneously accepted the philosophical teachings of Legalism, Huang-Lao Taoism, and Confucianism in making state decisions and shaping government policy. However, the Han court under Emperor Wu gave Confucianism exclusive patronage. In 136 BC, he abolished all academic chairs not concerned with the Five Classics, and in 124 BC he established the Imperial University, at which he encouraged nominees for office to receive a Confucian education.
Unlike the original ideology espoused by Confucius (551–479 BC), Han Confucianism in Emperor Wu's reign was the creation of Dong Zhongshu (179–104 BC). Dong was a scholar and minor official who aggregated the ethical Confucian ideas of ritual, filial piety, and harmonious relationships with five phases and yin-yang cosmologies. Dong's synthesis justified the imperial system of government within the natural order of the universe.
The Imperial University grew in importance as the student body grew to over 30,000 by the 2nd century AD. A Confucian-based education was also made available at commandery-level schools and private schools opened in small towns, where teachers earned respectable incomes from tuition payments. Schools were established in far southern regions where standard Chinese texts were used to assimilate the local populace.
Some important texts were created and studied by scholars. Philosophy written by Yang Xiong (53 BC – 18 AD), Huan Tan (43 BC – 28 AD), Wang Chong (27–100 AD), and Wang Fu (78–163 AD) questioned whether human nature was innately good or evil and posed challenges to Dong's universal order. The Shiji started by Sima Tan and finished by his son Sima Qian (145–86 BC) established the standard model for imperial China's tradition of official histories, being emulated initially by the Book of Han authored by Ban Biao (3–54 AD) with his son Ban Gu (32–92 AD) and his daughter Ban Zhao (45–116 AD). Biographies of important figures were written by members of the gentry. There were also dictionaries published during the Han period, such as the Shuowen Jiezi by Xu Shen and the Fangyan by Yang Xiong. Han dynasty poetry was dominated by the fu genre, which achieved its greatest prominence during the reign of Emperor Wu.
Law and order
Han scholars such as Jia Yi (201–169 BC) portrayed the Qin as a brutal regime. However, archaeological evidence from Zhangjiashan and Shuihudi reveal that many of the statutes in the Han law code compiled by Chancellor Xiao He () were derived from Qin law.
Various cases of rape, physical abuse, and murder were prosecuted in court. Women, although usually having fewer rights by custom, were allowed to level civil and criminal charges against men. While suspects were jailed, convicted criminals were never imprisoned. Instead, punishments were commonly monetary fines, periods of forced hard labour, or death by beheading. Early Han punishments of torturous mutilation were borrowed from Qin law; a series of reforms abolished mutilation punishments, replacing them with progressively less severe beatings by the bastinado.
Acting as a judge in lawsuits was one of the many duties of county magistrates and the administrators of commanderies. Complex, high-profile, or unresolved cases were often deferred to the Minister of Justice in the capital, or even to the emperor. Each Han county was divided into several districts, each overseen by a chief of police. Order in the cities was maintained by government officers in the marketplaces and constables in the neighbourhoods.
Food
The most common staple crops consumed during Han were wheat, barley, foxtail millet, proso millet, rice, and beans. Commonly eaten fruits and vegetables included chestnuts, pears, plums, peaches, melons, apricots, strawberries, red bayberries, jujubes, calabash, bamboo shoots, mustard plant, and taro. Domesticated animals that were also eaten included chickens, Mandarin ducks, geese, cows, sheep, pigs, camels, and dogs (various types were bred specifically for food, while most were used as pets). Turtles and fish were taken from streams and lakes. Commonly hunted game, such as owl, pheasant, magpie, sika deer, and Chinese bamboo partridge were consumed. Seasoning included sugar, honey, salt, and soy sauce. Beer and wine were regularly consumed.
Clothing
The types of clothing worn and the materials used during the Han period depended upon social class. Wealthy folk could afford silk robes, skirts, socks, and mittens, coats made of badger or fox fur, duck plumes, and slippers with inlaid leather, pearls, and silk lining. Peasants commonly wore clothes made of hemp, wool, and ferret skins.
Religion, cosmology, and metaphysics
Families throughout Han China made ritual sacrifices of animals and food to deities, spirits, and ancestors at temples and shrines. They believed that these items could be used by those in the spiritual realm. It was thought that each person had a two-part soul: the spirit-soul which journeyed to the afterlife paradise of immortals (xian), and the body-soul which remained in its grave or tomb on earth and was only reunited with the spirit-soul through a ritual ceremony.
In addition to his many other roles, the emperor acted as the highest priest in the land who made sacrifices to Heaven, the main deities known as the Five Powers, and spirits of mountains and rivers known as shen. It was believed that the three realms of Heaven, Earth, and Mankind were linked by natural cycles of yin and yang and the five phases. If the emperor did not behave according to proper ritual, ethics, and morals, he could disrupt the fine balance of these cosmological cycles and cause calamities such as earthquakes, floods, droughts, epidemics, and swarms of locusts.
It was believed that immortality could be achieved if one reached the lands of the Queen Mother of the West or Mount Penglai. Han-era Taoists assembled into small groups of hermits who attempted to achieve immortality through breathing exercises, sexual techniques, and the use of medical elixirs.
By the 2nd century AD, Taoists formed large hierarchical religious societies such as the Way of the Five Pecks of Rice. Its followers believed that the sage-philosopher Laozi () was a holy prophet who would offer salvation and good health if his devout followers would confess their sins, ban the worship of unclean gods who accepted meat sacrifices, and chant sections of the Tao Te Ching.
Buddhism first entered Imperial China through the Silk Road during the Eastern Han, and was first mentioned in 65 AD. Liu Ying (), a half-brother to Emperor Ming of Han (), was one of its earliest Chinese adherents, although Chinese Buddhism at this point was heavily associated with Huang–Lao Taoism. China's first known Buddhist temple, the White Horse Temple, was constructed outside the wall of Luoyang during Emperor Ming's reign. Important Buddhist canons were translated into Chinese during the 2nd century AD, including the Sutra of Forty-two Chapters, Perfection of Wisdom, Shurangama Sutra, and Pratyutpanna Sutra.
Government and politics
Central government
In Han government, the emperor was the supreme judge and lawgiver, the commander-in-chief of the armed forces, and the sole designator of official nominees appointed to the top posts in central and local administrations, namely those holding a salary-rank of 600 bushels or higher. Theoretically, there were no limits to his power.
However, state organs with competing interests and institutions such as the court conference (tingyi )—where ministers were convened to reach a majority consensus on an issue—pressured the emperor to accept the advice of his ministers on policy decisions. If the emperor rejected a court conference decision, he risked alienating his high ministers. Nevertheless, emperors sometimes did reject the majority opinion reached at court conferences.
Below the emperor were his cabinet members known as the Three Councillors of State. These were the Chancellor or Minister over the Masses, the Imperial Counsellor or Excellency of Works (Yushi dafu or Da sikong ), and Grand Commandant or Grand Marshal (Taiwei or Da sima ).
The Chancellor, whose title had changed in 8 BC to Minister over the Masses, was chiefly responsible for drafting the government budget. The Chancellor's other duties included managing provincial registers for land and population, leading court conferences, acting as judge in lawsuits, and recommending nominees for high office. He could appoint officials below the salary-rank of 600 bushels.
The Imperial Counsellor's chief duty was to conduct disciplinary procedures for officials. He shared similar duties with the Chancellor, such as receiving annual provincial reports. However, when his title was changed to Minister of Works in 8 BC, his chief duty became the oversight of public works projects.
The Grand Commandant, whose title was changed to Grand Marshal in 119 BC before reverting to Grand Commandant in 51 AD, was the irregularly posted commander of the military and then regent during the Western Han period. In the Eastern Han era he was chiefly a civil official who shared many of the same censorial powers as the other two Councillors of State.
Ranked below the Three Councillors of State were the Nine Ministers, who each headed a specialized ministry. The Minister of Ceremonies (Taichang ) was the chief official in charge of religious rites, rituals, prayers, and the maintenance of ancestral temples and altars. The Minister of the Household (Guang lu xun ) was in charge of the emperor's security within the palace grounds, external imperial parks, and wherever the emperor made an outing by chariot.
The Minister of the Guards (Weiwei ) was responsible for securing and patrolling the walls, towers, and gates of the imperial palaces. The Minister Coachman (Taipu ) was responsible for the maintenance of imperial stables, horses, carriages, and coach-houses for the emperor and his palace attendants, as well as the supply of horses for the armed forces. The Minister of Justice (Tingwei ) was the chief official in charge of upholding, administering, and interpreting the law. The Minister Herald (Da honglu ) was the chief official in charge of receiving honoured guests like nobles and foreign ambassadors at court.
The Minister of the Imperial Clan (Zongzheng ) oversaw the imperial court's interactions with the empire's nobility and extended imperial family, such as granting fiefs and titles. The Minister of Finance (da sìnong ) was the treasurer for the official bureaucracy and the armed forces who handled tax revenues and set standards for units of measurement. The Minister Steward (Shaofu ) served the emperor exclusively, providing him with entertainment and amusements, proper food and clothing, medicine and physical care, valuables and equipment.
Local government
The Han empire, excluding kingdoms and marquessates, was divided, in descending order of size, into political units of provinces, commanderies, and counties. A county was divided into several districts (xiang ), the latter composed of a group of hamlets (li ), each containing about a hundred families.
The heads of provinces, whose official title was changed from Inspector to Governor and vice versa several times during Han, were responsible for inspecting several commandery-level and kingdom-level administrations. On the basis of their reports, the officials in these local administrations would be promoted, demoted, dismissed, or prosecuted by the imperial court.
A governor could take various actions without permission from the imperial court. The lower-ranked inspector had executive powers only during times of crisis, such as raising militias across the commanderies under his jurisdiction to suppress a rebellion.
A commandery consisted of a group of counties, and was headed by an administrator. He was the top civil and military leader of the commandery and handled defence, lawsuits, seasonal instructions to farmers, and recommendations of nominees for office sent annually to the capital in a quota system first established by Emperor Wu. The head of a large county of about 10,000 households was called a prefect, while the heads of smaller counties were called chiefs; both could be referred to as magistrates. A magistrate maintained law and order in his county, registered the populace for taxation, mobilized commoners for annual corvée duties, repaired schools, and supervised public works.
Kingdoms and marquessates
Kingdoms—roughly the size of commanderies—were ruled exclusively by the emperor's male relatives as semi-autonomous fiefdoms. Before 157 BC, some kingdoms were ruled by non-relatives, granted to them in return for their services to Emperor Gaozu. The administration of each kingdom was very similar to that of the central government. Although the emperor appointed the Chancellor of each kingdom, kings appointed all the remaining civil officials in their fiefs.
However, in 145 BC, after several insurrections by the kings, Emperor Jing removed the kings' rights to appoint officials whose salaries were higher than 400 bushels. The Imperial Counsellors and Nine Ministers (excluding the Minister Coachman) of every kingdom were abolished, although the Chancellor was still appointed by the central government.
With these reforms, kings were reduced to being nominal heads of their fiefs, gaining a personal income from only a portion of the taxes collected in their kingdom. Similarly, the officials in the administrative staff of a full marquess's fief were appointed by the central government. A marquess's chancellor was ranked as the equivalent of a county prefect. Like a king, the marquess collected a portion of the tax revenues in his fief as personal income.
Until the reign of Emperor Jing of Han, the Han emperors had great difficulties controlling their vassal kings, who often switched allegiances to the Xiongnu whenever they felt threatened by imperial centralization of power. The seven years of Gaozu's reign featured defections by three vassal kings and one marquess, who then aligned themselves with the Xiongnu. Even imperial princes controlling fiefdoms would sometimes invite a Xiongnu invasion in response to the Emperor's threats. The Han moved to secure a treaty with the Xiongnu, aiming to clearly divide authority between them. The Han and Xiongnu held one another out as the "two masters" with sole dominion over their respective peoples; they cemented this agreement with a marriage alliance (heqin), before eliminating the rebellious vassal kings in 154 BC. This prompted some of the Xiongnu vassals to switch their allegiance to the Han, starting in 147 BC. Han court officials were initially hostile to the idea of disrupting the status quo by expanding into Xiongnu territory in the steppe. The surrendered Xiongnu were integrated into parallel military and political structures loyal to the Han emperor, a step toward a potential Han challenge to the superiority of Xiongnu cavalry in steppe warfare. This also brought the Han into contact with the interstate trade networks through the Tarim Basin in the far northwest, allowing the Han to expand from a regional state into a universalist, cosmopolitan empire, in part through further marriage alliances with another steppe power, the Wusun.
Military
At the beginning of the Han, every male commoner aged twenty-three was liable for conscription into the military. The minimum age was reduced to twenty following the reign of Emperor Zhao (). Conscripted soldiers underwent one year of training and one year of service as non-professional soldiers. The year of training was spent in one of three branches of the armed forces: infantry, cavalry, or navy. Prior to the abolition of much of the conscription system after 30 AD, soldiers could be called up for future service following the completion of their terms. They had to continue training regularly to maintain their skills, and were subject to annual inspections of their military readiness. The year of active service was served either on the frontier, in a king's court, or in the capital under the Minister of the Guards. A small professional army was stationed near the capital.
During the Eastern Han, conscription could be avoided if one paid a commutable tax. The Eastern Han court favoured the recruitment of a volunteer army. The volunteer army comprised the Southern Army (Nanjun ), while the standing army stationed in and near the capital was the Northern Army (Beijun ). Led by Colonels (Xiaowei ), the Northern Army consisted of five regiments, each composed of several thousand soldiers. When central authority collapsed after 189 AD, wealthy landowners, members of the aristocracy/nobility, and regional military-governors relied upon their retainers to act as their own personal troops.
During times of war, the volunteer army was increased, and a much larger militia was raised across the country to supplement the Northern Army. In these circumstances, a general (jiangjun ) led a division, which was divided into regiments led by a colonel or major (sima ). Regiments were divided into companies and led by captains. Platoons were the smallest units.
Economy
Currency
The Han dynasty inherited the ban liang coin type from the Qin. In the beginning of the Han, Emperor Gaozu closed the government mint in favour of private minting of coins. This decision was reversed in 186 BC by his widow Grand Empress Dowager Lü Zhi (), who abolished private minting. In 182 BC, Lü Zhi issued a bronze coin that was much lighter in weight than previous coins. This caused widespread inflation that was not reduced until 175 BC, when Emperor Wen allowed private minters to manufacture coins that were precisely in weight.
In 144 BC, Emperor Jing abolished private minting in favour of central-government and commandery-level minting; he also introduced a new coin. Emperor Wu introduced another in 120 BC, but a year later he abandoned the ban liangs entirely in favour of the wuzhu coin, weighing . The wuzhu became China's standard coin until the Tang dynasty (618–907). Its use was interrupted briefly by several new currencies introduced during Wang Mang's regime until it was reinstated in 40 AD by Emperor Guangwu.
Since commandery-issued coins were often of inferior quality and lighter weight, the central government closed commandery mints and monopolized the issue of coinage in 113 BC. This central government issuance of coinage was overseen by the Superintendent of Waterways and Parks, this duty being transferred to the Minister of Finance during the Eastern Han.
Taxation and property
Aside from the land tax, which landowners paid as a portion of their crop yield, the poll tax and property taxes were paid in coin cash. The annual poll tax rate was 120 coins for adult men and women and 20 coins for minors; merchants were required to pay a higher rate of 240 coins. The poll tax stimulated a money economy that necessitated the minting of over 28,000,000,000 coins from 118 BC to 5 AD, an average of roughly 220,000,000 coins a year.
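As a rough back-of-envelope check of the figures above (a quick illustrative calculation, not a claim from the source), the quoted total and period imply an annual output on the order of the stated average:

```python
# Illustrative check of the minting figures quoted above.
total_coins = 28_000_000_000      # coins minted between 118 BC and 5 AD, as stated
years = 118 + 5 - 1               # no year 0 between 1 BC and 1 AD -> roughly 122 years
average_per_year = total_coins / years
print(f"{average_per_year:,.0f} coins per year")  # ~229,500,000, i.e. on the order of 220 million
```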
The widespread circulation of coin cash allowed successful merchants to invest money in land, empowering the very social class the government attempted to suppress through heavy commercial and property taxes. Emperor Wu even enacted laws which banned registered merchants from owning land, yet powerful merchants were able to avoid registration and own large tracts of land.
The small landowner-cultivators formed the majority of the Han tax base; this revenue was threatened during the latter half of Eastern Han when many peasants fell into debt and were forced to work as farming tenants for wealthy landlords. The Han government enacted reforms in order to keep small landowner-cultivators out of debt and on their own farms. These reforms included reducing taxes, temporary remissions of taxes, granting loans, and providing landless peasants temporary lodging and work in agricultural colonies until they could recover from their debts.
In 168 BC, the land tax rate was reduced from one-fifteenth of a farming household's crop yield to one-thirtieth, and later to one-hundredth of a crop yield for the last decades of the dynasty. The consequent loss of government revenue was compensated for by increasing property taxes.
The labour tax took the form of conscripted labour for one month per year, which was imposed upon male commoners aged fifteen to fifty-six. This could be avoided in Eastern Han with a commutable tax, since hired labour became more popular.
Private manufacture and government monopolies
In the early Western Han, a wealthy salt or iron industrialist, whether a semi-autonomous king or wealthy merchant, could boast funds that rivalled the imperial treasury and amass a peasant workforce numbering in the thousands. This kept many peasants away from their farms and denied the government a significant portion of its land tax revenue. To eliminate the influence of such private entrepreneurs, Emperor Wu nationalized the salt and iron industries in 117 BC and allowed many of the former industrialists to become officials administering the state monopolies. By the Eastern Han, the central government monopolies were repealed in favour of production by commandery and county administrations, as well as private businessmen.
Liquor was another profitable private industry nationalized by the central government in 98 BC. However, this was repealed in 81 BC and a property tax rate of two coins for every was levied for those who traded it privately. By 110 BC, Emperor Wu also interfered with the profitable trade in grain when he eliminated speculation by selling government stores of grain at a lower price than that demanded by merchants. Apart from Emperor Ming's creation of a short-lived Office for Price Adjustment and Stabilization, which was abolished in 68 AD, central-government price control regulations were largely absent during the Eastern Han.
Science and technology
The Han dynasty was a unique period in the development of premodern Chinese science and technology, comparable to the level of scientific and technological growth during the Song dynasty (960–1279).
Writing materials
In the 1st millennium BC, typical ancient Chinese writing materials were bronzeware, oracle bones, and bamboo slips or wooden boards. By the beginning of the Han, the chief writing materials were clay tablets, silk cloth, hemp paper, and rolled scrolls made from bamboo strips sewn together with hempen string; these were passed through drilled holes and secured with clay stamps.
The oldest known Chinese piece of hempen paper dates to the 2nd century BC. The standard papermaking process was invented by Cai Lun (AD 50–121) in 105. The oldest known surviving piece of paper with writing on it was found in the ruins of a Han watchtower that had been abandoned in AD 110, in Inner Mongolia.
Metallurgy and agriculture
Evidence suggests that blast furnaces, which convert raw iron ore into pig iron that can be remelted in a cupola furnace to produce cast iron by means of a cold blast and hot blast, were operational in China by the late Spring and Autumn period (). The bloomery was non-existent in ancient China; however, the Han-era Chinese produced wrought iron by injecting excess oxygen into a furnace and causing decarburisation. Cast iron and pig iron could be converted into wrought iron and steel using a fining process.
The Han dynasty Chinese used bronze and iron to make a range of weapons, culinary tools, carpenters' tools, and domestic wares. A significant product of these improved iron-smelting techniques was the manufacture of new agricultural tools. The three-legged iron seed drill, invented by the 2nd century BC, enabled farmers to carefully plant crops in rows instead of sowing seeds by hand. The heavy mouldboard iron plough, also invented during the Han, required only one man to control it with two oxen to pull it. It had three ploughshares, a seed box for the drills, a tool which turned down the soil and could sow roughly of land in a single day.
To protect crops from wind and drought, the grain intendant Zhao Guo () created the alternating fields system (daitianfa ) during Emperor Wu's reign. This system switched the positions of furrows and ridges between growing seasons. Once experiments with this system yielded successful results, the government officially sponsored it and encouraged peasants to use it. Han farmers also used the pit field system ( ) for growing crops, which involved heavily fertilized pits that did not require ploughs or oxen and could be placed on sloping terrain. In the southern and small parts of central Han-era China, paddy fields were chiefly used to grow rice, while farmers along the Huai River used transplantation methods of rice production.
Structural and geo-technical engineering
Timber was the chief building material during the Han; it was used to build palace halls, multi-story residential towers and halls, and single-story houses. Because wood decays rapidly, the only remaining evidence of Han wooden architecture is a collection of scattered ceramic roof tiles. The oldest surviving wooden halls in China date to the Tang dynasty. Architectural historian Robert L. Thorp points out the scarcity of Han-era archaeological remains, and claims that often unreliable Han-era literary and artistic sources are used by historians as clues concerning lost Han architecture.
Though Han wooden structures decayed, some Han dynasty ruins made of brick, stone, and rammed earth remain intact. This includes stone pillar-gates, brick tomb chambers, rammed-earth city walls, rammed-earth and brick beacon towers, rammed-earth sections of the Great Wall, rammed-earth platforms where elevated halls once stood, and two rammed-earth castles in Gansu. The ruins of rammed-earth walls that once surrounded the capitals Chang'an and Luoyang still stand, along with their drainage systems of brick arches, ditches, and ceramic water pipes. Monumental stone pillar-gates called que, of which 29 dated to the Han survive, formed entrances of walled enclosures at shrine and tomb sites. These pillars feature artistic imitations of wooden and ceramic building components such as roof tiles, eaves, and balustrades.
The courtyard house is the most common type of home portrayed in Han artwork. Ceramic architectural models of buildings, like houses and towers, were found in Han tombs, perhaps to provide lodging for the dead in the afterlife. These provide valuable clues about lost wooden architecture. The artistic designs found on ceramic roof tiles of tower models are in some cases exact matches to Han roof tiles found at archaeological sites.
Over ten Han-era underground tombs have been found, many of them featuring archways, vaulted chambers, and domed roofs. Underground vaults and domes did not require buttress supports since they were held in place by earthen pits. The use of brick vaults and domes in aboveground Han structures is unknown.
From Han literary sources, it is known that wooden-trestle beam bridges, arch bridges, simple suspension bridges, and floating pontoon bridges existed during the Han. However, there are only two known references to arch bridges in Han literature. There is only one Han-era relief sculpture, located in Sichuan, that depicts an arch bridge.
Underground mine shafts were dug to extract metal ores, with some reaching depths of more than . Borehole drilling and derricks were used to lift brine to iron pans where it was distilled into salt. The distillation furnaces were heated by natural gas funnelled to the surface through bamboo pipelines. It is possible that these boreholes reached a total depth of .
Mechanical and hydraulic engineering
Knowledge of Han-era mechanical engineering comes largely from the scattered observational writings of sometimes-uninterested Confucian scholars, who generally considered scientific and engineering endeavours to be far beneath them. Professional artisan-engineers (jiang ) did not leave behind detailed records of their work. Han scholars, who often had little or no expertise in mechanical engineering, sometimes provided insufficient information on the various technologies they described.
Nevertheless, some literary sources provide crucial information. For example, in 15 BC the philosopher and poet Yang Xiong described the invention of the belt drive for a quilling machine, which was of great importance to early textile manufacturing. The inventions of mechanical engineer and craftsman Ding Huan are mentioned in the Miscellaneous Notes on the Western Capital. Around AD 180, Ding created a manually operated rotary fan used for air conditioning within palace buildings. Ding also used gimbals as pivotal supports for one of his incense burners and invented the world's first known zoetrope lamp.
Modern archaeology has led to the discovery of Han artwork portraying inventions which were otherwise absent in Han literary sources. As observed in Han miniature tomb models, but not in literary sources, the crank handle was used to operate the fans of winnowing machines that separated grain from chaff. The odometer cart, invented during the Han period, measured journey lengths, using mechanical figures banging drums and gongs to indicate each distance travelled. This invention is depicted in Han artwork by the 2nd century, yet detailed written descriptions were not offered until the 3rd century.
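The odometer's function can be illustrated with a toy calculation (this is not a reconstruction of the Han mechanism or its gearing; the wheel circumference below is a hypothetical round number, and the length of a Han li is only a commonly cited modern estimate):

```python
# Toy model of the odometer cart's function: convert wheel revolutions into
# distance and count off each completed unit of travel.
WHEEL_CIRCUMFERENCE_M = 3.0   # hypothetical wheel circumference in metres
HAN_LI_IN_METRES = 415.8      # commonly cited modern estimate for one Han li

def drum_beats(revolutions: int) -> int:
    """Number of completed li (each one marked by a drum beat in the text's
    description) after the given number of wheel revolutions."""
    return int(revolutions * WHEEL_CIRCUMFERENCE_M // HAN_LI_IN_METRES)

print(drum_beats(1000))  # -> 7, i.e. about 7 li covered after 1000 revolutions
```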
Modern archaeologists have also unearthed specimens of devices used during the Han dynasty, for example a pair of sliding metal calipers used by craftsmen for making minute measurements. These calipers contain inscriptions of the exact day and year they were manufactured. These tools are not mentioned in any Han literary sources.
The waterwheel appeared in Chinese records during the Han. As mentioned by Huan Tan, waterwheels were used to turn gears that lifted iron trip hammers used in pounding, threshing, and polishing grain. However, there is insufficient evidence for the watermill in China until around the 5th century. The administrator, mechanical engineer, and metallurgist Du Shi () created a waterwheel-powered reciprocator that worked the bellows for the smelting of iron. Waterwheels were also used to power chain pumps that lifted water to raised irrigation ditches. The chain pump was first mentioned in China by the philosopher Wang Chong in his 1st-century Lunheng.
The armillary sphere, a three-dimensional representation of the movements in the celestial sphere, was invented by the Han during the 1st century BC. Using a water clock, waterwheel, and a series of gears, the Court Astronomer Zhang Heng (78–139 AD) was able to mechanically rotate his metal-ringed armillary sphere. To address the problem of slowed timekeeping in the pressure head of the inflow water clock, Zhang was the first in China to install an additional tank between the reservoir and inflow vessel.
Zhang also invented a device he termed an "earthquake weathervane" ( ), which the British sinologist and historian Joseph Needham described as "the ancestor of all seismographs". This device was able to detect the exact cardinal or ordinal direction of earthquakes from hundreds of kilometres away. It employed an inverted pendulum that, when disturbed by ground tremors, would trigger a set of gears that dropped a metal ball from one of eight dragon mouths (representing all eight directions) into a metal toad's mouth. The account of this device in the Book of the Later Han describes how, on one occasion, one of the metal balls was triggered without any of the observers feeling a disturbance. Several days later, a messenger arrived bearing news that an earthquake had struck in Longxi Commandery (modern Gansu), the direction the device had indicated, which forced the officials at court to admit the efficacy of Zhang's device.
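The reporting behaviour described above, quantizing a tremor's bearing to one of eight directions, can be sketched as follows (a toy model of the reported behaviour only, not of Zhang Heng's pendulum-and-gear mechanism; the azimuth in the example is hypothetical):

```python
# The eight dragon mouths correspond to the cardinal and ordinal directions.
DIRECTIONS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]

def dragon_mouth(azimuth_degrees: float) -> str:
    """Quantize an azimuth (degrees clockwise from north) to the nearest of
    the eight directions, as the device reported the bearing of a tremor."""
    return DIRECTIONS[round(azimuth_degrees / 45.0) % 8]

print(dragon_mouth(270.0))  # -> "W", e.g. a tremor arriving from due west
```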
Mathematics
Three Han mathematical treatises still exist: the Book on Numbers and Computation, the Zhoubi Suanjing, and the Nine Chapters on the Mathematical Art. Han mathematical achievements include solving problems involving right triangles, square roots, cube roots, and matrix methods; finding more accurate approximations of pi; providing a mathematical proof of the Pythagorean theorem; the use of decimal fractions; Gaussian elimination for solving systems of linear equations; and continued fractions for finding the roots of equations.
One of the Han's greatest mathematical advancements was the world's first use of negative numbers. Negative numbers first appeared in the Nine Chapters on the Mathematical Art as black counting rods, where positive numbers were represented by red counting rods. Negative numbers were also used by the Greek mathematician Diophantus around AD 275, and in the 7th-century Bakhshali manuscript of Gandhara, South Asia, but were not widely accepted in Europe until the 16th century.
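The elimination procedure of the Nine Chapters (the method referred to as Gaussian elimination above) can be restated in modern terms. The following is a minimal modern sketch, not the historical counting-rod layout; the example system resembles the grain problems of chapter 8, and exact fractions are used so that the negative intermediate values, which motivated the red/black rod convention, appear explicitly.

```python
from fractions import Fraction

def fangcheng_solve(a, b):
    """Solve A x = b by elimination and back-substitution with exact fractions.
    Negative intermediate values arise naturally during elimination."""
    n = len(a)
    m = [[Fraction(v) for v in row] + [Fraction(b[i])] for i, row in enumerate(a)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if m[r][col] != 0)
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            m[r] = [x - factor * y for x, y in zip(m[r], m[col])]
    x = [Fraction(0)] * n
    for i in reversed(range(n)):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

# Three grades of grain yielding given totals (coefficients in the style of
# the chapter 8 problems): 3x + 2y + z = 39, 2x + 3y + z = 34, x + 2y + 3z = 26.
print(fangcheng_solve([[3, 2, 1], [2, 3, 1], [1, 2, 3]], [39, 34, 26]))
# -> [Fraction(37, 4), Fraction(17, 4), Fraction(11, 4)]
```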
The Han applied mathematics to diverse disciplines. In musical tuning, Jing Fang (78–37 BC) realized that 53 perfect fifths closely approximate 31 octaves. He also created a musical scale of 60 tones, calculating the difference at 177147⁄176776 (the same value of 53 equal temperament discovered by the German mathematician Nicholas Mercator [1620–1687], i.e. 3^53/2^84).
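The small discrepancy Jing Fang identified can be checked directly (an illustrative calculation only): stacking 53 just fifths of ratio 3/2 overshoots 31 octaves by the exact ratio 3^53/2^84, which agrees with the fraction attributed to him above to about five decimal places.

```python
from fractions import Fraction

fifth = Fraction(3, 2)
# 53 just perfect fifths against 31 octaves leaves the ratio 3^53 / 2^84.
comma = fifth ** 53 / Fraction(2) ** 31
assert comma == Fraction(3 ** 53, 2 ** 84)

jing_fang = Fraction(177147, 176776)  # the value attributed to Jing Fang above
print(float(comma))      # ~1.002090 (an excess of about 3.6 cents)
print(float(jing_fang))  # ~1.002099
```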
Astronomy
Mathematics was essential in drafting the astronomical calendar, a lunisolar calendar that used the Sun and Moon as time-markers throughout the year. In the 5th century BC, during the Spring and Autumn period, the Chinese established the Sifen calendar (), which measured the tropical year at 365.25 days. This was replaced in 104 BC with the Taichu calendar (), which measured the tropical year at 365 385⁄1539 (~365.25016) days and the lunar month at 29 43⁄81 days. However, Emperor Zhang later reinstated the Sifen calendar.
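For a sense of scale (an illustrative comparison, using the fractional year lengths given above and the modern tropical year of roughly 365.2422 days for reference), both calendars run slightly long, with the Sifen year drifting by a little under a day per century:

```python
from fractions import Fraction

sifen_year = Fraction(365) + Fraction(1, 4)        # Sifen year length
taichu_year = Fraction(365) + Fraction(385, 1539)  # Taichu year length (~365.25016)
modern_tropical = 365.2422                         # modern reference value

print(float(sifen_year), float(taichu_year))       # 365.25 365.25016...
drift_per_century = (float(sifen_year) - modern_tropical) * 100
print(f"Sifen drift against the tropical year: {drift_per_century:.2f} days per century")  # ~0.78
```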
Han dynasty astronomers made star catalogues and detailed records of comets that appeared in the night sky, including recording the appearance of the comet now known as Halley's Comet in 12 BC. They adopted a geocentric model of the universe, theorizing that it was a sphere surrounding the Earth in the centre. They assumed that the Sun, Moon, and planets were spherical and not disc-shaped. They also thought that the illumination of the Moon and planets was caused by sunlight, that lunar eclipses occurred when the Earth obstructed sunlight falling onto the Moon, and that a solar eclipse occurred when the Moon obstructed sunlight from reaching the Earth. Although others disagreed with his model, Wang Chong accurately described the water cycle, in which water evaporates and rises to form clouds.
Cartography, ships, and vehicles
Both literary and archaeological evidence has demonstrated that cartography in China predated the Han. Some of the oldest Han-era maps that have been discovered were written using ink on silk, and were found amongst the Mawangdui Silk Texts in a 2nd-century BC tomb in Hunan. The general Ma Yuan created the world's first known raised-relief map from rice in the 1st century. This date could be revised if the tomb of Qin Shi Huang is excavated and the Shiji account of a model map of the empire is proven to be true.
Although the use of graduated scales and grid references in maps was not thoroughly described prior to the work of Pei Xiu (AD 224–271), there is evidence that their use was introduced in the early 2nd century by the cartographer Zhang Heng.
The Han sailed in various types of ships that differed from those used in previous eras, such as the tower ship. The junk design was developed and realized during the Han era. Junk ships featured a square-ended bow and stern, a flat-bottomed hull or carvel-shaped hull with no keel or sternpost, and solid transverse bulkheads in the place of structural ribs found in Western vessels. Moreover, Han ships were the first in the world to be steered using a rudder at the stern, in contrast to the simpler steering oar used for riverine transport, allowing them to sail on the high seas.
Although ox carts and chariots were previously used in China, the wheelbarrow was first used in Han China in the 1st century BC. Han artwork of horse-drawn chariots shows that the Warring-States-Era heavy wooden yoke placed around a horse's chest was replaced by the softer breast strap. Later, during the Northern Wei (386–534), the fully developed horse collar was invented.
Medicine
Han-era medical physicians believed that the human body was subject to the same forces of nature that governed the greater universe, namely the cosmological cycles of yin and yang and the five phases. Each organ of the body was associated with a particular phase. Illness was viewed as a sign that qi, or vital energy, channels leading to a certain organ had been disrupted. Thus, Han-era physicians prescribed medicine that was believed to counteract this imbalance.
For example, since the wood phase was believed to promote the fire phase, medicinal ingredients associated with the wood phase could be used to heal an organ associated with the fire phase. Besides dieting, Han physicians also prescribed moxibustion, acupuncture, and callisthenics as methods of maintaining one's health. When surgery was performed by the Chinese physician Hua Tuo (), he used anaesthesia to numb his patients' pain and prescribed a rubbing ointment that allegedly sped up the healing process for surgical wounds. The physician Zhang Zhongjing () is known to have written the Shanghan Lun ("Dissertation on Typhoid Fever"), and it is thought that he and Hua Tuo collaborated to compile the Shennong Bencaojing medical text.
See also
Comparative studies of the Roman and Han empires
Han Emperors family tree
Notes
References
Citations
Sources cited
Further reading
External links
"Han dynasty" by Emuseum – Minnesota State University, Mankato
Han dynasty art with video commentary, Minneapolis Institute of Arts
Early Imperial China: A Working Collection of Resources
"Han Culture," Hanyangling Museum Website
The Han Synthesis, BBC Radio 4 discussion with Christopher Cullen, Carol Michaelson & Roel Sterckx (In Our Time, Oct. 14, 2004)
Category:States and territories established in the 3rd century BC
Category:States and territories disestablished in the 3rd century
Category:200s BC establishments
Category:206 BC
Category:220 disestablishments
Category:3rd-century BC establishments in China
Category:3rd-century disestablishments in China
Category:Dynasties of China
Category:Former countries in Chinese history
Qin dynasty
https://en.wikipedia.org/wiki/Qin_dynasty
The Qin dynasty ( ) was the first imperial dynasty of China. It is named for its progenitor state of Qin, a fief of the confederal Zhou dynasty (–256 BC). Beginning in 230 BC, the Qin under King Ying Zheng engaged in a series of wars conquering each of the rival states that had previously pledged fealty to the Zhou. This culminated in 221 BC with the successful unification of China under Qin, which then assumed an imperial prerogative, with Ying Zheng declaring himself to be Qin Shi Huang, the first emperor of China, and bringing an end to the Warring States period (–221 BC). This state of affairs lasted until 206 BC, when the dynasty collapsed in the years following Qin Shi Huang's death. The Qin dynasty's 14-year existence was the shortest of any major dynasty in Chinese history, with only two emperors. However, the succeeding Han dynasty (202 BC – 220 AD) largely continued the military and administrative practices instituted by the Qin; as a result, the Qin have been credited as the originators of the Chinese imperial system that would endure in some form until the Xinhai Revolution in 1911.
Qin was a minor power for the first several centuries of its existence; its strength greatly increased in the 4th century BC, in large part owing to the administrative and military reforms of Shang Yang. These reforms sought to create a strong, centralised state and a large army supported by a stable economy, which were developed in the Qin homeland and implemented across China following its unification. Reforms included the standardisation of currency, weights, measures, and the writing system, along with innovations in weaponry, transportation, and military tactics.
The central government sought to undercut aristocrats and landowners and to directly administer the peasantry, who comprised the vast majority of the population. This enabled numerous large-scale construction projects involving the labour of hundreds of thousands of peasants and convicts, which included the connection of walls along the northern border into what would eventually become the Great Wall of China, a large national road system, and the city-sized Mausoleum of Qin Shi Huang guarded by the life-sized Terracotta Army. The state possessed an unprecedented capacity to transform the environment through the management of people and land; as a result, Qin's rise has been characterised as one of the most important events in East Asian environmental history.
When Qin Shi Huang died in 210 BC, two of his advisors placed an heir on the throne in an attempt to exert control over the dynasty and wield state power. These advisors squabbled among themselves, resulting in both of their deaths and that of the second Qin emperor. Popular revolt broke out, and the weakened empire soon fell to Chu generals Xiang Yu and Liu Bang, the latter of whom founded the Han dynasty.
History
Origin and development, 9th century – 230 BC
According to the Shiji (), during the 9th century BC, Feizi, said to be a descendant of the legendary political advisor Gao Yao, was granted rule over the settlement of Qin (; modern Qingshui County, Gansu). During the rule of King Xiao of Zhou, this area became known as the state of Qin. In 897 BC, during the Gonghe Regency, the area was allocated as a dependency dedicated to raising horses. In the late 8th century BC, one of Feizi's descendants, Duke Zhuang of Qin, was summoned by the Zhou to take part in a military campaign against the Western Rong; the effort was successful and Zhuang was rewarded with additional territory. In 770 BC, Zhuang's son Duke Xiang helped escort the Zhou court under King Ping in their emergency evacuation from Fenghao to Chengzhou under threat from the Western Rong, marking the divide between the Western and Eastern Zhou periodisations. As a reward, Duke Xiang was sent as the leader of an expedition against the Western Rong to recapture the territory they had taken, during which he formally established the Qin as a major vassal state, incorporating Fenghao and much of the territory previously under direct Zhou control and thus expanding Qin eastward.
The state of Qin began military expeditions into central China in 672 BC. They initially refrained from making serious incursions due to the threat still posed by neighbouring tribes; by the 4th century BC, they had all either been subdued or conquered, setting the stage for Qin expansionism.
Warring States period, c. 475–230 BC
During the Warring States period (–221 BC), the Qin statesman Shang Yang introduced a series of advantageous military reforms between 361 BC and his death in 338. He also helped to construct the Qin capital at Xianyang (near modern Xi'an, Shaanxi) on the Wei River near the former Zhou capital of Fenghao, a city which ultimately resembled the capitals of the other states. The Qin maintained a military that was superior in both doctrine and practice to that of the other Warring States. Its army was large, efficient, and staffed with capable generals. Unlike many of their enemies, the Qin utilised contemporary advancements in weapons technology and transportation, the latter of which enabled greater mobility across the different types of terrain throughout China.
The geography of Qin's core territories, located at the heart of a region known as the Guanzhong, provided additional advantages, including fertile farmland and a strategic position protected by mountains that made it a natural stronghold. The Guanzhong stood in contrast with the flat, open Yangtze valley (also known as the "Guandong") to its south-east; during this period, Xianyang was the only capital city in China that did not require walls to be built around it. The legacy of Qin society within the Guanzhong inspired a Han-era adage that "Guanzhong produces generals, while Guandong produces ministers." The Qin's agricultural output, expanded via projects like the Wei River canal built in 246 BC, helped sustain their large army.
Qin engaged in practical and ruthless warfare. From the preceding Spring and Autumn period (), the prevailing philosophy had dictated war as a gentleman's activity; military commanders were instructed to respect what they perceived to be Heaven's laws in battle. For example, during a war Duke Xiang of Song was waging against Chu, he declined an opportunity to attack Chu forces that were crossing a river. After allowing them to cross and marshal their forces, he was decisively defeated in the ensuing battle. When he was admonished by his advisors for excessive courtesy to the enemy, he retorted, "The sage does not crush the feeble, nor give the order for attack until the enemy have formed their ranks." The Qin disregarded this military tradition, taking advantage of their enemy's weaknesses. A nobleman in the state of Wei accused Qin of being "avaricious, perverse, eager for profit, and without sincerity. It knows nothing about etiquette, proper relationships, and virtuous conduct, and if there be an opportunity for material gain, it will disregard its relatives as if they were animals." This, combined with strong leadership from long-lived rulers, an openness to employ talented men from other states, and a lack of internal opposition, contributed to the Qin's strong political base.
Unification and expansion, 230–210 BC
During the Warring States period, the seven major states vying for dominance were Qin, Yan, Zhao, Qi, Chu, Han, and Wei. The rulers of these states styled themselves as kings, as opposed to the titles of lower nobility they had previously held. However, none elevated himself to believe that he had the Mandate of Heaven as claimed by the kings of Zhou, nor that he had the right to offer sacrifices.
During the century that preceded the wars of unification, the Qin suffered several setbacks. Shang Yang was executed in 338 BC by King Huiwen due to a personal grudge harboured from his youth. There was also internal strife over the Qin succession in 307 BC, which decentralised Qin authority somewhat. The Qin was defeated by an alliance of the other states in 295 BC; this was soon followed by another defeat inflicted by Zhao, made possible by a majority of the Qin army already being occupied with defending against attacks by Qi. However, the aggressive Fan Ju became prime minister in 266 BC; after issues with the succession were resolved, Fan pursued an expansionist policy that had its origins in Jin and Qi, in which they endeavoured to conquer the other states.
The Qin first attacked the Han directly to their east, and took their capital city of Xinzheng in 230 BC. They then struck the state of Zhao to their north, who surrendered in 228 BC, followed by the northernmost state of Yan in 226. Next, Qin launched assaults to the east and south; they took the Wei capital of Daliang (modern Kaifeng) in 225, and forced Chu to surrender in 223. They then deposed the Zhou dynasty's remnants at Luoyang; finally, they conquered Qi, taking their capital at Linzi in 221 BC.
With the completion of Qin's conquests in 221 BC, King Zheng, who had acceded to the throne of Qin at age nine, became the effective ruler of China. He subjugated the six states through efficient persuasion and exemplary strategy, and solidified his position as sole ruler with the abdication of his prime minister, Lü Buwei. He then combined the titles of the earlier Three Sovereigns and Five Emperors into the new name "Shi Huangdi", meaning 'First Emperor'. The newly declared emperor ordered all weapons not in the possession of the Qin to be confiscated and melted down. The resulting metal was sufficient to build twelve large ornamental statues at the Qin's newly declared capital at Xianyang.
Southward expansion, 214–206 BC
In 214 BC, Qin Shi Huang secured his boundaries to the north with a fraction (roughly 100,000 men) of his large army, and sent the majority (500,000 men) of his army to conquer the territory to their south, which was inhabited by the Baiyue peoples. Prior to Qin's campaigns unifying the former Zhou territories, the Baiyue had gained possession of much of Sichuan to their southwest. The Qin army was unfamiliar with the jungle terrain, and it was defeated by the southern tribes' guerrilla warfare tactics with over 100,000 men lost. Despite this defeat, the Qin succeeded in building a canal to the south, which they used heavily for supplying and reinforcing their troops during their second attack to the south. Building on these gains, the Qin armies conquered the coastal lands surrounding Guangzhou, and took the provinces of Fuzhou and Guilin. They may have struck as far south as Hanoi. After these victories in the south, Qin Shi Huang moved over 100,000 prisoners and exiles to colonise the newly conquered area. In terms of extending the boundaries of his empire, Qin Shi Huang was extremely successful in the south.
Campaign against the Xiongnu, 215 BC
The Qin collectively referred to the peoples living on their northern border as the Five Barbarians; while sporadically subject to imperial rule, they remained free from it for the majority of the Qin's existence. Prohibited from engaging in trade with local Qin peasantry, the Xiongnu inhabiting the Ordos Desert to the Qin's north-west frequently raided them instead. In retaliation, a military campaign was led by the Qin general Meng Tian. The region was conquered in 215 BC, and agriculture was established; however, the local peasants were discontented and later revolted.
Collapse and aftermath, 210–202 BC
In total, three assassination attempts were made on Qin Shi Huang: one in 227 BC by Jing Ke, and the other two around 218 BC. Owing in part to these incidents, the emperor became paranoid and obsessed with immortality. While on a trip to the eastern frontiers in 210 BC, Qin Shi Huang died in an attempt to procure an elixir of immortality from Taoist magicians, who claimed the elixir was stuck on an island guarded by a sea monster. The chief eunuch, Zhao Gao, and the prime minister, Li Si, hid the news of his death upon their return until they were able to alter his will. It is understood that his eldest son Fusu was intended to inherit the throne; however, Li and Zhao conspired to transmit a fabricated order for Fusu to commit suicide, and instead elevated the former emperor's son Huhai to the throne, taking the name of Qin Er Shi. They believed that they would be able to manipulate Huhai to their own ends, effectively allowing them to exert control over the empire. As expected, Qin Er Shi proved inept: he executed many ministers and imperial princes, continued massive building projects (one of the most extravagant being the lacquering of the city's walls), enlarged the army, increased taxes, and arrested messengers who delivered bad news. As a result, men from all over China revolted, attacking officials, raising armies, and declaring themselves kings of seized territories.
During this time, Li Si and Zhao Gao came into conflict with one another, which culminated in Zhao persuading Qin Er Shi to put Li on trial, where he was ultimately executed. The worsening military situation then caused the emperor to blame Zhao for the rebellion; this pivot frightened Zhao, who engineered another conspiracy to deceive Qin Er Shi into believing hostile forces had arrived at the capital. The emperor's quarters were invaded, and Qin Er Shi was forced to commit suicide for his incompetence after being cornered by Zhao's co-conspirator and son-in-law. Ziying, a son of Fusu, ascended to the throne, and immediately executed Zhao Gao. Unrest continued to spread among the people, caused in large part by regional differences that had persisted despite Qin's attempts to impose uniformity, and many local officials had declared themselves kings. In this climate, Ziying attempted to cling to his throne by declaring himself merely one king among all the others. He was undermined by his ineptitude, and popular revolt broke out in 209 BC. When Chu rebels under the lieutenant Liu Bang attacked, a state in such turmoil could not hold for long. Ziying surrendered to Liu Bang upon the latter's arrival in Xianyang in 207 BC; while initially spared by Liu, he was executed shortly thereafter by the Chu leader Xiang Yu. In 206 BC, Xianyang was destroyed, marking what historians consider to be the end of the imperial Qin dynasty. With the former Qin territories temporarily divided into the Eighteen Kingdoms, Liu Bang then betrayed Xiang Yu, beginning the Chu–Han Contention, from which he ultimately emerged victorious atop a reunited realm; on 28 February 202 BC, he declared himself emperor of the newly founded Han dynasty.
Culture and society
The Qin ruled over territories roughly corresponding to the extent at the time of Chinese culture, as well as that of what would later be understood as the Han Chinese ethnic group. On the empire's frontiers were diverse groups with cultures foreign to the Qin; even areas under the control of the Qin military remained culturally distinct.
The Qin aristocracy were largely similar to the Zhou in culture and daily life, with regional variation generally considered a symbol of the lower classes, and ultimately as contrary to the unification that the government strove to achieve.
Commoners and rural villagers, who comprised more than 90% of the population, rarely left the villages or farmsteads where they were born. While various other forms of employment existed depending on the region, as with other settled peoples in antiquity the overwhelming majority of people throughout Qin were engaged predominantly in agriculture. Other professions were hereditary; a father's employment was passed to his eldest son after he died. The Lüshi Chunqiu (), a text named for Lü Buwei, the prime minister who sponsored it, gave examples of how, when commoners are obsessed with material wealth, instead of the idealism of a man who "makes things serve him", they were "reduced to the service of things".
Agriculture
Qin agriculture was mainly based on cereal cultivation, with millet, wheat, and barley being the staple crops that comprised most of peasants' diets. The amount of land available for use as pasture was limited, with livestock raised mostly for household use of byproducts like milk. Consumption of meat was generally restricted to the wealthy. The state of Qin under Shang Yang pioneered a policy of maximising the area of land under cultivation, resulting in states clearing most of the forest in the Yellow River valley and converting it into farmland. This land was divided into household-sized allotments, and inhabitants were forcibly relocated to work them. Another emphasis of Shang Yang's agricultural policy was the use of hoes to weed the soil, which improved its ability to retain moisture and provide nutrients to crops.
Religion
The predominant form of religious belief in China during the early imperial period focused on shen (roughly meaning 'spirits'), yin (), and the realm they were understood to inhabit. Spirits were classified as one of three types: 'human dead' (), 'heavenly spirits' () such as Shangdi, and 'earthly spirits' () corresponding to natural features like mountains and rivers. The spirit world was believed to be parallel to the earthly one: animal sacrifices were offered in order to make contact with it, and the spirits of people were thought to move there upon death. In general, ritual served two purposes: to receive blessings from the spirit realm, and to ensure the dead journeyed to and stayed there.
A ritual concept introduced under the Qin that would be continued by the Han was the official touring of ritual sites across the realm by the emperor, which served to reinforce notions of the emperor as a semi-divine figure.
The Qin also practised forms of divination, including that previously used by the Shang, in which bones and turtle shells were heated in order to divine knowledge of the future from the cracks that formed. Observations of astronomical and weather phenomena were also common, with comets, eclipses, and droughts commonly considered omens.
Government and military
The Qin government was highly bureaucratic, and was administered by a hierarchy of officials serving the emperor. The Qin put into practice the teachings of Han Fei, allowing the state to administer all of its territories, including those recently conquered. All aspects of life were standardised, from measurements and language to more practical details, such as the length of chariot axles.
The empire was divided into 36 commanderies, which were further subdivided into more than 1000 districts. The territories organised by the emperor were assigned to officials dedicated to the task, rather than placing the burden on members of the royal family. Zheng and his advisors also introduced new laws and practices that ended aristocratic rule in China, fully replacing it with a centralised, bureaucratic government. A supervisory system, the Censorate, was introduced to monitor and check the powers of administrators and officials at each level of government. The Qin instituted a permanent system of ranks and rewards, consisting of twenty ranks based on the number of enemies killed in battle or commanding victorious units. Ranks were not hereditary unless a soldier died heroically in battle, in which case his rank was inherited by his family. Each rank was assigned a specific allotment of dwellings, slaves, and land, and ranks could be used to remit judicial punishments.
Instances of abuse were recorded. In one example from the Records of Officialdom, a commander named Hu ordered his men to attack peasants in an attempt to increase the number of "bandits" he had killed; his superiors, likely eager to inflate their records as well, allowed this.
Economy
The Qin conception of political economy reflected the ideas of Shang Yang and Li Kui: labour was identified as the realm's primary resource, and commerce was understood in general to be "inherently sterile". The merchant class that had emerged during the Warring States period was considered a direct threat to the state, due to merchants' incentives to pursue individual profits and self-aggrandisement. After unification, the imperial state targeted their wealth and political power; a 214 BC law allowed for merchants to be impressed into the military and deported for service on the realm's frontiers. Reinforced by its distinct legal status, the merchant profession became increasingly hereditary in nature.
During the 330s BC, the state of Qin began minting banliang coins, which were round, made mostly of bronze, and marked to indicate a nominal weight of around though the actual weight varied in reality. After unification, banliang were given official status across the empire, replacing previous regional currencies like spade money and knife money to become the first standardised currency used throughout all of China. Unlike the Han, who initially continued the use of banliang, the Qin did not allow additional coins to be minted by the private sector, and considered those that were to be counterfeit.
Construction projects
Qin Shi Huang developed plans to fortify Qin's northern border, to protect against nomadic invasions. The resulting construction formed the base of what later became the Great Wall of China, which joined and strengthened the walls made by feudal lords. Another project built during his rule was the Terracotta Army, intended to protect the emperor after his death. The Terracotta Army was inconspicuous due to its underground location, and was not discovered until 1974.
Registration system
During the 4th century BC, the state of Qin introduced a registration system for its population, which initially collated the names of individuals, and later began keeping track of entire households. The system, unique in its scope among Qin's contemporaries, is thought to have been established in 375 BC. It was expanded later in the century at the direction of Shang Yang, with passages of The Book of Lord Shang referencing the system likely reflecting the words of Shang Yang himself. The oldest lists to be discovered, excavated at Shuihudi in Hubei and Liye in Hunan, date to the late 3rd century BC. Adapting a concept originally used within the military to society at large, Qin households were organised into 'groups of five' (), wherein the heads of each household were made mutually responsible for reporting any wrongdoing committed by other members of the group. Under the orders of King Ying Zheng, the state began recording the ages of adult men in 231 BC.
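The record-keeping described above can be sketched in modern terms (a minimal illustration only, assuming a simplified household record; the field names and register entries are invented):

```python
from dataclasses import dataclass

@dataclass
class Household:
    head: str          # name of the registered household head
    adult_males: int   # ages of adult men were recorded from 231 BC

def group_into_fives(households):
    """Partition registered households into 'groups of five', within which the
    heads were mutually responsible for reporting wrongdoing. A simplified
    illustration of the bookkeeping, not a reconstruction of Qin practice."""
    return [households[i:i + 5] for i in range(0, len(households), 5)]

# Hypothetical register entries, for illustration only.
register = [Household(head=f"household_{n}", adult_males=1 + n % 3) for n in range(10)]
for group_id, group in enumerate(group_into_fives(register)):
    print(group_id, [h.head for h in group])
```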
Writing reform
The Zhou inherited the writing system of Chinese characters used by the preceding Shang dynasty () and first attested in oracle bone inscriptions . Writing was adopted throughout the Zhou cultural sphere during the first half of the 1st millennium BC, with the shapes and forms of characters in the script gradually evolving over time. With the Warring States period, distinct regional writing styles began to diverge from one another; compared to that of other Zhou states, the script used in Qin generally changed the least during this time. The standard writing style in the state of Qin was consolidated under Qin Shi Huang into what is known as small seal script. While the Book of Han (111 AD) states that Li Si distributed detailed instructions for writing in small seal script to scribes in 221 BC, these instructions have been lost. However, many contemporary inscriptions on monuments meant to demonstrate small seal character forms have survived. While the regional divergences across China were reduced considerably, the use of variant characters remained frequent among Qin scribes; the traditional idea of a strict standardisation of small seal script appears to be a later notion introduced by the Han.
Penal policy
Qin law was articulated alongside ritual practice. Writing itself was seen as part of ritual in ancient China, with Confucius and Laozi portrayed as sages, ritual masters and archivists. The Shiji presents writing and ritual as still being connected during the Qin dynasty. The First Emperor erected stone stelae invoking divine protection, which proclaimed the establishment of the government and referred to him as a sage. Their inscription was supervised by the court, a practice that influenced the Han and later dynasties. The Tang dynasty established a requirement for court approval of stelae as a safeguard.
Qin law was primarily administrative. Like most ancient societies, the early imperial Chinese state did not have separate structures of administration and jurisprudence. Qin penal practice included concepts such as intent, defendant rights, judicial procedure, requests for retrials, and the distinction between common and statutory law. Comparative model manuals guided penal legal procedures based on real-life situations, with publicly named wrongs linked to punishments.
Shang Yang's code likely drew on Li Kui's Canon of Laws, which considered dealing with thieves and robbers the most urgent legal matter of its time. The Qin dynasty's penal code similarly focuses primarily on theft, though there were certain statutes dealing specifically with infanticide and other unsanctioned harm against children. However, almost all of the Book of Lord Shang was likely composed before Qin unification, and Qin law diverged significantly from that of Shang Yang's time and from the earlier ideas espoused in the text.
Anti-Confucian statements in the Book of Lord Shang are isolated to a few early chapters. While retaining Shang Yang's reforms, the Qin abandoned his anti-Confucianism, his strict, harsh penal policy, and ultimately his heavy emphasis on agriculture. After Shang Yang, the Lushi Chunqiu attests that King Huiwen of Qin pardoned the death penalty in a case involving murder on the basis of Confucian ethics. While emphasizing law and order, Qin Shi Huang in the Shiji praises himself as a "sage ruler of benevolence and righteousness ... who cares for and pities the common people". Although it still included mutilating punishments, Qin law would generally be considered harsh by modern standards, though its punishments were "not extraordinarily severe for their time". Tattooing, itself considered a heavy mutilating punishment, was the most common of these penalties.
The extremes of Qin law were directed more against ministerial abuse, targeting unsanctioned punishment. Group responsibility as established by Shang Yang, which shows up for instance in cattle companies, was not completely eliminated by the Qin dynasty, but neither was it emphasized. Mutilating group punishment was directed against more extreme cases of group robbery by policing officials themselves. Smaller administrative crimes received only fines or reprimands, while petty theft committed by individual commoners was punished with a month's labour service. Recovered records of Qin legal practice mention cases of nose cutting or foot amputation only a few times. Infrequent capital punishment targeted taboos such as incest or temple destruction.
The Qin dynasty instead took to using convict labour. This aimed to avoid the agricultural instability that using corvée labour would have caused, as farmers were otherwise only available for a month at a time during off-seasons. Using paid labour was not entirely feasible because the travel expenses of deploying and accommodating even convict labour were already "potentially ruinous." While hard labour became the most common heavy criminal punishment, the Qin often sought to commute it to lesser punishments. Though the projects were well planned, the lives of those ultimately sentenced to hard labour were not viewed as being as valuable as those of peasants, and many still died from the work. Those sentenced to hard labour generally performed public works inside the country, mainly in road and canal construction. Only a minority were sent to build the Great Wall.
Punishment often went unenforced. Criminals were sometimes given amnesties, only incurring punishment upon recidivism, and were often pardoned in exchange for fines, labour, or a demotion in aristocratic rank, even for capital offences. While The Book of Lord Shang recommended harsh punishment, it also "laments" an insufficient population for its territories, and the Qin attempted to limit emigration out of the country. Rather than being physically punished, criminals were frequently resettled in frontier colonies. Those sentenced to hard labour were sometimes sent to join frontier defences if given amnesty. Men in the colonies sentenced to death were then recruited for expeditionary armies.
The Han-era writer Dong Zhongshu (179–104 BC) considered Qin officials and taxes severe, but did not characterise punishments as such; in fact, Dong criticised the Qin system for its inability to punish criminals; though exile as a heavy punishment in China dates to at least the Spring and Autumn period.
Legacy
The Qin, despite existing for only 14 years, are credited with inaugurating the Chinese imperial system, which would persist in some form throughout Chinese history until it was ultimately overthrown by the Xinhai Revolution in 1911.
During the 2nd and 1st centuries BC, Han dynasty scholars began portraying the Qin as a monolithic, legalist tyranny, often invoked as an example of bad governance in contemporary debates about imperial policy. In particular, purges in 213 and 212 BC collectively known as the burning of books and burying of scholars are frequently cited to this end; however, the earliest account of these events is contained in the Shiji (), and its veracity is disputed by some modern scholars. The Qin were deliberately contrasted with what was characterised as the virtuous rule of the Han. However, the Han essentially inherited the administrative state built by the Qin, including the household registration system. Owing to this continuity, medieval and modern historians have often grouped the Qin and Han together, with the establishment of the Han treated "mainly as a change in ruling houses rather than a system or method of rule".
Etymology of China
Qin is the likeliest origin for the modern name China and its equivalents in many European languages. The term likely first appeared in the Indo-Aryan languages, attested in Sanskrit as both and , and subsequently entered Greek as or . From there it entered the vernacular languages of Europe, e.g. as China in English and in French. This etymology is questioned by some scholars, who suggest that appears in Sanskrit centuries before the Qin dynasty's founding. Other hypothesised origins include the Zhou-era state of Jin that existed prior to the 4th century BC, and Jing (), another name for the state of Chu.: "This thesis also helps explain the existence of Cīna in the Indic Laws of Manu and the Mahabharata, likely dating well before Qin Shihuangdi".
Sovereigns
1. Shi Huangdi, personal name Zheng (), reigned 221–210 BC
2. Er Shi Huangdi, personal name Huhai (), reigned 210–207 BC
3. No posthumous name, personal name Ziying (), reigned 207 BC
Imperial family tree
Notes
References
Citations
Works cited
Translated from
Further reading
External links
Category:Dynasties of China
Category:Iron Age Asia
Category:Former countries in East Asia
Category:States and territories established in the 3rd century BC
Category:221 BC
Category:220s BC establishments
Category:States and territories disestablished in the 3rd century BC
Category:3rd-century BC disestablishments in China
Category:Qin Shi Huang
Category:Former monarchies of East Asia
Category:Former empires
|
ancient_medieval
| 5,070
|
43619
|
Great white shark
|
https://en.wikipedia.org/wiki/Great_white_shark
|
The great white shark (Carcharodon carcharias), also known as the white shark, white pointer, or simply great white, is a species of large mackerel shark which can be found in the coastal surface waters of all the major oceans. It is the only known surviving species of its genus Carcharodon. The great white shark is notable for its size, with the largest preserved female specimen measuring in length and around in weight at maturity. However, most are smaller; males measure , and females measure on average. According to a 2014 study, the lifespan of great white sharks is estimated to be as long as 70 years or more, well above previous estimates, making it one of the longest lived cartilaginous fishes currently known. A year later another study found that male great white sharks take 26 years to reach sexual maturity, while the females take 33 years to be ready to produce offspring. Great white sharks can swim at speeds of 25 km/h (16 mph) for short bursts and to depths of .
The great white shark is arguably the world's largest-known extant macropredatory fish, and is one of the primary predators of marine mammals, such as pinnipeds and dolphins. The great white shark is also known to prey upon a variety of other animals, including fish, other sharks, and seabirds. It has only one recorded natural predator, the orca.
The species faces numerous ecological challenges, which have resulted in international protection. The International Union for Conservation of Nature lists the great white shark as a vulnerable species, and it is included in Appendix II of CITES. It is also protected by several national governments, such as Australia (as of 2018). Due to their need to travel long distances for seasonal migration and their extremely demanding diet, it is not logistically feasible to keep great white sharks in captivity; because of this, while attempts have been made to do so in the past, there are no aquariums in the world known to house a live specimen.
The great white shark is depicted in popular culture as a ferocious man-eater, largely as a result of the novel Jaws by Peter Benchley and its subsequent film adaptation by Steven Spielberg. While humans are not a preferred prey, this species is nonetheless responsible for the largest number of reported and identified fatal unprovoked shark attacks on humans. However, attacks are rare, typically occurring fewer than 10 times per year globally.
Etymology and naming
The most common English names for the species include 'great white shark', 'white shark', and Australian variant 'white pointer'. These names are thought to refer to its white underside, which is noticeable in dead sharks lying upside down. Colloquial use favours the name 'great white shark' or simply 'great white', with 'great' perhaps emphasizing the size and power of the species. Scientists typically use 'white shark', as there is no "lesser white shark" to be compared to, though some use 'white shark' to refer to all members of the Lamnidae.
The scientific genus name Carcharodon is a compound of two Ancient Greek words: the prefix carchar- is derived from κάρχαρος (kárkharos), which means "sharp". The suffix -odon derives from ὀδών (odṓn), which translates to "tooth". The specific name carcharias is from καρχαρίας (karkharías), the Ancient Greek word for shark. The great white shark was one of the species originally described by Carl Linnaeus in his 1758 10th edition of Systema Naturae and assigned the scientific name Squalus carcharias, Squalus being the genus in which he placed all sharks. By the 1810s, it was recognized that the shark should be placed in a new genus, but it was not until 1838 that Sir Andrew Smith coined the name Carcharodon for the new genus.
A few attempts were made to describe and classify the white shark before Linnaeus. One of its earliest mentions in literature as a distinct type of animal appears in Pierre Belon's 1553 book De aquatilibus duo, cum eiconibus ad vivam ipsorum effigiem quoad ejus fieri potuit, ad amplissimum cardinalem Castilioneum. In it, he illustrated and described the shark under the name Canis carcharias based on the ragged nature of its teeth and its alleged similarities with dogs. Another name used for the white shark around this time was Lamia, first coined by Guillaume Rondelet in his 1554 book Libri de Piscibus Marinis, who also identified it as the fish that swallowed the prophet Jonah in biblical texts.
Taxonomy and evolution
The white shark is the sole recognized extant species in the genus Carcharodon, and is one of five extant species belonging to the family Lamnidae. Other members of this family include the mako sharks, porbeagle, and salmon shark. The family belongs to the Lamniformes, the order of mackerel sharks.
Phylogeny
The modern clade of the Lamnidae is estimated to have emerged between 65 and 46 million years ago (mya) based on a 1996 molecular clock using the mitochondrial DNA gene cytochrome b. Most phylogenetic analyses based on molecular data or anatomical features place the great white shark as the sister species to the mako shark clade with the Lamna clade as the most basal in the family. Under this topology, the 1996 clock estimated the great white shark's divergence from the makos to have occurred between 60 and 43 mya. A more recent 2024 clock using genome-wide autosomal single nucleotide polymorphisms estimated a later alternate divergence between the shortfin mako and great white shark at 41.6 mya. A minority of analyses recovered an alternate placement of the great white shark as the most basal member. A 2025 clock using the whole mitogenome with this topology estimated the divergence between the great white shark and other lamnids at 47.4 mya.
Fossil history
The great white shark first unambiguously appears in the fossil record in the Pacific basin about 5.3 mya at the beginning of the Pliocene. Although there are a few claims of fossils dated as early as 16 mya, their validity is doubted, as the specimens may be mislabeled or misidentified. Like all sharks, the great white's skeleton is made primarily of soft cartilage that does not preserve well. As a result, the overwhelming majority of fossils are teeth. Nevertheless, paleontologists have confidently traced the emergence of the great white shark and its immediate ancestry to a large extinct shark known as Carcharodon hastalis. This species appeared worldwide during the Early Miocene (~23 mya) and had teeth similar to the modern great white shark's, except that the cutting edges lacked serrations. The form was probably derived from an ancient lineage of large white sharks that arose in the early Eocene (~56-48 mya) from a primitive mako-like shark. C. hastalis occupied a middle to high trophic position in its ecosystems and was probably piscivorous (fish-eating) with some addition of marine mammals to its diet.
Around 8 mya, a Pacific stock of C. hastalis evolved into C. hubbelli. This divergent lineage, sometimes described as a chronospecies, was characterized by a gradual development of serrations over the next few million years. They were initially fine and sparse, but a mosaic of fossils throughout the Pacific basin documents an increase in quantity and coarseness over time, eventually becoming fully serrated like the great white shark's by 5.3 mya. Serrations are more effective at cutting prey than non-serrated edges, facilitating further specialization towards a mammal diet. It is likely that the ancestral unserrated stock had already been regularly targeting marine mammals for millions of years, and therefore maintained an environment favoring rapid selection towards increasingly serrated teeth once a mutation for incipient serrations appeared. Teeth from the same strata may exhibit significant variation in serration development and morphology, which may be indicative of persistent interbreeding with C. hastalis for at least some time. The great white shark dispersed as soon as it emerged, with fossils in the Mediterranean, North Sea Basin, and South Africa occurring as early as 5.3-5 mya. Colonization of the northwestern Atlantic appears to have been delayed, with fossils absent until 3.3 mya.
Appearance and anatomy
The great white shark has a stocky, torpedo-shaped body with a short, cone-shaped snout; long gill slits that do not reach around the head; a large triangular first dorsal fin, which partly lines up with the pectoral fins, and a tiny second dorsal fin; a caudal fin with similarly sized lobes and one keel; and a tiny anal fin. The species has a countershaded coloration, being dark on top, usually blue-gray or gray-brown, with a white underside. It also has black tips on the underside of the pectoral fins. There is evidence that the species can change pigments, adding melanin to blotches of white. The skin is covered in dermal denticles, which are smaller than in other sharks and have a three-ridged surface, each ridge having tips that point backwards.
Size
The great white shark is considered to be the largest macropredatory shark and fish. Females are generally larger than males; the former measure on average in length while the latter average in length. Similarly, females are recorded to weigh compared to for males. The maximum size of the white shark has been debated. Its reputation has led to exaggerated and discredited claims of specimens up to during the 19th and 20th centuries.Castro, J. L. "A Summary of Observations on the Maximum Size Attained by the White Shark, Carcharodon carcharias" in pp. 85–89 Biologists Richard Ellis and John E. McCosker wrote that "These giants seem to disappear or shrink when a responsible observer approaches with tape measure".
According to shark expert J. E. Randall, the largest white shark reliably measured was a specimen reported from Ledge Point, Western Australia in 1987. He stated, "Undoubtedly Carcharodon carcharias exceeds in length, but as yet there is no authenticated record of such a size".Castro, J. L. "A Summary of Observations on the Maximum Size Attained by the White Shark, Carcharodon carcharias" in p. 89 A 2014 study of white shark catch records in the northwest Pacific concluded that the longest reliably measured shark was in total length and the heaviest weighed . A complete female great white shark specimen caught in the Mediterranean and displayed in the Museum of Zoology in Lausanne, Switzerland, measured in total body length with the caudal fin in its depressed position, and is estimated to have weighed making it the largest preserved specimen.
Teeth and jaws
The triangular teeth are lined with serrations and can reach . They are broader on the upper jaw and overall more slender in juveniles. The teeth are arranged in rows like a conveyor belt, with teeth in the back moving to replace those in front. An open mouth exposes roughly 26 teeth in the front row of the upper jaw and 24 in the front row of the lower jaw, out of a total of 300 teeth in the mouth. The jaws are separate from the skull and are connected to the body entirely by muscles and tendons, allowing them to project in and out. The jaws can reach a gape of 150 degrees.
A 2008 study using a computer scan of a long and juvenile white shark determined that the specimen could exert a bite force of in the front and in the back. From this, the researchers deduced that a specimen long and massing could exert a bite force of in the front and in the back. The jaws are strengthened by mineralized cartilage; this is lacking in young white sharks which have to eat softer food.
Senses
As with other sharks, white sharks use five senses when hunting: eyesight, hearing, olfaction (smell), electroreception (via pits called ampullae of Lorenzini) and water movement detection (via a lateral line). Analysis of the brain and cranial nerves suggests that sight and smell are the most developed.Demski, L. S.; Northcutt, R. G. "The Brain and Cranial Nerves of the White Shark: An Evolutionary Perspective" in p. 129 The eyes of the white shark can appear solid black but have blue irises, and the pupil is more horizontal than in other species. The eyes have a relatively low ratio of rods to cones, indicating daytime vision. They lack nictitating membranes but possess well-developed muscles that allow them to roll around to keep track of prey and roll back to avoid attacks. The white shark has a relatively large olfactory bulb, an adaptation for smelling across open ocean; it can detect potential prey from . The vomeronasal system, located in the roof of the mouth, also appears to play a role in olfactory sensing. Its lateral line can detect disruptions in the water from away.
Internal physiology
The great white shark is an obligate ram ventilator; to breathe it must swim constantly so water flows through the gills. Spiracles, extra breathing holes behind the eyes common in bottom-dwelling sharks, are reduced or absent in this species. It has a large, double-lobed liver that can be almost 30% of its body weight, and stores lipids, fatty acids and oils. The liver helps keep the shark from sinking, as the oil has six times the buoyancy of the surrounding water. The lipids and fatty acids provide the shark energy for travel and fuel for reproduction and growth. One study concluded that a white shark liver is more energy-rich than whale blubber. White sharks appear to have strong immune systems and can tolerate high amounts of toxic heavy metals in their blood, more so than other vertebrates. They are also documented to heal relatively quickly from even severe wounds,Towner, A.; Smale, M. J.; Jewell, O. "Boat-Strike Wound Healing in Carcharodon carcharias" in p. 92 and the species' genome shows "positive selection in key genes involved in the wound-healing process...".
Unlike most other fish, white sharks are endothermic ('warm-blooded'). Their bodies can maintain temperatures warmer than the surrounding water, which allows them to be active and hunt in cool waters. One study found that stomach temperatures ranged from in waters . White sharks maintain a warm body temperature via a complex blood vessel system known as a rete mirabile, where warm blood generated by the dark slow-twitch muscles is constantly supplied to other parts of the body within a countercurrent exchange system. Heat is retained within the body rather than exiting via the gills. Warm blood can also be redirected from the liver to the body core via a vascular shunt, which can open and close. In addition, the species has an enlarged, thickened heart, and its blood contains more red blood cells and hemoglobin than even most mammals and birds.
Distribution and habitat
Great white sharks range from tropical to temperate and even colder waters around the world, with major populations in the northeastern Pacific, western North Atlantic, the Mediterranean, southern African waters, northwestern Pacific, Oceania and both the Pacific and Atlantic coasts of South America. Shark expert Greg Skomal lists the Farallon Islands, California, Guadalupe Island, Mexico, Cape Cod, Massachusetts, Western Cape, South Africa, Neptune Islands, Australia, and both Stewart Island and the Chatham Islands, New Zealand as major coastal feeding aggregations. Researchers have also identified an offshore feeding aggregation between western North America and Hawaii dubbed the White Shark Café.
White sharks can be found both along the coast and in the open ocean, and may dive to depths of up to but are typically closer to the surface. Deeper dives are more common in the open ocean. Coastal habitats used include nearshore archipelagos, offshore reefs, banks and shoals, and headlands. A 2018 study indicated that white sharks will congregate in anticyclonic eddies in the open ocean. Juvenile white sharks are more limited to shallow coastal waters with temperatures between . Increased observation of young sharks in areas they were not previously common, such as Monterey Bay on the central California coast, suggest climate change may be forcing juveniles towards the poles.
Migrations
White sharks go on vast migrations; one individual that was tagged off the South African coast swam to the southern coast of Australia and back within a year. Another white shark from South Africa was tracked and documented swimming to Australia's northwestern coast and back, a journey of in under nine months. In May 2024, a satellite tag recovered from an Indonesian fisherman was determined to have come from a subadult female great white shark tagged off the South African coast in May 2012, which had swum to Indonesian waters and was killed there in November 2016.
In the northeastern Pacific, white sharks travel between the coastal US and Mexico and the Hawaiian Archipelago; they feed along the coast mostly during fall and winter, and farther out to sea during spring and summer. In the western North Atlantic, white sharks congregate between the Gulf of Maine and Cape Hatteras during spring and summer, and shift farther south towards Florida and around to the Gulf of Mexico during the fall and winter. In fall, winter and spring, some sharks disperse widely across the ocean, reaching as far east as the Azores.
Behavior and ecology
Great white sharks are more active during the daytime; how they sleep is not well understood. At nighttime, one individual was recorded swimming slowly in one direction along a current with its mouth open. White sharks typically swim at around but can sprint up to . One individual was recorded cruising at a sustained speed of while migrating, which is fast for a shark and more similar to fast-swimming tuna. White sharks display various surface behaviors, such as poking their heads out of the water (spyhopping) to observe objects above the surface, as well as 'repetitive aerial gaping', in which a spyhopping shark repeatedly gapes its mouth while belly-up, possibly as a sign of frustration after missing a bait.
The white shark is generally considered to be a solitary species, though aggregations do occur. A 2016 study of sharks around Mossel Bay, South Africa concluded that white shark associations are generally random with few social interactions. By contrast, a 2019 study found that sharks around the Neptune Islands gathered in non-random aggregations. Similarly, a 2022 study of white sharks at Guadalupe Island suggests that individuals may associate so that they can learn from others where to find prey or carcasses to scavenge. White shark aggregations can also differ in composition of individuals based on age and sex. At the Neptune Islands, sightings of subadult females peak during April and May, subadult males in February and again in September, adult females in June and adult males in September.Robbins, R. L.; Booth, D. J. "Seasonal Sexual and Size Segregation of White Shark, Carcharodon carcharias, at the Neptune Islands, South Australia" in pp. 292–293
Diet and feeding
The great white shark is an apex predator that opportunistically feeds on fish, cephalopods, marine mammals, sea birds and sea turtles. Diet differs based on size and age; individuals that have reached can feed on marine mammals, while juveniles are limited to smaller prey like fish and cephalopods. Great white sharks are said to prefer prey with high fat content, but even large individuals are recorded to eat low-fat foods.Hammerschlag, N.; Martin, R. A.; Fallows, C.; Collier, R. S.; Lawrence, R. "Investigatory Behavior towards Surface Objects and Nonconsumptive Strikes on Seabirds by White Sharks, Carcharodon carcharias at Seal Island, South Africa (1997-2010) in pp. 96–97
Marine mammals preyed on include pinnipeds and cetaceans. They are also recorded to bite sea otters but do not usually consume them. The seasonal availability of pinnipeds drives white shark migration to certain locations. Targeted species include harbor seals, northern elephant seals and California sea lions off western North America; harbor seals and gray seals off eastern North America; Cape fur seals off South Africa; Cape fur seals (Australian subspecies), New Zealand fur seals, and Australian sea lions off Australia; and New Zealand fur seals off New Zealand. White sharks mainly hunt pinnipeds by ambush and often target newly weaned young as they have thick blubber but are still small, inexperienced and vulnerable. Adults are more difficult to overpower and can injure the shark with their teeth and claws;Fallows, C.; Martin, R . A.; Hammerschlag, N. "Comparisons between White Shark-Pinniped Interactions at Seal Island (South Africa) with Other Sites in California" in pp. 111–112 adult male elephant seals are particularly formidable as they can grow as massive as adult white sharks. Some species will mob the shark.Fallows, C.; Martin, R . A.; Hammerschlag, N. "Comparisons between White Shark-Pinniped Interactions at Seal Island (South Africa) with Other Sites in California" in p. 106
Observations off California find that white sharks ambush pinnipeds near the surface from below, seizing and dragging them under. Earless seals, like elephant seals, are more likely to be struck in front of the hind flippers or the head—even leading to decapitation—while sea lions were more likely to be grabbed behind the torso. With their large fore-flippers, sea lions are usually able to break free from the first bite but are weakened and usually recaptured. Prey is released after it dies of blood loss, and the shark feeds on the carcass after it floats to the surface.Klimley, A. P.; Pyle, P.; Anderson, S. D. "The Behavior of White Sharks and Their Pinniped Prey during Predatory Attacks" in pp. 181, 191 In 1984, Tricas and McCosker suggested that white sharks bite pinnipeds, release them and then wait for them to bleed to death before eating, though this has been refuted. Off South Africa, ambushes on Cape fur seals usually involve the shark leaping or breaching out of the water. To breach, a shark starts at around below the surface and ascends quickly towards its target, increasing its tail movements and pitch angle. Sharks may breach partially or entirely out of the water at different angles, clearing up to when airborne. Missed seals may be chased after; such pursuits involve the prey using its speed and agility to escape as the shark employs various maneuvers to catch them. The longer the chase, the less likely the shark succeeds. Sharks commonly consume fur seals quickly after they are killed.Fallows, C.; Martin, R . A.; Hammerschlag, N. "Comparisons between White Shark-Pinniped Interactions at Seal Island (South Africa) with Other Sites in California" in p. 108 White sharks in Cape Cod hunt seals in shallow water, relying on the murkiness of the water for concealment and striking them from the sides.
Cetacean species recorded as prey include small toothed whales like bottlenose dolphins, common dolphins, Indo-Pacific humpback dolphins, striped dolphins, Risso's dolphins and harbor porpoises.Long, D. J.; Jones, R. E. "White Shark Predation and Scavenging on Cetaceans in the Eastern North Pacific Ocean" in p. 293 Bite wounds from white sharks have also been documented on dusky dolphins, dwarf sperm whales, pygmy sperm whales and even beaked whales. White sharks typically attack them from behind—beyond the prey's echolocation—and target the tail, underside or dorsal area.Long, D. J.; Jones, R. E. "White Shark Predation and Scavenging on Cetaceans in the Eastern North Pacific Ocean" in pp. 297, 305 There are two records of white sharks managing to kill a small humpback whale; one involved two sharks working as a pair. In both cases the whale was weakened by net entanglement, and the sharks employed strategic biting as well as drowning. White sharks are more likely to scavenge large whales. Multiple sharks will gorge themselves on a single whale carcass, biting into it and ripping off chunks by shaking their heads side-to-side. They may spit out pieces, possibly judging them to be too low in energy using their teeth as mechanoreceptors. The sharks do not appear to act aggressively towards each other, but accidental bites can occur. Eventually the sharks become lethargic; they can no longer lift their heads out of the water, nor can they get in a good bite as they bump into the dead whale.
White sharks feed on numerous fish species, including other sharks. One 2023 study found that juvenile and subadult white sharks off the east coast of Australia fed primarily on ray-finned fishes, particularly flathead grey mullets, Japanese scads and various species of porgies, mackerels and tuna. Off California, white sharks will eat cabezons, white seabass, lingcod, halibut, leopard sharks, smooth-hounds, spiny dogfishes, school sharks, stingrays, bat rays and skates. In the Mediterranean, they consume Atlantic bluefin tunas, bullet tunas, Atlantic bonitos, swordfishes, blue sharks, shortfin makos and stingrays. An ocean sunfish was also recorded in a white shark stomach. Off the northeastern US, juveniles commonly eat bottom-dwelling fish like hake, while off South Africa they often prey on dusky sharks. The remains of an adult whale shark were found in a white shark, though whether this was active hunting or scavenging could not be determined.
They are also recorded to consume cephalopods, as evidenced by beaks found in their stomachs. Off South Africa, white sharks under were found with remains of coastal and bottom-dwelling species like certain octopus species, as well as species of the genera Sepia and Loligo, while sharks over that length seem to prefer more open-ocean species like those of the genera Ancistrocheirus, Octopoteuthis, Lycoteuthis, Ornithoteuthis, Chiroteuthis and Argonauta.Smale, M. J.; Cliff, G. "White Sharks and Cephalopod Prey Indicators of Habitat Use?" in p. 53 Near Guadalupe, white sharks have been documented with scars which appear to have been caused by neon flying squids, jumbo squids and giant squids. Both fish and cephalopods may be important food sources at the White Shark Café.
Other animals recorded as prey include sea turtles. The shells of green sea turtles and loggerhead sea turtles have been found in white shark stomachs in the Mediterranean, and bites have been recorded on leatherback sea turtles off central California.Long, D. J. "Records of White Shark-Bitten Leatherback Sea Turtles along the Central California Coast" in pp. 317–319 Around Seal Island, South Africa, white sharks are recorded to attack and kill seabirds like Cape cormorants, white-breasted cormorants, kelp gulls, Cape gannets, brown skuas, sooty shearwaters, and African penguins, but rarely consume them.Hammerschlag, N.; Martin, R. A.; Fallows, C.; Collier, R. S.; Lawrence, R. "Investigatory Behavior towards Surface Objects and Nonconsumptive Strikes on Seabirds by White Sharks, Carcharodon carcharias at Seal Island, South Africa (1997-2010) in pp. 93, 96
Social communication
Great white sharks communicate with each other through a complex array of body language. Most behaviors have been observed at aggregations around seal rookeries shortly after peak hunting periods, where sharks then engage in extensive socializing. At least 20 unique forms of body language are known, most of which consist of two sharks swimming with or around each other in passing, parallel, or in circles to examine the other in a ritualized manner. Occasionally one shark will openly show off its body in a lateral display to the other. It is hypothesized that the main purpose of these interactions is to establish social rank by size to avoid competition. Indeed, observations by Sperone and colleagues in 2010 found display behaviors to be more common between individuals of similar size where differences are not immediately obvious. There is no evidence that sex is a significant factor in behavioral patterns. With dominance established, the smaller shark then acts submissively towards the larger shark by yielding during subsequent encounters or simply avoiding confrontation. Body language is less frequent in California and Australia compared to Dyer Island in South Africa. It is thought that this is because the former locations are less densely populated, and so sharks there are more readily familiar with each other's hierarchy.
Direct violence is extremely rare, as individuals typically end conflicts through peaceful means. Splash fights appear to be the most common way of resolving ownership disputes over prey. Here, one shark slaps the surface with its tail to splash water at the competing shark. The competitor either withdraws or responds with a tail splash of its own. Usually one or two splashes are exchanged per shark, though individuals will sometimes persist with more.Klimley, A. P.; Pyle, P.; Anderson, S. D. "Tail Slap and Breach: Agonistic Displays among White Sharks?" in pp. 241–255 The contest is "won" by the shark that compels the other to concede via the most tenacious splashing, which appears to be determined by a cumulative signal of vigor and strength. Larger body size does not always secure superior signal strength; on occasion the smaller shark emerges victorious. Great white sharks have also been observed employing tail splashing to intimidate tiger sharks around a whale carcass, and even against boats and shark cages which were likely perceived as competitors.
Reproduction and growth
Little is known of the reproductive behavior of the great white shark. There are two anecdotal accounts of the species possibly mating, one in 1991 and a second in 1997, both off New Zealand. These testimonies both report belly-to-belly rolling during copulation. It is assumed that the male bites onto the female's head or fin while inserting one of his claspers, as is the case in other shark species. The accounts also suggest that white sharks mate in shallow water away from feeding areas. Females at Guadalupe and Cape Cod have been seen with scarring that may have been the result of copulation, possible evidence that these areas are used for mating. Conversely, other studies have concluded that white sharks may mate offshore; males were found to gather in the White Shark Café during spring and were followed by some females, suggesting a lek mating system in which females move through and choose their partners. In 2013, it was proposed that whale carcasses are an important location for sexually mature sharks to meet for mating.
Some pregnant females have been caught and have provided information on the species' reproductive biology. The great white shark is ovoviviparous; fertilized eggs hatch within the female, and the embryos continue to develop within each uterus. Their nourishment comes in three stages: they first feed on their yolk sacs, followed by a milky substance secreted by the uterus known as lipid histotrophy, and finally switch to consuming unfertilized eggs. After around 12 months, the female gives live birth to two to ten pups. Birth intervals last two or three years. A 2024 metastudy concluded that white sharks give birth during spring and summer in shallow waters surrounding islands with temperatures of . White sharks are born at a length of . In July 2024, a possible newborn white shark was filmed for the first time, off the coast of southern California (just off Carpinteria), measuring an estimated and with a pale complexion originally attributed to histotrophy. A follow-up study confirmed that the Carpinteria shark was a newborn, but suggested that the paleness was embryonic epithelium covering the shark's skin denticles, which is known to exist in the related salmon shark and rubs off shortly after birth.
Bands in the shark's vertebrae are used to determine the animal's age and growth. Early studies determined that the species grows relatively quickly; a 1985 study concluded that white sharks reach maturity at nine to ten years of age at a length of . Conversely, a 2015 study concluded that white sharks are a slow-growing and long-lived species. Males were found to reach maturity at around 26 years at a length of around , while females take 33 years to reach maturity at a length of around . Their growth rate levels off after the age of 40.
Mortality and health
Great white sharks are estimated to reach over 70 years of age. A 2018 study of sharks off eastern Australia and New Zealand found that juveniles had a survival rate of over 70%, while adults survived at a rate of over 90%.
White sharks are sometimes preyed on by orcas, which they also likely compete with for food. The first recorded orca predation occurred at the Farallon Islands in 1997 when an estimated female orca killed an estimated white shark. Another similar attack apparently occurred there in 2000, but its outcome is not clear. Subsequently, orca predation on white sharks would be documented off South Africa and Australia. Around South Africa, orcas typically hunt white sharks in groups of two–to–six. These cetaceans consume the energy-rich liver of the sharks and dead white sharks washed ashore are found with these organs removed. In 2017, a live white shark was seen with purported orca teeth marks, the first piece of evidence for the species surviving an attack. The arrival of orcas in an area can cause white sharks to flee and forage elsewhere for the rest of the year, as has been documented both off South Africa and California. In addition to orcas, white sharks may also fall prey to other sharks as pups and juveniles, including older white sharks.
There are two recorded instances of the ectoparasitic cookiecutter shark targeting subadult white sharks off Guadalupe. However, the relative dearth of predation records indicates that white sharks are not a common food source for them. The great white shark is the definitive host of two species of tapeworms from the genus Clistobothrium, these being Clistobothrium carcharodoni and Clistobothrium tumidum. The former is believed to be transmitted to great whites through the consumption of infected cetacean prey which serve as intermediary or paratenic hosts of the tapeworm. The latter species of tapeworm's transmission vector is currently unknown. The intensity of C. carcharodoni infestations in affected great whites is extremely high; in one case, up to 2,533 specimens were recovered from the spiral intestine of a single individual.
Relationship with humans
Prior to the 1970s, the great white shark as a species was known mostly to biologists and fishermen. The release of the 1971 documentary Blue Water, White Death is credited with bringing the shark to public attention. The white shark's popularity would increase further with the 1974 novel Jaws written by Peter Benchley, and its 1975 film adaptation directed by Steven Spielberg. The novel and film helped create the image of the species as a dangerous maneater. Benchley would later express regret, stating "I cannot rewrite Jaws, nor make an ignoble monster of this magnificent animal."
Compared to other fish, the great white shark was not an important species for fishermen. Their meat was considered tasty, but it was not considered worth the difficulty of hauling them in. Nevertheless, their reputation and size made them targets for sport fishing. The sharks were lured by chumming and then presented with a hooked bait. Port Lincoln, South Australia was an epicenter of white shark fishing starting in the 1950s. In 1959, a fisherman named Alf Dean caught a shark that was given the record for the largest fish caught by rod and reel. A larger white shark was caught in Streaky Bay but was disqualified based on the bait used.
Bites
Of all shark species, the great white shark is responsible for the largest number of recorded shark bite incidents on humans, with 351 documented unprovoked bite incidents on humans since 1580 as of 2024. The majority of them have been non-fatal, while 59 have been fatal. White sharks do not appear to find humans suitable as prey, though cases of humans being consumed have been reported.
In 1984, Tricas and McCosker proposed that white sharks attack humans out of mistaken identity; surfboards in particular may have a similar silhouette to seals and sea lions. A 2021 study concludes that the sharks are likely colorblind and cannot see in fine enough detail to determine whether the silhouette above them is a pinniped or a swimming human, potentially vindicating the hypothesis. Other studies have disputed the 'mistaken identity' hypothesis and have instead proposed that shark bites are actually exploratory bites. A 2016 study finds that most shark bites on surfers are too superficial to kill a pinniped and compares them to the test bites they make on different objects. Similarly, a 2023 paper criticized the 'mistaken identity' hypothesis for focusing too much on vision and not considering the shark's other senses. The authors conclude that "sharks don't make 'mistakes' but instead continually explore their environments and routinely investigate novel objects as potential prey by biting them".
Great white sharks infrequently bite boats. Tricas and McCosker's underwater observations suggest that sharks are attracted to boats by the electrical fields they generate, which are picked up by the ampullae of Lorenzini.
Captivity
The great white shark is difficult to keep in captivity due to its large size and migratory nature.Ezcurra, J. M.; Lowe, C. G.; Mollet, H. F.; Ferry, L. A.; O'Sullivan, J. B. "Captive Feeding and Growth of Young-of-the-Year White Shark, Carcharodon carcharias, at the Monterey Bay Aquarium" in p. 4 Attempts have been made since 1955, in facilities in North America, Hawaii, Australia and South Africa. The sharks survived only for days during the earliest attempts, while by the early 1980s, aquariums like Steinhart Aquarium, Sea World San Diego, and Marineland of the Pacific were able to keep juvenile white sharks for weeks before releasing them. A major contributor to the mortality of captive white sharks was poor transport; many were accidentally captured by commercial gillnets and kept on fishing lines or in a tank before being handed over to aquarium staff, causing them stress.Weng, K. C.; O'Sullivan, J. B.; Lowe, C. G.; Winkler, C. E.; Blasius, M. E.; Loke-Smith, K. A.; Sippel, T. J.; Ezcurra, J. M.; Jorgensen, S. J.; Murray, M. J. "Back to the Wild: Release of Juvenile White Sharks for the Monterey Bay Aquarium" in p. 443 One famous shark named 'Sandy', who measured , was kept at Steinhart for five days in August 1980 and was released because it kept bumping into the walls.
The most successful attempts at keeping the species occurred at the Monterey Bay Aquarium (MBA), where six white sharks were displayed between 2004 and 2011. Researchers at universities in California attributed the aquarium's success at exhibiting white sharks to the use of a net pen, which gave the sharks time to recover from capture prior to transport. A portable tank used to transport the fish to the exhibit allowed the sharks to swim continuously. The sharks started at but grew too big and had to be released; one shark was kept for 198 days and attracted one million visitors. Having gained enough information on the species, MBA discontinued keeping white sharks.
Tourism
Areas where white sharks gather have been sites for ecotourism; operators allow guests to view them from boats or from inside shark cages. Most operators allow chumming to attract the sharks. Cited benefits of ecotours include education, funding for research and increasing the value of living sharks. One study in southern Australia found that shark tours had positive effects on the participants' knowledge and awareness of the animals and support for their conservation.
There is some fear that interactions with tourists could affect the sharks' behavior. At the Neptune Islands, it was found that white sharks used more energy during encounters with cage divers. The researchers note, however, that cage-diving can have a minimal effect on shark populations as long as operators limit interactions with individual sharks. In the same area, excessive boat traffic drove away many sharks, though the implementation in 2012 of new regulations on the number of licensed boat operators and number of operating days per week allowed the population to recover. There is also no strong evidence that chumming alters the feeding behavior of white sharks or habituates them to being fed by humans. In January 2023, the Mexican government banned white shark tourism at Guadalupe due to reports of divers swimming outside cages, mishandling of chum, littering, and two incidents of sharks getting stuck and harmed by the cages, one of which may have resulted in death.
Conservation
As of 2025, the great white shark is classified as vulnerable worldwide by the International Union for Conservation of Nature (IUCN), due to a population decline of 30–49% over the past 159 years. It was also given a green status of "moderately depleted" with a recovery score of 59%. The biggest threats to white shark populations are accidental catching in fishing nets and, in Australia and South Africa, beach protection programs, where they are caught in protective drum lines and gillnets. They nevertheless have a high survival rate when released from nets. The species is included in Appendix II of CITES, meaning that international trade in the species (including parts and derivatives) requires a permit.
Europe and the Mediterranean
The IUCN listed the species as critically endangered around Europe in 2015 and in the Mediterranean in 2016. Factors that contributed to this listing include its genetic isolation in this region, its slow growth rate, its decline in numbers along with those of other large shark species, and its negative public perception. The IUCN suggests that no more than 250 mature white sharks inhabit the waters around Europe, mostly in the Mediterranean.
A 2017 study suggested a decrease in the average size of Mediterranean white sharks, which may be a sign of a declining population. A 2020 study examined records of white sharks from 1860 to 2016 and concluded that white shark sightings peaked in the 1880s and again in the 1980s, but detected a 61% decrease since 1975. Similarly, a 2025 study found that only four white sharks had been seen in the past ten years, in contrast to around ten sightings per year between 1985 and 1995. Causes of the decline include fishing. While there is no fishery based on this species, white sharks have been deliberately caught and harpooned in response to attacks or media reports. They are also accidentally captured or intentionally killed when stealing from nets, longlines or hooks. Other possible causes include the decline of prey species like bluefin tuna and Mediterranean monk seals.
The great white shark is protected as an endangered species under the Barcelona Convention of 1978 (amended in 1995), of which every coastal Mediterranean nation is a signatory. These parties are required to conserve the shark and other vulnerable species. In 2009, white sharks were also given legal protection from fishing and capture by the European Commission, specifically under Regulation No 43/2009. An EU-funded program successfully released a by-caught juvenile white shark around Lampedusa in 2023. Researchers have highlighted this as an example of how cooperation between scientists and local fishermen is important for the conservation of the white shark in Mediterranean waters.
South Africa
The species has been protected in South Africa since 1991; the law bans both deliberate killing and selling. The province of KwaZulu-Natal, via the KwaZulu-Natal Sharks Board (KZN), allows for the use of nets around protected beaches to reduce the risk of shark attacks, but not at major aggregation sites.Curtis, T. H.; Bruce, B. D.; Cliff, G.; Dudley, S. F. J.; Klimley, A. P.; Kock, A.; Lea, R. N.; Lowe, C. G. "Responding to the Risk of White Shark Attack: Updated Statistics, Prevention, Control Methods, and Recommendations" in p. 492 A 1996 study estimated the average population size between 1989 and 1993 to be 1,279, while a 2004 study estimated 1,953 individuals post-protection.Cliff, G.; Van Der Elst, R. P.; Govender, A.; Witthuhn, T. K.; Bullen, E. M. "First estimates of mortality and population size of white sharks on the South African coast" in p. 399 A 2023 study concluded that white shark numbers off South Africa have remained stable since 1991. While sightings of sharks at major aggregation sites in the Western Cape have declined since the early 2010s, the researchers attributed this to the sharks shifting their distribution further east, possibly in response to attacks by orcas. The results of this study are disputed; in 2024 it was noted that catches of white sharks in KZN have declined since 2010, suggesting they have not moved eastward.
Oceania
The great white shark population is estimated to be 2,500–6,750 individuals around eastern Australia and New Zealand. The species was given legal protection by the Australian Government under the Environmental Protection and Biodiversity Conservation (EPBC) Act of 1999 and declared vulnerable in 2000. Similar protections are given at the state level; some of which have protected the species before the national government. New South Wales, Tasmania and Western Australia list the species as Vulnerable while Victoria lists it as endangered. In 2002, the Australian government created the White Shark Recovery Plan, implementing government-mandated conservation research and monitoring for conservation in addition to federal protection and stronger regulation of shark-related trade and tourism activities. An updated recovery plan was published in 2013 to review progress, research findings, and to implement further conservation actions. The report found that the 2002 plan had some success, having completed 14 of 34 tasks listed. A study in 2012 revealed that Australia's white shark population was separated by Bass Strait into genetically distinct eastern and western populations, indicating a need for the development of regional conservation strategies.
The causes of decline prior to protection included mortality from commercial and sport fishing harvests, as well as being caught in beach protection netting. In 2013, it was reported that deaths from commercial fishing had been reduced and that there were no incidental takes from sport fishing, though the population had not fully recovered. In spite of official protections in Australia, great white sharks continue to be killed in state "shark control" programs within the country. The states of Queensland and New South Wales have implemented "shark control" programs (shark culling) to reduce shark attacks at beaches. These programs kill great white sharks (as well as other marine life) using shark nets and drum lines with baited hooks. Partly because of these programs, shark numbers in eastern Australia have decreased. Critics have disputed that these programs reduce shark-related fatalities, and have proposed alternatives like helicopter patrols as well as tagging and displaying the locations of individual sharks via social media. Western Australia implemented a shark cull program in 2013, but discontinued it the following year in response to a recommendation by the Environmental Protection Authority.
In April 2007, great white sharks were given full protection in New Zealand waters from land, as well as from New Zealand-flagged boats outside this range. Violations of the law could carry as much as a $250,000 fine and up to six months in prison. In June 2018 the New Zealand Department of Conservation classified the great white shark under the New Zealand Threat Classification System as "Nationally Endangered". The species meets the criteria for this classification as there exists a small, stable population of between 250 and 1000 mature individuals. This classification has the qualifiers "Data Poor" and "Threatened Overseas".
United States
California
In addition to existing federal regulations, great white sharks have been protected under California state law since January 1, 1994. Under this law, catching, hunting, pursuit, capturing, and/or killing of great whites in California waters is strictly prohibited up to offshore, though exceptions exist for great whites caught for scientific research or unintentionally caught as bycatch. In both cases, a special permit is required in order to legally take them.
In 2013, great white sharks were added to California's Endangered Species Act. From data collected, the population of great whites in the North Pacific was estimated to be fewer than 340 individuals. Research also reveals these sharks are genetically distinct from other members of their species elsewhere in Africa, Australia, and the east coast of North America, having been isolated from other populations.
A 2014 study estimated the population of great white sharks along the California coastline to be approximately 2,400.
In September 2019, California governor Gavin Newsom signed Assembly Bill 2109 into law, banning the use of shark bait, shark lures, and chumming to attract great whites in California waters, and prohibiting their usage within one nautical mile of any shoreline, pier, or jetty when a great white is visible or known to be present in the area.
Massachusetts
In June 2015, Massachusetts banned catching, cage diving, feeding, towing decoys, or baiting and chumming for its significant and highly predictable migratory great white population without an appropriate research permit. However, these restrictions apply only to activities within state waters, which extend three miles from shore; beyond that limit, more than a dozen tour operators offer cage diving, and some use bait and/or chum.
See also
List of sharks
List of threatened sharks
Outline of sharks
Shark culling
Books
The Devil's Teeth by Susan Casey
Close to Shore by Michael Capuzzo about the Jersey Shore shark attacks of 1916
Twelve Days of Terror by Richard Fernicola about the same events
Chasing Shadows: My Life Tracking the Great White Shark by Greg Skomal
Notes
References
Bibliography
External links
Atlantic White Shark Conservancy
White Shark Conservation Trust, New Zealand
Category:Apex predators
Category:Articles containing video clips
Category:Carcharodon
Category:Cosmopolitan fish
Category:Critically endangered biota of Europe
Category:Extant Miocene first appearances
Category:Fish described in 1758
Category:Ovoviviparous fish
Category:Scavengers
Category:Animal taxa named by Carl Linnaeus
|
nature_wildlife
| 8,085
|
44303
|
Leopard
|
https://en.wikipedia.org/wiki/Leopard
|
The leopard (Panthera pardus) is one of the five extant cat species in the genus Panthera. It has a pale yellowish to dark golden fur with dark spots grouped in rosettes. Its body is slender and muscular, reaching a length of with a long tail and a shoulder height of . Males typically weigh , and females .
The leopard was first described in 1758, and several subspecies were proposed in the 19th and 20th centuries. Today, eight subspecies are recognised in its wide range in Africa and Asia. It initially evolved in Africa during the Early Pleistocene, before migrating into Eurasia around the Early–Middle Pleistocene transition. Leopards were formerly present across Europe, but became extinct in the region around the end of the Late Pleistocene to early Holocene.
The leopard is adapted to a variety of habitats ranging from rainforest to steppe, including arid and montane areas. It is an opportunistic predator, hunting mostly ungulates and primates. It relies on its spotted pattern for camouflage as it stalks and ambushes its prey, which it sometimes drags up a tree. It is a solitary animal outside the mating season and when raising cubs. Females usually give birth to a litter of 2–4 cubs once in 15–24 months. Both male and female leopards typically reach sexual maturity at the age of 2–2.5 years.
Listed as Vulnerable on the IUCN Red List, leopard populations are currently threatened by habitat loss and fragmentation, and are declining in large parts of the global range. Leopards have had cultural roles in Ancient Greece, West Africa and modern Western culture. Leopard skins are popular in fashion.
Etymology
The English name "leopard" comes from Old French or Middle French , that derives from Latin and ancient Greek (). could be a compound of (), meaning , and (), meaning . The word originally referred to a cheetah (Acinonyx jubatus).
"Panther" is another common name, derived from Latin and ancient Greek (); The generic name Panthera originates in Latin , a hunting net for catching wild beasts to be used by the Romans in combats. is the masculine singular form.
Taxonomy
Felis pardus was the scientific name proposed by Carl Linnaeus in 1758.
The generic name Panthera was first used by Lorenz Oken in 1816, who included all the known spotted cats into this group.
Oken's classification was not widely accepted, and Felis or Leopardus was used as the generic name until the early 20th century.
The leopard was designated as the type species of Panthera by Joel Asaph Allen in 1902.
In 1917, Reginald Innes Pocock also subordinated the tiger (P. tigris), lion (P. leo), and jaguar (P. onca) to Panthera.
Living subspecies
Following Linnaeus' first description, 27 leopard subspecies were proposed by naturalists between 1794 and 1956. Since 1996, only eight subspecies have been considered valid on the basis of mitochondrial analysis. Later analysis revealed a ninth valid subspecies, the Arabian leopard.
In 2017, the Cat Classification Task Force of the Cat Specialist Group recognized the following eight subspecies as valid taxa:
African leopard (P. p. pardus): It is the most widespread leopard subspecies and is native to most of Sub-Saharan Africa, but likely locally extinct in Mauritania, Togo, Morocco, Algeria, Tunisia and Libya and most likely also in Gambia and Lesotho.
Indian leopard (P. p. fusca): It occurs in the Indian subcontinent, Myanmar and southern Tibet. It is listed as Near Threatened.
Javan leopard (P. p. melas): It is native to Java in Indonesia and has been assessed as Endangered in 2021.
Arabian leopard (P. p. nimr): It is the smallest leopard subspecies and considered endemic to the Arabian Peninsula. As of 2023, the population was estimated to comprise 100–120 individuals in Oman and Yemen; it was therefore assessed as Critically Endangered in 2023. It is locally extinct in Syria, Lebanon, Israel, Palestine, Jordan, Kuwait and the United Arab Emirates.
P. p. tulliana: It occurs from eastern Turkey and the Caucasus to the Iranian Plateau and the Hindu Kush into the western Himalayas. It is listed as Endangered. It is locally extinct in Uzbekistan and Tajikistan. The Balochistan leopard population in the south of Iran, Afghanistan and Pakistan is separated from the northern population by the Dasht-e Kavir and Dasht-e Lut deserts.
Amur leopard (P. p. orientalis): It is native to the Russian Far East and northern China, but is locally extinct in the Korean peninsula.
Indochinese leopard (P. p. delacouri): It occurs in mainland Southeast Asia and southern China, and is listed as Critically Endangered. It is locally extinct in Hong Kong, Singapore, Laos and Vietnam.
Sri Lankan leopard (P. p. kotiya): It is native to Sri Lanka and listed as Vulnerable.
Results of an analysis of molecular variance and pairwise fixation index of 182 African leopard museum specimens showed that some African leopards exhibit higher genetic differences than Asian leopard subspecies.
Evolution
Results of phylogenetic studies based on nuclear DNA and mitochondrial DNA analysis showed that the last common ancestor of the Panthera and Neofelis genera is thought to have lived about . Neofelis diverged about from the Panthera lineage. The tiger diverged about , followed by the snow leopard about and the leopard about . The leopard is a sister taxon to a clade within Panthera, consisting of the lion and the jaguar.
Results of a phylogenetic analysis of chemical secretions amongst cats indicated that the leopard is closely related to the lion.
The geographic origin of the Panthera is most likely northern Central Asia. The leopard-lion clade was distributed in the Asian and African Palearctic since at least the early Pliocene. The leopard-lion clade diverged 3.1–1.95 million years ago. Additionally, a 2016 study revealed that the mitochondrial genomes of the leopard, lion and snow leopard are more similar to each other than their nuclear genomes, indicating that their ancestors hybridized with the snow leopard at some point in their evolution.
The oldest unambiguous fossils of the leopard are from Eastern Africa, dating to around 2 million years ago.
Leopard-like fossil bones and teeth possibly dating to the Pliocene were excavated in Perrier in France, northeast of London, and in Valdarno, Italy. Until 1940, similar fossils dating back to the Pleistocene were excavated mostly in loess and caves at 40 sites in Europe, ranging from Furninha Cave near Lisbon, the Genista Caves in Gibraltar and Santander Province in northern Spain to several sites across France, Switzerland, Italy, Austria and Germany, north to Derby in England, east to Přerov in the Czech Republic and the Baranya in southern Hungary.
Leopards arrived in Eurasia during the late Early to Middle Pleistocene around 1.2 to 0.6 million years ago.
Four European Pleistocene leopard subspecies were proposed. P. p. begoueni from the beginning of the Early Pleistocene was replaced about by P. p. sickenbergi, which in turn was replaced by P. p. antiqua around 0.3 million years ago. P. p. spelaea is the most recent subspecies that appeared at the beginning of the Late Pleistocene and survived until about 11,000 years ago and possibly into the early Holocene in the Iberian Peninsula.
Leopards depicted in cave paintings in Chauvet Cave provide indirect evidence of leopard presence in Europe.
Leopard fossils dating to the Late Pleistocene were found in Biśnik Cave in south-central Poland.
Fossil remains were also excavated in the Iberian and Italian Peninsula, and in the Balkans.
Leopard fossils dating to the Pleistocene were also excavated in the Japanese archipelago. Leopard fossils were also found in Taiwan.
Hybrids
In 1953, a male leopard and a female lion were crossbred in Hanshin Park in Nishinomiya, Japan. Their offspring, known as leopons, were born in 1959 and 1961; all the cubs were spotted and bigger than a juvenile leopard. Attempts to mate a leopon with a tigress proved unsuccessful.
Characteristics
The leopard's fur is generally soft and thick, notably softer on the belly than on the back. Its skin colour varies between individuals from pale yellowish to dark golden with dark spots grouped in rosettes. Its underbelly is white and its ringed tail is shorter than its body. Its pupils are round. Leopards living in arid regions are pale cream, yellowish to ochraceous and rufous in colour; those living in forests and mountains are much darker and deep golden. Spots fade toward the white underbelly and the insides and lower parts of the legs. Rosettes are circular in East African leopard populations, and tend to be squarish in Southern African and larger in Asian leopard populations. The fur tends to be grayish in colder climates, and dark golden in rainforest habitats. Rosette patterns are unique in each individual. This pattern is thought to be an adaptation to dense vegetation with patchy shadows, where it serves as camouflage.
Its white-tipped tail is about long, white underneath and with spots that form incomplete bands toward the end of the tail.
The guard hairs protecting the basal hairs are short on the face and head and increase in length toward the flanks and the belly to about . Juveniles have woolly fur that appears dark-coloured due to the densely arranged spots.
Its fur tends to grow longer in colder climates.
The leopard's rosettes differ from those of the jaguar, which are darker and with smaller spots inside. The leopard has a diploid chromosome number of 38.
Melanistic leopards are also known as black panthers. Melanism in leopards is caused by a recessive allele and is inherited as a recessive trait.
In India, nine pale and white leopards were reported between 1905 and 1967.
Leopards exhibiting erythrism were recorded between 1990 and 2015 in South Africa's Madikwe Game Reserve and in Mpumalanga. The cause of this morph known as a "strawberry leopard" or "pink panther" is not well understood.
Size
The leopard is a slender and muscular cat, with relatively short limbs and a broad head. It is sexually dimorphic with males larger and heavier than females. Males stand at the shoulder, while females are tall. The head-and-body length ranges between with a long tail. Sizes vary geographically. Males typically weigh , and females . Occasionally, large males can grow up to . Leopards from the Cape Province in South Africa are generally smaller, reaching only in males.
The heaviest wild leopard in Southern Africa weighed around , and it measured . In 2016, an Indian leopard killed in Himachal Pradesh measured with an estimated weight of ; it was perhaps the largest known wild leopard in India.
The largest recorded skull of a leopard was found in India in 1920 and measured in basal length, in breadth, and weighed . The skull of an African leopard measured in basal length, and in breadth, and weighed .
Distribution and habitat
The leopard has the largest distribution of all wild cats, occurring widely in Africa and Asia, although populations are fragmented and declining. It inhabits foremost savanna and rainforest, and areas where grasslands, woodlands and riparian forests remain largely undisturbed. It also persists in urban environments, if it is not persecuted, has sufficient prey and patches of vegetation for shelter during the day.
The leopard's range in West Africa is estimated to have drastically declined by 95%, and in the Sahara desert by 97%. In sub-Saharan Africa, it is still numerous and surviving in marginal habitats where other large cats have disappeared. In southeastern Egypt, an individual found killed in 2017 was the first sighting of the leopard in this area in 65 years.
In West Asia, the leopard inhabits the areas of southern and southeastern Anatolia.
Leopard populations in the Arabian Peninsula are small and fragmented.
In the Indian subcontinent, the leopard is still relatively abundant, with greater numbers than those of other Panthera species. Some leopard populations in India live quite close to human settlements and even in semi-developed areas. Although adaptable to human disturbances, leopards require healthy prey populations and appropriate vegetative cover for hunting for prolonged survival and thus rarely linger in heavily developed areas. Due to the leopard's stealth, people often remain unaware that it lives in nearby areas. As of 2020, the leopard population within forested habitats in India's tiger range landscapes was estimated at 12,172 to 13,535 individuals. Surveyed landscapes included elevations below in the Shivalik Hills and Gangetic plains, Central India and Eastern Ghats, Western Ghats, the Brahmaputra River basin and hills in Northeast India.
In Nepal's Kanchenjunga Conservation Area, a melanistic leopard was photographed at an elevation of by a camera trap in May 2012.
In Sri Lanka, leopards were recorded in Yala National Park and in unprotected forest patches, tea estates, grasslands, home gardens, pine and eucalyptus plantations.
In Myanmar, leopards were recorded for the first time by camera traps in the hill forests of Karen State. The Northern Tenasserim Forest Complex in southern Myanmar is considered a leopard stronghold. In Thailand, leopards are present in the Western Forest Complex, Kaeng Krachan-Kui Buri, Khlong Saeng-Khao Sok protected area complexes and in Hala Bala Wildlife Sanctuary bordering Malaysia. In Peninsular Malaysia, leopards are present in Belum-Temengor, Taman Negara and Endau-Rompin National Parks.
In Laos, leopards were recorded in Nam Et-Phou Louey National Biodiversity Conservation Area and Nam Kan National Protected Area.
In Cambodia, leopards inhabit deciduous dipterocarp forest in Phnom Prich Wildlife Sanctuary and Mondulkiri Protected Forest.
In southern China, leopards were recorded only in the Qinling Mountains during surveys in 11 nature reserves between 2002 and 2009.
In Java, leopards inhabit dense tropical rainforests and dry deciduous forests at elevations from sea level to . Outside protected areas, leopards were recorded in mixed agricultural land, secondary forest and production forest between 2008 and 2014.
In the Russian Far East, it inhabits temperate coniferous forests where winter temperatures reach a low of .
Behaviour and ecology
The leopard is a solitary and territorial animal. It is typically shy and alert when crossing roadways and encountering oncoming vehicles, but may be emboldened to attack people or other animals when threatened. Adults associate only in the mating season. Females continue to interact with their offspring even after weaning and have been observed sharing kills with their offspring when they cannot obtain any prey. They produce a number of vocalizations, including growls and snarls. Cubs call their mother with meows and an urr-urr sound. The most notable vocalization is the 'sawing' roar, which consists of deep, repeated strokes. This likely functions in establishing territories and attracting mates.
The whitish spots on the back of its ears are thought to play a role in communication.
It has been hypothesized that the white tips of their tails may function as a 'follow-me' signal in intraspecific communication. However, no significant association was found between a conspicuous colour of tail patches and behavioural variables in carnivores.
Leopards are mainly active from dusk till dawn and rest for most of the day and for some hours at night in thickets, among rocks or over tree branches. Leopards have been observed walking up to across their range at night, wandering up to if disturbed. In western African forests, they have been observed to be largely diurnal, hunting during twilight when their prey animals are active; activity patterns vary between seasons.
Leopards can climb trees quite skillfully, often resting on tree branches and descending headfirst.
They can run at over , leap over horizontally, and jump up to vertically.
Social spacing
In Kruger National Park, most leopards tend to keep apart. Males occasionally interact with their partners and cubs, and exceptionally this can extend to two generations. Aggressive encounters are rare, typically limited to defending territories from intruders. In a South African reserve, a male was wounded in a male–male territorial battle over a carcass.
Males occupy home ranges that often overlap with a few smaller female home ranges, probably as a strategy to enhance access to females. In the Ivory Coast, the home range of a female was completely enclosed within a male's. Females live with their cubs in home ranges that overlap extensively, probably due to the association between mothers and their offspring. There may be a few other fluctuating home ranges belonging to young individuals. It is not clear if male home ranges overlap as much as those of females do. Individuals try to drive away intruders of the same sex.
A study of leopards in the Namibian farmlands showed that the size of home ranges was not significantly affected by sex, rainfall patterns or season; the higher the prey availability in an area, the greater the leopard population density and the smaller the size of home ranges, but they tend to expand if there is human interference.
Sizes of home ranges vary geographically and depending on habitat and availability of prey. In the Serengeti, males have home ranges of and females of ; but males in northeastern Namibia of and females of . They are even larger in arid and montane areas. In Nepal's Bardia National Park, male home ranges of and female ones of are smaller than those generally observed in Africa.
Hunting and diet
The leopard is a carnivore that prefers medium-sized prey with a body mass ranging from . Prey species in this weight range tend to occur in dense habitat and to form small herds. Species that prefer open areas and have well-developed anti-predator strategies are less preferred. More than 100 prey species have been recorded. The most preferred species are ungulates, such as impala, bushbuck, common duiker and chital. Primates preyed upon include white-eyelid mangabeys, guenons and gray langurs. Leopards also kill smaller carnivores like black-backed jackal, bat-eared fox, genet and cheetah. In urban environments, domestic dogs provide an important food source. The largest prey killed by a leopard was reportedly a male eland weighing . A study in Wolong National Nature Reserve in southern China demonstrated variation in the leopard's diet over time; over the course of seven years, the vegetative cover receded, and leopards opportunistically shifted from primarily consuming tufted deer to pursuing bamboo rats and other smaller prey.
The leopard depends mainly on its acute senses of hearing and vision for hunting. It primarily hunts at night in most areas. In western African forests and Tsavo National Park, they have also been observed hunting by day. They usually hunt on the ground. In the Serengeti, they have been seen to ambush prey by descending on it from trees. It stalks its prey and tries to approach as closely as possible, typically within of the target, and, finally, pounces on it and kills it by suffocation. It kills small prey with a bite to the back of the neck, but holds larger animals by the throat and strangles them. It caches kills up to apart. It is able to take large prey due to its powerful jaw muscles, and is therefore strong enough to drag carcasses heavier than itself up into trees; an individual was seen to haul a young giraffe weighing nearly up into a tree. It eats small prey immediately, but drags larger carcasses over several hundred metres and caches it safely in trees, bushes or even caves; this behaviour allows the leopard to store its prey away from rivals, and offers it an advantage over them. The way it stores the kill depends on local topography and individual preferences, varying from trees in Kruger National Park to bushes in the plain terrain of the Kalahari. Before their extirpation from Europe, leopards there cached their meat in caves, as evidenced by fossilised bone accumulations in caves such as Los Rincones in the Province of Zaragoza, Spain.
Leopards are known to drop from trees onto impalas, which is probably an opportunistic hunting behaviour. A leopard falling from a height of 2.69 metres onto the back of its prey (3.55 metres total height) takes 0.7 seconds to fall and hits at a speed of about 25 km/h; this hunting technique requires that the prey be unaware of the predator's attack, and it also requires great precision to avoid falling on the horns of males, allowing for a safe attack.
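The quoted fall time and impact speed follow from simple free-fall kinematics. The sketch below is an illustrative check only, not taken from the cited study; it recomputes both values from the 2.69-metre drop height given above.

```python
import math

g = 9.81             # gravitational acceleration in m/s^2
drop_height = 2.69   # height of the drop onto the prey's back, in metres (from the text)

# Free fall from rest: h = 0.5 * g * t^2  ->  t = sqrt(2h / g)
fall_time = math.sqrt(2 * drop_height / g)

# Impact speed: v = g * t, converted from m/s to km/h
impact_speed_kmh = g * fall_time * 3.6

print(f"fall time    = {fall_time:.2f} s")          # about 0.74 s, close to the quoted 0.7 s
print(f"impact speed = {impact_speed_kmh:.0f} km/h")  # about 26 km/h, close to the quoted 25 km/h
```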
Average daily consumption rates of were estimated for males and of for females. In the southern Kalahari Desert, leopards meet their water requirements by the bodily fluids of prey and succulent plants; they drink water every two to three days and feed infrequently on moisture-rich plants such as gemsbok cucumbers, watermelon and Kalahari sour grass.
Enemies and competitors
Across its range, the leopard coexists with a number of other large predators. In Africa, it is part of a large predator guild with lions, cheetahs, spotted and brown hyenas, and African wild dogs. The leopard is dominant only over the cheetah while the others have the advantage of size, pack numbers or both. Lions pose a great mortal threat and can be responsible for 22% of leopard deaths in Sabi Sand Game Reserve. Spotted hyenas are less threatening but are more likely to steal kills, being the culprits of up to 50% of stolen leopard kills in the same area. To counter this, leopards store their kills in the trees and out of reach. Lions have a high success rate in fetching leopard kills from trees. Leopards do not seem to actively avoid their competitors but rather difference in prey and habitat preferences appear to limit their spatial overlap. In particular, leopards use heavy vegetation regardless of whether lions are present in an area and both cats are active at the same time of day.
In Asia, the leopard's main competitors are tigers and dholes. Both the larger tiger and pack-living dhole dominate leopards during encounters. Interactions between the three predators involve chasing, stealing kills and direct killing. Tigers appear to inhabit the deep parts of the forest while leopards and dholes are pushed closer to the fringes. The three predators coexist by hunting different sized prey. In Nagarhole National Park, the average size for a leopard kill was compared to for tigers and for dholes. At Kui Buri National Park, following a reduction in prey numbers, tigers continued to feed on favoured prey while leopards and dholes had to increase their consumption of small prey. Leopards can live successfully in tiger habitat when there is abundant food and vegetation cover. Otherwise, they appear to be less common where tigers are numerous. The recovery of the tiger population in Rajaji National Park during the 2000s led to a reduction in leopard population densities.
Reproduction and life cycle
In some areas, leopards mate all year round. In Manchuria and Siberia, they mate during January and February. On average, females begin to breed between the ages of 2½ and three, and males between the ages of two and three. The female's estrous cycle lasts about 46 days, and she is usually in heat for 6–7 days. Gestation lasts for 90 to 105 days. Cubs are usually born in a litter of 2–4 cubs. The mortality rate of cubs is estimated at 41–50% during the first year. Predators are the biggest cause of leopard cub mortality during their first year. Male leopards are known to commit infanticide in order to bring the female back into heat. Intervals between births average 15 to 24 months, but can be shorter, depending on the survival of the cubs.
Females give birth in a cave, crevice among boulders, hollow tree or thicket. Newborn cubs weigh , and are born with closed eyes, which open four to nine days after birth. The fur of the young tends to be longer and thicker than that of adults. Their pelage is also more gray in colour with less defined spots. They begin to eat meat at around nine weeks. Around three months of age, the young begin to follow the mother on hunts. At one year of age, cubs can probably fend for themselves, but will remain with the mother for 18–24 months. After separating from their mother, sibling cubs may travel together for months. Both male and female leopards typically reach sexual maturity at 2–2⅓ years.
The generation length of the leopard is 9.3 years.
The average life span of a leopard is 12–17 years.
The oldest leopard was a captive female that died at the age of 24 years, 2 months and 13 days.
Conservation
The leopard is listed on CITES Appendix I, and hunting is banned in Botswana and Afghanistan; in 11 sub-Saharan countries, trade is restricted to skins and body parts of 2,560 individuals.
In 2007, a leopard reintroduction programme was initiated in the Russian Caucasus, where captive bred individuals are reared and trained in large enclosures in Sochi National Park; six individuals released into Caucasus Nature Reserve and Alaniya National Park in 2018 survived as of February 2022.
Threats
The leopard is primarily threatened by habitat fragmentation and conversion of forest to agriculturally used land, which lead to a declining natural prey base, human–wildlife conflict with livestock herders and high leopard mortality rates. It is also threatened by trophy hunting and poaching. Contemporary records suggest that the leopard occurs in only 25% of its historical range.
Between 2002 and 2012, at least four leopards were estimated to have been poached per week in India for the illegal wildlife trade of its skins and bones.
In spring 2013, 37 leopard skins were found during a 7-week long market survey in major Moroccan cities. In 2014, 43 leopard skins were detected during two surveys in Morocco. Vendors admitted to have imported skins from sub-Saharan Africa.
Surveys in the Central African Republic's Chinko area revealed that the leopard population decreased from 97 individuals in 2012 to 50 individuals in 2017. In this period, transhumant pastoralists from the border area with Sudan moved into the area with their livestock. Rangers confiscated large amounts of poison in the camps of livestock herders, who were accompanied by armed merchants. They engaged in poaching large herbivores, selling bushmeat and trading leopard skins in Am Dafok.
In Java, the leopard is threatened by illegal hunting and trade. Between 2011 and 2019, body parts of 51 Javan leopards were seized including six live individuals, 12 skins, 13 skulls, 20 canines and 22 claws.
Human relations
Cultural significance
Leopards have been featured in art, mythology and folklore of many countries. In Greek mythology, it was a symbol of the god Dionysus, who was depicted wearing leopard skin and using leopards as means of transportation. In one myth, the god was captured by pirates but two leopards rescued him. Numerous Roman mosaics from North African sites depict fauna now found only in tropical Africa. During the Benin Empire, the leopard was commonly represented on engravings and sculptures and was used to symbolise the power of the king or oba, since the leopard was considered the king of the forest. The Ashanti people also used the leopard as a symbol of leadership, and only the king was permitted to have a ceremonial leopard stool. Some African cultures considered the leopard to be a smarter, better hunter than the lion and harder to kill.
In Rudyard Kipling's "How the Leopard Got His Spots", one of his Just So Stories, a leopard with no spots in the Highveld lives with his hunting partner, the Ethiopian. When they set off to the forest, the Ethiopian changed his brown skin, and the leopard painted spots on his skin. A leopard played an important role in the 1938 Hollywood film Bringing Up Baby. African chiefs, European queens, Hollywood actors and burlesque dancers wore coats made of leopard skins.
The leopard is a frequently used motif in heraldry, most commonly as passant. The heraldic leopard lacks spots and sports a mane, making it visually almost identical to the heraldic lion, and the two are often used interchangeably. Naturalistic leopard-like depictions appear on the coat of arms of Benin, Malawi, Somalia, the Democratic Republic of the Congo and Gabon, the last of which uses a black panther.
Attacks on people
The Leopard of Rudraprayag killed more than 125 people; the Panar Leopard was thought to have killed over 400 people. Both were shot by British hunter Jim Corbett. The spotted devil of Gummalapur killed about 42 people in Karnataka, India.
In captivity
The ancient Romans kept leopards in captivity to be slaughtered in hunts as well as to execute criminals. In Benin, leopards were kept and paraded as mascots, totems and sacrifices to deities. Several leopards were kept in a menagerie originally established by King John of England at the Tower of London in the 13th century; around 1235, three of these animals were given to Henry III by Holy Roman Emperor Frederick II. In modern times, leopards have been trained and tamed in circuses.
See also
Leopard pattern
List of largest cats
Panther (legendary creature)
References
Further reading
External links
IUCN/SSC Cat Specialist Group: Panthera pardus in Africa and Panthera pardus in Asia
Category:Articles containing video clips
Category:Big cats
Category:Felids of Africa
Category:Felids of Asia
Category:Mammals described in 1758
Category:National symbols of Benin
Category:National symbols of Malawi
Category:National symbols of Somalia
Category:National symbols of the Democratic Republic of the Congo
Category:Panthera
Category:Animal taxa named by Carl Linnaeus
Category:Apex predators
|
nature_wildlife
| 4,831
|
45609
|
Cheetah
|
https://en.wikipedia.org/wiki/Cheetah
|
The cheetah (Acinonyx jubatus) is a large cat and the fastest land animal. It has a tawny to creamy white or pale buff fur that is marked with evenly spaced, solid black spots. The head is small and rounded, with a short snout and black tear-like facial streaks. It reaches at the shoulder, and the head-and-body length is between . Adults weigh between . The cheetah is capable of running at ; it has evolved specialized adaptations for speed, including a light build, long thin legs and a long tail.
The cheetah was first scientifically described in the late 18th century. Four subspecies are recognised today that are native to Africa and central Iran. An African subspecies was introduced to India in 2022. It is now distributed mainly in small, fragmented populations in northwestern, eastern and southern Africa and central Iran. It lives in a variety of habitats such as savannahs in the Serengeti, arid mountain ranges in the Sahara, and hilly desert terrain.
The cheetah lives in three main social groups: females and their cubs, male "coalitions", and solitary males. While females lead a nomadic life searching for prey in large home ranges, males are more sedentary and instead establish much smaller territories in areas with plentiful prey and access to females. The cheetah is active during the day, with peaks during dawn and dusk. It feeds on small- to medium-sized prey, mostly weighing under , and prefers medium-sized ungulates such as impala, springbok and Thomson's gazelles. The cheetah typically stalks its prey within before charging towards it, trips it during the chase and bites its throat to suffocate it to death. It breeds throughout the year. After a gestation of nearly three months, females give birth to a litter of three or four cubs. Cheetah cubs are highly vulnerable to predation by other large carnivores. They are weaned at around four months and are independent by around 20 months of age.
The cheetah is threatened by habitat loss, conflict with humans, poaching and high susceptibility to diseases. The global cheetah population was estimated at 6,517 individuals in 2021; it is listed as Vulnerable on the IUCN Red List. It has been widely depicted in art, literature, advertising, and animation. It was tamed in ancient Egypt and trained for hunting ungulates in the Arabian Peninsula and India. It has been kept in zoos since the early 19th century.
Etymology
The vernacular name "cheetah" is derived from Hindustani and (). This in turn comes from () meaning 'variegated', 'adorned' or 'painted'. In the past, the cheetah was often called "hunting leopard" because they could be tamed and used for coursing. The generic name Acinonyx probably derives from the combination of two Greek words: () meaning 'unmoved' or 'motionless', and () meaning 'nail' or 'hoof'. A rough translation is "immobile nails", a reference to the cheetah's limited ability to retract its claws. A similar meaning can be obtained by the combination of the Greek prefix a– (implying a lack of) and () meaning 'to move' or 'to set in motion'. The specific name is Latin for 'crested, having a mane'.
A few old generic names such as Cynailurus and Cynofelis allude to the similarities between the cheetah and canids.
Taxonomy
In 1777, Johann Christian Daniel von Schreber described the cheetah based on a skin from the Cape of Good Hope and gave it the scientific name Felis jubatus. Joshua Brookes proposed the generic name Acinonyx in 1828. In 1917, Reginald Innes Pocock placed the cheetah in a subfamily of its own, Acinonychinae, given its striking morphological resemblance to the greyhound and significant deviation from typical felid features; the cheetah was classified in Felinae in later taxonomic revisions.
In the 19th and 20th centuries, several cheetah zoological specimens were described; some were proposed as subspecies.
A South African specimen with notably dense fur was described as a separate species (Felis lanea) by Philip Sclater in 1877 and became known as the "woolly cheetah". Its classification as a species was mostly disputed. There has been considerable confusion in the nomenclature of the cheetah and leopard (Panthera pardus) as authors often confused the two; some considered "hunting leopards" an independent species, or equal to the leopard.
Subspecies
In 1975, five cheetah subspecies were considered valid taxa: A. j. hecki, A. j. jubatus, A. j. raineyi, A. j. soemmeringii and A. j. venaticus. In 2011, a phylogeographic study found minimal genetic variation between A. j. jubatus and A. j. raineyi; only four subspecies were identified. In 2017, the Cat Classification Task Force of the IUCN Cat Specialist Group revised felid taxonomy and recognised these four subspecies as valid. Their details are as follows:
Southeast African cheetah (A. j. jubatus), syn. A. j. raineyi: The nominate subspecies; it genetically diverged from the Asiatic cheetah 67,000–32,000 years ago. As of 2016, the largest population of nearly 4,000 individuals is sparsely distributed in Angola, Botswana, Mozambique, Namibia, South Africa and Zambia.
Asiatic cheetah (A. j. venaticus): This subspecies is confined to central Iran and is the only surviving cheetah population in Asia. As of 2022, only 12 individuals were estimated to survive in Iran, nine of which are males and three of which are females.
Northeast African cheetah (A. j. soemmeringii): This subspecies occurs in the northern Central African Republic, Chad, Ethiopia and South Sudan in small and heavily fragmented populations; in 2016, the largest population of 238 individuals occurred in the northern CAR and southeastern Chad. It diverged genetically from the southeast African cheetah 72,000–16,000 years ago.
Northwest African cheetah (A. j. hecki): This subspecies occurs in Algeria, Benin, Burkina Faso, Mali and Niger. In 2016, the largest population of 191 individuals occurred in Adrar des Ifoghas, Ahaggar and Tassili n'Ajjer in south-central Algeria and northeastern Mali. It is listed as Critically Endangered on the IUCN Red List.
Phylogeny and evolution
The cheetah's closest relatives are the cougar (Puma concolor) and the jaguarundi (Herpailurus yagouaroundi). Together, these three species form the Puma lineage, one of the eight lineages of the extant felids; the Puma lineage genetically diverged from the rest 6.7 mya. The sister group of the Puma lineage is a clade of smaller Old World cats that includes the genera Felis, Otocolobus and Prionailurus.
The oldest cheetah fossils, excavated in eastern and southern Africa, date to 3.5–3 mya; the earliest known specimen from South Africa is from the lowermost deposits of the Silberberg Grotto (Sterkfontein). Though incomplete, these fossils indicate forms larger but less cursorial than the modern cheetah. The first occurrence of the modern species A. jubatus in Africa may come from Cooper's D, a site in South Africa dating back to 1.5 to 1.4 Ma, during the Calabrian stage. Fossil remains from Europe are limited to a few Middle Pleistocene specimens from Hundsheim (Austria) and Mosbach Sands (Germany). Cheetah-like cats are known from as late as 10,000 years ago from the Old World. The giant cheetah (A. pardinensis), significantly larger and slower compared to the modern cheetah, occurred in Eurasia and eastern and southern Africa in the Villafranchian period roughly 3.8–1.9 mya. In the Middle Pleistocene a smaller cheetah, A. intermedius, ranged from Europe to China. The modern cheetah appeared in Africa around 1.9 mya; its fossil record is restricted to Africa.
Extinct North American cheetah-like cats had historically been classified in Felis, Puma or Acinonyx; two such species, F. studeri and F. trumani, were considered to be closer to the puma than the cheetah, despite their close similarities to the latter. Noting this, palaeontologist Daniel Adams proposed Miracinonyx, a new subgenus under Acinonyx, in 1979 for the North American cheetah-like cats; this was later elevated to genus rank. Adams pointed out that North American and Old World cheetah-like cats may have had a common ancestor, and Acinonyx might have originated in North America instead of Eurasia. However, subsequent research has shown that Miracinonyx is phylogenetically closer to the cougar than the cheetah; the similarities to cheetahs have been attributed to parallel evolution.
The three species of the Puma lineage may have had a common ancestor during the Miocene (roughly 8.25 mya). Some suggest that North American cheetahs possibly migrated to Asia via the Bering Strait, then dispersed southward to Africa through Eurasia at least 100,000 years ago; some authors have expressed doubt over the occurrence of cheetah-like cats in North America, and instead suppose the modern cheetah to have evolved from Asian populations that eventually spread to Africa. The cheetah is thought to have experienced two population bottlenecks that greatly decreased the genetic variability in populations; one occurred about 100,000 years ago that has been correlated to migration from North America to Asia, and the second 10,000–12,000 years ago in Africa, possibly as part of the Late Pleistocene extinction event.
Genetics
The diploid number of chromosomes in the cheetah is 38, the same as in most other felids. The cheetah was the first felid observed to have unusually low genetic variability among individuals, which has led to poor breeding in captivity, increased spermatozoal defects, high juvenile mortality and increased susceptibility to diseases and infections. A prominent instance was the deadly feline coronavirus outbreak in the cheetah breeding facility at Wildlife Safari in Winston, Oregon in 1983 which had a mortality rate of 60%, higher than that recorded for previous epizootics of feline infectious peritonitis in any felid. The remarkable homogeneity in cheetah genes has been demonstrated by experiments involving the major histocompatibility complex (MHC); unless the MHC genes are highly homogeneous in a population, skin grafts exchanged between a pair of unrelated individuals would be rejected. Skin grafts exchanged between unrelated cheetahs are accepted well and heal, as if their genetic makeup were the same.
The low genetic diversity is thought to have been created by two population bottlenecks from about 100,000 years and about 12,000 years ago, respectively. The resultant level of genetic variation is around 0.1–4% of average living species, lower than that of Tasmanian devils, Virunga gorillas, Amur tigers, and even highly inbred domestic cats and dogs.
Selective retention of gene duplication has been found in 10 gene candidates to explain energetics and anabolism related to muscle specialization in cheetahs:
Regulation of muscle contraction (Five genes: ADORA1, ADRA1B, CACNA1C, RGS2, SCN5A).
Physiological stress response (Two genes: ADORA1, TAOK2).
Negative regulation of catabolic process (Four genes: APOC3, SUFU, DDIT4, PPARA).
This gene duplication may have allowed new functions to arise for the aforementioned genes; this selective pressure may also have contributed to the low genetic diversity in this species.
Potentially harmful mutations have been found in a gene related to spermatogenesis (AKAP4). This could explain the high proportion of abnormal sperm in male cheetahs and the poor reproductive success of the species.
King cheetah
The king cheetah is a variety of cheetah with a rare mutation for cream-coloured fur marked with large, blotchy spots and three dark, wide stripes extending from the neck to the tail. In Manicaland, Zimbabwe, it was known as nsuifisi and thought to be a cross between a leopard and a hyena. In 1926, Major A. Cooper wrote about a cheetah-like animal he had shot near modern-day Harare, with fur as thick as that of a snow leopard and spots that merged to form stripes. He suggested it could be a cross between a leopard and a cheetah. As more such individuals were observed it was seen that they had non-retractable claws like the cheetah.
In 1927, Pocock described these individuals as a new species by the name of Acinonyx rex ("king cheetah"). However, in the absence of proof to support his claim, he withdrew his proposal in 1939. Abel Chapman considered it a colour morph of the normally spotted cheetah. Since 1927, the king cheetah has been reported five more times in the wild in Zimbabwe, Botswana and northern Transvaal; one was photographed in 1975.
In 1981, two female cheetahs that had mated with a wild male from Transvaal at the De Wildt Cheetah and Wildlife Centre (South Africa) gave birth to one king cheetah each; subsequently, more king cheetahs were born at the centre. In 2012, the cause of this coat pattern was found to be a mutation in the gene for transmembrane aminopeptidase (Taqpep), the same gene responsible for the striped "mackerel" versus blotchy "classic" pattern seen in tabby cats. The appearance is caused by reinforcement of a recessive allele; hence if two mating cheetahs are heterozygous carriers of the mutated allele, a quarter of their offspring can be expected to be king cheetahs.
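The expected one-in-four proportion is the standard outcome of a cross between two heterozygous carriers of a recessive allele. The short sketch below simply enumerates the Punnett square; the allele labels 'T' (functional Taqpep) and 't' (mutated recessive allele) are hypothetical notation for illustration, not taken from the cited study.

```python
from itertools import product
from collections import Counter

# Hypothetical allele labels: 'T' = functional Taqpep allele (normal spotted coat),
# 't' = mutated recessive allele (king cheetah pattern when homozygous).
parent1 = ("T", "t")  # heterozygous carrier
parent2 = ("T", "t")  # heterozygous carrier

# Enumerate all equally likely allele combinations in the offspring (Punnett square).
offspring = Counter("".join(sorted(a + b)) for a, b in product(parent1, parent2))

total = sum(offspring.values())
for genotype, count in sorted(offspring.items()):
    phenotype = "king cheetah" if genotype == "tt" else "spotted"
    print(f"{genotype}: {count}/{total} ({phenotype})")
# Output: TT 1/4 spotted, Tt 2/4 spotted, tt 1/4 king cheetah,
# i.e. a quarter of the offspring are expected to show the king cheetah pattern.
```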
Characteristics
The cheetah is a lightly built, spotted cat characterised by a small rounded head, a short snout, black tear-like facial streaks, a deep chest, long thin legs and a long tail. Its slender, canine-like form is highly adapted for speed, and contrasts sharply with the robust build of the genus Panthera. Cheetahs typically reach at the shoulder and the head-and-body length is between . The weight can vary with age, health, location, sex and subspecies; adults typically range between . Cubs born in the wild weigh at birth, while those born in captivity tend to be larger and weigh around . The cheetah is sexually dimorphic, with males larger and heavier than females, but not to the extent seen in other large cats; females have a much lower body mass index than males. Studies differ significantly on morphological variations among the subspecies.
The coat is typically tawny to creamy white or pale buff (darker in the mid-back portion). The chin, throat and underparts of the legs and the belly are white and devoid of markings. The rest of the body is covered with around 2,000 evenly spaced, oval or round solid black spots, each measuring roughly . Each cheetah has a distinct pattern of spots which can be used to identify unique individuals. Besides the clearly visible spots, there are other faint, irregular black marks on the coat. Newly born cubs are covered in fur with an unclear pattern of spots that gives them a dark appearance—pale white above and nearly black on the underside. The hair is mostly short and often coarse, but the chest and the belly are covered in soft fur; the fur of king cheetahs has been reported to be silky. There is a short, rough mane, covering at least along the neck and the shoulders; this feature is more prominent in males. The mane starts out as a cape of long, loose blue to grey hair in juveniles. Melanistic cheetahs are rare and have been seen in Zambia and Zimbabwe. In 1877–1878, Sclater described two partially albino specimens from South Africa. A tabby cheetah was photographed in Kenya in 2012.
The head is small and more rounded compared to big cats. Saharan cheetahs have canine-like slim faces. The ears are small, short and rounded; they are tawny at the base and on the edges and marked with black patches on the back. The eyes are set high and have round pupils. The whiskers, shorter and fewer than those of other felids, are fine and inconspicuous. The pronounced tear streaks (or malar stripes), unique to the cheetah, originate from the corners of the eyes and run down the nose to the mouth. The role of these streaks is not well understood—they may protect the eyes from the sun's glare (a helpful feature as the cheetah hunts mainly during the day), or they could be used to define facial expressions. The exceptionally long and muscular tail, with a bushy white tuft at the end, measures . While the first two-thirds of the tail are covered in spots, the final third is marked with four to six dark rings or stripes.
The cheetah is superficially similar to the leopard, which has a larger head, fully retractable claws, rosettes instead of spots, lacks tear streaks and is more muscular. Moreover, the cheetah is taller than the leopard. The serval also resembles the cheetah in physical build, but is significantly smaller, has a shorter tail and its spots fuse to form stripes on the back. The cheetah appears to have evolved convergently with canids in morphology and behaviour; it has canine-like features such as a relatively long snout, long legs, a deep chest, tough paw pads and blunt, semi-retractable claws. The cheetah has often been likened to the greyhound, as both have similar morphology and the ability to reach tremendous speeds in a shorter time than other mammals, but the cheetah can attain much higher maximum speeds.
Internal anatomy
The cheetah shows several specialized adaptations for prolonged chases to catch prey at some of the fastest speeds reached by land animals. Its light, streamlined body makes it well-suited to short, explosive bursts of speed, rapid acceleration, and an ability to execute extreme changes in direction while moving at high speed. The large nasal passages, accommodated well due to the smaller size of the canine teeth, ensure fast flow of sufficient air, and the enlarged heart and lungs allow the enrichment of blood with oxygen in a short time. This allows cheetahs to rapidly regain their stamina after a chase. During a typical chase, their respiratory rate increases from 60 to 150 breaths per minute.
The cheetah has a fast heart rate, averaging 126–173 beats per minute at resting without arrhythmia. Moreover, the reduced viscosity of the blood at higher temperatures (common in frequently moving muscles) could ease blood flow and increase oxygen transport.
The slightly curved claws are shorter and straighter than those of other cats, lack a protective sheath and are partly retractile. The limited retraction of the cheetah's claws may result from the earlier truncation of the development of the middle phalanx bone in cheetahs. The claws are blunt due to the lack of a protective sheath, but the large and strongly curved dewclaw is sharp.
The protracted claws increase grip over the ground, while rough paw pads make the sprint more convenient over tough ground. The limbs of the cheetah are longer than what is typical for other cats its size; the thigh muscles are large, and the tibia and fibula are held close together making the lower legs less likely to rotate. This reduces the risk of losing balance during runs, but compromises the cat's ability to climb trees. The highly reduced clavicle is connected through ligaments to the scapula, whose pendulum-like motion increases the stride length and assists in shock absorption. The extension of the vertebral column can add as much as to the stride length.
While running, the cheetah uses its tail as a rudder-like means of steering that enables them to make sharp turns, necessary to outflank antelopes which often change direction to escape during a chase.
Analysis of muscle tissue shows little difference between the sexes in type IIx muscle fiber concentration, anaerobic lactate dehydrogenase enzyme activity, and glycogen concentration.
The cheetah resembles the smaller cats in cranial features, and in having a long and flexible spine, as opposed to the stiff and short one in other large felids. The roughly triangular skull has light, narrow bones, and the sagittal crest is poorly developed, possibly to reduce weight and enhance speed. The mouth can not be opened as widely as in other cats given the shorter length of muscles between the jaw and the skull.
It has less developed digit flexor muscles in its forearms than other felids, possibly due to its specialisation for running at the cost of grip ability; its flexor digitorum profundus muscle constitutes 1.2–1.3% of the combined fore and hind limb muscle mass, compared to 1.6–2.6% in other felids.
The cheetah has 30 teeth; the dental formula is . The small, flat canines are used to bite the throat and suffocate the prey. A study gave the bite force quotient (BFQ) of the cheetah as 119, close to that for the lion (112), suggesting that adaptations for a lighter skull may not have reduced the power of the cheetah's jaws for their size. Unlike other cats, the cheetah's canines have no gap or diastema behind them when the jaws close, as the top and bottom cheek teeth show extensive overlap. Cheetahs have relatively elongated, blade-like carnassial teeth with reduced lingual cusps; this may be an adaptation for quickly consuming the flesh of a kill before heavier-built predators of other species arrive to take it from them.
The cheetah's concentration of nerve cells is arranged in a band in the centre of the eyes, a visual streak, the most efficient among felids. This significantly sharpens the vision and enables the cheetah to swiftly locate prey against the horizon. The cheetah is unable to roar due to the presence of a sharp-edged vocal fold within the larynx.
In stressful situations, the cheetah has a lower cortisol level than the leopard, indicating a better stress response; it also has lower immunoglobulin G and serum amyloid A levels but a higher lysozyme level and a higher bacterial killing capacity than the leopard, indicating poorer adaptive and induced innate immune systems but a better constitutive innate immune system; its constitutive innate immune system compensates for its low variation of the major histocompatibility complex and poorer immune adaptability.
The cheetah's urine is odourless as it contains principally elemental sulfur, which may help the cheetah to remain undetected by its prey and other predators.
Speed and acceleration
The cheetah is the world's fastest land animal. Estimates of the maximum speed attained range from . A commonly quoted value is , recorded in 1957, but this measurement is disputed. In 2012, an 11-year-old cheetah from the Cincinnati Zoo set a world record by running in 5.95 seconds over a set run, recording a maximum speed of .
Cheetahs equipped with GPS collars hunted at speeds much lower than the highest recorded speed during most of the chase; their runs were interspersed with a few short bursts of a few seconds when they attained peak speeds. The average speed recorded during the high-speed phase was , or within the range including error. The highest recorded value was . A hunt consists of two phases: an initial fast acceleration phase when the cheetah tries to catch up with the prey, followed by slowing down as it closes in on it, with the deceleration varying by the prey in question. The initial linear acceleration observed was 13 m/s², more than twice the 6 m/s² of horses and greater than the 10 m/s² of greyhounds. Cheetahs can increase their speed by up to 3 m/s (10.8 km/h) and decrease it by up to 4 m/s (14.4 km/h) within a single stride. Speed and acceleration values for a hunting cheetah may differ from those for a non-hunter because, while engaged in the chase, the cheetah is more likely to be twisting and turning and may be running through vegetation. The speeds of more than 100 km/h attained by the cheetah may be only slightly greater than those achieved by the pronghorn at and the springbok at , but the cheetah additionally has exceptional acceleration and can go from 0 to 97 km/h (0–60 mph) in less than 3 seconds, "faster than a Ferrari". For comparison, polo horses can go from 0 to 36 km/h in 3.6 seconds.
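As an illustrative consistency check (not part of the cited measurements), the sketch below converts the quoted 0–97 km/h in under 3 seconds into an average acceleration and compares it with the reported 13 m/s² peak.

```python
# Convert the quoted burst speed and time into an average acceleration.
top_speed_kmh = 97        # 0–97 km/h (0–60 mph) claim from the text
time_s = 3.0              # "in less than 3 seconds"

top_speed_ms = top_speed_kmh / 3.6          # about 26.9 m/s
avg_acceleration = top_speed_ms / time_s    # about 9.0 m/s^2

print(f"average acceleration = {avg_acceleration:.1f} m/s^2")
# Roughly 9 m/s^2 averaged over the whole burst, which is consistent with a measured
# peak of 13 m/s^2 early in the sprint, since acceleration tails off as speed builds.
```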
One stride of a galloping cheetah measures ; the stride length and the number of jumps increase with speed. It has been estimated that a cheetah at full speed could take 4 strides per second. During more than half the duration of the sprint, the cheetah has all four limbs in the air, increasing the stride length. Running cheetahs can retain up to 90% of the heat generated during the chase. A 1973 study suggested the length of the sprint is limited by excessive build-up of body heat when the body temperature reaches . However, a 2013 study recorded the average temperature of cheetahs after hunts to be , suggesting high temperatures need not cause hunts to be abandoned.
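Because the stride-length figure is elided above, the sketch below uses a purely hypothetical 7-metre stride to show how stride length and the quoted rate of 4 strides per second combine into a ground speed.

```python
# Hypothetical stride length; the actual figure is not given in the text above.
stride_length_m = 7.0      # assumed value, for illustration only
strides_per_second = 4     # estimate quoted in the text for a cheetah at full speed

speed_ms = stride_length_m * strides_per_second   # ground speed = stride length x stride rate
speed_kmh = speed_ms * 3.6

print(f"implied speed = {speed_kmh:.0f} km/h")
# About 101 km/h under these assumptions, in the same range as the top speeds discussed here.
```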
The disputed running speed of for the cheetah was obtained from a single run by one individual, calculated by dividing the distance travelled by the time spent. The run lasted 2.25 seconds and the course was supposed to have been long, but was later found to have been long. The measurement was therefore discredited as the result of a faulty method.
Cheetahs have subsequently been measured running at a speed of , the fastest of three runs by a single individual over a marked course, including one run in the opposite direction; the cheetah started behind the start line and was already running when it entered the course. The speed was again obtained by dividing the distance by the time, this time to determine the maximum sustained speed, with the runs completed in 7.0, 6.9 and 7.2 seconds. This test was made in 1965 but published in 1997. In 2010, the running speed of 15 cheetahs was measured with a high-speed camera mounted on a tripod and placed at specific points along a track; the cheetahs were chasing a lure, several attempts were made per individual, and their length from the nose to the base of the tail was used as a scale. The speed was estimated from the time the tip of the nose appeared on camera until it was no longer visible. The maximum speed recorded was 100.1 km/h for one individual. Subsequently, with GPS-IMU collars in 2011 and 2012, running speed was measured for wild cheetahs during hunts involving turns and manoeuvres; the maximum recorded was 58 mph (93 km/h), sustained for 1–2 seconds. The speed was obtained by dividing the stride length by the time between footfalls of a stride.
There are also indirect ways to gauge how fast a cheetah can run. In one known case a cheetah overtook a young male pronghorn; cheetahs can overtake a running antelope even when it has a head start. Both animals were clocked at by speedometer reading while running alongside a vehicle at full speed. Cheetahs can easily capture gazelles galloping at full speed ().
The physiological reasons for speed in cheetahs are:
Small head and long lumbar region of the spine, 36.8% of the presacral vertebral column.
A tibia and radius longer than the femur and humerus, with a femorotibial index of 105 and a humeroradial index of 103.3.
Limb long bones that are remarkably long for the body mass; the pelvis is elongated, especially the ischium.
Enlarged respiratory passages and frontal sinuses that allow inhaled and exhaled air to be cooled with each breath, helping to dissipate body heat.
A type IIx muscle fibre concentration of 50.1% in hindlimb muscles, 40% in neck and trunk muscles, and 36% in forelimb muscles. Lactate dehydrogenase enzyme activity nine times higher than in humans indicates a principally anaerobic muscle metabolism.
Most of the locomotor muscle mass concentrated proximally, close to the body, in the shoulders, thighs and spine, and reduced in the shins and forearms; the distal locomotor muscles taper into long tendons.
Large hindlimb muscles that form 19.8% of the body mass, whereas forelimb muscles form 15.1%; thigh muscles are 50% larger than predicted for the body mass.
Betz cells in the motor cortex that are enlarged for the brain mass, innervating the predominant type IIx muscle fibres and powerful muscles.
Distribution and habitat
In eastern and southern Africa, the cheetah occurs mostly in savannas like the Kalahari and Serengeti. In central, northern and western Africa, it inhabits arid mountain ranges and valleys; in the harsh climate of the Sahara, it prefers high mountains, which receive more rainfall than the surrounding desert. The vegetation and water resources in these mountains support antelopes. In Iran, it occurs in hilly terrain of deserts at elevations up to , where annual precipitation is generally below ; the primary vegetation in these areas is thinly distributed shrubs, less than tall.
The cheetah inhabits a variety of ecosystems and appears to be less selective in habitat choice than other felids; it prefers areas with greater availability of prey, good visibility and minimal chances of encountering larger predators. It seldom occurs in tropical forests. It has been reported at the elevation of . An open area with some cover, such as diffused bushes, is probably ideal for the cheetah because it needs to stalk and pursue its prey over a distance. This also minimises the risk of encountering larger carnivores. The cheetah tends to occur in low densities typically between 0.3 and 3.0 adults per ; these values are 10–30% of those reported for leopards and lions.
Historical range
In prehistoric times, the cheetah was distributed throughout Africa, Asia and Europe. It gradually became extinct in Europe, possibly because of competition with the lion. Today the cheetah has been extirpated in most of its historical range; the numbers of the Asiatic cheetah began plummeting in the late 1800s, long before the other subspecies started their decline. As of 2017, cheetahs occur in just nine per cent of their erstwhile range in Africa, mostly in unprotected areas.
Until the mid-20th century, the cheetah ranged across vast stretches of Asia, from the Arabian Peninsula in the west to the Indian subcontinent in the east, and as far north as the Aral and Caspian Seas. A few centuries ago the cheetah was abundant in India, and its range coincided with the distribution of major prey like the blackbuck. However, its numbers in India plummeted from the 19th century onward; Divyabhanusinh of the Bombay Natural History Society notes that the last three individuals in the wild were killed by Maharaja Ramanuj Pratap Singh of Surguja in 1947. The last confirmed sighting in India was of a cheetah that drowned in a well near Hyderabad in 1957.
Vladimir Geptner wrote that the cheetah's range in the Soviet Union encompassed the "desert plains of Middle Asia and southern Kazakhstan and the eastern Trans-Caucasus". During the Middle Ages, the cheetah ranged as far as western Georgia and probably survived in the Kura-Aras lowland and central Aras valley as recently as the eighteenth century, though it went extinct in the region following the decline of goitered gazelles and due to human persecution. By the mid-20th century, the cheetah was reportedly "still quite extensively if sparsely distributed throughout the region west of the Amu-Darya and Aral Sea, but has been vanishing very rapidly".
In Iraq, cheetahs were reported from Basra in the 1920s. In Iran there were around 400 cheetahs before World War II, distributed across deserts and steppes to the east and the borderlands with Iraq to the west; the numbers were falling because of a decline in prey. Conservation efforts in the 1950s stabilised the population, but prey species declined again in the wake of the Iranian Revolution (1979) and the Iran–Iraq War (1980–1988), leading to a significant contraction of the cheetah's historical range in the region.
In 1975, the cheetah population was estimated at 15,000 individuals throughout Sub-Saharan Africa, following the first survey in this region by Norman Myers. The range covered most of eastern and southern Africa, except for the desert region on the western coast of modern-day Angola and Namibia. In the following years, cheetah populations across the region have become smaller and more fragmented as their natural habitat has been modified dramatically.
Natural cheetah mummies dating back thousands of years have been found in a cave system in Saudi Arabia.
Present distribution
The cheetah occurs mostly in eastern and southern Africa; its presence in Asia is limited to the central deserts of Iran, though there have been unconfirmed reports of sightings in Afghanistan, Iraq and Pakistan in the last few decades. The global population of cheetahs was estimated at nearly 7,100 mature individuals in 2016. The Iranian population appears to have decreased from 60 to 100 individuals in 2007 to 43 in 2016, distributed in three subpopulations over less than in Iran's central plateau. The largest population of nearly 4,000 individuals is sparsely distributed over Angola, Botswana, Mozambique, Namibia, South Africa and Zambia. Another population in Kenya and Tanzania comprises about 1,000 individuals. All other cheetahs occur in small, fragmented groups of less than 100 individuals each. Populations are thought to be declining.
Ecology and behaviour
Cheetahs are active mainly during the day, whereas other carnivores such as leopards and lions are active mainly at night. These larger carnivores can kill cheetahs and steal their kills; hence, the diurnal tendency of cheetahs helps them avoid larger predators in areas where they are sympatric, such as the Okavango Delta. In areas where the cheetah is the major predator (such as farmlands in Botswana and Namibia), activity tends to increase at night. This may also happen in highly arid regions such as the Sahara, where daytime temperatures can reach . The lunar cycle can also influence the cheetah's routine—activity might increase on moonlit nights as prey can be sighted easily, though this comes with the danger of encountering larger predators. Hunting is the major activity throughout the day, with peaks during dawn and dusk. Groups rest in grassy clearings after dusk. Cheetahs often inspect their vicinity at observation points such as elevations to check for prey or larger carnivores; even while resting, they take turns at keeping a lookout.
Social organisation
Cheetahs have a flexible and complex social structure and tend to be more gregarious than several other cat species. Individuals typically avoid one another but are generally amicable; males may fight over territories or access to females in oestrus, and on rare occasions such fights can result in severe injury and death. Females are not social and have minimal interaction with other individuals, barring the interaction with males when they enter their territories or during the mating season. Some females, generally mother and offspring or siblings, may rest beside one another during the day. Females tend to lead a solitary life or live with offspring in undefended home ranges; young females often stay close to their mothers for life but young males leave their mother's range to live elsewhere.
Some males are territorial, and group together for life, forming coalitions that collectively defend a territory which ensures maximum access to females—this is unlike the behaviour of the male lion who mates with a particular group (pride) of females. In most cases, a coalition will consist of brothers born in the same litter who stayed together after weaning, but biologically unrelated males are often allowed into the group; in the Serengeti, 30% of members in coalitions are unrelated males. If a cub is the only male in a litter, he will typically join an existing group, or form a small group of solitary males with two or three other lone males who may or may not be territorial. In the Kalahari Desert around 40% of the males live in solitude.
Males in a coalition are affectionate toward each other, grooming mutually and calling out if any member is lost; unrelated males may face some aversion in their initial days in the group. All males in the coalition typically have equal access to kills when the group hunts together, and possibly also to females who may enter their territory. A coalition generally has a greater chance of encountering and acquiring females for mating; however, its large membership demands greater resources than do solitary males. A 1987 study showed that solitary and grouped males have a nearly equal chance of coming across females, but the males in coalitions are notably healthier and have better chances of survival than their solitary counterparts.
Male cheetahs seem to be more tolerant to cubs that are not their offspring than other felids, and supposed evidence of infanticide is considered circumstantial.
Home ranges and territories
Unlike many other felids, female cheetahs tend to occupy larger areas than males. Females typically disperse over large areas in pursuit of prey, but they are less nomadic and roam in a smaller area if prey availability in the area is high. As such, the size of their home range depends on the distribution of prey in a region. In central Namibia, where most prey species are sparsely distributed, home ranges average , whereas in the woodlands of the Phinda Game Reserve (South Africa), which have plentiful prey, home ranges are in size. Cheetahs can travel long stretches overland in search of food; a study in the Kalahari Desert recorded an average displacement of nearly every day, with walking speeds ranging between .
Males are generally less nomadic than females; often males in coalitions (and sometimes solitary males staying far from coalitions) establish territories. Whether males settle in territories or disperse over large areas forming home ranges depends primarily on the movements of females. Territoriality is preferred only if females tend to be more sedentary, which is more feasible in areas with plenty of prey. Some males, called floaters, switch between territoriality and nomadism depending on the availability of females. A 1987 study showed territoriality depended on the size and age of males and the membership of the coalition. The ranges of floaters averaged in the Serengeti to in central Namibia. In the Kruger National Park (South Africa) territories were much smaller. A coalition of three males occupied a territory measuring , and the territory of a solitary male measured . When a female enters a territory, the males will surround her; if she tries to escape, the males will bite or snap at her. Generally, the female can not escape on her own; the males themselves leave after they lose interest in her. They may smell the spot she was sitting or lying on to determine if she was in oestrus.
Communication
The cheetah is a vocal felid with a broad repertoire of calls and sounds; the acoustic features and the use of many of these have been studied in detail. The vocal characteristics, such as the way they are produced, are often different from those of other cats. For instance, a study showed that exhalation is louder than inhalation in cheetahs, while no such distinction was observed in the domestic cat. Listed below are some commonly recorded vocalisations observed in cheetahs:
Chirping: A chirp (or a "stutter-bark") is an intense bird-like call and lasts less than a second. Cheetahs chirp when they are excited, for instance, when gathered around a kill. Other uses include summoning concealed or lost cubs by the mother, or as a greeting or courtship between adults. The cheetah's chirp is similar to the soft roar of the lion, and its churr to the latter's loud roar. A similar but louder call ('yelp') can be heard from up to away; this call is typically used by mothers to locate lost cubs, or by cubs to find their mothers and siblings.
Churring (or churtling): A churr is a shrill, staccato call that can last up to two seconds. Churring and chirping have been noted for their similarity to the soft and loud roars of the lion. It is produced in a similar context to chirping, but a study of feeding cheetahs found chirping to be much more common.
Purring: Similar to purring in domestic cats but much louder, it is produced when the cheetah is content, and as a form of greeting or when licking one another. It involves continuous sound production alternating between egressive and ingressive airstreams.
Agonistic sounds: These include bleating, coughing, growling, hissing, meowing and moaning (or yowling). A bleat indicates distress, for instance when a cheetah confronts a predator that has stolen its kill. Growls, hisses and moans are accompanied by multiple, strong hits on the ground with the front paw, during which the cheetah may retreat by a few metres. A meow, though a versatile call, is typically associated with discomfort or irritation.
Other vocalisations: Individuals can make a gurgling noise as part of a close, amicable interaction. A "nyam nyam" sound may be produced while eating. Apart from chirping, mothers can use a repeated "ihn ihn" to gather cubs, and a "prr prr" to guide them on a journey. A low-pitched alarm call is used to warn the cubs to stand still. Bickering cubs can let out a "whirr"—the pitch rises with the intensity of the quarrel and ends on a harsh note.
Another major means of communication is by scent—the male will often raise his tail and spray urine on elevated landmarks such as tree trunks, stumps or rocks; other cheetahs will sniff these landmarks and repeat the ritual. Females may also show marking behaviour, but less prominently than males do. Females in oestrus will show maximum urine-marking, and their excrement can attract males from far off. In Botswana, ranchers seeking to protect their livestock frequently capture cheetahs by setting traps at traditional marking spots; the calls of a trapped cheetah can attract more cheetahs to the place. An analysis of camera traps at scent-marking sites in north-central Namibia found that cheetahs defecate on marking sites much more frequently than leopards.
Touch and visual cues are other ways of signalling in cheetahs. Social meetings involve mutual sniffing of the mouth, anus and genitals. Individuals will groom one another, lick each other's faces and rub cheeks. However, they seldom lean on or rub their flanks against each other. The tear streaks on the face can sharply define expressions at close range. Mothers probably use the alternate light and dark rings on the tail to signal their cubs to follow them.
Diet and hunting
The cheetah is a carnivore that hunts small to medium-sized prey weighing , but mostly less than . Its primary prey are medium-sized ungulates. They are the major component of the diet in certain areas, such as Dama and Dorcas gazelles in the Sahara, impala in the eastern and southern African woodlands, springbok in the arid savannas to the south and Thomson's gazelle in the Serengeti. Smaller antelopes like the common duiker are frequent prey in the southern Kalahari. Larger ungulates are typically avoided, though nyala, whose males weigh around , were found to be the major prey in a study in the Phinda Game Reserve. In Namibia cheetahs are the major predators of livestock. The diet of the Asiatic cheetah consists of chinkara, desert hare, goitered gazelle, urial, wild goats, and livestock; in India cheetahs used to prey mostly on blackbuck.
Prey preferences and hunting success vary with the age, sex and number of cheetahs involved in the hunt and on the vigilance of the prey. Generally, only groups of cheetahs (coalitions or mother and cubs) try to kill larger prey; mothers with cubs especially look out for larger prey and tend to be more successful than females without cubs. Individuals on the periphery of the prey herd are common targets; vigilant prey which would react quickly on seeing the cheetah are not preferred.
The cheetah is one of the most iconic pursuit predators, hunting primarily throughout the day, sometimes with peaks at dawn and dusk; it tends to avoid larger predators like the primarily nocturnal lion.
There is one record of a cheetah cooperating with black-backed jackals to bring down prey.
Cheetahs in the Sahara and Maasai Mara in Kenya hunt after sunset to escape the high temperatures of the day. Cheetahs use their vision to hunt instead of their sense of smell; they keep a lookout for prey from resting sites or low branches. The cheetah stalks its prey, trying to conceal itself in cover, and approaches as close as possible, often to within of the prey (or even closer for less alert prey). Alternatively the cheetah can lie hidden in cover and wait for the prey to come nearer. A stalking cheetah assumes a partially crouched posture, with the head lower than the shoulders; it moves slowly and remains still at times. In areas of minimal cover, the cheetah may approach within of the prey and start the chase. A chase lasts 37.9 seconds on average. In a 2013 study, the length of chases averaged , and the longest run measured . The cheetah can give up the hunt if it is detected by the prey early or if it cannot make a kill quickly. Having less muscular forelimbs than other predators, cheetahs lack the strength to wrestle prey to the ground; instead, they use the well-developed dewclaw on the forepaws to hook the limbs or rump of prey in full flight, disrupting its balance and causing it to fall, which allows the cheetah to pounce on it. Such a fall during a high-speed chase may cause the prey to collapse hard enough to break some of its limbs.
Cheetahs can decelerate dramatically toward the end of the hunt, slowing down from to in just three strides, and can easily follow any twists and turns the prey makes as it tries to flee. To kill medium- to large-sized prey, the cheetah bites the prey's throat to strangle it, maintaining the bite for around five minutes, within which the prey succumbs to asphyxiation and stops struggling. A bite on the nape of the neck or the snout (and sometimes on the skull) suffices to kill smaller prey. Cheetahs have an average hunting success rate of 25–40%, higher for smaller and more vulnerable prey.
Once the hunt is over, the prey is taken near a bush or under a tree; the cheetah, highly exhausted after the chase, rests beside the kill and pants heavily for five to 55 minutes. Meanwhile, nearby cheetahs that did not take part in the hunt might feed on the kill immediately. Groups of cheetahs consume the kill peacefully, though minor noises and snapping may be observed. Cheetahs can consume large quantities of food; a cheetah at the Etosha National Park (Namibia) was found to consume as much as within two hours. However, on a daily basis, a cheetah feeds on around of meat. Cheetahs, especially mothers with cubs, remain cautious even as they eat, pausing to look around for vultures and predators who may steal the kill.
Cheetahs move their heads from side to side so that the blade-like carnassial teeth tear the flesh, which can then be swallowed without chewing. They typically begin with the hindquarters, where the tissue is softest, and then progress toward the abdomen and the spine. Ribs are chewed on at the ends, and the limbs are not generally torn apart while eating. Unless the prey is very small, the skeleton is left almost intact after feeding on the meat. Cheetahs have been reported to lose 9–14% of their kills to larger and stronger predators. Unlike African wild dogs, cheetahs could cope with losing 25% of their kills, needing to spend only 4 hours per day hunting to recover the lost energy; their high-speed, short-duration pursuits would make them energetically flexible. To defend itself or its prey, a cheetah holds its body low to the ground and snarls with its mouth wide open, the eyes staring threateningly ahead and the ears folded backward. This may be accompanied by moans, hisses and growls, and hitting the ground with the forepaws. Although uncommon, cases of wild cheetahs scavenging carcasses that they did not hunt themselves have been observed; there is even one record of a cheetah mother and her three 15-month-old cubs stealing a topi kill from a spotted hyena. The causes of this scavenging behaviour are unclear.
Cheetahs appear to have a comparatively higher hunting success rate than other predators. Their success rate when hunting Thomson's gazelles is 70%, compared with 57% for African wild dogs, 33% for spotted hyenas and 26% for lions. Their success rate when hunting impalas is 26%, against only 15.5% for African wild dogs.
Reproduction and life cycle
The cheetah breeds throughout the year; females are polyestrous and induced ovulators with an estrous cycle of 12 days on average that can vary from three days to a month. They have their first litter at two to three years of age and can conceive again 17 to 20 months after giving birth, or even sooner if a whole litter is lost. Males can breed at less than two years of age in captivity, but this may be delayed in the wild until the male acquires a territory. A 2007 study showed that females who gave birth to more litters early in their life often died younger, indicating a trade-off between longevity and yearly reproductive success.
Urine-marking in males can become more pronounced when a female in their vicinity comes into estrus. Males, sometimes even those in coalitions, fight among one another to secure access to the female. Often one male will eventually win dominance over the others and mate with the female, though a female can mate with different males. Mating begins with the male approaching the female, who lies down on the ground; individuals often chirp, purr or yelp at this time. No courtship behaviour is observed; the male immediately secures hold of the female's nape, and copulation takes place. The pair then ignore each other, but meet and copulate three to five times a day for the next two to three days before finally parting ways.
After a gestation of nearly three months, a litter of one to eight cubs is born (though those of three to four cubs are more common). Births take place at 20–25 minute intervals in a sheltered place such as thick vegetation. The eyes are shut at birth, and open in four to 11 days. Newborn cubs might spit a lot and make soft churring noises; they start walking by two weeks. Their nape, shoulders and back are thickly covered with long bluish-grey hair, called a mantle, which gives them a mohawk-type appearance; this fur is shed as the cheetah grows older. A study suggested that this mane gives a cheetah cub the appearance of a honey badger, and could act as camouflage from attacks by these badgers or predators that tend to avoid them.
Compared to other felids, cheetah cubs are highly vulnerable to several predators during the first few weeks of their life. Mothers keep their cubs hidden in dense vegetation for the first two months and nurse in the early morning. The mother is extremely vigilant at this stage; she stays within of the lair, frequently visits her cubs, moves them every five to six days, and remains with them after dark. Despite trying to make minimal noise, she cannot generally defend her litter from predators. Predation is the leading cause of mortality in cheetah cubs; a study showed that in areas with a low density of predators (such as Namibian farmlands) around 70% of the cubs make it beyond the age of 14 months, whereas in areas like the Serengeti National Park, where several large carnivores exist, the survival rate was just 17%. Deaths also occur from starvation if their mothers abandon them, fires, or pneumonia because of exposure to bad weather. Generation length of the cheetah is six years. The overall juvenile survival rate for cheetahs is 35.7% in the Kgalagadi Transfrontier Park and 34.3% in the Kalahari, compared to a juvenile survival rate of 37% for leopards in the Sabi Sand Game Reserve; high juvenile mortality appears to be a natural part of population dynamics among predators.
Cubs start coming out of the lair at two months of age, trailing after their mother wherever she goes. At this point the mother nurses less and brings solid food to the cubs; they retreat away from the carcass in fear initially, but gradually start eating it. The cubs might purr as the mother licks them clean after the meal. Weaning occurs at four to six months. To train her cubs in hunting, the mother will catch and let go of live prey in front of her cubs. Cubs' play behaviour includes chasing, crouching, pouncing and wrestling; there is plenty of agility, and attacks are seldom lethal. Playing can improve catching skills in cubs, though the ability to crouch and hide may not develop remarkably.
Cubs as young as six months try to capture small prey like hares and young gazelles. However, they may have to wait until as long as 15 months of age to make a successful kill on their own. At around 20 months, offspring become independent; mothers might have conceived again by then. Siblings may remain together for a few more months before parting ways. While females stay close to their mothers, males move farther off. The lifespan of wild cheetahs is 14 to 15 years for females, and their reproductive cycle typically ends by 12 years of age; males generally live as long as ten years.
Competition
Although cheetahs and spotted hyenas favor different prey, the latter will nevertheless steal cheetah kills with no difficulty, with George Schaller observing that cheetahs in the Serengeti lost 4% of their kills to them. Cheetahs, particularly females with cubs, may attempt to protect their kills from hyenas by making threatening vocalizations and lunges, but may retreat if the larger carnivores persist, though exceptions have occurred.
In Iran, cheetahs compete with leopards for chinkara, bezoar ibex and urial. One study undertaken in the Bafq Protected Area found that cheetahs avoided leopards by occupying lower elevations, though one cheetah was nevertheless killed by a leopard during the study. Cheetah mothers have however been observed to drive off leopards threatening their cubs. In north-central Namibia, cheetahs and leopards sometimes visit the same scent marking sites, though they avoid interacting with each other by marking at different times, with cheetahs visiting such sites at night, while leopards do so in daylight.
Threats
The cheetah is threatened by several factors, like habitat loss and fragmentation of populations. Habitat loss is caused mainly by the introduction of commercial land use, such as agriculture and industry. It is further aggravated by ecological degradation, like woody plant encroachment, which is common in southern Africa. Moreover, the species apparently requires a sizeable area to live in as indicated by its low population densities. Shortage of prey and conflict with other species such as humans and large carnivores are other major threats. The cheetah appears to be less capable of coexisting with humans than the leopard. With 76% of its range consisting of unprotected land, the cheetah is often targeted by farmers and pastoralists who attempt to protect their livestock, especially in Namibia. Illegal wildlife trade and trafficking is another problem in some places (like Ethiopia). Some tribes, like the Maasai people in Tanzania, have been reported to use cheetah skins in ceremonies. Roadkill is a threat in areas where roads have been constructed in natural habitats or through protected areas; roadkilled cheetahs were found in Kalmand, Touran National Park and Bafq in Iran. The reduced genetic variability makes cheetahs more vulnerable to diseases; however, the threat posed by infectious diseases may be minor, given the low population densities and hence a reduced chance of infection.
Conservation
The cheetah has been classified as Vulnerable on the IUCN Red List; it is listed under Appendix I of the Convention on the Conservation of Migratory Species of Wild Animals and Appendix I of the Convention on International Trade in Endangered Species. The United States Endangered Species Act lists the cheetah as Endangered.
In Africa
Until the 1970s, cheetahs and other carnivores were frequently killed to protect livestock in Africa. Gradually the understanding of cheetah ecology increased and their falling numbers became a matter of concern. The De Wildt Cheetah and Wildlife Centre was set up in 1971 in South Africa to provide care for wild cheetahs regularly trapped or injured by Namibian farmers. By 1987, the first major research project to outline cheetah conservation strategies was underway. The Cheetah Conservation Fund, founded in 1990 in Namibia, has put effort into field research and into educating the public about cheetahs worldwide. The CCF runs a cheetah genetics laboratory, the only one of its kind, in Otjiwarongo (Namibia); "Bushblok" is an initiative to restore habitat systematically through targeted bush thinning and biomass utilisation. Several more cheetah-specific conservation programmes have since been established, such as Cheetah Outreach in South Africa.
The Global Cheetah Action Plan Workshop in 2002 laid emphasis on the need for a range-wide survey of wild cheetahs to demarcate areas for conservation efforts and on creating awareness through training programs. The Range Wide Conservation Program for Cheetah and African Wild Dogs began in 2007 as a joint initiative of the IUCN Cat and Canid Specialist Groups, the Wildlife Conservation Society and the Zoological Society of London. National conservation plans have been developed for several African countries. In 2014, the CITES Standing Committee recognised the cheetah as a "species of priority" in their strategies in northeastern Africa to counter wildlife trafficking. In December 2016, the results of an extensive survey detailing the distribution and demography of cheetahs throughout the range were published; the researchers recommended listing the cheetah as Endangered on the IUCN Red List.
The cheetah was reintroduced in Malawi in 2017.
In Asia
In 2001, the Iranian government collaborated with the CCF, the IUCN, Panthera Corporation, United Nations Development Programme and the Wildlife Conservation Society on the Conservation of Asiatic Cheetah Project (CACP) to protect the natural habitat of the Asiatic cheetah and its prey. In 2004, the Iranian Centre for Sustainable Development (CENESTA) conducted an international workshop to discuss conservation plans with local stakeholders. Iran declared 31 August as National Cheetah Day in 2006. The Iranian Cheetah Strategic Planning meeting in 2010 formulated a five-year conservation plan for Asiatic cheetahs. The CACP Phase II was implemented in 2009, and the third phase was drafted in 2018.
During the early 2000s scientists from the Centre for Cellular and Molecular Biology (Hyderabad) proposed a plan to clone Asiatic cheetahs from Iran for reintroduction in India, but Iran denied the proposal. In September 2009, the Minister of Environment and Forests assigned the Wildlife Trust of India and the Wildlife Institute of India the task of examining the potential of importing African cheetahs to India. Kuno Wildlife Sanctuary and Nauradehi Wildlife Sanctuary were suggested as reintroduction sites for the cheetah because of their high prey density. However, plans for reintroduction were stalled in May 2012 by the Supreme Court of India because of a political dispute and concerns over introducing a non-native species to the country. Opponents stated the plan was "not a case of intentional movement of an organism into a part of its native range". On 28 January 2020, the Supreme Court allowed the central government to introduce cheetahs to a suitable habitat in India on an experimental basis to see if they can adapt to it. In 2020, India signed a memorandum of understanding with Namibia as part of Project Cheetah. In July 2022, it was announced that eight cheetahs would be transferred from Namibia to India in August. The eight cheetahs were released into Kuno National Park on 17 September 2022. Since their introduction, the cheetahs have given birth to 17 cubs. However, by September 2024, eight adult cheetahs and four cubs had already died.
Interaction with humans
Taming
The cheetah shows little aggression toward humans, and can be tamed easily, as it has been since antiquity. The earliest known depictions of the cheetah are from the Chauvet Cave in France, dating back to 32,000–26,000 BC. According to historians such as Heinz Friederichs and Burchard Brentjes, the cheetah was first tamed in Sumer, and taming then gradually spread to central and northern Africa, from where it reached India. The evidence for this is mainly pictorial; for instance, a Sumerian seal dating back to , featuring a long-legged leashed animal, has fueled speculation that the cheetah was first tamed in Sumer. However, Thomas Allsen argues that the depicted animal might be a large dog. Other historians, such as Frederick Zeuner, have opined that ancient Egyptians were the first to tame the cheetah, from where it gradually spread into central Asia, Iran and India.
In comparison, theories of the cheetah's taming in Egypt are stronger and include timelines proposed on this basis. Mafdet, one of the ancient Egyptian deities worshiped during the First Dynasty (3100–2900 BC), was sometimes depicted as a cheetah. Ancient Egyptians believed the spirits of deceased pharaohs were taken away by cheetahs. Reliefs in the Deir el-Bahari temple complex tell of an expedition by Egyptians to the Land of Punt during the reign of Hatshepsut (1507–1458 BC) that fetched, among other things, animals called "panthers". During the New Kingdom (16th to 11th centuries BC), cheetahs were common pets for royalty, who adorned them with ornate collars and leashes. Rock carvings depicting cheetahs dating back to 2000–6000 years ago have been found in Twyfelfontein; little else has been discovered in connection to the taming of cheetahs (or other cats) in southern Africa.
Hunting cheetahs are known in pre-Islamic Arabic art from Yemen. Hunting with cheetahs became more prevalent toward the seventh century AD. In the Middle East, the cheetah would accompany the nobility to hunts in a special seat on the back of the saddle. Taming was an elaborate process and could take a year to complete. The Romans may have referred to the cheetah as the () or (), believing it to be a hybrid between a leopard and a lion because of the mantle seen in cheetah cubs and the difficulty of breeding them in captivity. A Roman hunting cheetah is depicted in a 4th-century mosaic from Lod, Israel. Cheetahs continued to be used into the Byzantine period of the Roman Empire, with "hunting leopards" being mentioned in the Cynegetica (283/284 AD).
In eastern Asia, records are confusing as regional names for the leopard and the cheetah may be used interchangeably. The earliest depiction of cheetahs from eastern Asia dates back to the Tang dynasty (7th to 10th centuries AD); paintings depict tethered cheetahs and cheetahs mounted on horses. Chinese emperors would use cheetahs and caracals as gifts. In the 13th and 14th centuries, the Yuan rulers bought numerous cheetahs from the western parts of the empire and from Muslim merchants. According to the , the subsequent Ming dynasty (14th to 17th centuries) continued this practice. Tomb figurines from the Mongol empire, dating back to the reign of Kublai Khan (1260–1294 AD), represent cheetahs on horseback. The Mughal ruler Akbar the Great (1556–1605 AD) is said to have kept as many as 1000 khasa (imperial) cheetahs. His son Jahangir wrote in his memoirs, Tuzk-e-Jahangiri, that only one of them gave birth. Mughal rulers trained cheetahs and caracals in a similar way to the western Asians, and used them to hunt game, especially blackbuck. The rampant hunting severely affected the populations of wild animals in India; by 1927, cheetahs had to be imported from Africa.
In captivity
The first cheetah to be brought into captivity in a zoo was at the Zoological Society of London in 1829. Early captive cheetahs showed a high mortality rate, with an average lifespan of 3–4 years. After trade in wild cheetahs was restricted by the enforcement of CITES in 1975, more effort was put into breeding cheetahs in captivity; in 2014 the number of captive cheetahs worldwide was estimated at 1,730 individuals, with 87% born in captivity.
Mortality in captivity is generally high; in 2014, 23% of the captive cheetahs worldwide died under one year of age, mostly within a month of birth. A comparative study from 1985 found that cheetah cub mortality, at 24%, was generally lower than the average of 33% for 17 carnivore mammals, including nine felids; cheetah cub mortality was the second lowest among the felids. Deaths result from several causes—stillbirths, birth defects, cannibalism, hypothermia, maternal neglect and infectious diseases. Compared to other felids, cheetahs need specialised care because of their higher vulnerability to stress-induced diseases; this has been attributed to their low genetic variability and to factors of captive life. Common diseases of cheetahs include feline herpesvirus, feline infectious peritonitis, gastroenteritis, glomerulosclerosis, leukoencephalopathy, myelopathy, nephrosclerosis and veno-occlusive disease. High densities of cheetahs in one place, closeness to other large carnivores in enclosures, improper handling, exposure to the public and frequent movement between zoos can all be sources of stress. Recommended management practices include spacious enclosures with ample access to the outdoors, minimising stress through exercise and limited handling, and following proper hand-rearing protocols (especially for pregnant females).
Wild cheetahs are far more successful breeders than captive cheetahs; this has also been linked to increased stress levels in captive individuals. In a study in the Serengeti, females were found to have a 95% success rate in breeding, compared to 20% recorded for North American captive cheetahs in another study. On 26 November 2017, a female cheetah gave birth to eight cubs at the Saint Louis Zoo, setting a record for the most births recorded by the Association of Zoos and Aquariums. Chances of successful mating in captive males can be improved by replicating social groups such as coalitions observed in the wild.
Attacks on humans
There are no documented records of lethal attacks on humans by wild cheetahs. However, there have been instances of people being fatally mauled by captive cheetahs. In 2007, a 37-year-old woman from Antwerp was killed by a cheetah in a Belgian zoo after sneaking into its cage outside of visiting hours ("Woman killed by cheetah in Belgian zoo", Sydney Morning Herald, 13 February 2007; retrieved 23 October 2023). In 2017, a three-year-old child was attacked by a captive cheetah on a farm in Philippolis, South Africa. Despite being airlifted to a hospital in Bloemfontein, the boy died from his injuries.
In culture
The cheetah has been widely portrayed in a variety of artistic works. In Bacchus and Ariadne, an oil painting by the 16th-century Italian painter Titian, the chariot of the Greek god Dionysus (Bacchus) is depicted as being drawn by two cheetahs. The cheetahs in the painting were previously considered to be leopards. In 1764, English painter George Stubbs commemorated the gifting of a cheetah to George III by the English Governor of Madras, Sir George Pigot in his painting Cheetah with Two Indian Attendants and a Stag. The painting depicts a cheetah, hooded and collared by two Indian servants, along with a stag it was supposed to prey upon. The 1896 painting The Caress by the 19th-century Belgian symbolist painter Fernand Khnopff is a representation of the myth of Oedipus and the Sphinx and portrays a creature with a woman's head and a cheetah's body.
According to theologian Philip Schaff, the "leopard" mentioned in Habakkuk 1:8 could actually be a cheetah.
Two cheetahs are depicted standing upright and supporting a crown in the coat of arms of the Free State (South Africa).
In 1969, Joy Adamson, of Born Free fame, wrote The Spotted Sphinx, a biography of her pet cheetah Pippa. Hussein, An Entertainment, a novel by Patrick O'Brian set in the British Raj period in India, illustrates the practice of royalty keeping and training cheetahs to hunt antelopes. The book How It Was with Dooms tells the true story of a family raising an orphaned cheetah cub named Dooms in Kenya. The 2005 film Duma was based loosely on this book. The animated series ThunderCats had a character named "Cheetara", an anthropomorphic cheetah, voiced by Lynne Lipton. Comic book heroine Wonder Woman's chief adversary is Barbara Ann Minerva alias The Cheetah.
The Bill Thomas Cheetah American racing car, a Chevrolet-based coupe first designed and driven in 1963, was never homologated for competition beyond prototype status; its production ended in 1966. In 1986, Frito-Lay introduced Chester Cheetah, an anthropomorphic cheetah, as the mascot for their snack food Cheetos. Mac OS X 10.0 was code-named "Cheetah".
See also
List of largest cats
References
Further reading
External links
An Emotional Support Dog Is the Only Thing That Chills Out a Cheetah August 19, 2019, Atlas Obscura
Epigenetics
Epigenetics is the study of changes in gene expression that occur without altering the DNA sequence. The Greek prefix epi- (ἐπι- "over, outside of, around") in epigenetics implies features that are "on top of" or "in addition to" the traditional DNA-sequence-based mechanism of inheritance. Epigenetics usually involves changes that persist through cell division, and affect the regulation of gene expression. Such effects on cellular and physiological traits may result from environmental factors, or be part of normal development.
The term also refers to the mechanism behind these changes: functionally relevant alterations to the genome that do not involve mutations in the nucleotide sequence. Examples of mechanisms that produce such changes are DNA methylation and histone modification, each of which alters how genes are expressed without altering the underlying DNA sequence. Further, non-coding RNA sequences have been shown to play a key role in the regulation of gene expression. Gene expression can be controlled through the action of repressor proteins that attach to silencer regions of the DNA. These epigenetic changes may last through cell divisions for the duration of the cell's life, and may also last for multiple generations, even though they do not involve changes in the underlying DNA sequence of the organism; instead, non-genetic factors cause the organism's genes to behave (or "express themselves") differently.
One example of an epigenetic change in eukaryotic biology is the process of cellular differentiation. During morphogenesis, totipotent stem cells become the various pluripotent cell lines of the embryo, which in turn become fully differentiated cells. In other words, as a single fertilized egg cell – the zygote – continues to divide, the resulting daughter cells develop into the different cell types in an organism, including neurons, muscle cells, epithelium, endothelium of blood vessels, etc., by activating some genes while inhibiting the expression of others.
Definitions
The term epigenesis has a generic meaning of "extra growth" that has been used in English since the 17th century. (Oxford English Dictionary: "The word is used by W. Harvey, Exercitationes 1651, p. 148, and in the English Anatomical Exercitations 1653, p. 272. It is explained to mean 'partium super-exorientium additamentum', 'the additament of parts budding one out of another'.") In scientific publications, the term epigenetics started to appear in the 1930s. However, its contemporary meaning emerged only in the 1990s.
A definition of the concept of epigenetic trait as a "stably heritable phenotype resulting from changes in a chromosome without alterations in the DNA sequence" was formulated at a Cold Spring Harbor meeting in 2008, although alternate definitions that include non-heritable traits are still being used widely.
Waddington's canalisation, 1940s
The hypothesis of epigenetic changes affecting the expression of chromosomes was put forth by the Russian biologist Nikolai Koltsov.Morange M. La tentative de Nikolai Koltzoff (Koltsov) de lier génétique, embryologie et chimie physique, J. Biosciences. 2011. V. 36. P. 211-214 From the generic meaning, and the associated adjective epigenetic, British embryologist C. H. Waddington coined the term epigenetics in 1942 as pertaining to epigenesis, in parallel to Valentin Haecker's 'phenogenetics' ()."For the purpose of a study of inheritance, the relation between phenotypes and genotypes [...] is, from a wider biological point of view, of crucial importance, since it is the kernel of the whole problem of development." Epigenesis in the context of the biology of that period referred to the differentiation of cells from their initial totipotent state during embryonic development.See preformationism for historical background. Oxford English Dictionary:
"the theory that the germ is brought into existence (by successive accretions), and not merely developed, in the process of reproduction. [...] The opposite theory was formerly known as the 'theory of evolution'; to avoid the ambiguity of this name, it is now spoken of chiefly as the 'theory of preformation', sometimes as that of 'encasement' or 'emboîtement'."
When Waddington coined the term, the physical nature of genes and their role in heredity was not known. He used it instead as a conceptual model of how genetic components might interact with their surroundings to produce a phenotype; he used the phrase "epigenetic landscape" as a metaphor for biological development. Waddington held that cell fates were established during development in a process he called canalisation much as a marble rolls down to the point of lowest local elevation. Waddington suggested visualising increasing irreversibility of cell type differentiation as ridges rising between the valleys where the marbles (analogous to cells) are travelling.
In recent times, Waddington's notion of the epigenetic landscape has been rigorously formalized in the context of the systems dynamics state approach to the study of cell fate. Cell-fate determination is predicted to exhibit certain dynamics, such as attractor convergence (the attractor can be an equilibrium point, a limit cycle or a strange attractor) or oscillatory dynamics.
Contemporary
In 1990, Robin Holliday defined epigenetics as "the study of the mechanisms of temporal and spatial control of gene activity during the development of complex organisms."
More recent usage of the word in biology follows stricter definitions. As defined by Arthur Riggs and colleagues, it is "the study of mitotically and/or meiotically heritable changes in gene function that cannot be explained by changes in DNA sequence."
The term has also been used, however, to describe processes which have not been demonstrated to be heritable, such as some forms of histone modification. Consequently, there are attempts to redefine "epigenetics" in broader terms that would avoid the constraints of requiring heritability. For example, Adrian Bird defined epigenetics as "the structural adaptation of chromosomal regions so as to register, signal or perpetuate altered activity states." This definition would be inclusive of transient modifications associated with DNA repair or cell-cycle phases as well as stable changes maintained across multiple cell generations, but exclude others such as templating of membrane architecture and prions unless they impinge on chromosome function. Such redefinitions however are not universally accepted and are still subject to debate. The NIH "Roadmap Epigenomics Project", which ran from 2008 to 2017, uses the following definition: "For purposes of this program, epigenetics refers to both heritable changes in gene activity and expression (in the progeny of cells or of individuals) and also stable, long-term alterations in the transcriptional potential of a cell that are not necessarily heritable." In 2008, a consensus definition of the epigenetic trait, a "stably heritable phenotype resulting from changes in a chromosome without alterations in the DNA sequence," was made at a Cold Spring Harbor meeting.
The similarity of the word to "genetics" has generated many parallel usages. The "epigenome" is a parallel to the word "genome", referring to the overall epigenetic state of a cell, and epigenomics refers to global analyses of epigenetic changes across the entire genome. The phrase "genetic code" has also been adapted – the "epigenetic code" has been used to describe the set of epigenetic features that create different phenotypes in different cells from the same underlying DNA sequence. Taken to its extreme, the "epigenetic code" could represent the total state of the cell, with the position of each molecule accounted for in an epigenomic map, a diagrammatic representation of the gene expression, DNA methylation and histone modification status of a particular genomic region. More typically, the term is used in reference to systematic efforts to measure specific, relevant forms of epigenetic information such as the histone code or DNA methylation patterns.
Mechanisms
Covalent modifications of either DNA (e.g. cytosine methylation and hydroxymethylation) or of histone proteins (e.g. lysine acetylation, lysine and arginine methylation, serine and threonine phosphorylation, and lysine ubiquitination and sumoylation) play central roles in many types of epigenetic inheritance. Therefore, the word "epigenetics" is sometimes used as a synonym for these processes. However, this can be misleading. Chromatin remodeling is not always inherited, and not all epigenetic inheritance involves chromatin remodeling. In 2019, a further lysine modification, lactylation, appeared in the scientific literature, linking epigenetic modification to cell metabolism.
Because the phenotype of a cell or individual is affected by which of its genes are transcribed, heritable transcription states can give rise to epigenetic effects. There are several layers of regulation of gene expression. One way that genes are regulated is through the remodeling of chromatin. Chromatin is the complex of DNA and the histone proteins with which it associates. If the way that DNA is wrapped around the histones changes, gene expression can change as well. Chromatin remodeling is accomplished through two main mechanisms:
The first way is post translational modification of the amino acids that make up histone proteins. Histone proteins are made up of long chains of amino acids. If the amino acids that are in the chain are changed, the shape of the histone might be modified. DNA is not completely unwound during replication. It is possible, then, that the modified histones may be carried into each new copy of the DNA. Once there, these histones may act as templates, initiating the surrounding new histones to be shaped in the new manner. By altering the shape of the histones around them, these modified histones would ensure that a lineage-specific transcription program is maintained after cell division.
The second way is the addition of methyl groups to the DNA, mostly at CpG sites, to convert cytosine to 5-methylcytosine. 5-Methylcytosine performs much like a regular cytosine, pairing with a guanine in double-stranded DNA. However, when methylated cytosines are present in CpG sites in the promoter and enhancer regions of genes, the genes are often repressed. When methylated cytosines are present in CpG sites in the gene body (in the coding region excluding the transcription start site) expression of the gene is often enhanced. Transcription of a gene usually depends on a transcription factor binding to a (10 base or less) recognition sequence at the enhancer that interacts with the promoter region of that gene (see Gene expression). About 22% of transcription factors are inhibited from binding when the recognition sequence has a methylated cytosine. In addition, presence of methylated cytosines at a promoter region can attract methyl-CpG-binding domain (MBD) proteins. All MBDs interact with nucleosome remodeling and histone deacetylase complexes, which leads to gene silencing. In addition, another covalent modification involving methylated cytosine is its demethylation by TET enzymes. Hundreds of such demethylations occur, for instance, during learning and memory forming events in neurons.
There is frequently a reciprocal relationship between DNA methylation and histone lysine methylation. For instance, the methyl binding domain protein MBD1, attracted to and associating with methylated cytosine in a DNA CpG site, can also associate with H3K9 methyltransferase activity to methylate histone 3 at lysine 9. On the other hand, DNA maintenance methylation by DNMT1 appears to partly rely on recognition of histone methylation on the nucleosome present at the DNA site to carry out cytosine methylation on newly synthesized DNA. There is further crosstalk between DNA methylation carried out by DNMT3A and DNMT3B and histone methylation so that there is a correlation between the genome-wide distribution of DNA methylation and histone methylation.
Mechanisms of heritability of histone state are not well understood; however, much is known about the mechanism of heritability of DNA methylation state during cell division and differentiation. Heritability of methylation state depends on certain enzymes (such as DNMT1) that have a higher affinity for 5-methylcytosine than for cytosine. If this enzyme reaches a "hemimethylated" portion of DNA (where 5-methylcytosine is in only one of the two DNA strands) the enzyme will methylate the other half. However, it is now known that DNMT1 physically interacts with the protein UHRF1. UHRF1 has been recently recognized as essential for DNMT1-mediated maintenance of DNA methylation. UHRF1 is the protein that specifically recognizes hemi-methylated DNA, therefore bringing DNMT1 to its substrate to maintain DNA methylation.
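To make the hemimethylation logic above concrete, here is a minimal Python sketch of the idea: after replication only the parental strand carries the 5-methylcytosine marks, and a DNMT1-like maintenance step copies each mark onto the daughter strand at the corresponding CpG site. This is a toy illustration under stated assumptions, not a model of the actual enzymology; the sequence and mark positions are invented.

```python
# Toy sketch of maintenance methylation (not a biochemical model): after
# replication the parental strand keeps its 5-methylcytosine marks, the new
# strand starts unmethylated (hemimethylated CpG sites), and a DNMT1-like
# step copies each parental mark onto the daughter strand at the same CpG.

def replicate(parent_marks: set) -> tuple:
    """Immediately after replication: parental marks persist, daughter has none."""
    return set(parent_marks), set()

def maintain(seq: str, parent_marks: set, daughter_marks: set) -> set:
    """Methylate the daughter strand at every CpG position that carries a
    mark on the parental strand (the hemimethylated sites)."""
    cpg_sites = {i for i in range(len(seq) - 1) if seq[i:i + 2] == "CG"}
    return daughter_marks | (parent_marks & cpg_sites)

seq = "TTCGATCGGGCGTACG"       # invented sequence with CpG sites at 2, 6, 10, 14
parent_marks = {2, 10}         # invented methylation pattern on the parent strand
parent, daughter = replicate(parent_marks)
daughter = maintain(seq, parent, daughter)
print(sorted(daughter))        # [2, 10]: the pattern is copied to the new strand
```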
Although histone modifications occur throughout the entire sequence, the unstructured N-termini of histones (called histone tails) are particularly highly modified. These modifications include acetylation, methylation, ubiquitylation, phosphorylation, sumoylation, ribosylation and citrullination. Acetylation is the most highly studied of these modifications. For example, acetylation of the K14 and K9 lysines of the tail of histone H3 by histone acetyltransferase enzymes (HATs) is generally related to transcriptional competence.
One mode of thinking is that this tendency of acetylation to be associated with "active" transcription is biophysical in nature. Because it normally has a positively charged nitrogen at its end, lysine can bind the negatively charged phosphates of the DNA backbone. The acetylation event converts the positively charged amine group on the side chain into a neutral amide linkage. This removes the positive charge, thus loosening the DNA from the histone. When this occurs, complexes like SWI/SNF and other transcriptional factors can bind to the DNA and allow transcription to occur. This is the "cis" model of the epigenetic function. In other words, changes to the histone tails have a direct effect on the DNA itself.
Another model of epigenetic function is the "trans" model. In this model, changes to the histone tails act indirectly on the DNA. For example, lysine acetylation may create a binding site for chromatin-modifying enzymes (or transcription machinery as well). This chromatin remodeler can then cause changes to the state of the chromatin. Indeed, a bromodomain – a protein domain that specifically binds acetyl-lysine – is found in many enzymes that help activate transcription, including the SWI/SNF complex. It may be that acetylation acts in this and the previous way to aid in transcriptional activation.
The idea that modifications act as docking modules for related factors is borne out by histone methylation as well. Methylation of lysine 9 of histone H3 has long been associated with constitutively transcriptionally silent chromatin (constitutive heterochromatin) (see bottom Figure). It has been determined that a chromodomain (a domain that specifically binds methyl-lysine) in the transcriptionally repressive protein HP1 recruits HP1 to K9 methylated regions. One example that seems to refute this biophysical model for methylation is that tri-methylation of histone H3 at lysine 4 is strongly associated with (and required for full) transcriptional activation (see top Figure). Tri-methylation, in this case, would introduce a fixed positive charge on the tail.
Histone lysine methyltransferases (KMTs) are responsible for this methylation activity on histones H3 and H4. These enzymes utilize a catalytically active site called the SET domain (Suppressor of variegation, Enhancer of zeste, Trithorax). The SET domain is a 130-amino-acid sequence involved in modulating gene activities. This domain has been demonstrated to bind to the histone tail and to catalyse methylation of the histone.
Differing histone modifications are likely to function in differing ways; acetylation at one position is likely to function differently from acetylation at another position. Also, multiple modifications may occur at the same time, and these modifications may work together to change the behavior of the nucleosome. The idea that multiple dynamic modifications regulate gene transcription in a systematic and reproducible way is called the histone code, although the idea that histone state can be read linearly as a digital information carrier has been largely debunked. One of the best-understood systems that orchestrate chromatin-based silencing is the SIR protein based silencing of the yeast hidden mating-type loci HML and HMR.
DNA methylation
DNA methylation often occurs in repeated sequences, and helps to suppress the expression and movement of transposable elements. Because 5-methylcytosine can spontaneously deaminate to thymine (deamination replaces the amine group with a carbonyl oxygen), CpG sites are frequently mutated and have become rare in the genome, except at CpG islands, where they typically remain unmethylated. Epigenetic changes of this type thus have the potential to direct increased frequencies of permanent genetic mutation. DNA methylation patterns are known to be established and modified in response to environmental factors by a complex interplay of at least three independent DNA methyltransferases, DNMT1, DNMT3A, and DNMT3B, the loss of any of which is lethal in mice. DNMT1 is the most abundant methyltransferase in somatic cells, localizes to replication foci, has a 10–40-fold preference for hemimethylated DNA and interacts with the proliferating cell nuclear antigen (PCNA).
By preferentially modifying hemimethylated DNA, DNMT1 transfers patterns of methylation to a newly synthesized strand after DNA replication, and therefore is often referred to as the 'maintenance' methyltransferase. DNMT1 is essential for proper embryonic development, imprinting and X-inactivation. To emphasize the difference of this molecular mechanism of inheritance from the canonical Watson-Crick base-pairing mechanism of transmission of genetic information, the term 'Epigenetic templating' was introduced. Furthermore, in addition to the maintenance and transmission of methylated DNA states, the same principle could work in the maintenance and transmission of histone modifications and even cytoplasmic (structural) heritable states.
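The CpG depletion described above is the basis of the standard computational test for CpG islands: within an island, the observed number of CpG dinucleotides stays close to what base composition predicts, whereas elsewhere it is strongly depleted. The sketch below is a minimal illustration assuming the commonly cited Gardiner-Garden and Frommer-style thresholds (GC content above 50% and an observed/expected CpG ratio above 0.6) as a rough island test; the example sequence is invented.

```python
# Minimal sketch: observed/expected CpG ratio, often used to flag CpG islands.
# Threshold values (GC > 50%, obs/exp > 0.6) follow the commonly cited
# Gardiner-Garden & Frommer-style definition; the sequence below is made up.

def cpg_obs_exp(seq):
    seq = seq.upper()
    n = len(seq)
    c, g = seq.count("C"), seq.count("G")
    cpg = seq.count("CG")                       # observed CpG dinucleotides
    expected = (c * g) / n if n else 0.0        # expected if C and G were independent
    gc_content = (c + g) / n if n else 0.0
    ratio = cpg / expected if expected else 0.0
    return gc_content, ratio

def looks_like_cpg_island(seq):
    gc, ratio = cpg_obs_exp(seq)
    return gc > 0.5 and ratio > 0.6

example = "CGCGGGCGCATCGCGGCCGCGTACGCGGGCGC"  # CpG-rich toy sequence
print(cpg_obs_exp(example), looks_like_cpg_island(example))
```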
RNA methylation
Methylation of RNA to form N6-methyladenosine (m6A), the most abundant modification of eukaryotic RNA, has recently been recognized as an important gene-regulatory mechanism.
In 2011, it was demonstrated that the methylation of mRNA plays a critical role in human energy homeostasis: the protein encoded by the obesity-associated FTO gene was shown to demethylate N6-methyladenosine in RNA.
Histone modifications
Histones H3 and H4 can also be manipulated through demethylation using histone lysine demethylase (KDM). This recently identified enzyme has a catalytically active site called the Jumonji domain (JmjC). The demethylation occurs when JmjC utilizes multiple cofactors to hydroxylate the methyl group, thereby removing it. JmjC is capable of demethylating mono-, di-, and tri-methylated substrates.
Chromosomal regions can adopt stable and heritable alternative states resulting in bistable gene expression without changes to the DNA sequence. Epigenetic control is often associated with alternative covalent modifications of histones. The stability and heritability of states of larger chromosomal regions are suggested to involve positive feedback, where modified nucleosomes recruit enzymes that similarly modify nearby nucleosomes. Simplified stochastic models of this type of positive feedback have been proposed.
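A minimal version of such a positive-feedback model can be simulated directly. The sketch below is a toy caricature, loosely inspired by published nucleosome-feedback models but with invented parameters and state encoding: each nucleosome is either "modified" or "unmodified", recruited conversion (adopting the state of another nucleosome in the domain) competes with rare random flips, and when noise is low the domain spends most of its time near fully modified or fully unmodified. Published models note that robust bistability additionally requires cooperative recruitment, which this linear caricature omits.

```python
# Toy stochastic model of a chromatin domain with positive feedback:
# recruited conversion competes with random noisy flips.
# Parameters and state encoding are invented for illustration.
import random

def simulate(n_nucleosomes=60, steps=20000, noise=0.01, seed=1):
    random.seed(seed)
    # state: 1 = modified (e.g., a silencing mark), 0 = unmodified
    state = [random.randint(0, 1) for _ in range(n_nucleosomes)]
    for _ in range(steps):
        i = random.randrange(n_nucleosomes)
        if random.random() > noise:
            # recruited conversion: adopt the state of a randomly chosen nucleosome,
            # mimicking enzymes recruited by already-modified (or unmodified) neighbours
            state[i] = state[random.randrange(n_nucleosomes)]
        else:
            # noisy conversion, independent of the rest of the domain
            state[i] = random.randint(0, 1)
    return sum(state) / n_nucleosomes  # fraction of modified nucleosomes

# With low noise the domain tends to sit near all-modified (about 1.0)
# or all-unmodified (about 0.0); raising `noise` washes the two states out.
print([round(simulate(seed=s), 2) for s in range(5)])
```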
It has been suggested that chromatin-based transcriptional regulation could be mediated by the effect of small RNAs. Small interfering RNAs can modulate transcriptional gene expression via epigenetic modulation of targeted promoters.
RNA transcripts
Sometimes a gene, once activated, transcribes a product that directly or indirectly sustains its own activity. For example, Hnf4 and MyoD enhance the transcription of many liver-specific and muscle-specific genes, respectively, including their own, through the transcription factor activity of the proteins they encode. Descendants of the cell in which such a gene was turned on will inherit this activity, even if the original stimulus for gene activation is no longer present. These genes are often turned on or off by signal transduction, although in some systems where syncytia or gap junctions are important, RNA may spread directly to other cells or nuclei by diffusion. RNA signalling also includes the differential recruitment of a hierarchy of generic chromatin-modifying complexes and DNA methyltransferases to specific loci by RNAs during differentiation and development. Other epigenetic changes are mediated by the production of different splice forms of RNA, or by the formation of double-stranded RNA (RNAi). A large amount of RNA and protein is contributed to the zygote by the mother during oogenesis or via nurse cells, resulting in maternal effect phenotypes. A smaller quantity of sperm RNA is transmitted from the father, but there is recent evidence that this epigenetic information can lead to visible changes in several generations of offspring.
MicroRNAs
MicroRNAs (miRNAs) are a class of non-coding RNAs that range in size from 17 to 25 nucleotides. miRNAs regulate a large variety of biological functions in plants and animals. As of 2013, about 2,000 miRNAs had been discovered in humans, and these can be found online in a miRNA database. Each miRNA expressed in a cell may target about 100 to 200 messenger RNAs (mRNAs) that it downregulates. Most of the downregulation of mRNAs occurs by causing the decay of the targeted mRNA, while some downregulation occurs at the level of translation into protein.
It appears that about 60% of human protein-coding genes are regulated by miRNAs. Many miRNAs are themselves epigenetically regulated. About 50% of miRNA genes are associated with CpG islands, which may be repressed by epigenetic methylation. Transcription from methylated CpG islands is strongly and heritably repressed. Other miRNAs are epigenetically regulated by either histone modifications or by combined DNA methylation and histone modification.
sRNAs
sRNAs are small (50–250 nucleotides), highly structured, non-coding RNA fragments found in bacteria. They control gene expression including virulence genes in pathogens and are viewed as new targets in the fight against drug-resistant bacteria. They play an important role in many biological processes, binding to mRNA and protein targets in prokaryotes. Their phylogenetic analyses, for example through sRNA–mRNA target interactions or protein binding properties, are used to build comprehensive databases. sRNA-gene maps based on their targets in microbial genomes are also constructed.
Long non-coding RNAs
Numerous investigations have demonstrated the pivotal involvement of long non-coding RNAs (lncRNAs) in the regulation of gene expression and chromosomal modifications, thereby exerting significant control over cellular differentiation. These long non-coding RNAs also contribute to genomic imprinting and the inactivation of the X chromosome (Ruffo, Paola, et al. "Long-noncoding RNAs as epigenetic regulators in neurodegenerative diseases." Neural Regeneration Research 18.6 (2023): 1243).
In invertebrates such as the honey bee, a social insect, long non-coding RNAs have been detected, via reciprocal crosses, as a possible epigenetic mechanism acting through allele-specific expression of genes underlying aggression.
Prions
Prions are infectious forms of proteins. In general, proteins fold into discrete units that perform distinct cellular functions, but some proteins are also capable of forming an infectious conformational state known as a prion. Although often viewed in the context of infectious disease, prions are more loosely defined by their ability to catalytically convert other native state versions of the same protein to an infectious conformational state. It is in this latter sense that they can be viewed as epigenetic agents capable of inducing a phenotypic change without a modification of the genome.
Fungal prions are considered by some to be epigenetic because the infectious phenotype caused by the prion can be inherited without modification of the genome. PSI+ and URE3, discovered in yeast in 1965 and 1971, are the two best studied of this type of prion. Prions can have a phenotypic effect through the sequestration of protein in aggregates, thereby reducing that protein's activity. In PSI+ cells, the loss of soluble Sup35 protein (which is involved in termination of translation) causes ribosomes to have a higher rate of read-through of stop codons, an effect that results in suppression of nonsense mutations in other genes. The ability of Sup35 to form prions may be a conserved trait. It could confer an adaptive advantage by giving cells the ability to switch into a PSI+ state and express dormant genetic features normally terminated by stop codon mutations.
Prion-based epigenetics has also been observed in Saccharomyces cerevisiae.
Molecular basis
Epigenetic changes modify the activation of certain genes, but not the genetic code sequence of DNA. The microstructure (not code) of DNA itself or the associated chromatin proteins may be modified, causing activation or silencing. This mechanism enables differentiated cells in a multicellular organism to express only the genes that are necessary for their own activity. Epigenetic changes are preserved when cells divide. Most epigenetic changes only occur within the course of one individual organism's lifetime; however, some epigenetic changes can be transmitted to the organism's offspring through a process called transgenerational epigenetic inheritance. Moreover, if gene inactivation occurs in a sperm or egg cell that goes on to participate in fertilization, this epigenetic modification may also be transferred to the next generation.
Specific epigenetic processes include paramutation, bookmarking, imprinting, gene silencing, X chromosome inactivation, position effect, DNA methylation reprogramming, transvection, maternal effects, the progress of carcinogenesis, many effects of teratogens, regulation of histone modifications and heterochromatin, and technical limitations affecting parthenogenesis and cloning.
DNA damage
DNA damage can also cause epigenetic changes. DNA damage is very frequent, occurring on average about 60,000 times a day per cell of the human body (see DNA damage (naturally occurring)). These damages are largely repaired; however, epigenetic changes can still remain at the site of DNA repair. In particular, a double-strand break in DNA can initiate unprogrammed epigenetic gene silencing both by causing DNA methylation and by promoting silencing types of histone modifications (chromatin remodeling; see the next section). In addition, the enzyme Parp1 (poly(ADP)-ribose polymerase) and its product poly(ADP)-ribose (PAR) accumulate at sites of DNA damage as part of the repair process. This accumulation, in turn, directs recruitment and activation of the chromatin remodeling protein ALC1, which can cause nucleosome remodeling. Nucleosome remodeling has been found to cause, for instance, epigenetic silencing of the DNA repair gene MLH1. DNA-damaging chemicals, such as benzene, hydroquinone, styrene, carbon tetrachloride and trichloroethylene, cause considerable hypomethylation of DNA, some through the activation of oxidative stress pathways.
Diet is known to alter the epigenetics of rats. Some food components epigenetically increase the levels of the DNA repair enzymes MGMT and MLH1 and of p53. Other food components, such as soy isoflavones, can reduce DNA damage. In one study, markers of oxidative stress, such as modified nucleotides that can result from DNA damage, were decreased by a 3-week diet supplemented with soy. A decrease in oxidative DNA damage was also observed 2 hours after consumption of anthocyanin-rich bilberry (Vaccinium myrtillus L.) pomace extract.
DNA repair
Damage to DNA is very common and is constantly being repaired. Epigenetic alterations can accompany DNA repair of oxidative damage or double-strand breaks. In human cells, oxidative DNA damage occurs about 10,000 times a day and DNA double-strand breaks occur about 10 to 50 times a cell cycle in somatic replicating cells (see DNA damage (naturally occurring)). The selective advantage of DNA repair is to allow the cell to survive in the face of DNA damage. The selective advantage of epigenetic alterations that occur with DNA repair is not clear.
Repair of oxidative DNA damage can alter epigenetic markers
In the steady state (with endogenous damages occurring and being repaired), there are about 2,400 oxidatively damaged guanines that form 8-oxo-2'-deoxyguanosine (8-OHdG) in the average mammalian cell DNA. 8-OHdG constitutes about 5% of the oxidative damages commonly present in DNA. The oxidized guanines do not occur randomly among all guanines in DNA. There is a sequence preference for the guanine at a methylated CpG site (a cytosine followed by guanine along its 5' → 3' direction and where the cytosine is methylated (5-mCpG)). A 5-mCpG site has the lowest ionization potential for guanine oxidation.
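Taken together, the two figures quoted above imply an approximate steady-state total of oxidative DNA lesions per cell; the arithmetic below simply combines those numbers and is a rough estimate rather than a measured value.

```latex
% Rough estimate from the figures quoted above
\text{total oxidative lesions} \approx
  \frac{\text{8-OHdG lesions}}{\text{8-OHdG fraction}}
  = \frac{2{,}400}{0.05} \approx 48{,}000 \ \text{per cell at steady state}
```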
Oxidized guanine has mispairing potential and is mutagenic. Oxoguanine glycosylase (OGG1) is the primary enzyme responsible for the excision of the oxidized guanine during DNA repair. OGG1 finds and binds to an 8-OHdG within a few seconds. However, OGG1 does not immediately excise 8-OHdG. In HeLa cells half maximum removal of 8-OHdG occurs in 30 minutes, and in irradiated mice, the 8-OHdGs induced in the mouse liver are removed with a half-life of 11 minutes.
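Assuming simple first-order removal kinetics (an assumption made here for illustration, not stated in the cited work), the quoted half-lives translate into remaining fractions as follows; for example, with the 11-minute half-life reported for mouse liver, roughly 15% of induced 8-OHdG would remain after 30 minutes.

```latex
% First-order decay assumed for illustration
f(t) = \left(\tfrac{1}{2}\right)^{t/t_{1/2}}, \qquad
f(30\,\text{min})\Big|_{t_{1/2}=11\,\text{min}}
  = \left(\tfrac{1}{2}\right)^{30/11} \approx 0.15
```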
When OGG1 is present at an oxidized guanine within a methylated CpG site it recruits TET1 to the 8-OHdG lesion (see Figure). This allows TET1 to demethylate an adjacent methylated cytosine. Demethylation of cytosine is an epigenetic alteration.
As an example, when human mammary epithelial cells were treated with H2O2 for six hours, 8-OHdG increased about 3.5-fold in DNA and this caused about 80% demethylation of the 5-methylcytosines in the genome. Demethylation of CpGs in a gene promoter by TET enzyme activity increases transcription of the gene into messenger RNA. In cells treated with H2O2, one particular gene, BACE1, was examined. The methylation level of the BACE1 CpG island was reduced (an epigenetic alteration) and this allowed an approximately 6.5-fold increase in expression of BACE1 messenger RNA.
While six-hour incubation with H2O2 causes considerable demethylation of 5-mCpG sites, shorter times of H2O2 incubation appear to promote other epigenetic alterations. Treatment of cells with H2O2 for 30 minutes causes the mismatch repair protein heterodimer MSH2-MSH6 to recruit DNA methyltransferase 1 (DNMT1) to sites of some kinds of oxidative DNA damage. This could cause increased methylation of cytosines (epigenetic alterations) at these locations.
Jiang et al. treated HEK 293 cells with agents causing oxidative DNA damage (potassium bromate (KBrO3) or potassium chromate (K2CrO4)). Base excision repair (BER) of oxidative damage occurred with the DNA repair enzyme polymerase beta localizing to oxidized guanines. Polymerase beta is the main human polymerase in short-patch BER of oxidative DNA damage. Jiang et al. also found that polymerase beta recruited the DNA methyltransferase protein DNMT3b to BER repair sites. They then evaluated the methylation pattern at the single-nucleotide level in a small region of DNA including the promoter region and the early transcription region of the BRCA1 gene. Oxidative DNA damage from bromate modulated the DNA methylation pattern (caused epigenetic alterations) at CpG sites within the region of DNA studied. In untreated cells, CpGs located at −189, −134, −29, −19, +16, and +19 of the BRCA1 gene had methylated cytosines (where numbering is from the messenger RNA transcription start site, and negative numbers indicate nucleotides in the upstream promoter region). Bromate treatment-induced oxidation resulted in the loss of cytosine methylation at −189, −134, +16 and +19, while also leading to the formation of new methylation at the CpGs located at −80, −55, −21 and +8 after DNA repair was allowed.
Homologous recombinational repair alters epigenetic markers
At least four articles report the recruitment of DNA methyltransferase 1 (DNMT1) to sites of DNA double-strand breaks. During homologous recombinational repair (HR) of the double-strand break, the involvement of DNMT1 causes the two repaired strands of DNA to have different levels of methylated cytosines. One strand becomes frequently methylated at about 21 CpG sites downstream of the repaired double-strand break. The other DNA strand loses methylation at about six CpG sites that were previously methylated downstream of the double-strand break, as well as losing methylation at about five CpG sites that were previously methylated upstream of the double-strand break. When the chromosome is replicated, this gives rise to one daughter chromosome that is heavily methylated downstream of the previous break site and one that is unmethylated in the region both upstream and downstream of the previous break site. With respect to the gene that was broken by the double-strand break, half of the progeny cells express that gene at a high level and in the other half of the progeny cells expression of that gene is repressed. When clones of these cells were maintained for three years, the new methylation patterns were maintained over that time period.
In mice carrying a CRISPR-mediated homology-directed recombination insertion in their genome, a large number of CpG sites within the double-strand break-associated insertion showed increased methylation.
Non-homologous end joining can cause some epigenetic marker alterations
Non-homologous end joining (NHEJ) repair of a double-strand break can cause a small number of demethylations of pre-existing cytosine DNA methylations downstream of the repaired double-strand break. Further work by Allen et al. showed that NHEJ of a DNA double-strand break in a cell could give rise to some progeny cells having repressed expression of the gene harboring the initial double-strand break and some progeny having high expression of that gene due to epigenetic alterations associated with NHEJ repair. The frequency of epigenetic alterations causing repression of a gene after an NHEJ repair of a DNA double-strand break in that gene may be about 0.9%.
Techniques used to study epigenetics
Epigenetic research uses a wide range of molecular biological techniques to further understanding of epigenetic phenomena. These techniques include chromatin immunoprecipitation (together with its large-scale variants ChIP-on-chip and ChIP-Seq), fluorescent in situ hybridization, methylation-sensitive restriction enzymes, DNA adenine methyltransferase identification (DamID) and bisulfite sequencing. Furthermore, the use of bioinformatics methods has a role in computational epigenetics.
Chromatin immunoprecipitation
Chromatin immunoprecipitation (ChIP) has helped bridge the gap between DNA and epigenetic interactions. With ChIP, researchers can investigate gene regulation, transcription mechanisms and chromatin structure.
Fluorescent in situ hybridization
Fluorescent in situ hybridization (FISH) is an important tool for understanding epigenetic mechanisms. FISH can be used to find the location of genes on chromosomes and to detect noncoding RNAs. FISH is predominantly used for detecting chromosomal abnormalities in humans.
Methylation-sensitive restriction enzymes
Methylation-sensitive restriction enzymes paired with PCR provide a way to evaluate methylation in DNA, specifically at CpG sites. If the DNA is methylated, the restriction enzymes will not cleave the strand. Conversely, if the DNA is not methylated, the enzymes will cleave the strand and it will be amplified by PCR.
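The cleavage logic can be illustrated with an in-silico digestion. The sketch below is a toy example using the well-known HpaII/MspI isoschizomer pair, both of which recognize CCGG: HpaII is blocked by CpG methylation within its site, whereas MspI cuts regardless. The sequence and methylated positions are invented, and the downstream PCR step is not modeled.

```python
# Toy in-silico digestion with a methylation-sensitive enzyme (HpaII, site CCGG,
# blocked by CpG methylation) versus its methylation-insensitive isoschizomer (MspI).
# Sequence and methylated positions are invented for illustration.

SITE = "CCGG"

def cut_positions(seq, methylated_cpg_positions, methylation_sensitive=True):
    """Return start positions of recognition sites the enzyme would actually cleave."""
    cuts = []
    for i in range(len(seq) - len(SITE) + 1):
        if seq[i:i + len(SITE)] == SITE:
            cpg_pos = i + 1  # the internal CpG of CCGG starts at offset 1
            blocked = methylation_sensitive and cpg_pos in methylated_cpg_positions
            if not blocked:
                cuts.append(i)
    return cuts

seq = "ATCCGGTTACGTACCGGGATCCGGA"
methylated = {3}  # the CpG inside the first CCGG site is methylated

print("HpaII cuts at:", cut_positions(seq, methylated, methylation_sensitive=True))   # [13, 20]
print("MspI  cuts at:", cut_positions(seq, methylated, methylation_sensitive=False))  # [2, 13, 20]
```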
Bisulfite sequencing
Bisulfite sequencing is another way to evaluate DNA methylation. Treatment with sodium bisulfite converts unmethylated cytosine to uracil, whereas methylated cytosines are not affected.
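The logic of a bisulfite experiment can be mimicked in silico. The following Python sketch is purely illustrative (the sequence and methylation states are invented): unmethylated cytosines are converted to uracil, which is read as thymine after PCR, so comparing the converted read back to the reference reveals which cytosines were methylated.

```python
# Minimal in-silico bisulfite conversion; sequence and methylation are invented.

def bisulfite_convert(seq, methylated_positions):
    """Unmethylated C -> U (sequenced as T after PCR); methylated C is protected."""
    out = []
    for i, base in enumerate(seq):
        if base == "C" and i not in methylated_positions:
            out.append("T")          # converted
        else:
            out.append(base)         # protected, or not a cytosine
    return "".join(out)

def call_methylation(reference, converted_read):
    """A reference C that still reads as C after conversion was methylated."""
    return [i for i, (r, c) in enumerate(zip(reference, converted_read))
            if r == "C" and c == "C"]

reference = "ACGTCCGTACGA"
methylated = {1, 5}                       # two methylated cytosines (invented)
read = bisulfite_convert(reference, methylated)
print(read)                               # ACGTTCGTATGA
print(call_methylation(reference, read))  # [1, 5]
```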
Nanopore sequencing
Certain sequencing methods, such as nanopore sequencing, allow sequencing of native DNA. Native (unamplified) DNA retains the epigenetic modifications, which would otherwise be lost during an amplification step. Nanopore basecaller models can distinguish between the signals obtained for epigenetically modified bases and unaltered bases, and can provide an epigenetic profile in addition to the sequencing result.
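The underlying classification step can be caricatured with a toy example. The sketch below is not how production basecallers work (they use neural networks over the raw signal); it only illustrates the idea that a modified base shifts the measured current, so a simple likelihood comparison between two assumed Gaussian signal distributions (with invented means and spread) can label a base as modified or unmodified.

```python
# Toy illustration of calling a modified base from a shifted current signal.
# Real basecallers use neural networks on raw squiggles; the Gaussian parameters
# here are invented purely for illustration.
import math

def gaussian_loglik(x, mean, sd):
    return -0.5 * ((x - mean) / sd) ** 2 - math.log(sd * math.sqrt(2 * math.pi))

def call_base_state(current_pA, unmodified_mean=80.0, modified_mean=86.0, sd=2.5):
    """Pick whichever hypothetical signal distribution explains the measurement better."""
    ll_unmod = gaussian_loglik(current_pA, unmodified_mean, sd)
    ll_mod = gaussian_loglik(current_pA, modified_mean, sd)
    return "modified" if ll_mod > ll_unmod else "unmodified"

for measurement in (79.1, 83.2, 87.5):
    print(measurement, "->", call_base_state(measurement))
```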
Structural inheritance
In ciliates such as Tetrahymena and Paramecium, genetically identical cells show heritable differences in the patterns of ciliary rows on their cell surface. Experimentally altered patterns can be transmitted to daughter cells. It seems existing structures act as templates for new structures. The mechanisms of such inheritance are unclear, but reasons exist to assume that multicellular organisms also use existing cell structures to assemble new ones.
Nucleosome positioning
Eukaryotic genomes have numerous nucleosomes. Nucleosome position is not random, and determines the accessibility of DNA to regulatory proteins. Promoters active in different tissues have been shown to have different nucleosome positioning features. This determines differences in gene expression and cell differentiation. It has been shown that at least some nucleosomes are retained in sperm cells (where most but not all histones are replaced by protamines). Thus nucleosome positioning is to some degree inheritable. Recent studies have uncovered connections between nucleosome positioning and other epigenetic factors, such as DNA methylation and hydroxymethylation.
Histone variants
Different histone variants are incorporated into specific regions of the genome non-randomly. Their differential biochemical characteristics can affect genome functions via their roles in gene regulation, and maintenance of chromosome structures.
Genomic architecture
The three-dimensional configuration of the genome (the 3D genome) is complex, dynamic and crucial for regulating genomic function and nuclear processes such as DNA replication, transcription and DNA-damage repair.
Functions and consequences
In the brain
Memory
Memory formation and maintenance are due to epigenetic alterations that cause the required dynamic changes in gene transcription that create and renew memory in neurons.
An event can set off a chain of reactions that result in altered methylations of a large set of genes in neurons, which give a representation of the event, a memory.
Areas of the brain important in the formation of memories include the hippocampus, the medial prefrontal cortex (mPFC), the anterior cingulate cortex and the amygdala.
When a strong memory is created, as in a rat subjected to contextual fear conditioning (CFC), one of the earliest events to occur is that more than 100 DNA double-strand breaks are formed by topoisomerase IIB in neurons of the hippocampus and the medial prefrontal cortex (mPFC). These double-strand breaks are at specific locations that allow activation of transcription of immediate early genes (IEGs) that are important in memory formation, allowing their expression in mRNA, with peak mRNA transcription at seven to ten minutes after CFC.
Two important IEGs in memory formation are EGR1 and the alternative promoter variant of DNMT3A, DNMT3A2. EGR1 protein binds to DNA at its binding motifs, 5′-GCGTGGGCG-3′ or 5′-GCGGGGGCGG-3′, and there are about 12,000 genome locations at which EGR1 protein can bind. EGR1 protein binds to DNA in gene promoter and enhancer regions. EGR1 associates with the demethylating enzyme TET1 and brings TET1 to about 600 locations on the genome where TET1 can then demethylate and activate the associated genes.
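Locating candidate sites for a short motif such as those quoted for EGR1 is a simple string-scanning problem. The sketch below is illustrative only: it scans an invented sequence for the two motifs given above (and their reverse complements) and reports match positions; it says nothing about which sites are actually bound in vivo.

```python
# Minimal motif scan for the two EGR1 recognition sequences quoted above.
# The example sequence is invented; a real analysis would scan genomic DNA
# and would not by itself demonstrate in vivo binding.

MOTIFS = ["GCGTGGGCG", "GCGGGGGCGG"]

def reverse_complement(seq):
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def scan(sequence, motifs=MOTIFS):
    hits = []
    for motif in motifs:
        for pattern, strand in ((motif, "+"), (reverse_complement(motif), "-")):
            start = sequence.find(pattern)
            while start != -1:
                hits.append((start, strand, motif))
                start = sequence.find(pattern, start + 1)
    return sorted(hits)

example = "TTAGCGTGGGCGATATCCGCCCCCGCAA"  # contains one motif on each strand
print(scan(example))  # [(3, '+', 'GCGTGGGCG'), (16, '-', 'GCGGGGGCGG')]
```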
The DNA methyltransferases DNMT3A1, DNMT3A2 and DNMT3B can all methylate cytosines at CpG sites in or near the promoters of genes. As shown by Manzo et al., these three DNA methyltransferases differ in their genomic binding locations and DNA methylation activity at different regulatory sites. Manzo et al. located 3,970 genome regions exclusively enriched for DNMT3A1, 3,838 regions for DNMT3A2 and 3,432 regions for DNMT3B. When DNMT3A2 is newly induced as an IEG (when neurons are activated), many new cytosine methylations occur, presumably in the target regions of DNMT3A2. Oliveira et al. found that the neuronal activity-inducible IEG levels of Dnmt3a2 in the hippocampus determined the ability to form long-term memories.
Rats form long-term associative memories after contextual fear conditioning (CFC). Duke et al. found that 24 hours after CFC in rats, in hippocampus neurons, 2,097 genes (9.17% of the genes in the rat genome) had altered methylation. When newly methylated cytosines are present in CpG sites in the promoter regions of genes, the genes are often repressed, and when newly demethylated cytosines are present the genes may be activated. After CFC, there were 1,048 genes with reduced mRNA expression and 564 genes with upregulated mRNA expression. Similarly, when mice undergo CFC, one hour later in the hippocampus region of the mouse brain there are 675 demethylated genes and 613 hypermethylated genes. However, memories do not remain in the hippocampus, but after four or five weeks the memories are stored in the anterior cingulate cortex. In the studies on mice after CFC, Halder et al. showed that four weeks after CFC there were at least 1,000 differentially methylated genes and more than 1,000 differentially expressed genes in the anterior cingulate cortex, while at the same time the altered methylations in the hippocampus were reversed.
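The quoted percentage implies the approximate number of genes assumed for the rat genome in that study; the arithmetic below simply inverts the figures given above.

```latex
% Implied total number of rat genes from the figures quoted above
N_{\text{genes}} \approx \frac{2{,}097}{0.0917} \approx 22{,}900
```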
The epigenetic alteration of methylation after a new memory is established creates a different pool of nuclear mRNAs. As reviewed by Bernstein, the epigenetically determined new mix of nuclear mRNAs are often packaged into neuronal granules, or messenger RNP, consisting of mRNA, small and large ribosomal subunits, translation initiation factors and RNA-binding proteins that regulate mRNA function. These neuronal granules are transported from the neuron nucleus and are directed, according to 3′ untranslated regions of the mRNA in the granules (their "zip codes"), to neuronal dendrites. Roughly 2,500 mRNAs may be localized to the dendrites of hippocampal pyramidal neurons and perhaps 450 transcripts are in excitatory presynaptic nerve terminals (dendritic spines). The altered assortments of transcripts (dependent on epigenetic alterations in the neuron nucleus) have different sensitivities in response to signals, which is the basis of altered synaptic plasticity. Altered synaptic plasticity is often considered the neurochemical foundation of learning and memory.
Aging
Epigenetics plays a major role in brain aging and age-related cognitive decline, with relevance to life extension.
Other and general
In adulthood, changes in the epigenome are important for various higher cognitive functions. Dysregulation of epigenetic mechanisms is implicated in neurodegenerative disorders and diseases. Epigenetic modifications in neurons are dynamic and reversible. Epigenetic regulation impacts neuronal action, affecting learning, memory, and other cognitive processes.
Early events, including during embryonic development, can influence development, cognition, and health outcomes through epigenetic mechanisms.
Epigenetic mechanisms have been proposed as "a potential molecular mechanism for effects of endogenous hormones on the organization of developing brain circuits".
Nutrients could interact with the epigenome to "protect or boost cognitive processes across the lifespan".
A review suggests neurobiological effects of physical exercise via epigenetics seem "central to building an 'epigenetic memory' to influence long-term brain function and behavior" and may even be heritable.
With the axo-ciliary synapse, there is communication between serotonergic axons and antenna-like primary cilia of CA1 pyramidal neurons that alters the neuron's epigenetic state in the nucleus via signalling distinct from, and longer-term than, that at the plasma membrane.
Epigenetics also plays a major role in the evolution of the brain in humans and in the lineage leading to humans.
Development
Developmental epigenetics can be divided into predetermined and probabilistic epigenesis. Predetermined epigenesis is a unidirectional movement from structural development in DNA to the functional maturation of the protein. "Predetermined" here means that development is scripted and predictable. Probabilistic epigenesis, on the other hand, is a bidirectional structure–function development, with experiences and external factors molding development.
Somatic epigenetic inheritance, particularly through DNA and histone covalent modifications and nucleosome repositioning, is very important in the development of multicellular eukaryotic organisms. The genome sequence is static (with some notable exceptions), but cells differentiate into many different types, which perform different functions and respond differently to the environment and to intercellular signaling. Thus, as individuals develop, morphogens activate or silence genes in an epigenetically heritable fashion, giving cells a memory. In mammals, most cells terminally differentiate, with only stem cells retaining the ability to differentiate into several cell types ("totipotency" and "multipotency"). In mammals, some stem cells continue producing newly differentiated cells throughout life, such as in neurogenesis, but mammals cannot respond to the loss of some tissues, being unable, for example, to regenerate limbs, which some other animals can do. Epigenetic modifications regulate the transition from neural stem cells to glial progenitor cells; for example, differentiation into oligodendrocytes is regulated by the deacetylation and methylation of histones (chapter "Nervous System Development" in Epigenetics, by Benedikt Hallgrimsson and Brian Hall). Unlike animal cells, plant cells do not terminally differentiate, remaining totipotent with the ability to give rise to a new individual plant. While plants do utilize many of the same epigenetic mechanisms as animals, such as chromatin remodeling, it has been hypothesized that some kinds of plant cells do not use or require "cellular memories", resetting their gene expression patterns using positional information from the environment and surrounding cells to determine their fate.
Epigenetic changes can occur in response to environmental exposure – for example, in mice, maternal dietary supplementation with genistein (250 mg/kg) produces epigenetic changes affecting expression of the agouti gene, which affects the offspring's fur color, weight, and propensity to develop cancer. Ongoing research is focused on exploring the impact of other known teratogens, such as diabetic embryopathy, on methylation signatures.
Controversial results from one study suggested that traumatic experiences might produce an epigenetic signal that is capable of being passed to future generations. Mice were trained, using foot shocks, to fear a cherry blossom odor. The investigators reported that the mouse offspring had an increased aversion to this specific odor. They suggested that the effect was due to epigenetic changes increasing expression of M71, a gene governing an odor receptor in the nose that responds specifically to the cherry blossom smell, rather than to changes in the DNA sequence itself. There were physical changes correlated with olfactory (smell) function in the brains of the trained mice and their descendants. Several criticisms were reported, including the study's low statistical power, which was taken as evidence of some irregularity such as bias in reporting results. Because of the limited sample size, there is a probability that an effect will not reach statistical significance even if it exists; the critics calculated that the probability that all the reported experiments would show positive results under an identical protocol, assuming the claimed effects exist, is merely 0.4%. The authors also did not indicate which mice were siblings and treated all of the mice as statistically independent (see the comment by Gonzalo Otazu). The original researchers pointed to negative results in the paper's appendix that the criticism omitted from its calculations, and undertook to track which mice were siblings in future work.
Transgenerational
Epigenetic mechanisms were a necessary part of the evolutionary origin of cell differentiation. Although epigenetics in multicellular organisms is generally thought to be a mechanism involved in differentiation, with epigenetic patterns "reset" when organisms reproduce, there have been some observations of transgenerational epigenetic inheritance (e.g., the phenomenon of paramutation observed in maize). Although most of these multigenerational epigenetic traits are gradually lost over several generations, the possibility remains that multigenerational epigenetics could be another aspect to evolution and adaptation.
As mentioned above, some define epigenetics as heritable.
A sequestered germ line or Weismann barrier is specific to animals, and epigenetic inheritance is more common in plants and microbes. Eva Jablonka, Marion J. Lamb and Étienne Danchin have argued that these effects may require enhancements to the standard conceptual framework of the modern synthesis and have called for an extended evolutionary synthesis (see also Denis Noble, The Music of Life, esp. pp. 93–98 and p. 48, where he cites Jablonka & Lamb, and Massimo Pigliucci's review of Jablonka and Lamb in Nature 435, 565–566 (2 June 2005)). Other evolutionary biologists, such as John Maynard Smith, have incorporated epigenetic inheritance into population-genetics models, while others, such as Michael Lynch, are openly skeptical of the extended evolutionary synthesis. Thomas Dickins and Qazi Rahman state that epigenetic mechanisms such as DNA methylation and histone modification are genetically inherited under the control of natural selection and therefore fit under the earlier "modern synthesis".
Two important ways in which epigenetic inheritance can differ from traditional genetic inheritance, with important consequences for evolution, are:
rates of epimutation can be much faster than rates of mutation
the epimutations are more easily reversible
In plants, heritable DNA methylation mutations are 100,000 times more likely to occur than DNA sequence mutations. An epigenetically inherited element such as the PSI+ system can act as a "stop-gap", good enough for short-term adaptation that allows the lineage to survive for long enough for mutation and/or recombination to genetically assimilate the adaptive phenotypic change. The existence of this possibility increases the evolvability of a species.
More than 100 cases of transgenerational epigenetic inheritance phenomena have been reported in a wide range of organisms, including prokaryotes, plants, and animals. For instance, mourning-cloak butterflies will change color through hormone changes in response to experimental variation in temperature (Davies, Hazel (2008). Do Butterflies Bite?: Fascinating Answers to Questions about Butterflies and Moths (Animals Q&A). Rutgers University Press).
The filamentous fungus Neurospora crassa is a prominent model system for understanding the control and function of cytosine methylation. In this organism, DNA methylation is associated with relics of a genome-defense system called RIP (repeat-induced point mutation) and silences gene expression by inhibiting transcription elongation.
The yeast prion PSI is generated by a conformational change of a translation termination factor, which is then inherited by daughter cells. This can provide a survival advantage under adverse conditions, exemplifying epigenetic regulation which enables unicellular organisms to respond rapidly to environmental stress. Prions can be viewed as epigenetic agents capable of inducing a phenotypic change without modification of the genome.
Direct detection of epigenetic marks in microorganisms is possible with single molecule real time sequencing, in which polymerase sensitivity allows for measuring methylation and other modifications as a DNA molecule is being sequenced. Several projects have demonstrated the ability to collect genome-wide epigenetic data in bacteria.
Epigenetics in bacteria
While epigenetics is of fundamental importance in eukaryotes, especially metazoans, it plays a different role in bacteria. Most importantly, whereas eukaryotes use epigenetic mechanisms primarily to regulate gene expression, bacteria rarely do so. However, bacteria make widespread use of postreplicative DNA methylation for the epigenetic control of DNA–protein interactions. Bacteria also use DNA adenine methylation (rather than DNA cytosine methylation) as an epigenetic signal. DNA adenine methylation is important in bacterial virulence in organisms such as Escherichia coli, Salmonella, Vibrio, Yersinia, Haemophilus, and Brucella. In Alphaproteobacteria, methylation of adenine regulates the cell cycle and couples gene transcription to DNA replication. In Gammaproteobacteria, adenine methylation provides signals for DNA replication, chromosome segregation, mismatch repair, packaging of bacteriophage, transposase activity and regulation of gene expression. There exists a genetic switch controlling Streptococcus pneumoniae (the pneumococcus) that allows the bacterium to randomly change its characteristics into six alternative states, a finding that could pave the way to improved vaccines. Each form is randomly generated by a phase-variable methylation system. The ability of the pneumococcus to cause deadly infections is different in each of these six states. Similar systems exist in other bacterial genera. In Bacillota such as Clostridioides difficile, adenine methylation regulates sporulation, biofilm formation and host adaptation.
Medicine
Epigenetics has many and varied potential medical applications.
Twins
Direct comparisons of identical twins constitute an optimal model for interrogating environmental epigenetics. In the case of humans with different environmental exposures, monozygotic (identical) twins were epigenetically indistinguishable during their early years, while older twins had remarkable differences in the overall content and genomic distribution of 5-methylcytosine DNA and histone acetylation. The twin pairs who had spent less of their lifetime together and/or had greater differences in their medical histories were those who showed the largest differences in their levels of 5-methylcytosine DNA and acetylation of histones H3 and H4.
Dizygotic (fraternal) and monozygotic (identical) twins show evidence of epigenetic influence in humans. DNA sequence differences that would be abundant in a singleton-based study do not interfere with the analysis. Environmental differences can produce long-term epigenetic effects, and different developmental monozygotic twin subtypes may be different with respect to their susceptibility to be discordant from an epigenetic point of view.
A high-throughput study, i.e. one using technology that surveys extensive genetic markers, focused on epigenetic differences between monozygotic twins to compare global and locus-specific changes in DNA methylation and histone modifications in a sample of 40 monozygotic twin pairs. In this case, only healthy twin pairs were studied, but a wide range of ages was represented, between 3 and 74 years. One of the major conclusions from this study was that there is an age-dependent accumulation of epigenetic differences between the two siblings of twin pairs. This accumulation suggests the existence of epigenetic "drift". Epigenetic drift is the term given to epigenetic modifications as they accumulate as a direct function of age. While age is a known risk factor for many diseases, age-related methylation has been found to occur differentially at specific sites along the genome. Over time, this can result in measurable differences between biological and chronological age. Epigenetic changes have been found to reflect lifestyle and may act as functional biomarkers of disease before the clinical threshold is reached.
A more recent study, in which 114 monozygotic twins and 80 dizygotic twins were analyzed for the DNA methylation status of around 6,000 unique genomic regions, concluded that epigenetic similarity at the time of blastocyst splitting may also contribute to phenotypic similarities in monozygotic co-twins. This supports the notion that the microenvironment at early stages of embryonic development can be quite important for the establishment of epigenetic marks. Congenital genetic disease is well understood, and it is clear that epigenetics can play a role, for example in the case of Angelman syndrome and Prader–Willi syndrome. These are otherwise normal genetic diseases caused by gene deletions or inactivation of the genes, but they are unusually common because, owing to genomic imprinting, individuals are essentially hemizygous at the relevant loci, so a single gene knockout is sufficient to cause the disease, whereas most such cases would require both copies to be knocked out.
Genomic imprinting
Some human disorders are associated with genomic imprinting, a phenomenon in mammals where the father and mother contribute different epigenetic patterns for specific genomic loci in their germ cells. The best-known case of imprinting in human disorders is that of Angelman syndrome and Prader–Willi syndrome – both can be produced by the same genetic mutation, chromosome 15q partial deletion, and the particular syndrome that will develop depends on whether the mutation is inherited from the child's mother or from their father.
In the Överkalix study, paternal (but not maternal) grandsons (a person's paternal grandson is the son of a son of that person; a maternal grandson is the son of a daughter) of Swedish men who were exposed during preadolescence to famine in the 19th century were less likely to die of cardiovascular disease. If food was plentiful, then diabetes mortality in the grandchildren increased, suggesting that this was a transgenerational epigenetic inheritance. Robert Winston refers to this study in a The opposite effect was observed for females – the paternal (but not maternal) granddaughters of women who experienced famine while in the womb (and therefore while their eggs were being formed) lived shorter lives on average.
Examples of drugs altering gene expression from epigenetic events
Examples include beta-lactam antibiotics, which can alter glutamate receptor activity, and cyclosporine, which acts on multiple transcription factors. Additionally, lithium can impact autophagy of aberrant proteins, and chronic use of opioid drugs can increase the expression of genes associated with addictive phenotypes.
Parental nutrition, in utero exposure to stress or to endocrine-disrupting chemicals, male-induced maternal effects such as the attraction of mates of differing quality, maternal as well as paternal age, and offspring gender could all possibly influence whether a germline epimutation is ultimately expressed in offspring and the degree to which intergenerational inheritance remains stable throughout posterity. However, whether and to what extent epigenetic effects can be transmitted across generations remains unclear, particularly in humans.
Addiction
Addiction is a disorder of the brain's reward system which arises through transcriptional and neuroepigenetic mechanisms and occurs over time from chronically high levels of exposure to an addictive stimulus (e.g., morphine, cocaine, sexual intercourse, gambling). Transgenerational epigenetic inheritance of addictive phenotypes has been noted to occur in preclinical studies. However, robust evidence in support of the persistence of epigenetic effects across multiple generations has yet to be established in humans; an example would be an epigenetic effect of prenatal exposure to smoking observed in great-grandchildren who had not themselves been exposed.
Research
The two forms of heritable information, namely genetic and epigenetic, are collectively called dual inheritance. Members of the APOBEC/AID family of cytosine deaminases may concurrently influence genetic and epigenetic inheritance using similar molecular mechanisms, and may be a point of crosstalk between these conceptually compartmentalized processes.
Fluoroquinolone antibiotics induce epigenetic changes in mammalian cells through iron chelation. This leads to epigenetic effects through inhibition of α-ketoglutarate-dependent dioxygenases that require iron as a co-factor.
Various pharmacological agents are applied for the production of induced pluripotent stem cells (iPSC) or to maintain the embryonic stem cell (ESC) phenotype via an epigenetic approach. Adult stem cells, such as bone marrow stem cells, have also shown a potential to differentiate into cardiac-competent cells when treated with the G9a histone methyltransferase inhibitor BIX01294.
Cell plasticity, the adaptation of cells to stimuli without changes in their genetic code, requires epigenetic changes. These have been observed in cell plasticity in cancer cells during the epithelial-to-mesenchymal transition and also in immune cells, such as macrophages. Metabolic changes underlie these adaptations, since various metabolites play crucial roles in the chemistry of epigenetic marks. These include, for instance, alpha-ketoglutarate, which is required for histone demethylation, and acetyl-coenzyme A, which is required for histone acetylation.
Epigenome editing
Forms of epigenetic regulation of gene expression that can be altered or exploited in epigenome editing include mRNA/lncRNA modification, DNA methylation and histone modification.
CpG sites, SNPs and biological traits
Methylation is a widely characterized mechanism of genetic regulation that can determine biological traits. However, strong experimental evidence indicates that methylation patterns at SNPs are an important additional feature beyond the classical activation/inhibition epigenetic dogma. Molecular interaction data, supported by colocalization analyses, identify multiple nuclear regulatory pathways, linking sequence variation to disturbances in DNA methylation and to molecular and phenotypic variation.
UBASH3B locus
UBASH3B encodes a protein with tyrosine phosphatase activity, which has been previously linked to advanced neoplasia. SNP rs7115089 was identified as influencing DNA methylation and expression of this locus, as well as body mass index (BMI). In fact, SNP rs7115089 is strongly associated with BMI and with genetic variants linked to other cardiovascular and metabolic traits in GWASs. New studies suggest UBASH3B as a potential mediator of adiposity and cardiometabolic disease. In addition, animal models demonstrated that UBASH3B expression is an indicator of caloric restriction that may drive programmed susceptibility to obesity, and it is associated with other measures of adiposity in human peripheral blood.
NFKBIE locus
SNP rs730775 is located in the first intron of NFKBIE and is a cis-eQTL for NFKBIE in whole blood. Nuclear factor (NF)-κB inhibitor ε (NFKBIE) directly inhibits NF-κB1 activity and is significantly co-expressed with NF-κB1; it is also associated with rheumatoid arthritis. Colocalization analysis supports the idea that variants for the majority of the CpG sites at SNP rs730775 cause genetic variation at the NFKBIE locus, which is suggested to be linked to rheumatoid arthritis through trans-acting regulation of DNA methylation by NF-κB.
FADS1 locus
Fatty acid desaturase 1 (FADS1) is a key enzyme in the metabolism of fatty acids. Moreover, rs174548 in the FADS1 gene shows increased correlation with DNA methylation in people with a high abundance of CD8+ T cells. SNP rs174548 is strongly associated with concentrations of arachidonic acid and other metabolites in fatty acid metabolism, with blood eosinophil counts, and with inflammatory diseases such as asthma. Interaction results indicated a correlation between rs174548 and asthma, providing new insights into fatty acid metabolism in CD8+ T cells and immune phenotypes.
Pseudoscience
As epigenetics is in the early stages of development as a science and is surrounded by sensationalism in the public media, David Gorski and geneticist Adam Rutherford have advised caution against the proliferation of false and pseudoscientific conclusions by new age authors making unfounded suggestions that a person's genes and health can be manipulated by mind control. Misuse of the scientific term by quack authors has produced misinformation among the general public.
See also
Baldwin effect
Behavioral epigenetics
Biological effects of radiation on the epigenome
Computational epigenetics
Contribution of epigenetic modifications to evolution
DAnCER database (2010)
Epigenesis (biology)
Epigenetics in forensic science
Epigenetics of autoimmune disorders
Epiphenotyping
Epigenetic therapy
Epigenetics of neurodegenerative diseases
Genetics
Lamarckism
Nutriepigenomics
Position-effect variegation
Preformationism
Somatic epitype
Synthetic genetic array
Sleep epigenetics
Transcriptional memory
Transgenerational epigenetic inheritance
References
Further reading
External links
The Human Epigenome Project (HEP)
The Epigenome Network of Excellence (NoE)
Canadian Epigenetics, Environment and Health Research Consortium (CEEHRC)
The Epigenome Network of Excellence (NoE) – public international site
"DNA Is Not Destiny" – Discover magazine cover story
"The Ghost In Your Genes", Horizon (2005), BBC
Epigenetics article at Hopkins Medicine
Towards a global map of epigenetic variation
|
biology
| 9,614
|
49855
|
Umayyad Caliphate
|
https://en.wikipedia.org/wiki/Umayyad_Caliphate
|
The Umayyad Caliphate or Umayyad Empire was the second caliphate established after the death of the Islamic prophet Muhammad and was ruled by the Umayyad dynasty from 661 to 750. It succeeded the Rashidun Caliphate, of which the third caliph, Uthman ibn Affan, was also a member of the Umayyad clan. The Umayyad family established a dynasty with hereditary rule under Mu'awiya ibn Abi Sufyan, the long-time governor of Greater Syria, who became caliph after the end of the First Fitna in 661. After Mu'awiya's death in 680, conflicts over the succession resulted in the Second Fitna, and power was eventually claimed by Marwan ibn al-Hakam, who came from another branch of the clan. Syria remained the Umayyads' core power base thereafter, with Damascus as their capital.
The Umayyads continued the Muslim conquests, conquering Ifriqiya, Transoxiana, Sind, the Maghreb and Hispania (al-Andalus). At its greatest extent, the Umayyad Caliphate covered an area of , making it one of the largest empires in history in terms of size. The dynasty was overthrown by the Abbasids in 750. Survivors of the dynasty established an emirate and then a caliphate in Córdoba, with Cordoba becoming a major center of science, medicine, philosophy and invention during the Islamic Golden Age.
The Umayyad Caliphate ruled over a vast multiethnic and multicultural population. Christians, who still constituted a majority of the caliphate's population, and the Jews were allowed to practice their own religion in exchange for the payment of jizya (poll tax), from which Muslims were exempt. Muslims were required to pay the zakat, which was explicitly collected for the purposes of charity and for the benefit of Muslims or Muslim converts. Under the early Umayyad caliphs, prominent positions were held by Christians, some of whom belonged to families that had served under the Byzantines. The employment of Christians was part of a broader policy of religious toleration that was necessitated by the presence of large Christian populations in the conquered provinces, such as in their metropolitan province of Syria. This policy also helped to increase Mu'awiya's popularity and solidified Syria as his power base. The Umayyad era is often considered the formative period of Islamic art.
History
Origins
Early influence
During the pre-Islamic period, the Umayyads or Banu Umayya were a leading clan of the Quraysh tribe of Mecca. By the end of the 6th century, the Umayyads dominated the Quraysh's increasingly prosperous trade networks with Syria and developed economic and military alliances with the nomadic Arab tribes that controlled the northern and central Arabian desert expanses, affording the clan a degree of political power in the region. The Umayyads under the leadership of Abu Sufyan ibn Harb were the principal leaders of Meccan opposition to the Islamic prophet Muhammad, but after the latter captured Mecca in 630, Abu Sufyan and the Quraysh embraced Islam. To reconcile his influential Qurayshite tribesmen, Muhammad gave his former opponents, including Abu Sufyan, a stake in the new order. Abu Sufyan and the Umayyads relocated to Medina, Islam's political centre, to maintain their new-found political influence in the nascent Muslim community.
Muhammad's death in 632 left open the succession of leadership of the Muslim community. Leaders of the Ansar, the natives of Medina who had provided Muhammad safe haven after his emigration from Mecca in 622, discussed forwarding their own candidate out of concern that the Muhajirun, Muhammad's early followers and fellow emigrants from Mecca, would ally with their fellow tribesmen from the former Qurayshite elite and take control of the Muslim state. The Muhajirun gave allegiance to one of their own, the early, elderly companion of Muhammad, Abu Bakr (), and put an end to Ansarite deliberations. Abu Bakr was viewed as acceptable by the Ansar and the Qurayshite elite and was acknowledged as caliph (leader of the Muslim community). He showed favor to the Umayyads by awarding them command roles in the Muslim conquest of Syria. One of the appointees was Yazid, the son of Abu Sufyan, who owned property and maintained trade networks in Syria.
Abu Bakr's successor Umar () curtailed the influence of the Qurayshite elite in favor of Muhammad's earlier supporters in the administration and military, but nonetheless allowed the growing foothold of Abu Sufyan's sons in Syria, which was all but conquered by 638. When Umar's overall commander of the province Abu Ubayda ibn al-Jarrah died in 639, he appointed Yazid governor of Syria's Damascus, Palestine and Jordan districts. Yazid died shortly after and Umar appointed his brother Mu'awiya in his place. Umar's exceptional treatment of Abu Sufyan's sons may have stemmed from his respect for the family, their burgeoning alliance with the powerful Banu Kalb tribe as a counterbalance to the influential Himyarite settlers in Homs who viewed themselves as equals to the Quraysh in nobility, or the lack of a suitable candidate at the time, particularly amid the plague of Amwas which had already killed Abu Ubayda and Yazid. Under Mu'awiya's stewardship, Syria remained domestically peaceful, organized and well-defended from its former Byzantine rulers.
Caliphate of Uthman
Umar's successor, Uthman ibn Affan, was a wealthy Umayyad and early Muslim convert with marital ties to Muhammad. He was elected by the shura council, composed of Muhammad's cousin Ali, al-Zubayr ibn al-Awwam, Talha ibn Ubayd Allah, Sa'd ibn Abi Waqqas and Abd al-Rahman ibn Awf, all of whom were close, early companions of Muhammad and belonged to the Quraysh. He was chosen over Ali because he would ensure the concentration of state power into the hands of the Quraysh, as opposed to Ali's determination to diffuse power among all of the Muslim factions. From early in his reign, Uthman displayed explicit favoritism to his kinsmen, in stark contrast to his predecessors. He appointed his family members as governors over the regions successively conquered under Umar and himself, namely much of the Sasanian Empire, i.e. Iraq and Iran, and the former Byzantine territories of Syria and Egypt. In Medina, he relied extensively on the counsel of his Umayyad cousins, the brothers al-Harith and Marwan ibn al-Hakam. According to the historian Wilferd Madelung, this policy stemmed from Uthman's "conviction that the house of Umayya, as the core clan of Quraysh, was uniquely qualified to rule in the name of Islam".
Uthman's nepotism provoked the ire of the Ansar and the members of the shura. In 645/46, he added the Jazira (Upper Mesopotamia) to Mu'awiya's Syrian governorship and granted the latter's request to take possession of all Byzantine crown lands in Syria to help pay his troops. He had the surplus taxes from the wealthy provinces of Kufa and Egypt forwarded to the treasury in Medina, which he used at his personal disposal, frequently disbursing its funds and war booty to his Umayyad relatives. Moreover, the lucrative Sasanian crown lands of Iraq, which Umar had designated as communal property for the benefit of the Arab garrison towns of Kufa and Basra, were turned into caliphal crown lands to be used at Uthman's discretion. Mounting resentment against Uthman's rule in Iraq and Egypt and among the Ansar and Quraysh of Medina culminated in the killing of the caliph in 656. In the assessment of the historian Hugh N. Kennedy, Uthman was killed because of his determination to centralize control over the caliphate's government by the traditional elite of the Quraysh, particularly his Umayyad clan, which he believed possessed the "experience and ability" to govern, at the expense of the interests, rights and privileges of many early Muslims.
First Fitna
After Uthman's assassination, Ali was recognized as the next Rashidun caliph in Medina, though his support stemmed from the Ansar and the Iraqis, while the bulk of the Quraysh was wary of his rule. The first challenge to his authority came from the Qurayshite leaders al-Zubayr and Talha, who had opposed Uthman's empowerment of the Umayyad clan but feared that their own influence and the power of the Quraysh, in general, would dissipate under Ali. Backed by one of Muhammad's wives, Aisha, they attempted to rally support against Ali among the troops of Basra, prompting the caliph to leave for Iraq's other garrison town, Kufa, where he could better confront his challengers. Ali defeated them at the Battle of the Camel, in which al-Zubayr and Talha were slain and Aisha consequently entered self-imposed seclusion. Ali's sovereignty was thereafter recognized in Basra and Egypt and he established Kufa as the caliphate's new capital.
Although Ali was able to replace Uthman's governors in Egypt and Iraq with relative ease, Mu'awiya had developed a strong powerbase and an effective military against the Byzantines from the Arab tribes of Syria. Mu'awiya did not yet explicitly claim the caliphate but was determined to retain control of Syria and opposed Ali in the name of avenging his kinsman Uthman, accusing the caliph of complicity in his death. Ali and Mu'awiya fought to a stalemate at the Battle of Siffin in early 657. Ali agreed to settle the matter with Mu'awiya by arbitration, though the talks failed to achieve a resolution. The decision to arbitrate fundamentally weakened Ali's political position as he was forced to negotiate with Mu'awiya on equal terms, while it drove a significant number of Ali's supporters, who became known as the Kharijites, to revolt. Ali's coalition steadily disintegrated and many Iraqi tribal nobles secretly defected to Mu'awiya, while Mu'awiya's ally Amr ibn al-As ousted Ali's governor from Egypt in July 658. In July 660 Mu'awiya was formally recognized as caliph in Jerusalem by his Syrian tribal allies. Ali was assassinated by a Kharijite dissident in January 661. His son Hasan succeeded him but abdicated in return for compensation upon Mu'awiya's invasion of Iraq with his Syrian army in the summer. Mu'awiya then entered Kufa and received the allegiance of the Iraqis.
Sufyanid period
Caliphate of Mu'awiya
The recognition of Mu'awiya in Kufa, referred to as the "year of unification of the community" in the Muslim traditional sources, is generally considered the start of his caliphate. With his accession, the political capital and the caliphal treasury were transferred to Damascus, the seat of Mu'awiya's power. Syria's emergence as the metropolis of the Umayyad Caliphate was the result of Mu'awiya's twenty-year entrenchment in the province, the geographic distribution of its relatively large Arab population throughout the province in contrast to their seclusion in garrison cities in other provinces, and the domination of a single tribal confederation, the Quda'a, who were led by the Banu Kalb with whom Mu'awiya had a marriage alliance, as opposed to the wide array of competing tribal groups in Iraq. The long-established, formerly Christian Arab tribes in Syria, having been integrated into the military of the Byzantine Empire and their Ghassanid client kings, were "more accustomed to order and obedience" than their Iraqi counterparts, according to historian Julius Wellhausen. Mu'awiya relied on the powerful Kalbite chief Ibn Bahdal and the Kindite nobleman Shurahbil ibn Simt alongside the Qurayshite commanders al-Dahhak ibn Qays al-Fihri and Abd al-Rahman, the son of the prominent general Khalid ibn al-Walid, to guarantee the loyalty of the key military components of Syria. Mu'awiya preoccupied his core Syrian troops in nearly annual or bi-annual land and sea raids against Byzantium, which provided them with battlefield experience and war spoils, but secured no permanent territorial gains. Toward the end of his reign the caliph entered a thirty-year truce with Byzantine emperor Constantine IV (668–685), obliging the Umayyads to pay the Empire an annual tribute of gold, horses and slaves.
Mu'awiya's main challenge was reestablishing the unity of the Muslim community and asserting his authority and that of the caliphate in the provinces amid the political and social disintegration of the First Fitna. There remained significant opposition to his assumption of the caliphate and to a strong central government. The garrison towns of Kufa and Basra, populated by the Arab immigrants and troops who arrived during the conquest of Iraq in the 630s–640s, resented the transition of power to Syria. They remained divided, nonetheless, as both cities competed for power and influence in Iraq and its eastern dependencies, and each was split between the Arab tribal nobility and the early Muslim converts, the latter of whom were themselves divided between the pro-Alids (loyalists of Ali) and the Kharijites, who followed their own strict interpretation of Islam. The caliph applied a decentralized approach to governing Iraq by forging alliances with its tribal nobility, such as the Kufan leader al-Ash'ath ibn Qays, and entrusting the administration of Kufa and Basra to highly experienced members of the Thaqif tribe, al-Mughira ibn Shu'ba and the latter's protege Ziyad ibn Abihi (whom Mu'awiya adopted as his half-brother), respectively. In return for recognizing his suzerainty, maintaining order, and forwarding a token portion of the provincial tax revenues to Damascus, the caliph let his governors rule with practical independence. After al-Mughira's death in 670, Mu'awiya attached Kufa and its dependencies to the governorship of Basra, making Ziyad the practical viceroy over the eastern half of the caliphate. Afterward, Ziyad launched a concerted campaign to firmly establish Arab rule in the vast Khurasan region east of Iran and restart the Muslim conquests in the surrounding areas. Not long after his death, Ziyad was succeeded by his son Ubayd Allah ibn Ziyad. Meanwhile, Amr ibn al-As ruled Egypt from the provincial capital of Fustat as a virtual partner of Mu'awiya until his death in 663, after which loyalist governors were appointed and the province became a practical appendage of Syria. Under Mu'awiya's direction, the Muslim conquest of Ifriqiya (central North Africa) was launched by the commander Uqba ibn Nafi in 670, which extended Umayyad control as far as Byzacena (modern southern Tunisia), where Uqba founded the permanent Arab garrison city of Kairouan.
Succession of Yazid I and collapse of Sufyanid rule
In contrast to Uthman, Mu'awiya restricted the influence of his Umayyad kinsmen to the governorship of Medina, where the dispossessed Islamic elite, including the Umayyads, was suspicious or hostile toward his rule. However, in an unprecedented move in Islamic politics, Mu'awiya nominated his own son, Yazid I, as his successor in 676, introducing hereditary rule to caliphal succession and, in practice, turning the office of the caliph into a kingship. The act was met with disapproval or opposition by the Iraqis and the Hejaz-based Quraysh, including the Umayyads, but most were bribed or coerced into acceptance. Yazid acceded after Mu'awiya's death in 680 and almost immediately faced a challenge to his rule by the Kufan partisans of Ali who had invited Ali's son and Muhammad's grandson Husayn to stage a revolt against Umayyad rule from Iraq. An army mobilized by Iraq's governor Ibn Ziyad intercepted and killed Husayn outside Kufa at the Battle of Karbala. Although it stymied active opposition to Umayyad authority in Iraq for the time being, the killing of Muhammad's grandson left many Muslims outraged and significantly increased Kufan hostility toward the Umayyads and sympathy for the family of Ali.
The next major challenge to Yazid's rule emanated from the Hejaz where Abd Allah ibn al-Zubayr, the son of al-Zubayr ibn al-Awwam and grandson of Abu Bakr, advocated for a shura among the Quraysh to elect the caliph and rallied opposition to the Umayyads from his headquarters in Islam's holiest sanctuary, the Ka'aba in Mecca. The Ansar and Quraysh of Medina also took up the anti-Umayyad cause and in 683 expelled the Umayyads from the city. Yazid's Syrian troops routed the Medinans at the Battle of al-Harra and subsequently plundered Medina before besieging Ibn al-Zubayr in Mecca. The Syrians withdrew upon news of Yazid's death in 683, after which Ibn al-Zubayr declared himself caliph and soon after gained recognition in most provinces of the caliphate, including Iraq and Egypt. In Syria Ibn Bahdal secured the succession of Yazid's son and appointed successor Mu'awiya II, whose authority was likely restricted to Damascus and Syria's southern districts. Mu'awiya II had been ill from the beginning of his accession, with al-Dahhak assuming the practical duties of his office, and he died in early 684 without naming a successor. His death marked the end of the Umayyads' Sufyanid ruling house, called after Mu'awiya I's father Abu Sufyan.
Early Marwanid period
Marwanid transition and end of Second Fitna
Umayyad authority nearly collapsed in their Syrian stronghold after the death of Mu'awiya II. Al-Dahhak in Damascus, the Qays tribes in Qinnasrin (northern Syria) and the Jazira, the Judham in Palestine, and the Ansar and South Arabians of Homs all opted to recognize Ibn al-Zubayr. Marwan ibn al-Hakam, the leader of the Umayyads expelled to Syria from Medina, was prepared to submit to Ibn al-Zubayr as well but was persuaded to forward his candidacy for the caliphate by Ibn Ziyad. The latter had been driven out of Iraq and strove to uphold Umayyad rule. During a summit of pro-Umayyad Syrian tribes, namely the Quda'a and their Kindite allies, organized by Ibn Bahdal in the old Ghassanid capital of Jabiya, Marwan was elected caliph in exchange for economic privileges to the loyalist tribes. At the subsequent Battle of Marj Rahit in August 684, Marwan led his tribal allies to a decisive victory against a much larger Qaysite army led by al-Dahhak, who was slain. Not long after, the South Arabians of Homs and the Judham joined the Quda'a to form the tribal confederation of Yaman. Marj Rahit led to the long-running conflict between the Qays and Yaman coalitions. The Qays regrouped in the Euphrates river fortress of Circesium under Zufar ibn al-Harith al-Kilabi and moved to avenge their losses. Although Marwan regained full control of Syria in the months following the battle, the inter-tribal strife undermined the foundation of Umayyad power: the Syrian army.
In 685, Marwan and Ibn Bahdal expelled the Zubayrid governor of Egypt and replaced him with Marwan's son Abd al-Aziz, who would rule the province until his death in 704/05. Another son, Muhammad, was appointed to suppress Zufar's rebellion in the Jazira. Marwan died in April 685 and was succeeded by his eldest son Abd al-Malik. Although Ibn Ziyad attempted to restore the Syrian army of the Sufyanid caliphs, persistent divisions along Qays–Yaman lines contributed to the army's massive rout and Ibn Ziyad's death at the hands of the pro-Alid forces of Mukhtar al-Thaqafi of Kufa at the Battle of Khazir in August 686. The setback delayed Abd al-Malik's attempts to reestablish Umayyad authority in Iraq, while pressures from the Byzantine Empire and raids into Syria by the Byzantines' Mardaite allies compelled him to sign a peace treaty with Byzantium in 689 which substantially increased the Umayyads' annual tribute to the Empire. During his siege of Circesium in 691, Abd al-Malik reconciled with Zufar and the Qays by offering them privileged positions in the Umayyad court and army, signaling a new policy by the caliph and his successors to balance the interests of the Qays and Yaman in the Umayyad state. With his unified army, Abd al-Malik marched against the Zubayrids of Iraq, having already secretly secured the defection of the province's leading tribal chiefs, and defeated Iraq's ruler, Ibn al-Zubayr's brother Mus'ab, at the Battle of Maskin in 691. Afterward, the Umayyad commander al-Hajjaj ibn Yusuf besieged Mecca and killed Ibn al-Zubayr in 692, marking the end of the Second Fitna and the reunification of the caliphate under Abd al-Malik's rule.
Domestic consolidation and centralization
Iraq remained politically unstable and the garrisons of Kufa and Basra had become exhausted by warfare with Kharijite rebels. In 694 Abd al-Malik combined both cities as a single province under the governorship of al-Hajjaj, who oversaw the suppression of the Kharijite revolts in Iraq and Iran by 698 and was subsequently given authority over the rest of the eastern caliphate. Resentment among the Iraqi troops towards al-Hajjaj's methods of governance, particularly his death threats to force participation in the war efforts and his reductions to their stipends, culminated in a mass Iraqi rebellion against the Umayyads in 700–701. The leader of the rebels was the Kufan nobleman Ibn al-Ash'ath, grandson of al-Ash'ath ibn Qays. Al-Hajjaj defeated Ibn al-Ash'ath's rebels at the Battle of Dayr al-Jamajim in April 701. The suppression of the revolt marked the end of the Iraqi muqātila as a military force and the beginning of Syrian military domination of Iraq. Iraqi internal divisions, and the utilization of more disciplined Syrian forces by Abd al-Malik and al-Hajjaj, thwarted the Iraqis' attempt to reassert power in the province.
To consolidate Umayyad rule after the Second Fitna, the Marwanids launched a series of centralization, Islamization and Arabization measures. These measures included the creation of multiple classes of Arabic-inscribed administrative media as a way to proliferate their particular political, cultural, and religious disposition to both Arab and non-Arab audiences. To prevent further rebellions in Iraq, al-Hajjaj founded a permanent Syrian garrison in Wasit, situated between Kufa and Basra, and instituted a more rigorous administration in the province. Power thereafter derived from the Syrian troops, who became Iraq's ruling class, while Iraq's Arab nobility, religious scholars and mawālī became their virtual subjects. The surplus from the agriculturally rich Sawad lands was redirected from the muqātila to the caliphal treasury in Damascus to pay the Syrian troops in Iraq. The system of military pay established by Umar, which paid stipends to veterans of the earlier Muslim conquests and their descendants, was ended, salaries being restricted to those in active service. The old system was considered a handicap on Abd al-Malik's executive authority and financial ability to reward loyalists in the army. Thus, a professional army was established during Abd al-Malik's reign whose salaries derived from tax proceeds.
In 693, the Byzantine gold solidus was replaced in Syria and Egypt with the dinar. Initially, the new coinage contained depictions of the caliph as the spiritual leader of the Muslim community and its supreme military commander. This image proved unacceptable to Muslim officialdom and was replaced in 696 or 697 with image-less coinage inscribed with Qur'anic quotes and other Muslim religious formulas. In 698/699, similar changes were made to the silver dirhams issued by the Muslims in the former Sasanian Persian lands of the eastern caliphate. Arabic replaced Persian as the language of the dīwān in Iraq in 697, Greek in the Syrian dīwān in 700, and Greek and Coptic in the Egyptian dīwān in 705/706. Arabic ultimately became the sole official language of the Umayyad state, but the transition in faraway provinces, such as Khurasan, did not occur until the 740s. Although the official language was changed, Greek- and Persian-speaking bureaucrats who were versed in Arabic kept their posts. According to Gibb, the decrees were the "first step towards the reorganization and unification of the diverse tax-systems in the provinces, and also a step towards a more definitely Muslim administration". Indeed, they formed an important part of the Islamization measures that lent the Umayyad Caliphate "a more ideological and programmatic coloring it had previously lacked", according to Blankinship.
In 691/692, Abd al-Malik completed the Dome of the Rock in Jerusalem. It was possibly intended as a monument of victory over the Christians that would distinguish Islam's uniqueness within the common Abrahamic setting of Jerusalem, home of the two older Abrahamic faiths, Judaism and Christianity. An alternative motive may have been to divert the religious focus of Muslims in the Umayyad realm from the Ka'aba in Zubayrid Mecca (683–692), where the Umayyads were routinely condemned during the Hajj. In Damascus, Abd al-Malik's son and successor al-Walid I (705–715) confiscated the cathedral of St. John the Baptist and founded the Great Mosque in its place as a "symbol of the political supremacy and moral prestige of Islam", according to historian Nikita Elisséeff. Noting al-Walid's awareness of architecture's propaganda value, historian Robert Hillenbrand calls the Damascus mosque a "victory monument" intended as a "visible statement of Muslim supremacy and permanence".
Renewal of conquests
Under al-Walid I the Umayyad Caliphate reached its greatest territorial extent. The war with the Byzantines had resumed under his father after the civil war, with the Umayyads defeating the Byzantines at the Battle of Sebastopolis in 692. The Umayyads frequently raided Byzantine Anatolia and Armenia in the following years. By 705, Armenia was annexed by the caliphate along with the principalities of Caucasian Albania and Iberia, which collectively became the province of Arminiya. In 695–698 the commander Hassan ibn al-Nu'man al-Ghassani restored Umayyad control over Ifriqiya after defeating the Byzantines and Berbers there. Carthage was captured and destroyed in 698, signaling "the final, irretrievable end of Roman power in Africa", according to Kennedy. Kairouan was firmly secured as a launchpad for later conquests, while the port town of Tunis was founded and equipped with an arsenal on Abd al-Malik's orders to establish a strong Arab fleet. Hassan ibn al-Nu'man continued the campaign against the Berbers, defeating them and killing their leader, the warrior queen al-Kahina, between 698 and 703. His successor in Ifriqiya, Musa ibn Nusayr, subjugated the Berbers of the Hawwara, Zenata and Kutama confederations and advanced into the Maghreb (western North Africa), conquering Tangier and Sus in 708/709. Musa's Berber mawla, Tariq ibn Ziyad, invaded the Visigothic Kingdom of Hispania (the Iberian Peninsula) in 711 and within five years most of Hispania was conquered.
Al-Hajjaj managed the eastern expansion from Iraq. His lieutenant governor of Khurasan, Qutayba ibn Muslim, launched numerous campaigns against Transoxiana (Central Asia), which had been a largely impenetrable region for earlier Muslim armies, between 705 and 715. Despite the distance from the Arab garrison towns of Khurasan, the unfavorable terrain and climate and his enemies' numerical superiority, Qutayba, through his persistent raids, gained the surrender of Bukhara in 706–709, Khwarazm and Samarkand in 711–712 and Farghana in 713. During his campaigns to conquer the Bukharan territories of Numushkat and Ramithna in 707 CE (88 AH), Qutayba faced a coalition force of Turks and the Tang Empire, reportedly numbering some 200,000 soldiers from Ferghana and Sogdiana and led by Kur Maghayun, whom the sources identify as the Chinese emperor's nephew. After a heavy battle, Qutayba defeated the coalition army, driving its commander to retreat, and then led his army back to his base at Merv. He established Arab garrisons and tax administrations in Samarkand and Bukhara and demolished their Zoroastrian fire temples. Both cities developed as future centers of Islamic and Arabic learning. Umayyad suzerainty was secured over the rest of conquered Transoxiana through tributary alliances with local rulers, whose power remained intact. From 708/709, al-Hajjaj's kinsman Muhammad ibn al-Qasim conquered northwestern South Asia and established out of this new territory the province of Sind. The massive war spoils netted by the conquests of Transoxiana, Sind and Hispania were comparable to the amounts accrued in the early Muslim conquests during the reign of Caliph Umar.
Al-Walid I's successor, his brother Sulayman (715–717), continued his predecessors' militarist policies, but expansion mostly ground to a halt during his reign. The deaths of al-Hajjaj in 714 and Qutayba in 715 left the Arab armies in Transoxiana in disarray. For the next 25 years, no further eastward conquests were undertaken and the Arabs lost territory. The Tang Chinese defeated the Arabs at the Battle of Aksu in 717, forcing their withdrawal to Tashkent. Meanwhile, in 716, the governor of Khurasan, Yazid ibn al-Muhallab, attempted to conquer the principalities of Jurjan and Tabaristan along the southern Caspian coast. His Khurasani and Iraqi troops were reinforced by Syrians, marking their first deployment to Khurasan, but the Arabs' initial successes were reversed by the local Iranian coalition of Farrukhan the Great. Afterward, the Arabs withdrew in return for a tributary agreement.
On the Byzantine front, Sulayman took up his predecessor's project to capture Constantinople with increased vigor. His brother Maslama besieged the Byzantine capital from the land, while Umar ibn Hubayra al-Fazari launched a naval campaign against the city. The Byzantines destroyed the Umayyad fleets and defeated Maslama's army, prompting his withdrawal to Syria in 718. The massive losses incurred during the campaign led to a partial retrenchment of Umayyad forces from the captured Byzantine frontier districts, but already in 720, Umayyad raids against Byzantium recommenced. Nevertheless, the goal of conquering Constantinople was effectively abandoned, and the frontier between the two empires stabilized along the line of the Taurus and Anti-Taurus Mountains, over which both sides continued to launch regular raids and counter-raids during the next centuries.
Caliphate of Umar ibn Abd al-Aziz
Contrary to expectations of a son or brother succeeding him, Sulayman had nominated his cousin, Umar ibn Abd al-Aziz, as his successor and he took office in 717. After the Arabs' severe losses in the offensive against Constantinople, Umar drew down Arab forces on the caliphate's war fronts, though Narbonne in modern France was conquered during his reign. To maintain stronger oversight in the provinces, Umar dismissed all his predecessors' governors, his new appointees being generally competent men he could control. To that end, the massive viceroyalty of Iraq and the east was broken up.
Umar's most significant policy entailed fiscal reforms to equalize the status of the Arabs and mawali, thus remedying a long-standing issue which threatened the Muslim community. The jizya (poll tax) on the mawali was eliminated. Hitherto, the jizya, which was traditionally reserved for the non-Muslim majorities of the caliphate, continued to be imposed on non-Arab converts to Islam, while all Muslims who cultivated conquered lands were liable to pay the kharāj (land tax). Since avoidance of taxation incentivized both mass conversions to Islam and abandonment of land for migration to the garrison cities, it put a strain on tax revenues, especially in Egypt, Iraq and Khurasan. Thus, "the Umayyad rulers had a vested interest in preventing the conquered peoples from accepting Islam or forcing them to continue paying those taxes from which they claimed exemption as Muslims", according to Hawting. To prevent a collapse in revenue, the converts' lands would become the property of their villages and remain liable for the full rate of the kharāj.
In tandem, Umar intensified the Islamization drive of his Marwanid predecessors, enacting measures to distinguish Muslims from non-Muslims and inaugurating Islamic iconoclasm. His position among the Umayyad caliphs is unusual, in that he became the only one to have been recognized in subsequent Islamic tradition as a righteous and legitimate caliph (khalifa) and not merely someone who was a worldly king (malik).
Late Marwanid period
After the death of Umar II, another son of Abd al-Malik, Yazid II (720–724), became caliph. Not long after his accession, another mass revolt against Umayyad rule was staged in Iraq, this time by the prominent statesman Yazid ibn al-Muhallab. The latter declared a holy war against the Umayyads, took control of Basra and Wasit and gained the support of the Kufan elite. The caliph's Syrian army defeated the rebels and pursued and nearly eliminated the influential Muhallabids, marking the suppression of the last major Iraqi revolt against the Umayyads.
Yazid II reversed Umar II's equalization reforms, reimposing the jizya on the mawali, which sparked revolts in Khurasan in 721 or 722 that persisted for some twenty years, and met strong resistance among the Berbers of Ifriqiya, where the Umayyad governor was assassinated by his discontented Berber guards. Warfare on the frontiers was also resumed, with renewed annual raids against the Byzantines and the Khazars in Transcaucasia.
Caliphate of Hisham and end of expansion
The final son of Abd al-Malik to become caliph was Hisham (724–743), whose long and eventful reign was above all marked by the curtailment of military expansion. Hisham established his court at Resafa in northern Syria, which was closer to the Byzantine border than Damascus, and resumed hostilities against the Byzantines, which had lapsed following the failure of the last siege of Constantinople. The new campaigns resulted in a number of successful raids into Anatolia, but also in a major defeat (the Battle of Akroinon), and did not lead to any significant territorial expansion.
From the caliphate's north-western African bases, a series of raids on coastal areas of the Visigothic Kingdom had paved the way to the permanent occupation of most of Iberia by the Umayyads (starting in 711), and on into south-eastern Gaul (last stronghold at Narbonne in 759). Hisham's reign witnessed the end of expansion in the west, following the defeat of the Arab army by the Franks at the Battle of Tours in 732. Arab expansion had already been limited following the Battle of Toulouse in 721. In 739 a major Berber Revolt broke out in North Africa, which was probably the largest military setback in the reign of Caliph Hisham. From it emerged some of the first Muslim states outside the caliphate. It is also regarded as the beginning of Moroccan independence, as Morocco would never again come under the rule of an eastern caliph or any other foreign power until the 20th century. It was followed by the collapse of Umayyad authority in al-Andalus. In India, the Umayyad armies were defeated by the south Indian Chalukya dynasty and by the north Indian Pratiharas, halting further Arab expansion eastwards.
In the Caucasus, the confrontation with the Khazars peaked under Hisham: the Arabs established Derbent as a major military base and launched several invasions of the northern Caucasus, but failed to subdue the nomadic Khazars. The conflict was arduous and bloody, and the Arab army even suffered a major defeat at the Battle of Marj Ardabil in 730. Marwan ibn Muhammad, the future Marwan II, finally ended the war in 737 with a massive invasion that is reported to have reached as far as the Volga, but the Khazars remained unsubdued.
Hisham suffered still worse defeats in the east, where his armies attempted to subdue both Tokharistan, with its centre at Balkh, and Transoxiana, with its centre at Samarkand. Both areas had already been partially conquered but remained difficult to govern. Once again, a particular difficulty concerned the question of the conversion of non-Arabs, especially the Sogdians of Transoxiana. Following the Umayyad defeat in the "Day of Thirst" in 724, Ashras ibn 'Abd Allah al-Sulami, governor of Khurasan, promised tax relief to those Sogdians who converted to Islam but went back on his offer when it proved too popular and threatened to reduce tax revenues from the province.
Discontent among the Khorasani Arabs rose sharply after the losses suffered in the Battle of the Defile in 731. In 734, al-Harith ibn Surayj led a revolt that received broad backing from Arab settlers and native inhabitants alike, capturing Balkh but failing to take Merv. After this defeat, al-Harith's movement seems to have been dissolved. The problem of the rights of non-Arab Muslims would continue to plague the Umayyads to their end.
Third Fitna
Hisham was succeeded by Al-Walid II (743–744), the son of Yazid II. Al-Walid is reported to have been more interested in earthly pleasures than in religion, a reputation that may be confirmed by the decoration of the so-called "desert palaces" (including Qusayr Amra and Khirbat al-Mafjar) that have been attributed to him. He quickly attracted the enmity of many, both by executing a number of those who had opposed his accession and by persecuting the Qadariyya.
In 744, Yazid III, a son of al-Walid I, was proclaimed caliph in Damascus, while his army killed al-Walid II. Yazid III has received a certain reputation for piety and may have been sympathetic to the Qadariyya. He died a mere six months into his reign.
Yazid had appointed his brother, Ibrahim, as his successor, but Marwan II (744–750), the grandson of Marwan I, led an army from the northern frontier and entered Damascus in December 744, where he was proclaimed caliph. Marwan immediately moved the capital north to Harran, in present-day Turkey. A rebellion soon broke out in Syria, perhaps due to resentment over the relocation of the capital, and in 746 Marwan razed the walls of Homs and Damascus in retaliation.
Marwan also faced significant opposition from Kharijites in Iraq and Iran, who put forth first Dahhak ibn Qays and then Abu Dulaf as rival caliphs. In 747, Marwan managed to reestablish control of Iraq, but by this time a more serious threat had arisen in Khorasan.
Abbasid Revolution and fall
The Hashimiyya movement (a sub-sect of the Kaysanites Shia), led by the Abbasid family, overthrew the Umayyad caliphate. The Abbasids were members of the Hashim clan, rivals of the Umayyads, but the word "Hashimiyya" seems to refer specifically to Abu Hashim, a grandson of Ali and son of Muhammad ibn al-Hanafiyya. According to certain traditions, Abu Hashim died in 717 in Humeima in the house of Muhammad ibn Ali, the head of the Abbasid family, and before dying named Muhammad ibn Ali as his successor. This tradition allowed the Abbasids to rally the supporters of the failed revolt of Mukhtar, who had represented themselves as the supporters of Muhammad ibn al-Hanafiyya.
Beginning around 719, Hashimiyya missions began to seek adherents in Khurasan. Their campaign was framed as one of proselytism (dawah). They sought support for a "member of the family" of Muhammad, without making explicit mention of the Abbasids. These missions met with success both among Arabs and non-Arabs (mawali), although the latter may have played a particularly important role in the growth of the movement.
Around 746, Abu Muslim assumed leadership of the Hashimiyya in Khurasan. In 747, he successfully initiated an open revolt against Umayyad rule, which was carried out under the sign of the black flag. He soon established control of Khurasan, expelling its Umayyad governor, Nasr ibn Sayyar, and dispatched an army westwards. Kufa fell to the Hashimiyya in 749, the last Umayyad stronghold in Iraq, Wasit, was placed under siege, and in November of the same year Abul Abbas as-Saffah was recognized as the new caliph in the mosque at Kufa. At this point Marwan mobilized his troops from Harran and advanced toward Iraq. In January 750 the two forces met in the Battle of the Zab, and the Umayyads were defeated. Damascus fell to the Abbasids in April, and in August, Marwan was killed in Egypt. Some Umayyads in Syria continued to resist the takeover. The Umayyad princes Abu Muhammad al-Sufyani, al-Abbas ibn Muhammad, and Hashim ibn Yazid launched revolts in Syria and the Islamic–Byzantine frontier around late 750, but they were defeated.
The victors desecrated the tombs of the Umayyads in Syria, sparing only that of Umar II, and most of the remaining members of the Umayyad family were tracked down and killed. When the Abbasids declared an amnesty for members of the Umayyad family, eighty gathered to receive pardons, and all were massacred. One grandson of Hisham, Abd al-Rahman I, survived, escaped across North Africa, and established an emirate in Moorish Iberia (Al-Andalus). In a claim unrecognized outside al-Andalus, he maintained that the true, authentic Umayyad Caliphate, more legitimate than that of the Abbasids, continued through him in Córdoba. His dynasty in Córdoba was to survive for centuries.
Some Umayyads also survived in Syria, and their descendants would once more attempt to restore their old regime during the Fourth Fitna. Two Umayyads, Abu al-Umaytir al-Sufyani and Maslama ibn Ya'qub, successively seized control of Damascus from 811 to 813, and declared themselves caliphs. However, their rebellions were suppressed.
Previté-Orton argues that the reason for the decline of the Umayyads was the rapid expansion of Islam. During the Umayyad period, mass conversions brought Persians, Berbers, Copts, and Aramaeans to Islam. These mawalis (clients) were often better educated and more civilised than their Arab overlords. The new converts, on the basis of equality of all Muslims, transformed the political landscape. Previté-Orton also argues that the feud between Syria and Iraq further weakened the empire.
Administration
The early Umayyad caliphs created a stable administration for the empire, following the administrative practices and political institutions of the Byzantine Empire which had ruled the same region previously. These consisted of four main governmental branches: political affairs, military affairs, tax collection, and religious administration. Each of these was further subdivided into more branches, offices, and departments.
Provinces
Geographically, the empire was divided into several provinces, the borders of which changed numerous times during the Umayyad reign. Each province had a governor appointed by the caliph. The governor was in charge of the religious officials, army leaders, police, and civil administrators in his province. Local expenses were paid for by taxes coming from that province, with the remainder each year being sent to the central government in Damascus. As the central power of the Umayyad rulers waned in the later years of the dynasty, some governors neglected to send the extra tax revenue to Damascus and created great personal fortunes.
Government workers
As the empire grew, the number of qualified Arab workers was too small to keep up with its rapid expansion. Therefore, Mu'awiya allowed many of the local government workers in conquered provinces to keep their jobs under the new Umayyad government. Thus, much of the local government's work was recorded in Greek, Coptic, and Persian. It was only during the reign of Abd al-Malik that government work began to be regularly recorded in Arabic.
Military
The Umayyad army was mainly Arab, with its core consisting of those who had settled in urban Syria and the Arab tribes who originally served in the army of the Eastern Roman Empire in Syria. These were supported by tribes in the Syrian desert and on the frontier with the Byzantines, as well as Christian Syrian tribes. Soldiers were registered with the Army Ministry, the Diwan Al-Jaysh, and were salaried. The army was divided into junds based on regional fortified cities. The Umayyad Syrian forces specialized in close-order infantry warfare and favored using a kneeling spear-wall formation in battle, probably as a result of their encounters with Roman armies. This was radically different from the original Bedouin style of mobile and individualistic fighting.
Coinage
The Byzantine and Sasanian Empires relied on money economies before the Muslim conquest, and that system remained in effect during the Umayyad period. Byzantine coinage was used until 658; Byzantine gold coins were still in use until the monetary reforms of the 690s. In addition to this, the Umayyad government began to mint its own coins in Damascus, which were initially similar to pre-existing coins but evolved in an independent direction. These were the first coins minted by a Muslim government in history.
Early Islamic coins re-used Byzantine and Sasanian iconography directly but added new Islamic elements. So-called "Arab-Byzantine" coins replicated Byzantine coins and were minted in Levantine cities before and after the Umayyads rose to power. Some examples of these coins, likely minted in Damascus, copied the coins of Byzantine emperor Heraclius, including a depiction of the emperor and his son Heraclius Constantine. On the reverse side, the traditional Byzantine cross-on-steps image was modified to avoid any explicitly non-Islamic connotation.
In the 690s, under Abd al-Malik's reign, a new period of experimentation began. Some "Arab-Sasanian" coins dated between 692 and 696, associated with the mints in Iraq under governor Bishr ibn Marwan, stopped using the Sasanian image of the fire altar and replaced it with three male figures standing in Arab dress. This was possibly an attempt to depict the act of Muslim prayer or the delivery of the khutba (Friday sermon). Another coin minted probably between 695 and 698 features the image of a spear under an arch. This has been variously interpreted as representing a mihrab or a "sacral arch", the latter being a late antique motif. The spear is believed to be the spear ('anaza) that Muhammad carried before him when entering the mosque.
Between 696 and 699, the caliph introduced a new system of coinage of gold, silver, and bronze. The coins generally featured Arabic inscriptions without any images, ending the earlier iconographic traditions. The main gold unit was the dinar (from Roman denarius), which was worth 20 silver coins. It was most likely modeled on the Byzantine solidus. The silver coin was called a dirham (from Greek drachma). Its size and shape were based on Sasanian coins, and dirhams were minted in much larger quantities than in the earlier Byzantine era. The bronze coin was called a fals or fulus (from Byzantine follis).
One group of bronze coins from Palestine, dated after the coinage reform of the late 690s, features the image of a seven-branched menorah and then later of a five-branched menorah, topped by an Arabic inscription of the shahada. These images may have been based on Christian representations of the menorah or on earlier Hasmonean models. The switch to a five-branched version may have been intended to further differentiate this depiction from Jewish and Christian versions.
Central diwans
To assist the caliph in administration there were six boards at the centre: Diwan al-Kharaj (the Board of Revenue), Diwan al-Rasa'il (the Board of Correspondence), Diwan al-Khatam (the Board of Signet), Diwan al-Barid (the Board of Posts), Diwan al-Qudat (the Board of Justice) and Diwan al-Jund (the Military Board).
Diwan al-Kharaj
The Central Board of Revenue administered the entire finances of the central government. It also imposed and collected taxes from the empire and disbursed the revenue of the state.
Diwan al-Rasa'il
A regular Board of Correspondence was established under the Umayyads. It issued state missives and circulars to the Central and Provincial Officers. It coordinated the work of all Boards and dealt with all correspondence as the chief secretariat.
Diwan al-Khatam
In order to reduce forgery, the Diwan al-Khatam (Bureau of Registry), a kind of state chancellery, was instituted by Mu'awiyah. It made and preserved a copy of each official document before sealing and dispatching the original to its destination. Thus, in the course of time, the Umayyads developed a state archive in Damascus under Abd al-Malik. Under the Umayyads, lead seals bearing Arabic text became an important tool in the construction of a distinct Arab and Muslim political entity and were critical to the proliferation of the state's Islamic orientation.
Diwan al-Barid
Mu'awiya introduced the postal service, Abd al-Malik extended it throughout his empire, and Walid made full use of it. Umar ibn Abd al-Aziz developed it further by building caravanserais at stages along the Khurasan highway. Relays of horses were used for the conveyance of dispatches between the caliph and his agents and officials posted in the provinces. The main highways were divided into stages, and each stage had horses, donkeys, or camels ready to carry the post. The service primarily met the needs of government officials, but travelers and their important dispatches also benefited from the system. The postal carriages were also used for the swift transport of troops; they were able to carry fifty to a hundred men at a time. Under governor Yusuf ibn Umar al-Thaqafi, the postal department of Iraq cost 4,000,000 dirhams a year.
Diwan al-Qudat
In the early period of Islam, justice was administered by Muhammad and the Rashidun caliphs in person. After the expansion of the Caliphate, Umar I had to separate the judiciary from the general administration and appointed the first qadi in Egypt as early as AD 643/23 AH. After 661, a series of judges served in Egypt during the caliphates of Hisham and Walid II.
Diwan al-Jund
The Diwan of Umar, assigning annuities to all Arabs and to the Muslim soldiers of other races, underwent a change in the hands of the Umayyads. The Umayyads meddled with the register, and the recipients came to regard their pensions as a subsistence allowance even when not in active service. Hisham reformed it and paid stipends only to those who participated in battle. On the pattern of the Byzantine system, the Umayyads reformed their army organization in general and divided it into five corps: the center, two wings, vanguards, and rearguards, following the same formation while on the march or on a battlefield. Marwan II (744–750) abandoned the old division and introduced the Kurdus (cohort), a small compact body. The Umayyad troops were divided into three divisions: infantry, cavalry, and artillery. Arab troops were dressed and armed in Greek fashion. The Umayyad cavalry used plain and round saddles. The artillery used the arradah (ballista), the manjaniq (mangonel), and the dabbabah or kabsh (battering ram). The heavy engines, siege machines, and baggage were carried on camels behind the army.
Social organization
The Umayyad Caliphate had four main social classes:
Muslim Arabs
Muslim non-Arabs (clients of the Muslim Arabs)
Dhimmis (non-Muslim free persons such as Christians, Jews and Zoroastrians)
Slaves
The Muslim Arabs were at the top of society and saw it as their duty to rule over the conquered areas. The Arab Muslims held themselves in higher esteem than Muslim non-Arabs and generally did not mix with other Muslims.
As Islam spread, more and more of the Muslim population consisted of non-Arabs. This caused social unrest, as the new converts were not given the same rights as Muslim Arabs. As conversions increased, tax revenues from non-Muslims also decreased to dangerous lows. These issues continued to worsen until they helped cause the Abbasid Revolution in the 740s.
Non-Muslims
Non-Muslim groups in the Umayyad Caliphate, which included Christians, Jews, Zoroastrians, and pagans, were called dhimmis. They were given a legally protected status as second-class citizens as long as they accepted and acknowledged the political supremacy of the ruling Muslims. More specifically, non-Muslims had to pay a tax, known as jizya, which the Muslims did not have to pay; Muslims would instead pay the zakat tax. If non-Muslims converted to Islam, they would cease paying jizya and would instead pay zakat.
Although the Umayyads were harsh when it came to defeating their Zoroastrian adversaries, they did offer protection and relative religious tolerance to the Zoroastrians who accepted their authority. In fact, Umar II is reported to have commanded in one of his letters not to "destroy a synagogue or a church or temple of fire worshippers (meaning the Zoroastrians) as long as they have reconciled with and agreed upon with the Muslims" (recorded by Ibn Abu Shayba in Al-Musanaf and Abu 'Ubaid Ibn Sallam in his book Al-Amwal, p. 123). Fred Donner says that Zoroastrians in the northern parts of Iran were hardly penetrated by the "believers", winning virtually complete autonomy in return for a tribute-tax, or jizya. Donner adds "Zoroastrians continued to exist in large numbers in northern and western Iran and elsewhere for centuries after the rise of Islam, and indeed, much of the canon of Zoroastrian religious texts was elaborated and written down during the Islamic period."
Christians and Jews still continued to produce great theological thinkers within their communities, but as time wore on, many of the intellectuals converted to Islam, leading to a lack of great thinkers in the non-Muslim communities. Important Christian writers from the Umayyad period include the theologian John of Damascus, bishop Cosmas of Maiuma, Pope Benjamin I of Alexandria and Isaac of Nineveh.
Although non-Muslims could not hold the highest public offices in the empire, they held many bureaucratic positions within the government. An important example of Christian employment in the Umayyad government is that of Sarjun ibn Mansur. He was a Melkite Christian official of the early Umayyad Caliphate. The son of a prominent Byzantine official of Damascus, he was a favourite of the early Umayyad caliphs Mu'awiya I and Yazid I, and served as the head of the fiscal administration for Syria from the mid-7th century until the year 700, when Caliph Abd al-Malik ibn Marwan dismissed him as part of his efforts to Arabicize the administration of the caliphate. According to the Muslim historians al-Baladhuri and al-Tabari, Sarjun was a mawla of the first Umayyad caliph, Mu'awiya I (661–680), serving as his "secretary and the person in charge of his business". The hagiographies, although less reliable, also assign to him a role in the administration, even as "ruler" (archon or even amir), of Damascus and its environs, where he was responsible for collecting the revenue. In this capacity, he is attested in later collections of source material such as that of al-Mas'udi. Sarjun ibn Mansur was replaced by Sulayman ibn Sa'd al-Khushani, another Christian.
Mu'awiya's marriage to Maysun bint Bahdal (Yazid's mother) was politically motivated, as she was the daughter of the chief of the Kalb tribe, a large Syriac Orthodox Christian Arab tribe in Syria. The Kalb tribe had remained largely neutral when the Muslims first went into Syria. After the plague that killed much of the Muslim army in Syria, Mu'awiya, by marrying Maysun, was able to use the Syriac Orthodox Christians against the Byzantines.
Tom Holland writes that Christians, Jews, Samaritans and Manichaeans were all treated well by Mu'awiya. Mu'awiya even restored Edessa's cathedral after it had been toppled by an earthquake. Holland also writes that, "Savagely though Mu'awiya prosecuted his wars against the Romans, yet his subjects, no longer trampled by rival armies, no longer divided by hostile watchtowers, knew only peace at last. Justice flourished in his time, and there was great peace in the regions under his control. He allowed everyone to live as they wanted."
Architecture
The Umayyads constructed grand congregational mosques and palaces within their empire. Most of their surviving monuments are located in the Levant region, their main base of power. They also continued the existing Muslim policy of building new garrison cities (amsar) in their provinces that served as bases for further expansion. Their most famous constructions include the Dome of the Rock in Jerusalem and the Great Mosque of Damascus, while other constructions included desert palaces, such as Khirbat al-Mafjar and Qusayr 'Amra. Among these projects, the construction of the Great Mosque in Damascus reflected the diversity of the empire, as Greek, Persian, Coptic, Indian and Maghrebi craftsmen were recruited to build it.
Under Umayyad patronage, Islamic architecture was derived from established Byzantine and Sasanian architectural traditions, but it also innovated by combining elements of these styles together, experimenting with new building types, and implementing lavish decorative programs. Byzantine-style mosaics are prominently featured in both the Dome of the Rock and the Great Mosque of Damascus, but the lack of human figures in their imagery was a new trait that demonstrates an Islamic taboo on figural representation in religious art. Palaces were decorated with floor mosaics, frescoes, and relief carving, and some of these included representations of human figures and animals. Umayyad architecture was thus an important transitional period during which early Islamic architecture and visual culture began to develop its own distinct identity.
The later offshoot of the Umayyad dynasty in al-Andalus, which ruled the Emirate and subsequent Caliphate of Córdoba, also undertook major architectural projects in the Iberian Peninsula such as the Great Mosque of Córdoba and Madinat al-Zahra, which influenced later architecture in the western Islamic world.
Legacy
The Umayyad Caliphate was marked both by territorial expansion and by the administrative and cultural problems that such expansion created. Despite some notable exceptions, the Umayyads tended to favor the rights of the old Arab elite families, and in particular their own, over those of newly converted Muslims (mawali). Therefore, they held to a less universalist conception of Islam than did many of their rivals. As G.R. Hawting has written, "Islam was in fact regarded as the property of the conquering aristocracy."
During the period of the Umayyads, Arabic became the administrative language and the process of Arabization was initiated in the Levant, Mesopotamia, North Africa, and Iberia. State documents and currency were issued in Arabic. Conversions to Islam also created a growing population of Muslims in the territory of the caliphate.
According to one common view, the Umayyads transformed the caliphate from a religious institution (during the Rashidun Caliphate) to a dynastic one. However, the Umayyad caliphs do seem to have understood themselves as the representatives of God on earth, and to have been responsible for the "definition and elaboration of God's ordinances, or in other words the definition or elaboration of Islamic law."
The Umayyads have met with a largely negative reception from later Islamic historians, who have accused them of promoting a kingship (mulk, a term with connotations of tyranny) instead of a true caliphate (khilafa). In this respect it is notable that the Umayyad caliphs referred to themselves not as khalifat rasul Allah ("successor of the messenger of God", the title preferred by the tradition), but rather as khalifat Allah ("deputy of God"). The distinction seems to indicate that the Umayyads "regarded themselves as God's representatives at the head of the community and saw no need to share their religious power with, or delegate it to, the emergent class of religious scholars." In fact, it was precisely this class of scholars, based largely in Iraq, that was responsible for collecting and recording the traditions that form the primary source material for the history of the Umayyad period. In reconstructing this history, therefore, it is necessary to rely mainly on sources, such as the histories of Tabari and Baladhuri, that were written in the Abbasid court at Baghdad.
The book Al-Muwatta, by Imam Malik, was written in the early Abbasid period in Medina. It does not contain any anti-Umayyad content because it was concerned with what the Quran and Muhammad said rather than with the history of the Umayyads. Even the earliest pro-Shia accounts of al-Masudi are more balanced. Al-Masudi's Ibn Hisham is the earliest Shia account of Mu'awiya. He recounted that Mu'awiya spent a great deal of time in prayer, in spite of the burden of managing a large empire (Muawiya Restorer of the Muslim Faith by Aisha Bewley, page 41). After the Abbasids killed off most of the Umayyads and destroyed the graves of the Umayyad rulers apart from those of Mu'awiya and Umar ibn Abd al-Aziz, the history books written during the later Abbasid period are more anti-Umayyad. The books written later in the Abbasid period in Iran are still more anti-Umayyad, despite Iran being Sunni at the time; there was much anti-Arab sentiment in Iran after the fall of the Persian Empire.
Modern Arab nationalism regards the period of the Umayyads as part of the Arab Golden Age which it seeks to emulate. This is particularly true of Syrian nationalists and the present-day state of Syria, centered like that of the Umayyads on Damascus. The Umayyad banners were white, after the banner of Mu'awiya ibn Abi Sufyan; white is now one of the four Pan-Arab colors which appear in various combinations on the flags of most Arab countries.
Religious perspectives
Sunni
Some Muslims criticized the Umayyads for having too many non-Muslim, former Roman administrators in their government. As the Muslims took over cities, they left the people's political representatives, the Roman tax collectors, and the administrators in office. The taxes to the central government were calculated and negotiated by the people's political representatives. Both the central and local governments were compensated for the services each provided. Many Christian cities used some of the taxes to maintain their churches and run their own organizations. Later, the Umayyads were criticized by many Muslims for not reducing the taxes of the people who converted to Islam.
Later, when Umar ibn Abd al-Aziz came to power, he reduced these taxes. He is therefore praised as one of the greatest Muslim rulers after the four Rashidun caliphs. Imam Abu Muhammad Abdullah ibn Abdul Hakam (who died in 829 and wrote a biography of Umar ibn Abd al-Aziz, Umar Ibn Abdul Aziz, Zam Zam Publishers, Karachi) stated that the reduction in these taxes stimulated the economy and created wealth, but it also reduced the government's budget, including eventually the defense budget.
The only Umayyad ruler who is unanimously praised by Sunni sources for his devout piety and justice is Umar ibn Abd al-Aziz. In his efforts to spread Islam, he established liberties for the mawali by abolishing the jizya tax for converts to Islam. Imam Abu Muhammad Abdullah ibn Abdul Hakam stated that Umar ibn Abd al-Aziz also stopped the personal allowance offered to his relatives, stating that he could only give them an allowance if he gave one to everyone else in the empire. After Umar ibn Abd al-Aziz was poisoned in 720, successive governments tried to reverse his tax policies, but rebellion resulted.
Shias
The negative view of the Umayyads held by Shias is briefly expressed in the Shi'a book "Sulh al-Hasan". According to Shia hadiths, which are not considered authentic by Sunnis, Ali described them as the worst Fitna. In Shia sources, the Umayyad Caliphate is widely described as "tyrannical, anti-Islamic and godless". Shias point out that the founder of the dynasty, Muawiyah, declared himself a caliph in 657 and went to war against Muhammad's son-in-law and cousin, the ruling caliph Ali, clashing at the Battle of Siffin. Muawiyah also declared his son, Yazid, as his successor in breach of a treaty with Hassan, Muhammad's grandson. Another of Muhammad's grandsons, Husayn ibn Ali, would be killed by Yazid in the Battle of Karbala. Further Shia Imams, such as Ali ibn Husayn Zayn al-Abidin, would also be killed at the hands of ruling Umayyad caliphs.
Bahais
Asked for an explanation of the prophecies in the Book of Revelation (12:3), `Abdu'l-Bahá suggests in Some Answered Questions that the "great red dragon, having seven heads and ten horns, and seven crowns upon his heads", refers to the Umayyad caliphs who "rose against the religion of Prophet Muhammad and against the reality of Ali".
The seven heads of the dragon are symbolic of the seven provinces of the lands dominated by the Umayyads: Damascus, Persia, Arabia, Egypt, Africa, Andalusia, and Transoxiana. The ten horns represent the ten names of the leaders of the Umayyad dynasty: Abu Sufyan, Muawiya, Yazid, Marwan, Abd al-Malik, Walid, Sulayman, Umar, Hisham, and Ibrahim. Some names were reused, as in the case of Yazid II and Yazid III; these repetitions were not counted separately in this interpretation.
List of caliphs
Caliphs of Damascus

Muawiya I ibn Abu Sufyan: 28 July 661 – 27 April 680
Yazid I ibn Muawiyah: 27 April 680 – 11 November 683
Muawiya II ibn Yazid: 11 November 683 – June 684
Marwan I ibn al-Hakam: June 684 – 12 April 685
Abd al-Malik ibn Marwan: 12 April 685 – 8 October 705
al-Walid I ibn Abd al-Malik: 8 October 705 – 23 February 715
Sulayman ibn Abd al-Malik: 23 February 715 – 22 September 717
Umar ibn Abd al-Aziz: 22 September 717 – 4 February 720
Yazid II ibn Abd al-Malik: 4 February 720 – 26 January 724
Hisham ibn Abd al-Malik: 26 January 724 – 6 February 743
al-Walid II ibn Yazid: 6 February 743 – 17 April 744
Yazid III ibn al-Walid: 17 April 744 – 4 October 744
Ibrahim ibn al-Walid: 4 October 744 – 4 December 744
Marwan II ibn Muhammad (ruled from Harran in the Jazira): 4 December 744 – 25 January 750
See also
History of Islam
List of Sunni dynasties
Abbasid Caliphate
https://en.wikipedia.org/wiki/Abbasid_Caliphate
The Abbasid Caliphate or Abbasid Empire was the third Islamic caliphate, founded by a descendant of Muhammad's uncle, Abbas ibn Abd al-Muttalib (566–653 CE), from whom the dynasty derives its name. The preceding Umayyad Caliphate was overthrown by the Abbasid Revolution in 750 CE (132 AH), after which the Abbasids ruled as caliphs from their base in Iraq, with Baghdad as their capital for most of their history.
The Abbasid Revolution had its origins and first successes in the easterly region of Khurasan, far from the Levantine center of Umayyad influence. The Abbasids first centered their government in Kufa, Iraq, but in 762 the second caliph al-Mansur founded the city of Baghdad and made it the capital. Baghdad became a center of science, culture, arts, and invention, ushering in what became known as the Golden Age of Islam. The city housed several key academic institutions, such as the House of Wisdom, which, together with its multi-ethnic and multi-religious population, made it famous as a centre of learning across the world. The Abbasid period was marked by the use of bureaucrats in the government, including the vizier, as well as an increasing inclusion of non-Arab Muslims in the ummah (Muslim community) and among the political elites.
The height of Abbasid power and prestige is traditionally associated with the reign of Harun al-Rashid (786–809). The civil war that followed his death brought new divisions and was followed by significant changes to the character of the state, including the creation of a new professional army recruited mainly from Turkic slaves and the construction of a new capital, Samarra, in 836. The 9th century also saw the provinces becoming increasingly autonomous, giving rise to local dynasties who controlled different regions of the empire, such as the Aghlabids, Tahirids, Samanids, Saffarids, and Tulunids. After a period of turmoil in the 860s, the caliphate regained some stability and its seat returned to Baghdad in 892.
During the 10th century, the caliphs were reduced to mere figureheads, with real political and military power resting in the hands of the Iranian Buyids and the Seljuq Turks, who took control of Baghdad in 945 and 1055, respectively. The Abbasids eventually regained control of Mesopotamia during the reign of Caliph al-Muqtafi and extended their rule into Iran during the reign of Caliph al-Nasir. This revival ended in 1258 with the sack of Baghdad by the Mongols under Hulagu Khan and the execution of Caliph al-Musta'sim. A surviving branch of the Abbasid dynasty was reinstated in the Mamluk capital of Cairo in 1261. Though lacking in political power, with the brief exception of Caliph al-Musta'in, the dynasty continued to claim symbolic authority until a few years after the Ottoman conquest of Egypt in 1517, the last Abbasid caliph being al-Mutawakkil III.
History
Abbasid Revolution (747–750)
The Abbasid caliphs descended from Abbas ibn Abd al-Muttalib, one of the youngest uncles of Muhammad and of the same Banu Hashim clan. This family relation to Muhammad made them appealing to those who were discontented with the rule of the Umayyad caliphs (661–750), who did not descend from the same family. Over the course of their rule, the Umayyads even suppressed several rebellions that attempted to bring other members of Muhammad's family to power. One of the claims that the Abbasids made in the early years of their political movement was that Abu Hashim, the son of Muhammad ibn al-Hanafiyya and grandson of Ali, had formally transferred the Imamate to Muhammad ibn Ali (the great-grandson of Abbas) and thus to the Abbasid family. Muhammad ibn Ali began to campaign in Persia for the return of power to the family of Muhammad, the Hashemites, during the reign of Umar II. Later, after they had attained power and needed to broaden their support among Muslims, the Abbasids supplemented this claim with other claims to justify their legitimacy.
The Abbasids also distinguished themselves from the Umayyads by attacking their moral character and administration in general. According to Ira Lapidus, "The Abbasid revolt was supported largely by Arabs, mainly the aggrieved settlers of Merv with the addition of the Yemeni faction and their Mawali". The Abbasids also appealed to non-Arab Muslims, known as mawali, who remained outside the kinship-based society of the Arabs and were perceived as a lower class within the Umayyad empire.
During the reign of Marwan II, this opposition culminated in the rebellion of Ibrahim al-Imam, the fourth in descent from Abbas. Supported by the province of Khurasan (eastern Iran), even though the governor opposed them, and the Shi'i Arabs, he achieved considerable success, but was captured in the year 747 and died, possibly assassinated, in prison.
On 9 June 747 (15 Ramadan AH 129), Abu Muslim, rising from Khurasan, successfully initiated an open revolt against Umayyad rule, which was carried out under the sign of the Black Standard. Close to 10,000 soldiers were under Abu Muslim's command when the hostilities officially began in Merv. General Qahtaba followed the fleeing governor Nasr ibn Sayyar west, defeating the Umayyads at the Battle of Gorgan, the Battle of Nahavand, and finally the Battle of Karbala, all in the year 748.
Ibrahim was captured by Marwan and was killed. The quarrel was taken up by Ibrahim's brother Abdallah, known by the name of Abu al-'Abbas as-Saffah, who defeated the Umayyads in 750 in the battle near the Great Zab and was subsequently proclaimed caliph. After this loss, Marwan fled to Egypt, where he was subsequently killed. The remainder of his family, barring one male, were also eliminated.
Establishment and consolidation (750–775)
Immediately after their victory, al-Saffah sent his forces to Central Asia, where they fought against Tang expansion at the Battle of Talas. Al-Saffah also focused on putting down numerous rebellions in Syria and Mesopotamia. The Byzantines conducted raids during these early distractions.
One of the first major changes effected by Abbasid rule was the move of the caliphate's center of power from Syria to Mesopotamia (present-day Iraq). This was closer to the Persian mawali support base of the Abbasids and the move addressed their demand for reduced Arab dominance in the empire. However, no definitive capital was yet selected. In these early Abbasid years, Kufa generally served as the administrative capital, but the caliphs were wary of the Alid sympathies in the city and did not always reside there. In 752, al-Saffah built a new city called al-Hashimiyya, at an uncertain location, most likely near Kufa. Later that same year, he moved to Anbar, where he built a new settlement for his Khurasani soldiers and a palace for himself.
It was al-Saffah's successor, Abu Ja'far al-Mansur, who firmly consolidated Abbasid rule and faced down internal challenges. His uncle, Abdallah ibn Ali, the victor over the Umayyads at the Battle of the Zab, was the most serious potential rival for leadership and al-Mansur sent Abu Muslim, the Khurasani revolutionary commander, against him in 754. After Abu Muslim successfully defeated him, al-Mansur then turned to eliminate Abu Muslim himself. He arranged to have him arrested and executed in 755.
On the western frontier, the Abbasids were unable to re-assert caliphal control over the western and central Maghreb, which the Umayyads had lost in the 740s. One member of the Umayyad dynasty, Abd ar-Rahman, also managed to escape the purge of his family and established independent rule in al-Andalus (present-day Spain and Portugal) in 756, founding the Umayyad Emirate of Córdoba.
In 756, al-Mansur also sent over 4,000 Arab mercenaries to assist the Chinese Tang dynasty against An Lushan in the An Lushan Rebellion. The Abbasids, or "Black Flags" as they were commonly called, were known in Tang dynasty chronicles as the hēiyī Dàshí, "the Black-robed Tazi" ("Tazi" being a borrowing from Persian Tāzī, the word for "Arab"). Later, Caliph Harun al-Rashid sent embassies to the Chinese Tang dynasty and established good relations with them. After the war, these embassies remained in China, with al-Rashid establishing an alliance with China. Several embassies from the Abbasid caliphs to the Chinese court have been recorded in the Old Book of Tang, the most important being those of al-Saffah, al-Mansur, and Harun al-Rashid.
In 762, al-Mansur suppressed a rebellion in the Hejaz led by al-Nafs al-Zakiyya, a descendant from Ali ibn Abi Talib, whose challenge to the Abbasid claim to leadership was based on his Alid lineage and thus presented a serious political threat. He was defeated by an Abbasid army led by Isa ibn Musa. It was after this victory, in 762, that al-Mansur finally established a proper Abbasid capital, Baghdad – officially called Madinat al-Salam ('City of Peace') – located on the Tigris River, near the former ancient capital city of Ctesiphon. Prior to this, he had continued to consider multiple sites for a capital, including al-Hashimiyya, which he used as a capital for a while, and al-Rumiyya (near the ruins of Ctesiphon), which he used for a few months. Various other sites in the region also appear to have served as "capitals" under either al-Saffah or al-Mansur prior to the founding of Baghdad.
Al-Mansur centralised the judicial administration and, later, Harun al-Rashid established the institution of Great Qadi to oversee it. The Umayyad empire had been mostly Arab; under the Abbasids, however, the empire progressively came to be made up of more and more converted Muslims, among whom the Arabs were only one of many ethnicities. The Abbasids had depended heavily on the support of Persians in their overthrow of the Umayyads. Al-Mansur welcomed non-Arab Muslims to his court. While this helped integrate Arab and Persian cultures, it alienated many of their Arab supporters, particularly the Khurasani Arabs who had supported them in their battles against the Umayyads.
Golden age (775–861)
The Abbasid leadership had to work hard in the last half of the 8th century (750–800), under several competent caliphs and their viziers, to implement the administrative changes needed to manage the political challenges created by the far-flung nature of the empire and the limited communication across it. It was also during this early period of the dynasty, in particular during the rule of al-Mansur, Harun al-Rashid, and al-Ma'mun, that its reputation and power were established.
The position of wazir (vizier) developed in this period. It was initially akin to a secretary, but under the tenure of the Barmakids, an Iranian family close to the Abbasids, the position became powerful and Harun al-Rashid delegated state affairs to them for many years. This resulted in a more ceremonial role for many Abbasid caliphs compared with caliphal rule under the Umayyads; the viziers began to exert greater influence, and the role of the caliph's aristocracy was slowly replaced by a Barmakid bureaucracy. At the western end of the empire, Harun al-Rashid agreed to grant the province of Ifriqiya (centered in present-day Tunisia) as a hereditary emirate to Ibrahim ibn al-Aghlab, who founded the Aghlabid dynasty there.
Under Harun al-Rashid's reign (786–809), the Abbasid Empire reached its peak. His father, al-Mahdi, restarted the fighting with the Byzantines, and his sons continued the conflict until Empress Irene pushed for peace. After several years of peace, Nikephoros I broke the treaty, then fended off multiple incursions during the first decade of the 9th century. These Abbasid attacks pushed into the Taurus Mountains, culminating with a victory at the Battle of Krasos and the massive invasion of 806, led by al-Rashid himself. Harun al-Rashid's navy also proved successful, taking Cyprus. Al-Rashid then focused on the rebellion of Rafi ibn al-Layth in Khurasan and died while there.
Domestically, al-Rashid pursued policies similar to those of his father al-Mahdi. He released many of the Umayyads and Alids his brother al-Hadi had imprisoned and declared amnesty for all political groups of the Quraysh. While Baghdad remained the official capital, al-Rashid chose to reside in Raqqa from 796 until the end of his reign. In 802, he established an unusual succession plan which decreed that his son al-Amin would inherit the title of Caliph and have control of Iraq and the western empire while his other son al-Ma'mun would rule Khurasan and most eastern parts of the empire. In 803, he turned on and imprisoned or killed most of the Barmakids, who had wielded administrative power on his behalf. The reasons for this sudden and ruthless move remain unclear and have been the subject of much discussion by contemporary writers and later historians.
Al-Rashid's decision to split the succession proved to be damaging to the longevity of the empire. After his death in 809, his succession pact eventually collapsed and the empire was split by a civil war between al-Amin in Iraq and al-Ma'mun in Khurasan. This ended with a successful siege of Baghdad by al-Ma'mun's forces. When the city fell in 813, al-Amin was captured and executed on the orders of al-Ma'mun's general, Tahir ibn Husayn. This marked the first time that an Abbasid ruler was publicly executed and it irrevocably damaged the prestige of the caliphate.
Al-Ma'mun became caliph and ruled until his death in 833. He initially ruled the empire from his established base in Merv, Khurasan, where his main support was found, but this prolonged the discontent and instability in Iraq and triggered further fighting in the years following his victory. In 817, he officially declared an Alid, 'Ali al-Rida, as his heir, instead of an Abbasid family member, perhaps hoping to promote Muslim unity, but the move backfired. Eventually, he was compelled to step back from these policies and move his court to Baghdad, where he arrived in August 819. After this, the rest of his reign was relatively peaceful. Exceptions included a rebellion in Azerbaijan by the Khurramites, supported by the Byzantines, which continued until 837. He also repulsed a Byzantine attack on Syria around 829, followed by counter-attacks into Anatolia, and suppressed a rebellion in Egypt in 832.
The later years of al-Ma'mun's reign are known for his intellectual interests and patronage. The so-called "translation movement" — the state-sponsored translation of scientific and literary works of antiquity into Arabic — that had begun under his predecessors was pushed even further during this time and al-Ma'mun shifted its focus to ancient Greek works of science and philosophy. In matters of religion, his interest in philosophy spurred him to endorse Mu'tazilism, the rationalist school of Islamic thought. Under its influence, he officially endorsed the doctrine of createdness of the Qur'an in 827. In 833, he went further and forcibly imposed it on the ulama, the Sunni religious scholars. This controversial policy, known as the Mihna, was eventually abandoned in 848. Ultimately, it failed to convince the Sunni ulama and instead contributed to the emergence of the latter as a more cohesive social class whose views and interests did not always align with the caliph.
Following the civil war between al-Amin and al-Ma'mun, the traditional mainstays of the Abbasid army, the Khurasaniyya and 'Abna al-dawla, were no longer seen as reliable and the caliphs sought to recruit a new type of army whose loyalty could be better assured. This process began under al-Ma'mun, but it is his brother and successor, al-Mu'tasim, who is known for its more radical implementation. Soldiers were recruited from several new sources, but the most important, especially under al-Mu'tasim, were the group referred to by Arabic chronicles as "Turks", who appear to have been mainly Turkic people from Central Asia. Some modern scholars refer to them as Mamluks, marking them as the antecedent of the later slave-soldiers known by that term, but their exact legal status has been a subject of scholarly debate. Many, perhaps the majority, were originally purchased or captured slaves, but they were paid regular salaries and were thus likely manumitted. In any case, these outsiders did not have political ties among the traditional elites and thus their loyalty was to the caliph alone.
These troops were likely the first standing army of the caliphate and provided the caliph with a strong base of military support. However, the influx of new foreign troops into the capital created tensions with its inhabitants and with older elites. This was one of the main reasons why, in 836, al-Mu'tasim decided to found a new capital, Samarra, on an open site to the north of Baghdad. The new capital housed the caliph's army, allowed for the unhindered construction of massive new palaces, and became the focus of an even more elaborate courtly culture.
Al-Mu'tasim's reign marked the end of the strong caliphs. He strengthened his personal army with the Mamluks and promptly restarted the war with the Byzantines. Though his attempt to seize Constantinople failed when his fleet was destroyed by a storm, his military excursions were generally successful, culminating with a resounding victory in the Sack of Amorium.
Political fragmentation (861–945)
From the ninth century onward, the Abbasids found they could no longer keep together a centralized polity from Baghdad, which had grown larger than that of Rome. As mentioned, Harun al-Rashid had already granted the province of Ifriqiya to the Aghlabids, who ruled this region as an autonomous vassal state until its fall to the Fatimids in 909. In al-Ma'mun's reign, Tahir ibn Husayn (al-Ma'mun's general in the civil war) was appointed viceroy of Iran and most of the eastern regions of the empire from 821 onward. His descendants, the Tahirids, continued to govern in this position with significant autonomy until 873, although they remained loyal to the caliph and used only the title of amir. From their capital at Nishapur, they were important patrons of Arabic literature and Sunni religious scholarship, in addition to making major improvements to agriculture. In Transoxiana, the Persian Samanids of Bukhara and Samarkand ruled as local governors, initially under the Tahirids. They oversaw the development of the region's cities into major trade centers, profiting from long-distance trade between China, Central Asia, Eastern Europe, and the Middle East.
The reign of al-Mutawakkil was characterized by the caliph's extravagant spending, his attempts to further consolidate power within the state, and his replacement of the Mihna policy with support for more orthodox Sunni scholars, in particular the Hanbali school. In 853, the Byzantines sacked Damietta in Egypt, and the caliph responded by sending troops into Anatolia, who sacked and marauded until they were eventually annihilated in 863.
Al-Mutawakkil's lifestyle and spending weakened his support among the military. In 861, he was murdered at a party by a group of Turkish soldiers. This was the first time the Abbasid military intervened so directly and violently at court and it set a precedent for further coups. The following period, sometimes known as the "Anarchy at Samarra" (861–870), saw four different caliphs come and go. While they each attempted to reassert their authority, they were at the mercy of military and political factions. Tax collection lapsed and, along with al-Mutawakkil's previous spending, this left the state short on funds, which exacerbated the infighting. In 865, the Turkish soldiers of Samarra even besieged Baghdad to overthrow the caliph al-Musta'in and, when the city fell the following year, they replaced him with al-Mu'tazz. The latter was overthrown by the same faction in 869 and replaced by al-Muhtadi, who was similarly overthrown in 870. Al-Muhtadi was succeeded by al-Mu'tamid, who was finally able to restore some order, in large part thanks to the help of his brother al-Muwaffaq, who kept the military under control and ran most government affairs. The restoration was hampered by the Zanj rebellion, which erupted in 869 and threatened the center of Abbasid control in Iraq. This major threat was not brought under control until a determined campaign was launched in 879.
By the 870s, Egypt became autonomous under Ahmad ibn Tulun and his Tulunid successors, though they continued to acknowledge the caliph and generally sent tribute to Baghdad. For a time, they even controlled Syria and parts of the Jazira (Upper Mesopotamia). In 882, the caliph al-Mu'tamid even tried to move his residence to Egypt at Ibn Tulun's invitation, though this move was aborted by the intervention of al-Muwaffaq.
In the east, the Saffarids were former soldiers in the Abbasid army who were stationed in Sistan and remained there as local strongmen. They began to challenge the Tahirids from 854 onward and in 873 they captured Nishapur, ending Tahirid rule. They marched on Baghdad in 876 but were defeated by al-Muwaffaq. The two sides were forced to come to terms and the Abbasids allowed the Saffarids to rule over Sistan, Fars, Kirman, and Khurasan.
In 898, al-Mu'tadid set the Saffarids and Samanids against each other by formally endorsing a Saffarid claim over Transoxiana, the Samanid domain. The Samanids emerged triumphant in battle and were able to expand their control thenceforth to Khurasan, while the Saffarids were contained further south. The Samanids never threatened Iraq or western Iran, but they were also not as close to the caliphs as their Tahirid predecessors and in practice they were almost entirely independent of Baghdad. They became even greater patrons of religion and the arts than the Tahirids. They still maintained an orthodox Sunni ideology but they differed from their predecessors by promoting the Persian language.
There was a brief Abbasid political and military revival at the end of the 9th century, especially under the policies of caliphs al-Mu'tadid and al-Muktafi. Under al-Mu'tadid, the capital was moved from Samarra back to Baghdad. Incursions by the Qarmatians and allied Bedouin tribes posed a serious threat from 899 onwards, but the Abbasid army, led by Muhammad ibn Sulayman, won a reprieve against them in 904 and 907. In 905, the same general invaded Egypt and overthrew the weakened Tulunids, re-establishing Abbasid control to the west. By the time caliph al-Muktafi died in 908, the Abbasid revival was at its peak and a strong centralized state was in place again.
After his death, however, the state became dominated by feuding bureaucrats. Under al-Muqtadir, the Abbasid court continued to project power and wealth publicly, but the politics and financial policies of the time compromised the caliphate's sustainability in the long term. It was in this period that the practice of giving out iqtas (fiefs in the form of tax farms) as favours began, which had the effect of reducing the caliphate's own tax revenues.
In 909, North Africa was lost to the Fatimid dynasty, an Isma'ili Shia sect tracing its roots to Muhammad's daughter Fatima. The Fatimids took control of Ifriqiya from the Aghlabids and eventually conquered Egypt in 969, where they established their capital, Cairo, near Fustat. By the end of the century, they were one of the main political and ideological challenges to Sunni Islam and the Abbasids, contesting the Abbasids for the titular authority of the Islamic ummah. The challenge of the Fatimid Caliphate only ended with their downfall in the 12th century.
Under the caliph al-Radi, Baghdad's authority declined further as local governors refused to send payments to the capital. The Ikhshidids ruled Egypt and Syria autonomously prior to the Fatimid takeover. Even in Iraq, many governors refused to obey and the caliph was unable to send armies against them. Al-Radi was forced to invite the governor of Wasit, Muhammad ibn Ra'iq, to take over the administration under the newly created position of amir al-umara ("Commander of Commanders"). Ibn Ra'iq disbanded the salaried army of the caliph and reduced much of the government's bureaucratic infrastructure, including the traditional vizierate, thus removing much of the Abbasid state's basis for power. He was overthrown in 938 and the following years were bogged down in political turmoil.
Al-Mustakfi had a short reign from 944 to 946, and it was during this period that the Persian faction known as the Buyids from Daylam swept into power and assumed control over the bureaucracy in Baghdad. According to the history of Miskawayh, they began distributing iqtas to their supporters. This period of localized secular control was to last nearly 100 years.
Outside Iraq, the autonomous provinces slowly took on the character of de facto states, with hereditary rulers, armies, and revenues of their own, operating under only nominal caliphal suzerainty that was not necessarily reflected in any contribution to the treasury; the Soomro emirs, for example, gained control of Sindh and ruled the entire province from their capital of Mansura. Mahmud of Ghazni took the title of sultan, as opposed to the amir that had been in more common usage, signifying the Ghaznavid Empire's independence from caliphal authority, despite Mahmud's ostentatious displays of Sunni orthodoxy and ritual submission to the caliph. In the 11th century, the loss of respect for the caliphs continued, as some Islamic rulers no longer mentioned the caliph's name in the Friday khutba, or struck it off their coinage.
Buyid and Seljuq control (945–1118)
Despite the power of the Buyid amirs, the Abbasids retained a highly ritualized court in Baghdad, as described by the Buyid bureaucrat Hilal al-Sabi', and they retained a certain influence over Baghdad as well as religious life. As Buyid power waned with the rule of Baha' al-Daula, the caliphate was able to regain some measure of strength. The caliph al-Qadir, for example, led the ideological struggle against the Shia with writings such as the Baghdad Manifesto. The caliphs kept order in Baghdad itself, attempting to prevent the outbreak of fitnas in the capital, often contending with the ayyarun.
With the Buyid dynasty on the wane, a vacuum was created that was eventually filled by the dynasty of Oghuz Turks known as the Seljuqs. By 1055, the Seljuqs had wrested control from the Buyids and Abbasids, and took temporal power. When the amir and former slave Basasiri took up the Shia Fatimid banner in Baghdad in 1056–57, the caliph al-Qa'im was unable to defeat him without outside help. Toghril Beg, the Seljuq sultan, restored Baghdad to Sunni rule and took Iraq for his dynasty.
Once again, the Abbasids were forced to deal with a military power that they could not match, though the Abbasid caliph remained the titular head of the Islamic community. The succeeding sultans Alp Arslan and Malikshah, as well as their vizier Nizam al-Mulk, took up residence in Persia, but held power over the Abbasids in Baghdad. When the dynasty began to weaken in the 12th century, the Abbasids gained greater independence once again.
Revival of caliphal state (1118–1258)
Caliph al-Mustarshid was the first caliph to build an army and to lead it in battle since the 10th century. He recruited Kurdish and Arab Bedouin tribes and re-fortified Baghdad. His first concern was not the Seljuks but the Mazyadids of Hilla in central Iraq, whom he met in battle in 1123. His bid for independence was ultimately unsuccessful, as he was defeated by a Seljuk army in 1135 and assassinated soon after.
Under al-Muqtafi, a new caliphal state began to emerge with the help of his vizier Ibn Hubayra. Ibn Hubayra concentrated on reasserting authority in Iraq while the Seljuk Empire deteriorated. The Abbasids successfully defended Baghdad against the Seljuqs in the siege of 1157 and then conquered their Mazyadid enemies in Hilla in 1162. By the end of al-Muqtafi's reign, Baghdad controlled a state stretching from Basra in the south to the edges of Mosul in the north. After over two hundred years of Abbasid subjection to foreign dynasties, Caliph al-Mustanjid formally declared independence from the Seljuk sultans in 1165, when he dropped their names from Abbasid coinage. Initially, the caliphs were still vulnerable to the power of the viziers, but al-Mustadi was able to further rally some support from the Baghdad public as well as symbolic support abroad from the Ayyubid sultan Saladin and the Rum Seljuk sultan Kilij Arslan II.
The long reign of Caliph al-Nasir marked a definitive shift in late Abbasid power. He reinvigorated public displays of caliphal prestige, removed officials who were too powerful, engaged in diplomacy with regions beyond Iraq, and extended his control over former Seljuk territories in western Iran, including Isfahan, Hamadan, Qazvin and Zanjan. He sought to build up his influence among Muslim rulers abroad largely through the Sufi-inspired futuwwa brotherhood that he headed. Under Caliph al-Mustansir, the Abbasid state achieved significant stability and many of the same policies continued. He built the Mustansiriyya Madrasa, inaugurated in 1234, the first madrasa to teach all four Sunni madhhabs (schools of jurisprudence) and the first madrasa commissioned by an Abbasid caliph.
Mongol invasion and end
In 1206, Genghis Khan established a powerful dynasty among the Mongols of Central Asia. During the 13th century, this Mongol Empire conquered most of the Eurasian land mass, including both China in the east and much of the old Islamic caliphate and the Kievan Rus' in the west. In 1252, Hulagu Khan, a grandson of Genghis Khan and brother of the new Mongol ruler, Möngke Khan, was placed in charge of a new western campaign to the Middle East that would culminate in the conquest of Baghdad in 1258.
In the years leading up to the Mongol invasion, Baghdad's strength was sapped by political rivalries, sectarian tensions between Sunnis and Shias, and damaging floods. In 1257, after destroying the Assassins in Iran, Hulagu wrote to the Abbasid caliph, al-Musta'sim, demanding his submission. The caliph refused, with Hulagu's status as a non-Muslim (unlike the earlier Buyids and Seljuks) likely a factor. There followed months of diplomacy, during which the Mongols may have been informed of Baghdad's weakness by correspondence with the caliph's vizier, Ibn al-Alqami, a Shia who was later accused of colluding with them.
The Mongols began their siege of the city on 29 January 1258. On 10 February, al-Musta'sim agreed to meet with Hulagu, who demanded that the caliph order the defenders to stand down and come out of the city in exchange for mercy. The caliph complied, but the Mongols slaughtered the population and then began the sack of the city on 13 February. Contemporary accounts describe destruction, looting, rape, and killing on a massive scale over many days, with hundreds of thousands killed and the city reduced to near-empty ruins, though some, like the Christian and Shia communities, were spared. The Mongols feared rumours that a supernatural disaster would strike if the blood of al-Musta'sim, a direct descendant of Muhammad's uncle and part of a dynasty that had reigned for five centuries, was spilled. As a precaution and in accordance with a Mongol taboo against spilling royal blood, Hulagu had al-Musta'sim wrapped in a carpet and trampled to death by horses on 20 February 1258. The caliph's immediate family was also executed, with the lone exceptions of his youngest son who was sent to Mongolia and a daughter who became a slave in the harem of Hulagu.
The fall of Baghdad marked the effective end of the Abbasid Caliphate. It made a deep impression on contemporary and later writers both inside and outside the Muslim world, some of whom created legendary stories about the last caliph's demise. It is also traditionally seen as the approximate end to the "classical age" or "golden age" of Islamic civilization. The events brought profound geopolitical changes to the traditional lands of the Islamic caliphate, with Iraq, Iran, and most of the eastern lands falling under Mongol control while other Muslim rulers retained the lands to the west. Mongol expansion further west was eventually halted by the Mamluks of Egypt at the Battle of Ain Jalut in 1260, followed by the conflict between the Ilkhanids (Hulagu and his successors) and their Golden Horde rivals, which diverted Mongol attention.
Abbasid caliphs in Cairo (1261–1517)
Prior to the Mongol invasion, the later Ayyubid sultans of Egypt had built up an army recruited from slaves, the Mamluks. During a political and military crisis in 1250, the Mamluks seized power and established what is now known as the Mamluk Sultanate. Following the devastation of Baghdad in 1258 and in an effort to secure political legitimacy for the new regime in Egypt, the Mamluk ruler Baybars invited a surviving member of the Abbasid family to establish himself in Cairo in 1260–1261. The new caliph was al-Mustansir II, a brother of the former caliph al-Mustansir. In 1262, he disappeared while leading a small army in an attempt to recapture Baghdad from the Mongols. Baybars subsequently replaced him with al-Hakim I, another Abbasid family member who had just been proclaimed in Aleppo.
Thereafter, the Abbasid caliphs in Cairo continued to exist as a strictly ceremonial but nonetheless important institution within the Mamluk Sultanate, conferring significant prestige on the Mamluks. It continued to be relevant even to other Muslim rulers until the 14th century; for example, the sultans of Delhi, the Muzaffarid sultan Muhammad, the Jalayirid sultan Ahmad, and the Ottoman sultan Bayezid I all sought diplomas of investiture from the caliph or declared nominal allegiance to him. Caliph al-Musta'in even managed to reign as sultan in Cairo for a brief six months in 1412.
During the 15th century, however, the institution of the caliph declined in significance. The last Abbasid caliph in Cairo was al-Mutawakkil III, who was in place when the Ottoman sultan Selim I defeated the Mamluks in 1516 and conquered Egypt in 1517, ending the Mamluk Sultanate. Selim I met with al-Mutawakkil III in Aleppo in 1516, prior to marching into Egypt, and the caliph was then sent to the Ottoman capital of Constantinople (present-day Istanbul), ending the Abbasid caliphate definitively. The idea of a "caliphate" subsequently became an ambiguous concept that was occasionally revisited by later Muslim rulers and intellectuals for political or religious reasons. The Ottoman sultans, who were thenceforth the most powerful Muslim rulers in western Asia and the Mediterranean, did not use the title of "caliph" at all before the mid-16th century and only did so vaguely and inconsistently afterwards. The claim that al-Mutawakkil III "transferred" the office of the caliph to the Ottoman sultan during their meeting in Aleppo is a legend that was elaborated in the 19th century and is not corroborated by contemporary accounts.
Culture
Islamic Golden Age
The Abbasid historical period lasting to the Mongol conquest of Baghdad in 1258 CE is considered the Islamic Golden Age. The Islamic Golden Age was inaugurated by the middle of the 8th century by the ascension of the Abbasid Caliphate and the transfer of the capital from Damascus to Baghdad. The Abbasids were influenced by the Qur'anic injunctions and hadith, such as "the ink of a scholar is more holy than the blood of a martyr", stressing the value of knowledge. During this period the Muslim world became an intellectual center for science, philosophy, medicine and education as the Abbasids championed the cause of knowledge and established the House of Wisdom in Baghdad, where both Muslim and non-Muslim scholars sought to translate and gather all the world's knowledge into Arabic. Many classic works of antiquity that would otherwise have been lost were translated into Arabic and Persian and later in turn translated into Turkish, Hebrew and Latin. During this period the Muslim world was a cauldron of cultures which collected, synthesized and significantly advanced the knowledge gained from the Roman, Chinese, Indian, Persian, Egyptian, North African, Ancient Greek and Medieval Greek civilizations. According to Huff, "[i]n virtually every field of endeavor—in astronomy, alchemy, mathematics, medicine, optics and so forth—the Caliphate's scientists were in the forefront of scientific advance."
Literature
The best-known fiction from the Islamic world is One Thousand and One Nights, a collection of fantastical folk tales, legends and parables compiled primarily during the Abbasid era. The collection is recorded as having originated from an Arabic translation of a Sassanian-era Persian prototype, with likely origins in Indian literary traditions. Stories from Arabic, Persian, Mesopotamian, and Egyptian folklore and literature were later incorporated. The epic is believed to have taken shape in the 10th century and reached its final form by the 14th century; the number and type of tales have varied from one manuscript to another. Arabian fantasy tales were often called "Arabian Nights" when translated into English, regardless of whether they appeared in The Book of One Thousand and One Nights. This epic has been influential in the West since it was translated in the 18th century, first by Antoine Galland. Many imitations were written, especially in France. Various characters from this epic have themselves become cultural icons in Western culture, such as Aladdin, Sinbad and Ali Baba.
A famous example of Islamic poetry on romance was Layla and Majnun, an originally Arabic story which was further developed by Iranian, Azerbaijani and other poets in the Persian, Azerbaijani, and Turkish languages. It is a tragic story of undying love much like the later Romeo and Juliet.
Arabic poetry reached its greatest height in the Abbasid era, especially before the loss of central authority and the rise of the Persianate dynasties. Writers like Abu Tammam and Abu Nuwas were closely connected to the caliphal court in Baghdad during the early 9th century, while others such as al-Mutanabbi received their patronage from regional courts.
Under Harun al-Rashid, Baghdad was renowned for its bookstores, which proliferated after the making of paper was introduced. Chinese papermakers had been among those taken prisoner by the Arabs at the Battle of Talas in 751. As prisoners of war, they were dispatched to Samarkand, where they helped set up the first Arab paper mill. In time, paper replaced parchment as the medium for writing, and the production of books greatly increased. These events had an academic and societal impact that could be broadly compared to the introduction of the printing press in the West. Paper aided communication and record-keeping; it also brought a new sophistication and complexity to businesses, banking, and the civil service. In 794, Ja'far al-Barmaki built the first paper mill in Baghdad, and from there the technology circulated. Harun required that paper be employed in government dealings, since something recorded on paper could not easily be changed or removed, and eventually, an entire street in Baghdad's business district was dedicated to selling paper and books.
Philosophy
One of the common definitions of "Islamic philosophy" is "the style of philosophy produced within the framework of Islamic culture". Islamic philosophy, in this definition, is neither necessarily concerned with religious issues nor exclusively produced by Muslims. The works of Muslim philosophers on Aristotle were a key step in the transmission of learning from the ancient Greeks to the Islamic world and the West. They often corrected the philosopher, encouraging a lively debate in the spirit of ijtihad. They also wrote influential original philosophical works, and their thinking was incorporated into Christian philosophy during the Middle Ages, notably by Thomas Aquinas.
Three speculative thinkers, al-Kindi, al-Farabi, and Avicenna, combined Aristotelianism and Neoplatonism with other ideas introduced through Islam, and Avicennism was later established as a result. Other influential Abbasid philosophers include al-Jahiz, and Ibn al-Haytham (Alhacen).
Architecture
As power shifted from the Umayyads to the Abbasids, architectural styles changed as well, from the Greco-Roman tradition (which featured elements of Hellenistic and Roman representative style) to an Eastern tradition that drew on the independent architectural heritage of Mesopotamia and Persia. Abbasid architecture was particularly influenced by Sasanian architecture, which in turn featured elements present since ancient Mesopotamia. The earlier Christian-influenced styles gave way to a style based more on that of the Sasanian Empire, utilizing mud bricks and baked bricks with carved stucco. Architectural innovations were few but included the four-centered arch and the dome erected on squinches. Unfortunately, much has been lost due to the ephemeral nature of the stucco and lustre tiles.
Another major development was the creation or vast enlargement of cities as they were turned into the capital of the empire, beginning with the creation of Baghdad in 762, which was planned as a walled city with four gates, and a mosque and palace in the center. Al-Mansur, who was responsible for the creation of Baghdad, also planned the city of Raqqa, along the Euphrates. Finally, in 836, al-Mu'tasim moved the capital to a new site that he created along the Tigris, called Samarra. This city saw 60 years of work, with race-courses and game preserves to add to the atmosphere. Due to the dry remote nature of the environment, some of the palaces built in this era were isolated havens. Al-Ukhaidir Fortress is a fine example of this type of building, which has stables, living quarters, and a mosque, all surrounding inner courtyards. Mesopotamia only has one surviving mausoleum from this era, in Samarra: an octagonal domed structure known as the Qubbat al-Sulaibiyya, which is the first known monumental tomb in Islamic architecture and may be the final resting place of al-Muntasir.
Baghdad, the epicenter of the empire, was originally organized in a circular fashion next to the Tigris River, with massive brick walls constructed in successive rings around the core by a workforce of 100,000, and with four huge gates (named Kufa, Basra, Khurasan and Syria). The central enclosure of the city contained Mansur's palace and the great mosque of Baghdad. Travel across the Tigris and the network of waterways allowing the drainage of the Euphrates into the Tigris was facilitated by bridges and canals servicing the population.
Outside the Abbasid heartlands, architecture was still influenced by the capital. In present-day Tunisia, the Great Mosque of Kairouan was founded under the Umayyad dynasty but completely rebuilt in the 9th century under the patronage of the Aghlabids, vassals of the Abbasids. The styles utilized were mainly Abbasid. In Egypt, Ahmad Ibn Tulun commissioned the Ibn Tulun Mosque, completed in 879, that is based on the style of Samarra and is now one of the best-preserved Abbasid-style mosques from this period.
Arts
The establishment of Abbasid power based in Iraq, rather than Syria, resulted in a cultural and artistic development influenced not only by the Mediterranean and Middle Eastern traditions but also by connections further afield with India, Central Asia, and China. The importation of Chinese ceramics elicited local imitations but also stirred innovations in local production. Abbasid ceramics became a more important art form with greater emphasis on decoration. A major innovation was the emergence of monochrome and polychrome lustreware, a technical achievement that had an important impact on the wider development of Islamic ceramics. Glassware also became a more important art form and was likely the origin of the lustre technique that was introduced into ceramics. Few textiles have survived but the production of tiraz, textiles with royal inscriptions, is well attested.
Another major art form was calligraphy and manuscript production. During the Abbasid period, Arabic calligraphy evolved into a more refined discipline. Rounded Kufic script was typical and became increasingly stylized. Parchment only allowed for a few lines of script, but from the late 8th century onward paper began to be produced. Qur'ans are the main type of book to have survived from this period.
Science and technology
Science
The reigns of Harun al-Rashid (786–809) and his successors fostered an age of great intellectual achievement. In large part, this was the result of the schismatic forces that had undermined the Umayyad regime, which relied on the assertion of the superiority of Arab culture as part of its claim to legitimacy, and the Abbasids' welcoming of support from non-Arab Muslims.
A number of medieval thinkers and scientists living under Islamic rule played a role in transmitting Islamic science to the Christian West. In addition, the period saw the recovery of much of the Alexandrian mathematical, geometric and astronomical knowledge, such as that of Euclid and Claudius Ptolemy. These recovered mathematical methods were later enhanced and developed by other Islamic scholars, notably by Persian scientists Al-Biruni and Abu Nasr Mansur.
Christians (particularly Nestorian Christians) contributed to the Arab Islamic Civilization during the Umayyads and the Abbasids by translating works of Greek philosophers to Syriac and afterwards to Arabic. Nestorians played a prominent role in the formation of Arab culture, with the Academy of Gondishapur being prominent in the late Sassanid, Umayyad and early Abbasid periods. Notably, eight generations of the Nestorian Bukhtishu family served as private doctors to caliphs and sultans between the eighth and eleventh centuries.
Algebra was significantly developed by Persian scientist Muhammad ibn Mūsā al-Khwārizmī during this time in his landmark text, Kitab al-Jabr wa-l-Muqabala, from which the term algebra is derived. He is thus considered to be the father of algebra by some, although the Greek mathematician Diophantus has also been given this title. The terms algorism and algorithm are derived from the name of al-Khwarizmi, who was also responsible for introducing the Arabic numerals and Hindu–Arabic numeral system beyond the Indian subcontinent.
Arab scientist Ibn al-Haytham (Alhazen) developed an early scientific method in his Book of Optics (1021). The most important development of the scientific method was the use of experiments to distinguish between competing scientific theories set within a generally empirical orientation, which began among Muslim scientists. Ibn al-Haytham's empirical proof of the intromission theory of light (that is, that light rays entered the eyes rather than being emitted by them) was particularly important. Ibn al-Haytham was significant in the history of scientific method, particularly in his approach to experimentation, and has been referred to as the "world's first true scientist".
Medicine in medieval Islam was an area of science that advanced particularly during the Abbasids' reign. During the 9th century, Baghdad contained over 800 doctors, and great discoveries in the understanding of anatomy and diseases were made. The clinical distinction between measles and smallpox was described during this time. The famous Persian scientist Ibn Sina (known to the West as Avicenna) produced treatises and works that summarized the vast amount of knowledge that scientists had accumulated, and was very influential through his encyclopedias, The Canon of Medicine and The Book of Healing. His work and that of many others directly influenced the research of European scientists during the Renaissance.
Astronomy in medieval Islam was advanced by Al-Battani, who improved the precision of the measurement of the precession of the Earth's axis. The corrections made to the geocentric model by al-Battani, Averroes, Nasir al-Din al-Tusi, Mo'ayyeduddin Urdi and Ibn al-Shatir were later incorporated into the Copernican heliocentric model. The astrolabe, though originally developed by the Greeks, was developed further by Islamic astronomers and engineers, and subsequently brought to medieval Europe.
Muslim alchemists influenced medieval European alchemists, particularly the writings attributed to Jābir ibn Hayyān (Geber).
Technology
In technology, the Abbasids adopted papermaking from China. The use of paper spread from China into the caliphate in the 8th century CE, arriving in al-Andalus (Islamic Spain) and then the rest of Europe in the 10th century. It was easier to manufacture than parchment, less likely to crack than papyrus, and could absorb ink, making it ideal for making records and copies of the Qur'an. "Islamic paper makers devised assembly-line methods of hand-copying manuscripts to turn out editions far larger than any available in Europe for centuries." It was from the Abbasids that the rest of the world learned to make paper from linen. The knowledge of gunpowder was also transmitted from China via the caliphate, where the formulas for pure potassium nitrate and an explosive gunpowder effect were first developed.
Advances were made in irrigation and farming, using new technology such as the windmill. Crops such as almonds and citrus fruit were brought to Europe through al-Andalus, and sugar cultivation was gradually adopted by the Europeans. Apart from the Nile, Tigris and Euphrates, navigable rivers were uncommon, so transport by sea was very important. Navigational sciences were highly developed, making use of a rudimentary sextant (known as a kamal). When combined with detailed maps of the period, sailors were able to sail across oceans rather than skirt along the coast. Abbasid sailors were also responsible for reintroducing large three masted merchant vessels to the Mediterranean. The name caravel may derive from an earlier Arab ship known as the qārib. Arab merchants dominated trade in the Indian Ocean until the arrival of the Portuguese in the 16th century. Hormuz was an important center for this trade. There was also a dense network of trade routes in the Mediterranean, along which Muslim countries traded with each other and with European powers such as Venice or Genoa. The Silk Road crossing Central Asia passed through the Abbasid caliphate between China and Europe.
Engineers in the Abbasid caliphate made a number of innovative industrial uses of hydropower, and early industrial uses of tidal power, wind power, and petroleum (notably by distillation into kerosene). The industrial uses of watermills in the Islamic world date back to the 7th century, while horizontal-wheeled and vertical-wheeled water mills were both in widespread use since at least the 9th century. By the time of the Crusades, every province throughout the Islamic world had mills in operation, from al-Andalus and North Africa to the Middle East and Central Asia. These mills performed a variety of agricultural and industrial tasks. Abbasid engineers also developed machines (such as pumps) incorporating crankshafts, employed gears in mills and water-raising machines, and used dams to provide additional power to watermills and water-raising machines. Such advances made it possible for many industrial tasks that were previously driven by manual labour in ancient times to be mechanized and driven by machinery instead in the medieval Islamic world. It has been argued that the industrial use of waterpower had spread from Islamic to Christian Spain, where fulling mills, paper mills, and forge mills were recorded for the first time in Catalonia.
A number of industries were generated during the Arab Agricultural Revolution, including early industries for textiles, sugar, rope-making, matting, silk, and paper. Latin translations of the 12th century passed on knowledge of chemistry and instrument making in particular. The agricultural and handicraft industries also experienced high levels of growth during this period.
Society
Arabization
While the Abbasids originally gained power by exploiting the social inequalities against non-Arabs in the Umayyad Empire, during Abbasid rule the empire rapidly Arabized, particularly in the Fertile Crescent region (namely Mesopotamia and the Levant) as had begun under Umayyad rule. As knowledge was shared in the Arabic language throughout the empire, many people from different nationalities and religions began to speak Arabic in their everyday lives. Resources from other languages began to be translated into Arabic, and a unique Islamic identity began to form that fused previous cultures with Arab culture, creating a level of civilization and knowledge that was considered a marvel in Europe at the time.
Status of women
In contrast to the earlier era, women in Abbasid society were absent from all arenas of the community's central affairs. While their Muslim forebears led men into battle, started rebellions, and played an active role in community life, as demonstrated in the Hadith literature, Abbasid women were ideally kept in seclusion. Conquests had brought enormous wealth and large numbers of slaves to the Muslim elite. The majority of the slaves were women and children, many of whom had been dependents or harem-members of the defeated Sassanian upper classes. In the wake of the conquests an elite man could potentially own a thousand slaves, and ordinary soldiers could have ten people serving them.
Even so, slave courtesans (qiyans and jawaris) and princesses produced prestigious and important poetry. Enough survives to give us access to women's historical experiences, and reveals some vivacious and powerful figures, such as the Sufi mystic Raabi'a al-Adwiyya (714–801 CE), the princess and poet 'Ulayya bint al-Mahdi (777–825 CE), and the singing-girls Shāriyah (–870 CE), Fadl Ashsha'ira (d. 871 CE) and Arib al-Ma'muniyya (797–890 CE).
Each wife in the Abbasid harem had an additional home or flat, with her own enslaved staff of eunuchs and maidservants. When a concubine gave birth to a son, she was elevated in rank to umm walad and also received apartments and (slave) servants as a gift.
Treatment of Jews and Christians
The status and treatment of Jews, Christians, and non-Muslims in the Abbasid Caliphate was a complex and continually changing issue. Non-Muslims were called dhimmis. Dhimmis faced some level of discrimination in Abbasid society: they did not have all the privileges of Muslims and had to pay jizya, a tax on non-Muslims. However, as people of the book (non-Muslim monotheists), Jews and Christians were allowed to practice their religion and were not required to convert.
One of the common aspects of the treatment of the dhimmis is that it depended on who the caliph was at the time. Some Abbasid rulers, like Al-Mutawakkil (822–861 CE), imposed strict restrictions on what dhimmis could wear in public, often requiring yellow garments that distinguished them from Muslims. Other restrictions al-Mutawakkil imposed included limiting the role of dhimmis in government, seizing dhimmi housing and making it harder for dhimmis to become educated. Most other Abbasid caliphs were not as strict as al-Mutawakkil. During the reign of Al-Mansur (754–775 CE), it was common for Jews and Christians to influence the overall culture of the caliphate, specifically in Baghdad, by participating in scholarly work.
It was common for laws imposed against dhimmis during one caliph's rule to be discarded or left unenforced during future caliphs' reigns. Al-Mansur and al-Mutawakkil both instituted laws that forbade non-Muslims from participating in public office. Al-Mansur did not follow his own law very closely, bringing dhimmis back into the caliphate's treasury because their expertise in finance was needed. Al-Mutawakkil enforced the ban on dhimmis in public office more seriously, although soon after his reign many of the laws concerning dhimmis' participation in government were unobserved or at least less strictly observed. Even Al-Muqtadir, who held a similar stance to al-Mutawakkil on barring non-Muslims from public office, himself had multiple Christian secretaries, indicating that non-Muslims still had access to many of the most important figures within the caliphate. Beyond serving as associates or secretaries of high-ranking Islamic officials, some dhimmis attained the second highest office after the caliph: the vizier.
Jews and Christians may have had a lower overall status than Muslims in the Abbasid Caliphate, but dhimmis were often allowed to hold respectable and even prestigious occupations, such as doctors and public officeholders. Jews and Christians were also allowed to accumulate wealth, even though they were taxed for being dhimmis. Dhimmis were capable of moving up and down the social ladder, though this largely depended on the particular caliph. One indication of the social standing of Jews and Christians at the time was their ability to live alongside Muslims: while al-Mansur was ruling the caliphate, for instance, it was not uncommon for dhimmis to live in the same neighborhoods as Muslims. One of the main reasons dhimmis were allowed to hold prestigious jobs and positions in government is that they were generally important to the well-being of the state and highly proficient at the work at hand. Some Muslims in the caliphate took offense at the idea that dhimmis in public office were in a sense ruling over them in an Islamic state, while other Muslims were at times jealous of dhimmis whose wealth or prestige exceeded their own, even though Muslims remained the majority of the ruling class. In general, Muslims, Jews, and Christians had close relations that could be considered positive at times, especially for Jews, in contrast to how Jews were treated in Europe.
Many of the laws and restrictions imposed on dhimmis resembled laws that earlier states had used to discriminate against a minority religion, particularly Jewish people. Romans in the fourth century banned Jews from holding public office, banned Roman citizens from converting to Judaism, and often demoted Jews serving in the Roman military. These laws predated al-Mansur's measures against dhimmis and contained similar restrictions, although Roman emperors were often much stricter in enforcing them than many Abbasid caliphs were. In direct contrast, there was an episode in which two viziers, Ibn al-Furat and Ali ibn Isa ibn al-Jarrah, argued over Ibn al-Furat's decision to make a Christian the head of the military; a previous vizier, Abu Muhammad al-Hasan al-Bazuri, had done the same.
Most of Baghdad's Jews were incorporated into the Arab community and considered Arabic their native language. Some Jews studied Hebrew in their schools and Jewish religious education flourished. The united Muslim empire allowed Jews to reconstruct links between their dispersed communities throughout the Middle East. The city's Talmudic institute helped spread the rabbinical tradition to Europe, and the Jewish community in Baghdad went on to establish ten rabbinical schools and twenty-three synagogues. Baghdad not only contained the tombs of Muslim saints and martyrs, but also the tomb of Yusha, whose corpse had been brought to Iraq during the first migration of the Jews out of the Levant.
Holidays
There were large feasts on certain days, as the Muslims of the empire celebrated Christian holidays as well as their own. There were two main Islamic feasts: one marking the end of Ramadan, the other the Feast of Sacrifice. The former was especially joyful because children would purchase decorations and sweetmeats; people prepared the best food and bought new clothes. At midmorning, the caliph, wearing Muhammad's thobe, would lead officials, accompanied by armed soldiers, to the Great Mosque, where he led prayers. After the prayer, all those in attendance would exchange the best wishes and hug their kin and companions. The festivities lasted for three days. On those nights, the palaces were lit up and boats on the Tigris hung lights; it was said that Baghdad glittered "like a bride". During the Feast of Sacrifice, sheep were butchered in public arenas and the caliph participated in a large-scale sacrifice in the palace courtyard. Afterward, the meat would be divided and given to the poor.
In addition to these two holidays, Shias celebrated the birthdays of Fatimah and Ali ibn Abi Talib. Matrimonies and births in the royal family were observed by all in the empire. The announcement that one of the caliph's sons could recite the Koran smoothly was greeted by communal jubilation. When Harun developed this holy talent, the people lit torches and decorated the streets with wreaths of flowers, and his father, Al-Mahdi, freed 500 slaves.
Of all the holidays imported from other cultures and religions, the one most celebrated in Baghdad (a city with many Persians) was Nowruz, which celebrated the arrival of spring. In a ceremonial ablution introduced by Persian troops, residents sprinkled themselves with water and ate almond cakes. The palaces of the imperial family were lit up for six days and nights. The Abbasids also celebrated the Persian holiday of Mihraj, which marked the onset of winter (signified with pounding drums), and Sadar, when homes burned incense and the masses would congregate along the Tigris to witness princes and viziers pass by.
Military
The Abbasid army amassed an array of siege equipment, such as catapults, mangonels, battering rams, ladders, grappling irons, and hooks, all operated by military engineers. The primary siege weapon, however, was the manjaniq, a siege engine comparable to the trebuchet employed in medieval Western Europe; from the seventh century onward, it had largely replaced torsion artillery. By Harun al-Rashid's time, the Abbasid army employed fire grenades. The Abbasids also utilized field hospitals and ambulances drawn by camels. The cavalry was entirely covered in iron, with helmets; similar to medieval knights, their only exposed spots were the ends of their noses and small openings in front of their eyes. Foot soldiers were issued spears, swords, and pikes, and (in line with Persian fashion) trained to stand so solidly that, as one contemporary wrote, "you would have thought them held fast by clamps of bronze". Although the Abbasids never retained a substantial regular army, the caliph could recruit a considerable number of soldiers from levies at short notice when needed. There were also cohorts of regular troops who received steady pay, as well as a special forces unit. At any moment, 125,000 Muslim soldiers could be assembled along the Byzantine frontier and at Baghdad, Medina, Damascus, Rayy, and other geostrategic locations in order to quell any unrest.
During the Abbasid revolution, Abu Muslim's Khorasani army, composed largely of Arab settlers disillusioned with Umayyad rule, marched under black banners, forming a powerful force that swept westward in open revolt.
In Baghdad there were many Abbasid military leaders who were, or claimed to be, of Arab descent. However, it is clear that most of the ranks were of Iranian origin, the vast majority being from Khurasan and Transoxiana rather than from western Iran or Azerbaijan. Most of the Khurasani soldiers who brought the Abbasids to power were Arabs, and the standing army of the Muslims in Khurasan was overwhelmingly Arab. The unit organization of the Abbasids was designed with the goal of ethnic and racial equality among supporters. When Abu Muslim recruited officers along the Silk Road, he registered them based not on their tribal or ethno-national affiliations but on their current places of residence. Under the Abbasids, Iranian peoples became better represented in the army and bureaucracy than before. The Abbasid army was centred on the Khurasan Abna al-dawla infantry and the Khurasaniyya heavy cavalry, led by their own semi-autonomous commanders who recruited and deployed their own men with Abbasid resource grants. Al-Mu'tasim began the practice of recruiting Turkic slave soldiers from the Samanids into a private army, which allowed him to take over the reins of the caliphate. He abolished the old jund system created by Umar and diverted the salaries of the original Arab military descendants to the Turkic slave soldiers. The Turkic soldiers transformed the style of warfare, as they were capable horse archers, trained from childhood to ride. This military was now drafted from the ethnic groups of the faraway borderlands and was completely separate from the rest of society; some of its soldiers could not speak Arabic properly. This led to the decline of the caliphate, starting with the Anarchy at Samarra.
Civil administration
Because the empire was so vast, the caliphate was decentralized and divided into 24 provinces.
Harun's vizier enjoyed close to unchecked powers. Under Harun, a special "bureau of confiscation" was created. This governmental wing made it possible for the vizier to seize the property and riches of any corrupt governor or civil servant. In addition, it allowed governors to confiscate the estates of lower-ranking officials. Finally, the caliph could impose the same penalty on a vizier who fell from grace. As one later caliph put it: "The vizier is our representative throughout the land and amongst our subjects. Therefore, he who obeys him obeys us; and he who obeys us obeys God, and God shall cause him who obeys Him to enter paradise."
Every regional metropolis had a post office and hundreds of roads were paved in order to link the imperial capital with other cities and towns. The empire employed a system of relays to deliver mail. The central post office in Baghdad even had a map with directions that noted the distances between each town. The roads were provided with roadside inns, hospices, and wells and could reach eastward through Persia and Central Asia, to as far as China. The post office not only enhanced civil services but also served as intelligence for the caliph. Mailmen were employed as spies who kept an eye on local affairs.
Early in the days of the caliphate, the Barmakids took the responsibility of shaping the civil service. The family had roots in a Buddhist monastery in northern Afghanistan. In the early 8th century, the family converted to Islam and began to take on a sizable part of the civil administration for the Abbasids.
Capital poured into the caliphate's treasury from a variety of taxes, including a real estate tax; a levy on cattle, gold and silver, and commercial wares; a special tax on non-Muslims; and customs dues.
Trade
Under Harun, maritime trade through the Persian Gulf thrived, with Arab vessels trading as far south as Madagascar and as far east as China, Korea, and Japan. The growing economy of Baghdad and other cities inevitably led to the demand for luxury items and formed a class of entrepreneurs who organized long-range caravans for the trade and then the distribution of their goods. A whole section in the East Baghdad suq was dedicated to Chinese goods.
Arabs traded with the Baltic region and made it as far north as the British Isles. Tens of thousands of Arab coins have been discovered in parts of Russia and Sweden, which bear witness to the comprehensive trade networks set up by the Abbasids. King Offa of Mercia (in England) minted gold coins similar to those of the Abbasids in the eighth century.
Muslim merchants employed ports in Bandar Siraf, Basra, and Aden and some Red Sea ports to travel and trade with India and South East Asia. Land routes were also utilized through Central Asia. Arab businessmen were present in China as early as the eighth century. Arab merchants sailed the Caspian Sea to reach and trade with Bukhara and Samarkand.
Many caravans and goods never made it to their intended destinations. Some Chinese exports perished in fires, while other ships sank. It was said that anybody who made it to China and back unharmed was blessed by God. Common sea routes were also plagued by pirates who built and crewed vessels that were faster than most merchant ships. Many of the adventures at sea in the Sinbad tales are said to have been fictionalized accounts of the voyages of mariners of the day.
The Abbasids also established overland trade with Africa, largely for gold and slaves. When trade with Europe ceased due to hostilities, Jews served as a link between the two hostile worlds.
The Abbasids engaged in extensive trade with the Italian maritime republics of Venice and Genoa from the 11th century. Venetian merchants facilitated the exchange of high-value goods such as spices, silk, and precious metals from the East; in return, Venice exported European manufactured goods and luxury items. Genoese merchants likewise traded in spices, textiles, and other high-demand luxury goods, and Genoa's strategic position in the Mediterranean enabled it to integrate into the broader Mediterranean trade network, connecting the Abbasid Caliphate with other European markets. These trade relations played a key role in linking the medieval Mediterranean with the broader Islamic world, and this exchange of goods, alongside cultural and technological transfers, fostered a more interconnected medieval global economy.El-Hibri, Tayeb (2021). The Abbasid Caliphate: A History, chapter 3, "The Golden Age of the Abbasid Caliphate (775–833)".
List of caliphs
See also
References
Notes
Citations
Sources
External links
Category:Arab dynasties
Category:Sunni dynasties
Category:Countries in medieval Africa
Category:States in medieval Anatolia
Category:Former Islamic monarchies in Europe
Category:Iraq under the Abbasid Caliphate
Category:History of North Africa
Category:Medieval countries in the Middle East
Category:History of South Asia
Category:History of the Mediterranean
Category:Medieval history of Iran
Category:States and territories established in the 750s
Category:States and territories disestablished in 1258
Category:States and territories established in 1261
Category:States and territories disestablished in 1517
Category:750 establishments
Category:1510s disestablishments in Asia
Category:8th-century establishments in Africa
Category:1258 disestablishments in Asia
Category:13th-century disestablishments in Africa
Category:Historical transcontinental empires
Category:Caliphates
Category:Former monarchies of West Asia
American black bear
https://en.wikipedia.org/wiki/American_black_bear
The American black bear (Ursus americanus), or simply black bear, is a species of medium-sized bear which is endemic to North America. It is the continent's smallest and most widely distributed bear species. It is an omnivore, with a diet varying greatly depending on season and location. It typically lives in largely forested areas; it will leave forests in search of food and is sometimes attracted to human communities due to the immediate availability of food.
The International Union for Conservation of Nature (IUCN) lists the American black bear as a least-concern species because of its widespread distribution and a large population, estimated to be twice that of all other bear species combined. Along with the brown bear (Ursus arctos), it is one of the two modern bear species not considered by the IUCN to be globally threatened with extinction.
Taxonomy
The American black bear is not closely related to the brown bear or polar bear, though all three species are found in North America; genetic studies reveal that they split from a common ancestor 5.05 million years ago (mya). American and Asian black bears are considered sister taxa and are more closely related to each other than to the other modern species of bears.Craighead, Lance (2003). Bears of the World, Voyageur Press,
Evolution
The ancestors of American black bears and Asian black bears diverged from sun bears 4.58 mya. The American black bear then split from the Asian black bear 4.08 mya.Lisette Waits, David Paetkau, and Curtis Strobeck, "Overview" from Genetics of the Bears of the World. Chapter 3 of Bears: Status Survey and Conservation Action Plan, compiled by Christopher Servheen, Stephen Herrero and Bernard Peyton, IUCN/SSC Bear Specialist Group A small primitive bear called Ursus abstrusus is the oldest known North American fossil member of the genus Ursus, dated to 4.95 mya. This suggests that U. abstrusus may be the direct ancestor of the American black bear, which evolved in North America.Kurten, B., and E. Anderson (1980). Pleistocene mammals of North America. Columbia University Press, New York, .
The earliest American black bear fossils, from the Early Pleistocene of Port Kennedy, Pennsylvania, greatly resemble the Asian species, though later specimens grew to sizes comparable to grizzly bears. Once described as a precursor species (Ursus vitabilis), these specimens have been synonymized with U. americanus. The American black bear lived during the same period as the giant and lesser short-faced bears (Arctodus simus and A. pristinus, respectively) and the Florida spectacled bear (Tremarctos floridanus). These tremarctine bears evolved from bears that had emigrated from Asia to the Americas 7–8 mya. The giant and lesser short-faced bears are thought to have been heavily carnivorous and the Florida spectacled bear more herbivorous, while the American black bears remained arboreal omnivores, like their Asian ancestors. From the Holocene to the present, American black bears seem to have shrunk in size, but this has been disputed because of problems with dating these fossil specimens.
The American black bear's generalist behavior allowed it to exploit a wider variety of foods and has been given as a reason why, of these three genera, it alone survived climate and vegetative changes through the last Ice Age while the other, more specialized North American predators became extinct. However, both Arctodus and Tremarctos had survived several other, previous ice ages. After these prehistoric ursids became extinct at the end of the Pleistocene, American black bears, brown bears and polar bears were the only remaining bears in North America.
Hybrids
American black bears are reproductively compatible with several other bear species and occasionally produce hybrid offspring. According to Jack Hanna's Monkeys on the Interstate, a bear captured in Sanford, Florida, was thought to have been the offspring of an escaped female Asian black bear and a male American black bear."Hybrid Bears". messybeast.com. In 1859, an American black bear and a Eurasian brown bear were bred together in the London Zoological Gardens, but the three cubs that were born died before they reached maturity.Scherren, Henry (1907). "Some Notes on Hybrid Bears". Proceedings of the Zoological Society of London, 1907, 431–435. https://doi.org/10.1111/j.1096-3642.1907.tb01827.x
A bear shot in autumn 1986 in Michigan was thought by some to be an American black bear/grizzly bear hybrid, because of its unusually large size and its proportionately larger brain case and skull. DNA testing was unable to determine whether it was a large American black bear or a grizzly bear.
Subspecies
Sixteen subspecies are traditionally recognized; however, a recent genetic study does not support designating some of these, such as the Florida black bear, as distinct subspecies. Listed alphabetically according to subspecific name:"Ursus americanus" , Mammal Species of the World, 3rd ed.
American black bear subspecies:
Ursus americanus altifrontalis (Olympic black bear): the Pacific Northwest coast from central British Columbia through northern California and inland to the tip of northern Idaho and British Columbia.
Ursus americanus amblyceps (New Mexico black bear): Colorado, New Mexico, western Texas and the eastern half of Arizona into northern Mexico and southeastern Utah.
Ursus americanus americanus (Eastern black bear): eastern Montana to the Atlantic coast, from Alaska south and east through Canada to Maine and south to Texas. Thought to be increasing in some regions, and common to eastern Canada and the eastern U.S. wherever suitable habitat is found. A large-bodied subspecies; almost all specimens have black fur, and may very rarely sport a white blaze on the chest.
Ursus americanus californiensis (California black bear): the mountain ranges of southern California, north through the Central Valley to southern Oregon. Able to live in varied climates: found in temperate rain forest in the north and chaparral shrubland in the south. Small numbers may feature cinnamon-colored fur.
Ursus americanus carlottae (Haida Gwaii black bear or Queen Charlotte Islands black bear): Haida Gwaii (formerly the Queen Charlotte Islands) and Alaska. Generally larger than its mainland counterparts, with a large skull and molars, and found only in a black color phase.
Ursus americanus cinnamomum (Cinnamon bear): Colorado, Idaho, western Montana and Wyoming, eastern Washington and Oregon and northeastern Utah. Has brown or reddish-brown fur, reminiscent of cinnamon.
Ursus americanus emmonsii (Glacier bear or blue bear): southeastern Alaska. Distinguished by silvery-gray fur with a blue luster, found mostly on its flanks.
Ursus americanus eremicus (East Mexican black bear): northeastern Mexico and U.S. borderlands with Texas, most often found in Big Bend National Park and the desert border with Mexico. Numbers in Mexico are unknown but presumed to be very low; critically endangered.
Ursus americanus floridanus (Florida black bear): Florida, southern Georgia, Alabama and Mississippi (except the southern region). Has a light brown nose and shiny black fur; a white blaze on the chest is common in this subspecies. An average male weighs .
Ursus americanus hamiltoni (Newfoundland black bear): Newfoundland. Generally bigger than its mainland relatives, ranging in size from and averaging . It has one of the longest hibernation periods of any bear in North America."Black Bear", Parks Canada Known to favor foraging in fields of Vaccinium species.
Ursus americanus kermodei (Kermode bear, island white bear or spirit bear): the central coast of British Columbia. Approximately 10% of the population of this subspecies have white or cream-colored coats due to a recessive gene; the other 90% appear as normal-colored black bears.
Ursus americanus luteolus (Louisiana black bear): eastern Texas, Louisiana and southern Mississippi. The validity of this subspecies has been repeatedly disputed. Has a relatively long, narrow and flat skull and proportionately large molars.Louisiana Black Bear (PDF). Retrieved September 15, 2011. Prefers hardwood bottom forests and bayous as habitat.
Ursus americanus machetes (West Mexican black bear): north-central Mexico.
Ursus americanus perniger (Kenai black bear): the Kenai Peninsula, Alaska. Considered an "Apparently Secure Subspecies" by NatureServe.
Ursus americanus pugnax (Dall Island black bear): Dall Island in the Alexander Archipelago, Alaska.
Ursus americanus vancouveri (Vancouver Island black bear): Vancouver Island, British Columbia. Darker and slightly bigger than the other five subspecies found in British Columbia; it is most common in the north, but appears occasionally in the southern parts of Vancouver Island.
Distribution and population
Historically, American black bears occupied the majority of North America's forested regions. Today, they are primarily limited to sparsely settled, forested areas. American black bears currently inhabit much of their original Canadian range, though they seldom occur in the southern farmlands of Alberta, Saskatchewan and Manitoba; they have been extirpated on Prince Edward Island since 1937. Surveys taken in the mid-1990s found the Canadian black bear population to be between 396,000 and 476,000 in seven provinces; this estimate excludes populations in New Brunswick, the Northwest Territories, Nova Scotia and Saskatchewan. All provinces indicated stable populations of American black bears over the 2000s.
The current range in the United States is constant throughout most of the Northeast and within the Appalachian Mountains almost continuously from Maine to northern Georgia, the northern Midwest, the Rocky Mountain region, the West Coast and Alaska. However, it becomes increasingly fragmented or absent in other regions. Despite this, American black bears in those areas seem to have expanded their range in recent decades, such as with recent sightings in Ohio, Illinois, southern Indiana, and western Nebraska. Sightings of itinerant black bears in the Driftless Area of southeastern Minnesota, northeastern Iowa, and southwestern Wisconsin are common. In 2019, biologists with the Iowa Department of Natural Resources confirmed documentation of an American black bear living year-round in woodlands near the town of Decorah in northeastern Iowa, believed to be the first instance of a resident black bear in Iowa since the 1880s.
Surveys taken from 35 states in the early 1990s indicated that American black bear populations were either stable or increasing, except in Idaho and New Mexico. The population in the United States was estimated to range between 339,000 and 465,000 in 2011, though this estimate does not include data from Alaska, Idaho, South Dakota, Texas or Wyoming, whose populations were not recorded in the survey. In California there were an estimated 25,000-35,000 black bears in 2017, making it the largest population of the species in any of the 48 contiguous United States. In 2020 there were about 1,500 bears in Great Smoky Mountains National Park, where the population density is about two per square mile. In western North Carolina, the black bear population has dramatically increased in recent decades, from about 3,000 in the early 2000s to over 8,000 in the 2020s.
As of 1993, known black bear populations in Mexico existed in four areas, though knowledge on the distribution of populations outside those areas has not been updated since 1959. Mexico is the only country where the species is classified as "endangered".
Habitat
Throughout their range, habitats preferred by American black bears have a few shared characteristics. They are often found in areas with relatively inaccessible terrain, thick understory vegetation and large quantities of edible material (especially masts). The adaptation to woodlands and thick vegetation in this species may have originally been because the bear evolved alongside larger, more aggressive bear species, such as the extinct giant short-faced bear and the grizzly bear, that monopolized more open habitats and the historic presence of larger predators, such as Smilodon and the American lion, that could have preyed on black bears. Although found in the largest numbers in wild, undisturbed areas and rural regions, American black bears can adapt to surviving in some numbers in peri-urban regions, as long as they contain easily accessible foods and some vegetative coverage.Hunter, Luke (2011). Carnivores of the World, Princeton University Press,
In most of the contiguous United States, American black bears today are usually found in heavily vegetated mountainous areas, from in elevation. For American black bears living in the American Southwest and Mexico, habitat usually consists of stands of chaparral and pinyon juniper woods. In this region, bears occasionally move to more open areas to feed on prickly pear cactus. At least two distinct, prime habitat types are inhabited in the Southeastern United States. American black bears in the southern Appalachian Mountains survive in predominantly oak-hickory and mixed mesophytic forests. In the coastal areas of the southeast (such as Florida, the Carolinas and Louisiana), bears inhabit a mixture of flatwoods, bays and swampy hardwood sites.
In the northeastern part of the range (the United States and Canada), prime habitat consists of a forest canopy of hardwoods such as beech, maple, birch and coniferous species. Corn crops and oak-hickory mast are also common sources of food in some sections of the northeast; small, thick swampy areas provide excellent refuge cover largely in stands of white cedar. Along the Pacific coast, redwood, Sitka spruce and hemlocks predominate as overstory cover. Within these northern forest types are early successional areas important for American black bears, such as fields of brush, wet and dry meadows, high tidelands, riparian areas and a variety of mast-producing hardwood species. The spruce-fir forest dominates much of the range of the American black bear in the Rockies. Important non-forested areas here are wet meadows, riparian areas, avalanche chutes, roadsides, burns, sidehill parks and subalpine ridgetops.
In areas where human development is relatively low, such as stretches of Canada and Alaska, American black bears tend to be found more regularly in lowland regions. In parts of eastern Canada, especially Labrador, American black bears have adapted exclusively to semi-open areas that are more typical habitat in North America for brown bears (likely due to the absence there of brown and polar bears, as well as other large carnivore species).
Description
Build
The skulls of American black bears are broad, with narrow muzzles and large jaw hinges. In Virginia, the length of adult bear skulls was found to average . Across its range, the greatest skull length for the species has been reportedly measured from . Females tend to have slenderer and more pointed faces than males. Sexual dimorphism can also be observed in the larger cheek teeth of males.
Their claws are typically black or grayish-brown. The claws are short and rounded, being thick at the base and tapering to a point. Claws from both hind and front legs are almost identical in length, though the foreclaws tend to be more sharply curved. The paws of the species are relatively large, with a rear foot length of , which is proportionately larger than other medium-sized bear species, but much smaller than the paws of large adult brown, and especially polar bears. The soles of the feet are black or brownish and are naked, leathery and deeply wrinkled.
The hind legs are relatively longer than those of Asian black bears. The typically small tail is long.Audubon Field Guide. Audubonguides.com. Retrieved September 15, 2011.Kronk, C. (2007). Ursus americanus . Animal Diversity Web. Retrieved September 15, 2011."American black bear videos, photos and facts – Ursus americanus" . ARKive. Retrieved September 15, 2011. The ears are small and rounded and are set well back on the head.
American black bears are highly dexterous, being capable of opening screw-top jars and manipulating door latches. They also have great physical strength; a bear weighing was observed turning flat rocks weighing by flipping them over with a single foreleg. They move in a rhythmic, sure-footed way and can run at speeds of . American black bears have good eyesight and have been proven experimentally to be able to learn visual color discrimination tasks faster than chimpanzees and just as fast as domestic dogs. They are also capable of rapidly learning to distinguish different shapes such as small triangles, circles and squares.
Size
Adults typically range from in head-and-body length, and in shoulder height. Although the American black bear is the smallest bear species in North America (smaller than the brown bear and the polar bear), large males exceed the size of other bear species in other continents.
Weight tends to vary according to age, sex, health and season. Seasonal variation in weight is very pronounced: in autumn, their pre-den weight tends to be 30% higher than in spring, when black bears emerge from their dens. Bears on the East Coast tend to be heavier on average than those on the West Coast, although they typically follow Bergmann's rule, and bears from the northwest are often slightly heavier than the bears from the southeast. Adult males typically weigh between , while females weigh 33% less at .
In California, studies indicate that the average mass is in adult males and in adult females. Adults in Yukon Flats National Wildlife Refuge in east-central Alaska were found to average in males and in females, whereas on Kuiu Island in southeastern Alaska (where nutritious salmon are readily available) adults averaged .Peacock, Elizabeth (2004). "Population, Genetic and Behavioral Studies of Black Bears Ursus americanus in Southeast Alaska" . PhD Thesis, University of Nevada, Reno In Great Smoky Mountains National Park, adult males averaged and adult females averaged per one study."Ursus americanus (Pallas); Black Bear" . Discoverlife.org. Retrieved December 20, 2012.
In one of the largest studies on regional body mass, bears in British Columbia averaged in 89 females and in 243 males. In Yellowstone National Park, a study found that adult males averaged and adult females averaged .Barnes, V. G. and Bray, O. E. (1967) "Population characteristics and activities of black bears in Yellowstone National Park". Final report, Colorado Wildl. Res. Unit, Colorado State Univ., Fort Collins; cited in "Characteristics of Black Bears and Grizzly Bears in YNP" . nps.gov Black bears in north-central Minnesota averaged in 163 females and in 77 males. In New York, the males average and females .Black bears in New York State . New York State Department of Environmental Conservation, page 1. Retrieved November 11, 2021. It was found in Nevada and the Lake Tahoe region that bears closer to urban regions were significantly heavier than their arid-country dwelling counterparts, with males near urban areas averaging against wild-land males which averaged whereas peri-urban females averaged against the average of in wild-land ones. In Waterton Lakes National Park, Alberta, adults averaged .Silva, M., & Downing, J. A. (1995). CRC handbook of mammalian body masses. CRC Press.
The biggest wild American black bear ever recorded was a male from New Brunswick, shot in November 1972, that weighed after it had been dressed, meaning it weighed an estimated in life and measured long. Another notably outsized wild American black bear, weighing in at , was the cattle-killer shot in December 1921 on the Moqui Reservation in Arizona. The record-sized American black bear from New Jersey was shot in Morris County in December 2011 and scaled .Stabile, Jim (December 16, 2011) "829-pound bear takes record in N.J. hunt". Daily Record The Pennsylvania state record weighed and was shot in November 2010 in Pike County."Record-busting, 879-pound bear bagged in Poconos" . Pocono Record. November 19, 2010. Retrieved 2013-08-19. The North American Bear Center, located in Ely, Minnesota, is home to the world's largest captive male and female American black bears. Ted, the male, weighed in the fall of 2006. Honey, the female, weighed in the fall of 2007.
Pelage
The fur is soft, with dense underfur and long, coarse, thick guard hairs. The fur is not as shaggy or coarse as that of brown bears.Wood, John George (1865). The Illustrated Natural History, Vol. 2, George Routledge and Sons. American black bear skins can be distinguished from those of Asian black bears by the lack of a white blaze on the chest and hairier footpads.
Despite their name, black bears show a great deal of color variation. Individual coat colors can range from white, blonde, cinnamon, light brown or dark chocolate brown to jet black, with many intermediate variations existing. Silvery-gray American black bears with a blue luster (found mostly on the flanks) occur along a portion of coastal Alaska and British Columbia. White to cream-colored American black bears occur in the coastal islands and the adjacent mainland of southwestern British Columbia. Albino individuals have also been recorded. Black coats tend to predominate in humid areas, such as Maine, New England, New York, Tennessee, Michigan and western Washington. Approximately 70% of all American black bears are black, though only 50% in the Rocky Mountains are black. Many in northwestern North America are cinnamon, blonde or light brown in color and thus may sometimes be mistaken for grizzly bears. Grizzly (and other types of brown) bears can be distinguished by their shoulder hump, larger size and broader, more concave skull.Macdonald, D. W. (2006). The Encyclopedia of Mammals. Oxford University Press, Oxford .
In his book The Great Bear Almanac, Gary Brown summarized the predominance of black or brown/blonde specimens by location:
Color variations of American black bears by location:
Michigan: 100% black
Minnesota: 94% black, 6% brown
New England: 100% black
New York: 100% black
Tennessee: 100% black
Washington (coastal): 99% black, 1% brown or blonde
Washington (inland): 21% black, 79% brown or blonde
Yosemite National Park: 9% black, 91% brown or blonde
Behavior and life history
Their keenest sense is smell, which is about seven times more sensitive than a domestic dog's. They are excellent and strong swimmers, swimming for pleasure and to feed (largely on fish). They regularly climb trees to feed, escape enemies and hibernate. Four of the eight modern bear species are habitually arboreal (the most arboreal species, the American and Asian black bears and the sun bear, being fairly closely related). Their arboreal abilities tend to decline with age. They may be active at any time of the day or night, although they mainly forage by night. Bears living near human habitations tend to be more extensively nocturnal, while those living near brown bears tend to be more often diurnal.
American black bears tend to be territorial and non-gregarious in nature. However, at abundant food sources (e.g. spawning salmon or garbage dumps), they may congregate and dominance hierarchies form, with the largest, most powerful males dominating the most fruitful feeding spots.Nowak, R. M. (1991). Walker's Mammals of the World. The Johns Hopkins University Press, Baltimore and London. They mark their territories by rubbing their bodies against trees and clawing at the bark. Annual ranges held by mature male bears tend to be very large, though there is some variation. On Long Island off the coast of Washington, ranges average , whereas on the Ungava Peninsula in Canada ranges can average up to , with some male bears traveling as far as at times of food shortages.
Bears may communicate with various vocal and non-vocal sounds. Tongue-clicking and grunting are the most common sounds and are made in cordial situations to conspecifics, offspring and occasionally humans. When at ease, they produce a loud rumbling hum. During times of fear or nervousness, bears may moan, huff or blow air. Warning sounds include jaw-clicking and lip-popping. In aggressive interactions, black bears produce guttural pulsing calls that may sound like growling. Cubs squeal, bawl or scream when anxious and make a motor-like humming sound when comfortable or nursing. American black bears often mark trees using their teeth and claws as a form of communication with other bears, a behavior common to many species of bears.
Reproduction and development
Sows usually produce their first litter at the age of 3 to 5 years, with those living in more developed areas tending to get pregnant at younger ages. The breeding period usually occurs in the June–July period, though it can extend to August in the species' northern range. The breeding period lasts for two to three months. Both sexes are promiscuous. Males try to mate with several females, but large, dominant ones may violently claim a female if another mature male comes near. Copulation can last 20–30 minutes. Sows tend to be short-tempered with their mates after copulating.
The fertilized eggs undergo delayed development and do not implant in the female's womb until November. The gestation period lasts 235 days, and litters are usually born in late January to early February. Litter size is between one and six cubs, typically two or three. At birth, cubs weigh and measure in length. They are born with fine, gray, down-like hair and their hind quarters are underdeveloped. They typically open their eyes after 28–40 days and begin walking after 5 weeks. Cubs are dependent on their mother's milk for 30 weeks and will reach independence at 16–18 months. At 6 weeks, they attain , by 8 weeks they reach and by 6 months they weigh . They reach sexual maturity at 3 years and attain their full growth at 5 years.
Longevity and mortality
The average lifespan in the wild is 18 years, and it is quite possible for wild individuals to survive for more than 23 years. The record age of a wild individual was 39 years, while that in captivity was 44 years. The average annual survival rate is variable, ranging from 86% in Florida to 73% in Virginia and North Carolina. In Minnesota, 99% of wintering adult bears were able to survive the hibernation cycle in one study. A study of American black bears in Nevada found that the amount of annual mortality of a population of bears in wilderness areas was 0%, whereas in developed areas in the state this figure rose to 83%. Survival in subadults is generally less assured. In Alaska 14–17% of subadult males and 30–48% of subadult females were found in a study to survive to adulthood. Across the range, the estimated number of cubs who survive past their first year is 60%.
With the exception of the rare confrontation with an adult brown bear or a gray wolf pack, adult black bears are not usually subject to natural predation. Scats with fur inside them and a carcass of an adult sow with puncture marks in the skull indicate black bears may occasionally be killed by jaguars in the southern parts of their range. In such scenarios, the big cat would have the advantage if it ambushed the bear, killing it with a crushing bite to the back of the skull. Cubs tend to be more vulnerable to predation than adults, with known predators including bobcats, coyotes, cougars, gray wolves, brown bears and other bears of their own species. Many of these will stealthily snatch small cubs right from under the sleeping mother. There is a record of a golden eagle snatching a yearling cub. Once out of hibernation, mother bears may be able to fight off most potential predators. Even cougars will be displaced by an angry mother bear if they are discovered stalking the cubs. Flooding of dens after birth may also occasionally kill newborn cubs. Bear fatalities are mainly attributable to human activities. Seasonally, thousands of black bears are hunted legally across North America, and some are illegally poached or trapped unregulated. Auto collisions also may kill many black bears annually.
Hibernation
American black bears were once not considered true or "deep" hibernators, but because of discoveries about the metabolic changes that allow black bears to remain dormant for months without eating, drinking, urinating or defecating, most biologists have redefined mammalian hibernation as "specialized, seasonal reduction in metabolism concurrent with scarce food and cold weather". American black bears are now considered highly efficient hibernators. The physiology of American black bears in the wild is closely related to that of bears in captivity. Understanding the physiology of bears in the wild is vital to the bear's success in captivity.
The bears enter their dens in October and November, although in the southernmost areas of their range (i.e. Florida, Mexico, the southeastern United States), only pregnant females and mothers with yearling cubs will enter hibernation. Prior to that time, they can put on up to of body fat to get them through the several months during which they fast. Hibernation typically lasts 3–8 months, depending on regional climate.
Hibernating bears spend their time in hollowed-out dens in tree cavities, under logs or rocks, in banks, caves, or culverts, and in shallow depressions. Although naturally-made dens are occasionally used, most dens are dug out by the bear. During their time in hibernation, an American black bear's heart rate drops from 40 to 50 beats per minute to 8 beats per minute, and the metabolic rate can drop to a quarter of the bear's (non-hibernating) basal metabolic rate. These reductions in metabolic rate and heart rate do not appear to decrease the bear's ability to heal injuries during hibernation. Their circadian rhythm stays intact during hibernation. This allows the bear to sense the changes in the day based on the ambient temperature caused by the sun's position in the sky. It has also been shown that ambient light exposure and low disturbance levels (that is to say, wild bears in ambient light conditions) directly correlate with their activity levels. The bear keeping track of the changing days allows it to awaken from hibernation at the appropriate time of year to conserve as much energy as possible.
The hibernating bear does not display the same rate of muscle and bone atrophy seen in other, non-hibernating animals that are subject to long periods of inactivity due to ailment or old age. A hibernating bear loses only about half as much muscular strength as a well-nourished but inactive human would. The bear's bone mass does not change in geometry or mineral composition during hibernation, which implies that the bear's conservation of bone mass during hibernation is caused by a biological mechanism. During hibernation American black bears retain all excretory waste, leading to the development of a hardened mass of fecal material in the colon known as a fecal plug. Leptin is released into the bear's system to suppress appetite. The retention of waste during hibernation (specifically of minerals such as calcium) may play a role in the bear's resistance to atrophy.
Unlike other mammalian hibernators, their body temperature does not drop significantly (staying around ) and they remain somewhat alert and active. If the winter is mild enough, they may wake up and forage for food. Females also give birth in February and nurture their cubs until the snow melts. During winter, American black bears consume 25–40% of their body weight. The footpads peel off while they sleep, making room for new tissue.
Many of the physiological changes an American black bear exhibits during hibernation are retained slightly post-hibernation. Upon exiting hibernation, bears retain a reduced heart rate and basal metabolic rate. The metabolic rate of a hibernating bear will remain at a reduced level for up to 21 days after hibernation. After emerging from their winter dens in spring, they wander their home ranges for two weeks so that their metabolism accustoms itself to the activity. In mountainous areas, they seek southerly slopes at lower elevations for forage and move to northerly and easterly slopes at higher elevations as summer progresses.
The time that American black bears emerge from hibernation varies. Factors affecting this include temperature, flooding, and hunger. In southern areas, they may wake up in midwinter. Further north, they may not be seen until late March, April, or even early May. Altitude also has an effect. Bears at lower altitudes tend to emerge earlier. Mature males tend to come out earliest, followed by immature males and females, and lastly mothers with cubs. Mothers with yearling cubs are seen before those with newborns.
Dietary habits
Generally, American black bears are largely crepuscular in foraging activity, though they may actively feed at any time. Up to 85% of their diet consists of vegetation, though they tend to dig less than brown bears, eating far fewer roots, bulbs, corms and tubers than the latter species. When initially emerging from hibernation, they will seek to feed on carrion from winter-killed animals and newborn ungulates. As the spring temperature warms, American black bears seek new shoots of many plant species, especially new grasses, wetland plants and forbs. Young shoots and buds from trees and shrubs during the spring period are important to bears emerging from hibernation, as they assist in rebuilding muscle and strengthening the skeleton and are often the only digestible foods available at that time."American Black Bear Fact Sheet". National Zoo| FONZ. Retrieved September 15, 2011. During summer, the diet largely comprises fruits, especially berries and soft mast such as buds and drupes.
During the autumn hyperphagia, feeding becomes virtually the full-time task. Hard mast becomes the most important part of the diet in autumn and may even partially dictate the species' distribution. Favored mast such as hazelnuts, oak acorns and whitebark pine nuts may be consumed by the hundreds each day by a single bear during the fall. During the fall period, bears may also habitually raid the nut caches of tree squirrels. Also extremely important in fall are berries such as huckleberries and buffalo berries. Bears living in areas near human settlements or around a considerable influx of recreational human activity often come to rely on foods inadvertently provided by humans, especially during summertime. These include refuse, birdseed, agricultural products and honey from apiaries.
The majority of the animal portion of their diet consists of insects, such as bees, yellow jackets, ants, beetles and their larvae. American black bears are also fond of honey and will gnaw through trees if hives are too deeply set into the trunks for them to reach it with their paws. Once the hive is breached, the bears will scrape the honeycombs together with their paws and eat them, regardless of stings from the bees. Bears that live in northern coastal regions (especially the Pacific Coast) will fish for salmon during the night, as their black fur is easily spotted by salmon in the daytime. Other bears, such as the white-furred Kermode bears of the islands of western Canada, have a 30% greater success rate in catching salmon than their black-furred counterparts. Other fish, including suckers, trout and catfish, are readily caught whenever possible. Although American black bears do not often engage in active predation of other large animals for much of the year, the species will regularly prey on mule and white-tailed deer fawns in spring, given the opportunity. Bears may catch the scent of hiding fawns when foraging for something else and then sniff them out and pounce on them. As the fawns reach 10 days of age, they can outmaneuver the bears, and their scent is soon ignored until the next year. American black bears have also been recorded similarly preying on elk calves in Idaho and moose calves in Alaska.
Predation on adult deer is rare, but it has been recorded. They may even hunt prey up to the size of adult female moose, which are considerably larger than themselves, by ambushing them. There is at least one record of a male American black bear killing two bull elk over the course of six days by chasing them into deep snow banks, which impeded their movements. In Labrador, American black bears are exceptionally carnivorous, living largely off caribou, usually young, injured, old, sickly or dead specimens, and rodents such as voles. This is believed to be due to a paucity of edible plant life in this sub-Arctic region and a local lack of competing large carnivores (including other bear species). Like brown bears, American black bears try to use surprise to ambush their prey and target the weak, injured, sickly or dying animals in the herds. Once a deer fawn is captured, it is frequently torn apart alive while feeding. If it is able to capture a mother deer in spring, the bear frequently begins feeding on the udder of lactating females, but generally prefers meat from the viscera. Bears often drag their prey to cover, preferring to feed in seclusion. The skin of large prey is stripped back and turned inside out, with the skeleton usually left largely intact. Unlike gray wolves and coyotes, bears rarely scatter the remains of their kills. Vegetation around the carcass is usually matted down, and their droppings are frequently found nearby. Bears may attempt to cover remains of larger carcasses, though they do not do so with the same frequency as cougars and grizzly bears. They will readily consume eggs and nestlings of various birds and can easily access many tree nests, even the huge nests of bald eagles. Bears have been reported stealing deer and other game from human hunters.
Interspecific predatory relationships
Over much of their range, American black bears are assured scavengers that can intimidate, using their large size and considerable strength, and if necessary dominate other predators in confrontations over carcasses. However, on occasions where they encounter Kodiak or grizzly bears, the larger two brown subspecies dominate them. American black bears tend to escape competition from brown bears by being more active in the daytime and living in more densely forested areas. Violent interactions, resulting in the deaths of American black bears, have been recorded in Yellowstone National Park.
American black bears do occasionally compete with cougars over carcasses. Like brown bears, they will sometimes steal kills from cougars. One study found that both bear species visited 24% of cougar kills in Yellowstone and Glacier National Parks, usurping 10% of the carcasses. Another study found that American black bears visited 48% of cougar kills in summer in Colorado and 77% of kills in California. As a result, the cats spend more time killing and less time feeding on each kill.
American black bear interactions with gray wolves are much rarer than with brown bears, due to differences in habitat preferences. The majority of American black bear encounters with wolves occur in the species' northern range, with no interactions being recorded in Mexico. Despite the American black bear being more powerful on a one-to-one basis, packs of wolves have been recorded to kill black bears on numerous occasions without eating them. Unlike brown bears, American black bears frequently lose against wolves in disputes over kills. Wolf packs typically kill American black bears when the larger animals are in their hibernation cycle.
There is at least one record of an American black bear killing a wolverine (Gulo gulo) in a dispute over food in Yellowstone National Park. Anecdotal cases of alligator predation on American black bears have been reported, though such cases may involve assaults on cubs."Key West Florida Attractions | Alligator Exhibit". Key West Aquarium (November 30, 2012). Retrieved 2012-12-20. At least one jaguar (Panthera onca) has been recorded to have attacked and eaten a black bear: "El Jefe", the jaguar famous for being the first jaguar seen in the United States in over a century.
Relationships with humans
In folklore, mythology and culture
Indigenous
Black bears feature prominently in the stories of some of North America's indigenous peoples. One tale tells of how the black bear was a creation of the Great Spirit, while the grizzly bear was created by the Evil Spirit.Lippincott, Joshua B. (2009). Folklore and Legends of the North American Indian, Abela Publishing Ltd. In the mythology of the Haida, Tlingit and Tsimshian people of the northwest coast, mankind first learned to respect bears when a girl married the son of a black bear chieftain. In Kwakwa̱ka̱ʼwakw mythology, black and brown bears became enemies when Grizzly Bear Woman killed Black Bear Woman for being lazy. Black Bear Woman's children, in turn, killed Grizzly Bear Woman's children.Averkieva, Julia and Sherman, Mark. Kwakiutl String Figures, UBC Press, 1992. The Navajo believed that the Big Black Bear was chief among the bears of the four directions surrounding Sun's house and would pray to it in order to be granted its protection during raids.Clark, LaVerne Harrell (2001). They Sang for Horses: The Impact of the Horse on Navajo & Apache Folklore, University Press of Colorado.
Sleeping Bear Dunes in Michigan is named after a Native American legend, where a female bear and her two cubs swam across Lake Michigan to escape a fire on the Wisconsin shore. The mother bear reached the shore and waited for her cubs, but they did not make it across. Two islands mark where the cubs drowned, while the dune marks the spot where the mother bear waited.National Park Service. (2020, September 10). The story of Sleeping Bear Dunes. https://www.nps.gov/slbe/learn/kidsyouth/the-story-of-sleeping-bear.htm
Anglo-American
Morris Michtom, the creator of the teddy bear, was inspired to make the toy when he came across a cartoon of Theodore Roosevelt refusing to shoot a black bear cub tied to a tree.
The fictional character Winnie-the-Pooh was named after Winnipeg, a female cub that lived at the London Zoo from 1915 until her death in 1934.A Bear Named Winnie Canadian Broadcasting Corporation.ca TV (2004)
A cub caught in the Capitan Gap Fire in the spring of 1950 became the living representative of Smokey Bear, the mascot of the United States Forest Service.
Terrible Ted was a de-toothed and de-clawed bear who was forced to perform as a pro wrestler and whose "career" lasted from the 1950s to the 1970s.
Clark's Bears, previously named Clark's Trading Post, is a visitor attraction in Lincoln, New Hampshire known for its trained bear shows since 1949.
The American black bear is the mascot of the University of Maine and Baylor University, the latter of which houses two live bears on campus.
Attacks on humans
Although an adult bear is quite capable of killing a human, American black bears typically avoid confronting humans. Unlike grizzly bears, which became a subject of fearsome legend among the European settlers of North America, black bears were rarely considered overly dangerous, even though they lived in areas where the pioneers had settled.
American black bears rarely attack when confronted by humans and usually only make mock charges, emit blowing noises and swat the ground with their forepaws. In North America, black bears attack humans more often than brown bears do, but this is largely because black bears considerably outnumber brown bears. Compared to brown bear attacks, aggressive encounters with black bears rarely lead to serious injury. Most attacks tend to be motivated by hunger rather than territoriality, so victims have a higher probability of surviving by fighting back rather than submitting. Unlike female brown bears, female American black bears are not as protective of their cubs and rarely attack humans in the vicinity of the cubs, though such attacks occasionally do occur. The worst recorded attack occurred in May 1978, when a bear killed three teenagers fishing in Algonquin Park in Ontario. Another exceptional attack occurred in August 1997 in Liard River Hot Springs Provincial Park in British Columbia, when an emaciated bear attacked a mother and child, killing the mother and a man who intervened. The bear was shot while mauling a fourth victim.
The majority of attacks happened in national parks, usually near campgrounds, where the bears had habituated to close human proximity and food. Of 1,028 incidents of aggressive acts toward humans, recorded from 1964 to 1976 in the Great Smoky Mountains National Park, 107 resulted in injury and occurred mainly in tourist hot spots where people regularly fed the bears handouts.Kruuk, Hans (2002). Hunter and Hunted: Relationships Between Carnivores and People, Cambridge University Press. In almost every case where open garbage dumps that attracted bears were closed and handouts ceased, the number of aggressive encounters dropped. However, in the Liard River Hot Springs case, the bear was apparently dependent on a local garbage dump that had closed and so was starving to death. Attempts to relocate bears are typically unsuccessful, as the bears seem able to return to their home range, even without familiar landscape cues.
Livestock and crop predation
Limited food sources in early spring and failures of wild berry and nut crops in summer may drive bears to feed regularly from human-based food sources. These bears often eat crops, especially during autumn hyperphagia when natural foods are scarce. Favored crops include apples, oats and corn. American black bears can do extensive damage in areas of the northwestern United States by stripping the bark from trees and feeding on the cambium. Livestock depredations occur mostly in spring.
Although they occasionally hunt adult cattle and horses, they seem to prefer smaller prey such as sheep, goats, pigs and young calves. They usually kill by biting the neck and shoulders, though they may break the neck or back of the prey with blows from their paws. Evidence of a bear attack includes claw marks, which are often found on the neck, back and shoulders of larger animals. Surplus killing of sheep and goats is common. American black bears have been known to frighten livestock herds over cliffs, causing injuries and death to many animals; whether this is intentional is not known. Occasionally bears kill pets, especially domestic dogs, which are the most prone to harass a bear."Black Bear Attacks Dog". WJHG. Retrieved December 21, 2012. It is not recommended to use unleashed dogs to deter bear attacks. Although large, aggressive dogs can sometimes cause a bear to run, an angry bear that is pressed will often turn the tables and chase the dog in return. A bear in pursuit of a pet dog can threaten both canid and human lives."Frequently Asked Questions Regarding Bears". Denali National Park & Preserve, National Park Service. Retrieved December 21, 2012."Encountering Black Bears in Arkansas". University of Arkansas.
Hunting
The hunting of American black bears has taken place since the initial settlement of the Americas. The first piece of evidence dates to a Clovis site at Lehner Ranch, Arizona. Partially calcined teeth of a 3-month old black bear cub came from a roasting pit, suggesting the bear cub was eaten. The surrounding charcoal was dated to the Early Holocene (10,940 BP). Black bear remains also appear to be associated with early peoples in Tlapacoya, Mexico. Native Americans increasingly utilized black bears during the Holocene, particularly in the late Holocene upper Midwest, e.g., Hopewell and Mississippian cultures.
Some Native American tribes, in admiration for the American black bear's intelligence, would decorate the heads of bears they killed with trinkets and place them on blankets. Tobacco smoke would be wafted into the disembodied head's nostrils by the hunter who dealt the killing blow, who would compliment the animal for its courage. The Kutchin typically hunted American black bears during their hibernation cycle. Unlike the hunting of hibernating grizzly bears, which was fraught with danger, hibernating American black bears took longer to awaken, and hunting them was thus safer and easier. During the European colonization of eastern North America, thousands of bears were hunted for their meat, fat and fur. Theodore Roosevelt wrote extensively on black bear hunting in his Hunting the Grisly and Other Sketches.
He wrote that black bears were difficult to hunt by stalking, due to their habitat preferences, though they were easy to trap. Roosevelt described how, in the southern states, planters regularly hunted bears on horseback with hounds. General Wade Hampton was known to have been present at 500 successful bear hunts, two-thirds of which he killed personally. He killed 30 or 40 bears with only a knife, which he would use to stab the bears between the shoulder blades while they were distracted by his hounds. Unless well trained, horses were often useless in bear hunts, as they tended to bolt when the bears stood their ground. In 1799, 192,000 American black bear skins were exported from Quebec. In 1822, 3,000 skins were exported by the Hudson's Bay Company.Partington, Charles Frederick (1835). The British Cyclopædia of Natural History: Combining a Scientific Classification of Animals, Plants, and Minerals, Vol. 1, Orr & Smith. In 1992, untanned, fleshed and salted hides were sold for an average of $165.
In Canada, black bears are considered both a big game and a furbearer species in all provinces, save for New Brunswick and the Northwest Territories, where they are classed only as a big game species. There are around 80,900 licensed bear hunters in Canada. Canadian black bear hunts take place in the fall and spring, and both male and female bears can be legally taken, though some provinces prohibit the hunting of females with cubs, or of yearlings.
Currently, 28 of the U.S. states have American black bear hunting seasons. Nineteen states require a bear hunting license, with some also requiring a big game license. In eight states, only a big game license is required. Overall, over 481,500 American black bear hunting licenses are sold per year. The hunting methods and seasons vary greatly according to state, with some bear hunting seasons including fall only, spring and fall, or year-round. New Jersey, in November 2010, approved a six-day bear-hunting season in early December 2010 to slow the growth of the population. Bear hunting had been banned in New Jersey for five years before that time. A Fairleigh Dickinson University PublicMind poll found that 53% of New Jersey voters approved of the new season if scientists concluded that bears were leaving their usual habitats and destroying private property. Men, older voters and those living in rural areas were more likely to approve of a bear hunting season in New Jersey than women, younger voters and those living in more developed parts of the state. In the western states, where there are large American black bear populations, there are spring and year-round seasons. Approximately 18,000 American black bears were killed annually in the U.S. between 1988 and 1992. Within this period, annual kills ranged from six bears in South Carolina to 2,232 in Maine. According to Dwight Schuh in his Bowhunter's Encyclopedia, American black bears are the third most popular quarry of bowhunters, behind deer and elk.Schuh, Dwight R. (1992). Bowhunter's Encyclopedia, Stackpole Books,
Meat
Bear meat was historically held in high esteem among North America's indigenous people and colonists. American black bears were the only bear species the Kutchin hunted for their meat, though this constituted only a small part of their diet.Nelson, Richard K. (1986). Hunters of the Northern Forest: Designs for Survival Among the Alaskan Kutchin, University of Chicago Press. Bear meat is also described in the second volume of Frank Forester's Field Sports of the United States, and British Provinces, of North America.
Theodore Roosevelt likened the flesh of young American black bears to pork, finding it neither as coarse nor as flavorless as the meat of grizzly bears.Roosevelt, Theodore. Hunting Trips of a Ranchman: Hunting Trips on the Prairie and in the Mountains, Adamant Media Corporation. The most favored cuts are concentrated in the legs and loins. Meat from the neck, front legs and shoulders is usually ground into minced meat or used for stews and casseroles. Keeping the fat on tends to give the meat a strong flavor. As American black bears can carry trichinellosis, cooking temperatures need to be high in order to kill the parasites.Smith, Richard P. (2007). Black Bear Hunting, Stackpole Books.
Bear fat was once valued as a cosmetic article that promoted hair growth and gloss. The fat most favored for this purpose was the hard white fat found in the body's interior. As only a small portion of this fat could be harvested, the oil was often mixed with large quantities of hog lard. However, animal rights activism over the last decade has slowed the harvest of these animals, and bear lard has not been used for cosmetics in recent years.
See also
List of fatal bear attacks in North America
List of individual bears
References
Further reading
External links
Wildlifeinformation.org: American Black Bear Conservation Action Plan
Song dynasty
The Song dynasty was a unifying imperial dynasty of China that ruled from 960 to 1279. The dynasty was founded by Emperor Taizu of Song, who usurped the throne of the Later Zhou dynasty and went on to conquer the rest of the Ten Kingdoms, ending the Five Dynasties and Ten Kingdoms period. The Song often came into conflict with the contemporaneous Liao, Western Xia and Jin dynasties in northern China. After retreating to southern China following attacks by the Jin dynasty, the Song was eventually conquered by the Mongol-led Yuan dynasty.
The dynasty's history is divided into two periods: during the Northern Song (960–1127), the capital was in the northern city of Bianjing (now Kaifeng) and the dynasty controlled most of what is now East China. The Southern Song (1127–1279) comprises the period following the loss of control over the northern half of Song territory to the Jurchen-led Jin dynasty in the Jin–Song wars. At that time, the Song court retreated south of the Yangtze and established its capital at Lin'an (now Hangzhou). Although the Song dynasty had lost control of the traditional Chinese heartlands around the Yellow River, the Southern Song Empire contained a large population and productive agricultural land, sustaining a robust economy. In 1234, the Jin dynasty was conquered by the Mongols, who took control of northern China, maintaining uneasy relations with the Southern Song. Möngke Khan, the fourth Great Khan of the Mongol Empire, died in 1259 while besieging the mountain castle Diaoyucheng in Chongqing. His younger brother Kublai Khan was proclaimed the new Great Khan and in 1271 founded the Yuan dynasty. After two decades of sporadic warfare, Kublai Khan's armies conquered the Song dynasty in 1279 after defeating the Southern Song in the Battle of Yamen, and reunited China under the Yuan dynasty.
Technology, science, philosophy, mathematics, and engineering flourished during the Song era. The Song dynasty was the first in world history to issue banknotes or true paper money and the first Chinese government to establish a permanent standing navy. This dynasty saw the first surviving records of the chemical formula for gunpowder, the invention of gunpowder weapons such as fire arrows, bombs, and the fire lance. It also saw the first discernment of true north using a compass, first recorded description of the pound lock, and improved designs of astronomical clocks. Economically, the Song dynasty was unparalleled with a gross domestic product three times larger than that of Europe during the 12th century. China's population doubled in size between the 10th and 11th centuries. This growth was made possible by expanded rice cultivation, use of early-ripening rice from Southeast and South Asia, and production of widespread food surpluses. The Northern Song census recorded 20 million households, double that of the Han and Tang dynasties. It is estimated that the Northern Song had a population of 90 million people, and 200 million by the time of the Ming dynasty. This dramatic increase of population fomented an economic revolution in pre-modern China.
The expansion of the population, growth of cities, and emergence of a national economy led to the gradual withdrawal of the central government from direct intervention in the economy. The lower gentry assumed a larger role in local administration and affairs. Song society was vibrant, and cities had lively entertainment quarters. Citizens gathered to view and trade artwork, and intermingled at festivals and in private clubs. The spread of literature and knowledge was enhanced by the rapid expansion of woodblock printing and the 11th-century invention of movable type printing. Philosophers such as Cheng Yi and Zhu Xi reinvigorated Confucianism with new commentary, infused with Buddhist ideals, and emphasized a new organization of classic texts that established the doctrine of Neo-Confucianism. Although civil service examinations had existed since the Sui dynasty, they became much more prominent in the Song period. Officials gaining power through imperial examination led to a shift from a military-aristocratic elite to a scholar-bureaucratic elite.
History
Northern Song, 960–1127
Court portrait of Emperor Taizu of Song
After usurping the throne of the Later Zhou dynasty, Emperor Taizu of Song spent sixteen years conquering the rest of China proper, reuniting much of the territory that had once belonged to the Han and Tang empires and ending the upheaval of the Five Dynasties and Ten Kingdoms period. In Kaifeng, he established a strong central government over the empire. The establishment of this capital marked the start of the Northern Song period. He ensured administrative stability by promoting the civil service examination system of drafting state bureaucrats by skill and merit (instead of aristocratic or military position) and promoted projects that ensured efficiency in communication throughout the empire. In one such project, cartographers created detailed maps of each province and city that were then collected in a large atlas. Emperor Taizu also promoted groundbreaking scientific and technological innovations by supporting works like the astronomical clock tower designed and built by the engineer Zhang Sixun.
The Song court maintained diplomatic relations with Chola India, the Fatimid Caliphate of Egypt, Srivijaya, the Kara-Khanid Khanate in Central Asia, the Goryeo Kingdom in Korea, and other countries that were also trade partners with Japan. Chinese records even mention an embassy from the ruler of "Fu lin" (拂菻, i.e. the Byzantine Empire), Michael VII Doukas, and its arrival in 1081. However, China's closest neighbouring states had the greatest impact on its domestic and foreign policy. From its inception under Taizu, the Song dynasty alternated between warfare and diplomacy with the ethnic Khitans of the Liao dynasty in the northeast and with the Tanguts of the Western Xia in the northwest. The Song dynasty used military force in an attempt to quell the Liao dynasty and to recapture the Sixteen Prefectures, a territory under Khitan control since 938 that was traditionally considered to be part of China proper (most parts of today's Beijing and Tianjin). Song forces were repulsed by the Liao forces, who engaged in aggressive yearly campaigns into Northern Song territory until 1005, when the signing of the Shanyuan Treaty ended these northern border clashes. The Song were forced to provide tribute to the Khitans, although this did little damage to the Song economy since the Khitans were economically dependent upon importing massive amounts of goods from the Song. More significantly, the Song state recognized the Liao state as its diplomatic equal. The Song created an extensive defensive forest along the Song–Liao border to thwart potential Khitan cavalry attacks.
The Song dynasty managed to win several military victories over the Tanguts in the early 11th century, culminating in a campaign led by the polymath scientist, general, and statesman Shen Kuo (1031–1095). However, this campaign was ultimately a failure due to a rival military officer of Shen disobeying direct orders, and the territory gained from the Western Xia was eventually lost. The Song fought against the Vietnamese kingdom of Đại Việt twice, the first conflict in 981 and later a significant war from 1075 to 1077 over a border dispute and the Song's severing of commercial relations with Đại Việt. After the Vietnamese forces inflicted heavy damages in a raid on Guangxi, the Song commander Guo Kui (1022–1088) penetrated as far as Thăng Long (modern Hanoi). Heavy losses on both sides prompted the Vietnamese commander Thường Kiệt (1019–1105) to make peace overtures, allowing both sides to withdraw from the war effort; captured territories held by both Song and Vietnamese were mutually exchanged in 1082, along with prisoners of war.
During the 11th century, political rivalries divided members of the court due to the ministers' differing approaches, opinions, and policies regarding the handling of the Song's complex society and thriving economy. The idealist Chancellor, Fan Zhongyan (989–1052), was the first to receive a heated political backlash when he attempted to institute the Qingli Reforms, which included measures such as improving the recruitment system of officials, increasing the salaries for minor officials, and establishing sponsorship programs to allow a wider range of people to be well educated and eligible for state service.
After Fan was forced to step down from his office, Wang Anshi (1021–1086) became Chancellor of the imperial court. With the backing of Emperor Shenzong (1067–1085), Wang Anshi severely criticized the educational system and state bureaucracy. Seeking to resolve what he saw as state corruption and negligence, Wang implemented a series of reforms called the New Policies. These involved land value tax reform, the establishment of several government monopolies, the support of local militias, and the creation of higher standards for the Imperial examination to make it more practical for men skilled in statecraft to pass.
The reforms created political factions in the court. Wang Anshi's "New Policies Group" (Xin Fa), also known as the "Reformers", were opposed by the ministers in the "Conservative" faction led by the historian and Chancellor Sima Guang (1019–1086). As one faction supplanted another in the majority position of the court ministers, it would demote rival officials and exile them to govern remote frontier regions of the empire. One of the prominent victims of the political rivalry, the famous poet and statesman Su Shi (1037–1101), was jailed and eventually exiled for criticizing Wang's reforms.
The continual alternation between reform and conservatism had effectively weakened the dynasty. This decline can also be attributed to Cai Jing (1047–1126), who was appointed by Emperor Zhezong (1085–1100) and who remained in power until 1125. He revived the New Policies and pursued political opponents, tolerated corruption and encouraged Emperor Huizong (1100–1126) to neglect his duties to focus on artistic pursuits. Later, a peasant rebellion broke out in Zhejiang and Fujian, headed by Fang La in 1120. The rebellion may have been caused by an increasing tax burden, the concentration of landownership and oppressive government measures.
While the central Song court remained politically divided and focused upon its internal affairs, alarming new events to the north in the Liao state finally came to its attention. The Jurchen, a subject tribe of the Liao, rebelled against them and formed their own state, the Jin dynasty. The Song official Tong Guan (1054–1126) advised Emperor Huizong to form an alliance with the Jurchens, and the joint military campaign under this Alliance Conducted at Sea toppled and completely conquered the Liao dynasty by 1125. During the joint attack, the Song's northern expedition army removed the defensive forest along the Song–Liao border.
A Liao dynasty (907–1125) polychrome wood-carved statue of Guanyin, Shanxi
However, the poor performance and military weakness of the Song army were observed by the Jurchens, who immediately broke the alliance, beginning the Jin–Song Wars of 1125 and 1127. Because of the removal of the previous defensive forest, the Jin army marched quickly across the North China Plain to Kaifeng. In the Jingkang Incident during the latter invasion, the Jurchens captured not only the capital, but also the retired Emperor Huizong, his successor Emperor Qinzong, and most of the Imperial court.
The remaining Song forces regrouped under the self-proclaimed Emperor Gaozong (1127–1162) and withdrew south of the Yangtze to establish a new capital at Lin'an (modern Hangzhou). The Jurchen conquest of North China and shift of capitals from Kaifeng to Lin'an was the dividing line between the Northern and Southern Song dynasties.
After their fall to the Jin, the Song lost control of North China. Now occupying what had traditionally been known as "China proper", the Jin regarded themselves as the rightful rulers of China. The Jin later chose earth as their dynastic element and yellow as their royal color. According to the theory of the Five Elements (wuxing), the earth element follows fire, the dynastic element of the Song, in the sequence of elemental creation. This ideological move therefore signaled that the Jin considered Song reign in China complete, with the Jin replacing the Song as the rightful rulers of China proper.
Southern Song, 1127–1279
Although weakened and pushed south beyond the Huai River, the Southern Song found new ways to bolster its strong economy and defend itself against the Jin dynasty. The government sponsored massive shipbuilding and harbor improvement projects, and the construction of beacons and seaport warehouses to support maritime trade abroad, including at the major international seaports, such as Quanzhou, Guangzhou, and Xiamen, that were sustaining China's commerce. There were able military officers such as Yue Fei and Han Shizhong.
To protect and support the multitude of ships sailing into the East China Sea and Yellow Sea (to Korea and Japan), Southeast Asia, the Indian Ocean, and the Red Sea, it was necessary to establish an official standing navy. The Song dynasty therefore established China's first permanent navy in 1132, with a headquarters at Dinghai. With a permanent navy, the Song were prepared to face the naval forces of the Jin on the Yangtze River in 1161, in the Battle of Tangdao and the Battle of Caishi. During these battles the Song navy employed swift paddle wheel-driven naval vessels armed with traction trebuchet catapults aboard the decks that launched gunpowder bombs. Although the Jin forces commanded by Wanyan Liang (the Prince of Hailing) boasted 70,000 men on 600 warships, and the Song forces only 3,000 men on 120 warships, the Song dynasty forces were victorious in both battles due to the destructive power of the bombs and the rapid assaults by paddlewheel ships. The strength of the navy was heavily emphasized following these victories. A century after the navy was founded it had grown in size to 52,000 fighting marines.
The Song government confiscated portions of land owned by the landed gentry in order to raise revenue for these projects, an act which caused dissension and loss of loyalty amongst leading members of Song society but did not stop the Song's defensive preparations. Financial matters were made worse by the fact that many wealthy, land-owning families—some of which had officials working for the government—used their social connections with those in office in order to obtain tax-exempt status.
Although the Song dynasty was able to hold back the Jin, a new foe came to power over the steppe, deserts, and plains north of the Jin dynasty. The Mongols, led by Genghis Khan (r. 1206–1227), initially invaded the Jin dynasty in 1205 and 1209, engaging in large raids across its borders, and in 1211 an enormous Mongol army was assembled to invade the Jin. The Jin dynasty was forced to submit and pay tribute to the Mongols as vassals; when the Jin suddenly moved their capital city from Beijing to Kaifeng, the Mongols saw this as a revolt. Under the leadership of Ögedei Khan (r.1229–1241), both the Jin dynasty and Western Xia dynasty were conquered by Mongol forces in 1233/34.
The Mongols were allied with the Song, but this alliance was broken when the Song recaptured the former imperial capitals of Kaifeng, Luoyang, and Chang'an at the collapse of the Jin dynasty. After the first Mongol invasion of Vietnam in 1258, Mongol general Uriyangkhadai attacked Guangxi from Hanoi as part of a coordinated Mongol attack in 1259 with armies attacking in Sichuan under Mongol leader Möngke Khan and other Mongol armies attacking in modern-day Shandong and Henan. On August 11, 1259, Möngke Khan died during the siege of Diaoyu Castle in Chongqing.
His successor Kublai Khan continued the assault against the Song, gaining a temporary foothold on the southern banks of the Yangtze. By the winter of 1259, Uriyangkhadai's army fought its way north to meet Kublai's army, which was besieging Ezhou in Hubei. Kublai made preparations to take Ezhou, but a pending civil war with his brother Ariq Böke—a rival claimant to the Mongol Khaganate—forced Kublai to move back north with the bulk of his forces. In Kublai's absence, the Song forces were ordered by Chancellor Jia Sidao to make an immediate assault and succeeded in pushing the Mongol forces back to the northern banks of the Yangtze. There were minor border skirmishes until 1265, when Kublai won a significant battle in Sichuan.
From 1268 to 1273, Kublai blockaded the Yangtze River with his navy and besieged Xiangyang, the last obstacle in his way to invading the rich Yangtze River basin. Kublai officially declared the creation of the Yuan dynasty in 1271. In 1275, a Song force of 130,000 troops under Chancellor Jia Sidao was defeated by Kublai's newly appointed commander-in-chief, general Bayan. By 1276, most of the Song territory had been captured by Yuan forces, including the capital Lin'an.
In the Battle of Yamen on the Pearl River Delta in 1279, the Yuan army, led by the general Zhang Hongfan, finally crushed the Song resistance. The last remaining ruler, the 13-year-old emperor Zhao Bing, committed suicide, along with Prime Minister Lu Xiufu and approximately 1300 members of the royal clan. On Kublai's orders, carried out by his commander Bayan, the rest of the former imperial family of Song were unharmed; the deposed Emperor Gong was demoted, being given the title 'Duke of Ying', but was eventually exiled to Tibet where he took up a monastic life. The former emperor would eventually be forced to commit suicide under the orders of Kublai's great-great-grandson, Gegeen Khan, out of fear that Emperor Gong would stage a coup to restore his reign. Other members of the Song imperial family continued to live in the Yuan dynasty, including Zhao Mengfu and Zhao Yong.
Culture and society
The Song dynasty was an era of administrative sophistication and complex social organization.China in 1000 CE: The Most Advanced Society in the World, in Ebrey, Patricia, & Conrad Schirokauer, consultants, The Song dynasty in China (960–1279): Life in the Song Seen through a 12th-century Scroll (Asian Topics on Asia for Educators) (Asia for Educators, Columbia Univ.), as accessed October 6 & 9, 2012. Some of the largest cities in the world were found in China during this period (Kaifeng and Hangzhou had populations of over a million). People enjoyed various social clubs and entertainment in the cities, and there were many schools and temples to provide the people with education and religious services. The Song government supported social welfare programs including the establishment of retirement homes, public clinics, and paupers' graveyards. The Song dynasty supported a widespread postal service that was modeled on the earlier Han dynasty (202 BCE – 220 CE) postal system to provide swift communication throughout the empire. The central government employed thousands of postal workers of various ranks to provide service for post offices and larger postal stations. In rural areas, farming peasants either owned their own plots of land, paid rents as tenant farmers, or were serfs on large estates.
A 12th-century painting by Su Hanchen; a girl waves a peacock feather banner like the one used in dramatical theater to signal an acting leader of troops.
Although women were on a lower social tier than men according to Confucian ethics, they enjoyed many social and legal privileges and wielded considerable power at home and in their own small businesses. As Song society became more and more prosperous and parents on the bride's side of the family provided larger dowries for her marriage, women naturally gained many new legal rights in ownership of property. Under certain circumstances, an unmarried daughter without brothers, or a surviving mother without sons, could inherit one-half of her father's share of undivided family property. There were many notable and well-educated women, and it was a common practice for women to educate their sons during their earliest youth. The mother of the scientist, general, diplomat, and statesman Shen Kuo taught him essentials of military strategy. There were also exceptional women writers and poets, such as Li Qingzhao (1084–1151), who became famous even in her lifetime.
The Song dynasty used the term "jijian" to characterize male homosexual practices; the "ji" in the term referred to a man receiving sexual acts, and the word was derogatory by virtue of its connection with animals deemed inferior to humans. The Song period saw some cultural pushback, led by the dynasty's Neo-Confucian movement, against homosexual and bisexual practices, as urbanization prompted growing male prostitution and other sex work economies. Significantly, the Song government passed laws prohibiting male prostitution, although many other cultural and literary works attest to the continued existence and prominence of men in sex work.
Religion in China during this period had a great effect on people's lives, beliefs, and daily activities, and Chinese literature on spirituality was popular. The major deities of Daoism and Buddhism, ancestral spirits, and the many deities of Chinese folk religion were worshipped with sacrificial offerings. Tansen Sen asserts that more Buddhist monks from India traveled to China during the Song than in the previous Tang dynasty (618–907). With many ethnic foreigners travelling to China to conduct trade or live permanently, there came many foreign religions; religious minorities in China included Middle Eastern Muslims, the Kaifeng Jews, and Persian Manichaeans.
The populace engaged in a vibrant social and domestic life, enjoying such public festivals as the Lantern Festival and the Qingming Festival. There were entertainment quarters in the cities providing a constant array of amusements. There were puppeteers, acrobats, theatre actors, sword swallowers, snake charmers, storytellers, singers and musicians, prostitutes, and places to relax, including tea houses, restaurants, and organized banquets. People attended social clubs in large numbers; there were tea clubs, exotic food clubs, antiquarian and art collectors' clubs, horse-loving clubs, poetry clubs, and music clubs. Like regional cooking and cuisines in the Song, the era was known for its regional varieties of performing arts styles as well. Theatrical drama was very popular amongst the elite and general populace, although Classical Chinese—not the vernacular language—was spoken by actors on stage. The four largest drama theatres in Kaifeng could hold audiences of several thousand each. There were also notable domestic pastimes, as people at home enjoyed activities such as the go and xiangqi board games.
Civil service examinations and the gentry
During this period greater emphasis was laid upon the civil service system of recruiting officials; this was based upon degrees acquired through competitive examinations, in an effort to select the most capable individuals for governance. Selecting men for office through proven merit was an ancient idea in China. The civil service system became institutionalized on a small scale during the Sui and Tang dynasties, but by the Song period, it became virtually the only means for drafting officials into the government. The advent of widespread printing helped to widely circulate Confucian teachings and to educate more and more eligible candidates for the exams. This can be seen in the number of exam takers for the low-level prefectural exams rising from 30,000 annual candidates in the early 11th century to 400,000 candidates by the late 13th century. The civil service and examination system allowed for greater meritocracy, social mobility, and equality in competition for those wishing to attain an official seat in government. Using statistics gathered by the Song state, Edward A. Kracke, Sudō Yoshiyuki, and Ho Ping-ti supported the hypothesis that simply having a father, grandfather, or great-grandfather who had served as an official of state did not guarantee one would obtain the same level of authority. Robert Hartwell and Robert P. Hymes criticized this model, stating that it places too much emphasis on the role of the nuclear family and considers only three paternal ascendants of exam candidates while ignoring the demographic reality of Song China, the significant proportion of males in each generation that had no surviving sons, and the role of the extended family. Many felt disenfranchised by what they saw as a bureaucratic system that favored the land-holding class able to afford the best education. One of the greatest literary critics of this was the official and famous poet Su Shi. Yet Su was a product of his times, as the identity, habits, and attitudes of the scholar-official had become less aristocratic and more bureaucratic with the transition of the periods from Tang to Song. At the beginning of the dynasty, government posts were disproportionately held by two elite social groups: a founding elite who had ties with the founding emperor and a semi-hereditary professional elite who used long-held clan status, family connections, and marriage alliances to secure appointments. By the late 11th century, the founding elite became obsolete, while political partisanship and factionalism at court undermined the marriage strategies of the professional elite, which dissolved as a distinguishable social group and was replaced by a multitude of gentry families.
Due to Song's enormous population growth and the body of its appointed scholar-officials being accepted in limited numbers (about 20,000 active officials during the Song period), the larger scholarly gentry class would now take over grassroots affairs on the vast local level. Excluding the scholar-officials in office, this elite social class consisted of exam candidates, examination degree-holders not yet assigned to an official post, local tutors, and retired officials. These learned men, degree-holders, and local elites supervised local affairs and sponsored necessary facilities of local communities; any local magistrate appointed to his office by the government relied upon the cooperation of the few or many local gentry in the area. For example, the Song government—excluding the educational-reformist government under Emperor Huizong—devoted little state revenue to maintaining prefectural and county schools; instead, the bulk of the funds for schools was drawn from private financing. This limited role of government officials was a departure from the earlier Tang dynasty (618–907), when the government strictly regulated commercial markets and local affairs; now the government withdrew heavily from regulating commerce and relied upon a mass of local gentry to perform necessary duties in their communities.
The gentry distinguished themselves in society through their intellectual and antiquarian pursuits, while the homes of prominent landholders attracted a variety of courtiers, including artisans, artists, educational tutors, and entertainers. Despite the disdain for trade, commerce, and the merchant class exhibited by the highly cultured and elite exam-drafted scholar-officials, commercialism played a prominent role in Song culture and society. A scholar-official would be frowned upon by his peers if he pursued means of profiteering outside of his official salary; however, this did not stop many scholar-officials from managing business relations through the use of intermediary agents.
Law, justice, and forensic science
The Song judicial system retained most of the legal code of the earlier Tang dynasty, the basis of traditional Chinese law up until the modern era. Roving sheriffs maintained law and order in the municipal jurisdictions and occasionally ventured into the countryside. Official magistrates overseeing court cases were not only expected to be well-versed in written law but also to promote morality in society. Magistrates such as the famed Bao Zheng (999–1062) embodied the upright, moral judge who upheld justice and never failed to live up to his principles. Song judges specified the guilty person or party in a criminal act and meted out punishments accordingly, often in the form of caning. Individuals or parties brought to court for a criminal or civil offense were not viewed as wholly innocent until proven otherwise, and even accusers were viewed with a high level of suspicion by the judge. Due to costly court expenses and the immediate jailing of those accused of criminal offenses, people in the Song preferred to settle disputes and quarrels privately, without the court's interference.
Shen Kuo's Dream Pool Essays argued against traditional Chinese beliefs in anatomy (such as his argument for two throat valves instead of three); this perhaps spurred the interest in the performance of post-mortem autopsies in China during the 12th century. The physician and judge known as Song Ci (1186–1249) wrote a pioneering work of forensic science on the examination of corpses in order to determine cause of death (strangulation, poisoning, drowning, blows, etc.) and to prove whether death resulted from murder, suicide, or accidental death. Song Ci stressed the importance of proper coroner's conduct during autopsies and the accurate recording of the inquest of each autopsy by official clerks.
Military and methods of warfare
The Song military was chiefly organized to ensure that the army could not threaten Imperial control, often at the expense of effectiveness in war. Northern Song's Military Council operated under a Chancellor, who had no control over the imperial army. The imperial army was divided among three marshals, each independently responsible to the Emperor. Since the Emperor rarely led campaigns personally, Song forces lacked unity of command. The imperial court often believed that successful generals endangered royal authority, and relieved or even executed them (notably Li Gang, Yue Fei, and Han Shizhong).
Although the scholar-officials viewed soldiers as lower members in the hierarchic social order, a person could gain status and prestige in society by becoming a high-ranking military officer with a record of victorious battles. At its height, the Song military had one million soldiers divided into platoons of 50 troops, companies made of two platoons, and battalions composed of 500 soldiers. Crossbowmen were separated from the regular infantry and placed in their own units, as they were prized combatants, providing effective missile fire against cavalry charges. The government was eager to sponsor new crossbow designs that could shoot at longer ranges, while crossbowmen were also valuable when employed as long-range snipers. Song cavalry employed a slew of different weapons, including halberds, swords, bows, spears, and 'fire lances' that discharged a gunpowder blast of flame and shrapnel.
Military strategy and military training were treated as sciences that could be studied and perfected; soldiers were tested in their skills of using weaponry and in their athletic ability. The troops were trained to follow signal standards to advance at the waving of banners and to halt at the sound of bells and drums.
The Song navy was of great importance during the consolidation of the empire in the 10th century; during the war against the Southern Tang state, the Song navy employed tactics such as defending large floating pontoon bridges across the Yangtze River in order to secure movements of troops and supplies. There were large ships in the Song navy that could carry 1,000 soldiers aboard their decks, while the swift-moving paddle-wheel craft were viewed as essential fighting ships in any successful naval battle.
In a battle on January 23, 971, massive arrow fire from Song dynasty crossbowmen decimated the war elephant corps of the Southern Han army. This defeat not only marked the eventual submission of the Southern Han to the Song dynasty, but also the last instance where a war elephant corps was employed as a regular division within a Chinese army.
There was a total of 347 military treatises written during the Song period, as listed by the history text of the Song Shi (compiled in 1345). However, only a handful of these military treatises have survived, including the Wujing Zongyao written in 1044. It was the first known book to have listed formulas for gunpowder; it gave appropriate formulas for use in several different kinds of gunpowder bombs. It also provided detailed descriptions and illustrations of double-piston pump flamethrowers, as well as instructions for the maintenance and repair of the components and equipment used in the device.
Arts, literature, philosophy, and religion
The visual arts during the Song dynasty were heightened by new developments such as advances in landscape and portrait painting. The gentry elite engaged in the arts as accepted pastimes of the cultured scholar-official, including painting, composing poetry, and writing calligraphy. The poet and statesman Su Shi and his associate Mi Fu (1051–1107) enjoyed antiquarian affairs, often borrowing or buying art pieces to study and copy. Poetry and literature profited from the rising popularity and development of the ci poetry form. Enormous encyclopedic volumes were compiled, such as works of historiography and dozens of treatises on technical subjects. This included the universal history text of the Zizhi Tongjian, compiled into 1000 volumes of 9.4 million written Chinese characters. The genre of Chinese travel literature also became popular with the writings of the geographer Fan Chengda (1126–1193) and Su Shi, the latter of whom wrote the 'daytrip essay' known as Record of Stone Bell Mountain that used persuasive writing to argue for a philosophical point. Although an early form of the local geographic gazetteer existed in China since the 1st century, the matured form known as "treatise on a place", or fangzhi, replaced the old "map guide" during the Song dynasty.
The imperial courts of the emperor's palace were filled with his entourage of court painters, calligraphers, poets, and storytellers. Emperor Huizong was the eighth emperor of the Song dynasty and a renowned artist as well as a patron of the arts; the catalogue of his collection listed over 6,000 known paintings.Ebrey, Cambridge, 149. A prime example of a highly venerated court painter was Zhang Zeduan (1085–1145), who painted an enormous panoramic painting, Along the River During the Qingming Festival. Emperor Gaozong of Song initiated a massive art project during his reign, known as the Eighteen Songs of a Nomad Flute, based on the life story of Cai Wenji (b. 177). This art project was a diplomatic gesture to the Jin dynasty while he negotiated for the release of his mother from Jurchen captivity in the north.
Chinese calligraphy of mixed styles written by Song dynasty poet Mi Fu (1051–1107)
Portrait of the Chinese Zen Buddhist Wuzhun Shifan, painted in 1238 AD
In philosophy, Chinese Buddhism had waned in influence but it retained its hold on the arts and on the charities of monasteries. Buddhism had a profound influence upon the budding movement of Neo-Confucianism, led by Cheng Yi (1033–1107) and Zhu Xi (1130–1200). Mahayana Buddhism influenced Fan Zhongyan and Wang Anshi through its concept of ethical universalism, while Buddhist metaphysics deeply affected the pre–Neo-Confucian doctrine of Cheng Yi. The philosophical work of Cheng Yi in turn influenced Zhu Xi. Although his writings were not accepted by his contemporary peers, Zhu's commentary and emphasis upon the Confucian classics of the Four Books as an introductory corpus to Confucian learning formed the basis of the Neo-Confucian doctrine. By the year 1241, under the sponsorship of Emperor Lizong, Zhu Xi's Four Books and his commentary on them became standard requirements of study for students attempting to pass the civil service examinations. The neighbouring countries of Japan and Korea also adopted Zhu Xi's teaching, known as the Shushigaku (朱子學, School of Zhu Xi) of Japan, and in Korea the Jujahak (주자학). Buddhism's continuing influence can be seen in painted artwork such as Lin Tinggui's Luohan Laundering. However, the ideology was highly criticized and even scorned by some. The statesman and historian Ouyang Xiu (1007–1072) called the religion a "curse" that could only be remedied by uprooting it from Chinese culture and replacing it with Confucian discourse. The Chan sect experienced a literary flourishing in the Song period, which saw the publication of several major classical koan collections which remain influential in Zen philosophy and practice to the present day. A true revival of Buddhism in Chinese society would not occur until the Mongol rule of the Yuan dynasty, with Kublai Khan's sponsorship of Tibetan Buddhism and Drogön Chögyal Phagpa as the leading lama. The Christian sect of Nestorianism, which had entered China in the Tang era, would also be revived in China under Mongol rule.
Cuisine and clothing
Sumptuary laws regulated the food that one consumed and the clothes that one wore according to status and social class. Clothing was made of hemp or cotton cloths, restricted to a color standard of black and white. Trousers were the acceptable attire for peasants, soldiers, artisans, and merchants, although wealthy merchants might choose to wear more ornate clothing and male blouses that came down below the waist. Acceptable apparel for scholar-officials was rigidly defined by the social ranking system. However, as time went on this rule of rank-graded apparel for officials was not as strictly enforced. Each official was able to display his awarded status by wearing different-colored traditional silken robes that hung to the ground around his feet, specific types of headgear, and even specific styles of girdles that displayed his graded-rank of officialdom.
Women wore long dresses, blouses that came down to the knee, skirts, and jackets with long or short sleeves, while women from wealthy families could wear purple scarves around their shoulders. The main difference in women's apparel from that of men was that it was fastened on the left, not on the right.
Dried jujubes such as these were imported to Song China from South Asia and the Middle East.
The main food staples in the diet of the lower classes remained rice, pork, and salted fish. In 1011, Emperor Zhenzong of Song introduced Champa rice to China from Vietnam's Kingdom of Champa, which sent 30,000 bushels as a tribute to Song. Champa rice was drought-resistant and able to grow fast enough to offer two harvests a year instead of one.
Song restaurant and tavern menus are recorded; they list entrees for feasts, banquets, festivals, and carnivals, and reveal a diverse and lavish diet for the upper class, who could choose from a wide variety of meats and seafood, including shrimp, geese, duck, mussel, shellfish, fallow deer, hare, partridge, pheasant, francolin, quail, fox, badger, clam, crab, and many others. Dairy products were rare in Chinese cuisine at this time. Beef was rarely consumed, since the bull was a valuable draft animal, and dog meat was absent from the diet of the wealthy, although the poor could choose to eat dog meat if necessary (yet it was not part of their regular diet). People also consumed dates, raisins, jujubes, pears, plums, apricots, pear juice, lychee-fruit juice, honey and ginger drinks, spices and seasonings of Sichuan pepper, ginger, soy sauce, vegetable oil, sesame oil, salt, and vinegar.
Economy
The Song dynasty had one of the most prosperous and advanced economies in the medieval world. Song Chinese invested their funds in joint stock companies and in multiple sailing vessels at a time when monetary gain was assured from the vigorous overseas trade and domestic trade along the Grand Canal and Yangtze River. Both private and government-controlled industries met the needs of a growing Chinese population in the Song; prominent merchant families and private businesses were allowed to occupy industries that were not already government-operated monopolies. Economic historians emphasize this toleration of market mechanisms over population growth or new farming technologies as the major cause of Song economic prosperity. Artisans and merchants formed guilds that the state had to deal with when assessing taxes, requisitioning goods, and setting standard workers' wages and prices on goods.
The iron industry was pursued by both private entrepreneurs who owned their own smelters and government-supervised smelting facilities. The Song economy was stable enough to produce a substantial annual output of iron products. Large-scale deforestation would have continued if not for the 11th-century innovation of the use of coal instead of charcoal in blast furnaces for smelting cast iron. Much of this iron was reserved for military use in crafting weapons and armouring troops, but some was used to fashion the many iron products needed to fill the demands of the growing domestic market. The iron trade within China was advanced by the construction of new canals, facilitating the flow of iron products from production centres to the large market in the capital city.
The annual output of minted copper currency in 1085 reached roughly six billion coins. The most notable advancement in the Song economy was the establishment of the world's first government-issued printed paper money, known as Jiaozi (see also Huizi). For the printing of paper money, the Song court established several government-run factories in the cities of Huizhou, Chengdu, Hangzhou, and Anqi. The workforce employed in these paper money factories was large; it was recorded in 1175 that the factory at Hangzhou alone employed more than a thousand workers a day.
The economic power of Song China is attested by the growth of the urban population of its capital, Hangzhou: roughly 200,000 at the start of the 12th century, it rose to about 500,000 around 1170 and doubled to over a million a century later. This economic power also heavily influenced economies abroad. In 1120 alone, the Song government collected 18,000,000 ounces (510,000 kg) of silver in taxes.Ebrey, Cambridge Illustrated History of China, 142. The Moroccan geographer al-Idrisi wrote in 1154 of the prowess of Chinese merchant ships in the Indian Ocean and of their annual voyages that brought iron, swords, silk, velvet, porcelain, and various textiles to places such as Aden (Yemen), the Indus River, and the Euphrates. Foreigners, in turn, affected the Chinese economy. For example, many West and Central Asian Muslims went to China to trade, becoming a preeminent force in the import and export industry, while some were even appointed as officers supervising economic affairs. Sea trade with the South-west Pacific, the Hindu world, the Islamic world, and East Africa brought merchants great fortune and spurred an enormous growth in the shipbuilding industry of Song-era Fujian. Such long overseas ventures nonetheless carried considerable risk, and, as the historians Ebrey, Walthall, and Palais note, merchants sought ways to reduce the danger of losing money on maritime trade missions abroad.
Science and technology
Gunpowder warfare
Advancements in weapons technology enhanced by gunpowder, including the evolution of the early flamethrower, explosive grenades, firearms, cannons, and land mines, enabled the Song Chinese to ward off their militant enemies until the Song's ultimate collapse in the late 13th century. The Wujing Zongyao manuscript of 1044 was the first book in history to provide formulas for gunpowder and their specified use in different types of bombs. While engaged in a war with the Mongols, in 1259 the official Li Zengbo wrote in his Kezhai Zagao, Xugaohou that the city of Qingzhou was manufacturing one to two thousand strong iron-cased bombshells a month, dispatching to Xiangyang and Yingzhou about ten to twenty thousand such bombs at a time. In turn, the invading Mongols employed northern Chinese soldiers and used these same types of gunpowder weapons against the Song. By the 14th century the firearm and cannon could also be found in Europe, India, and the Middle East, during the early age of gunpowder warfare.
Measuring distance and mechanical navigation
As early as the Han dynasty, when the state needed to accurately measure distances traveled throughout the empire, the Chinese relied on a mechanical odometer. The Chinese odometer was a wheeled carriage, its gearwork being driven by the rotation of the carriage's wheels; specific units of distance—the Chinese li—were marked by the mechanical striking of a drum or bell as an auditory signal. The specifications for the 11th-century odometer were written by Chief Chamberlain Lu Daolong, who is quoted extensively in the historical text of the Song Shi (compiled by 1345). In the Song period, the odometer vehicle was also combined with another old complex mechanical device known as the south-pointing chariot. This device, originally crafted by Ma Jun in the 3rd century, incorporated a differential gear that allowed a figure mounted on the vehicle to always point in the southern direction, no matter how the vehicle's wheels turned about. The concept of the differential gear that was used in this navigational vehicle is now found in modern automobiles in order to apply an equal amount of torque to a car's wheels even when they are rotating at different speeds.
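To make the gearing principle concrete, the sketch below shows how a fixed gear ratio turns wheel revolutions into one drum strike per li travelled. The wheel circumference and the metric length assumed for a li are illustrative round numbers chosen for the example, not values taken from the Song sources.

```python
# Illustrative sketch of the odometer principle: a fixed gear train converts
# wheel revolutions into one drum strike per li travelled.
# The wheel circumference and li length below are assumed round numbers,
# not figures taken from Song texts.

WHEEL_CIRCUMFERENCE_M = 3.0   # assumed circumference of the carriage wheel, in metres
LI_IN_METRES = 500.0          # assumed length of one li, in metres

# Gear ratio the mechanism must realise: wheel revolutions per li.
REVOLUTIONS_PER_LI = LI_IN_METRES / WHEEL_CIRCUMFERENCE_M

def drum_strikes(distance_m: float) -> int:
    """Number of drum strikes heard after travelling the given distance."""
    revolutions = distance_m / WHEEL_CIRCUMFERENCE_M
    return int(revolutions // REVOLUTIONS_PER_LI)

if __name__ == "__main__":
    for d in (499, 500, 2750):
        print(f"{d} m travelled -> {drum_strikes(d)} drum strike(s)")
```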
Polymaths, inventions, and astronomy
Polymaths such as the scientists and statesmen Shen Kuo (1031–1095) and Su Song (1020–1101) embodied advancements in all fields of study, including botany, zoology, geology, mineralogy, metallurgy, mechanics, magnetics, meteorology, horology, astronomy, pharmaceutical medicine, archeology, mathematics, cartography, optics, art criticism, hydraulics, and many other fields.
Shen Kuo was the first to discern magnetic declination from true north while experimenting with a compass. He theorized that geographical climates gradually shifted over time, and he created a theory of land formation involving concepts accepted in modern geomorphology. He performed optical experiments with the camera obscura just decades after Ibn al-Haytham was the first to do so. He also improved the designs of astronomical instruments such as the widened astronomical sighting tube, which allowed him to fix the position of the pole star (which had shifted over the centuries). Shen Kuo was also known for hydraulic clockworks: he invented a new overflow-tank clepsydra whose calibration of the measure of time used more accurate higher-order interpolation rather than simple linear interpolation.
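The advantage of higher-order over linear interpolation can be shown with a minimal sketch. The calibration points below are invented for the example and stand in for a smoothly varying astronomical quantity; the point is only that a fit through three points tracks a curve far better than straight lines drawn between pairs of them.

```python
# Minimal sketch: linear vs. second-order interpolation between sparse
# calibration points. The data here are invented for illustration and
# model a smoothly varying quantity sampled at x = 0, 10, 20.

xs = [0.0, 10.0, 20.0]
ys = [x * x for x in xs]   # the "true" underlying curve, pretended unknown between samples

def linear(x):
    """Straight-line interpolation between the two nearest calibration points."""
    i = 0 if x <= xs[1] else 1
    x0, x1, y0, y1 = xs[i], xs[i + 1], ys[i], ys[i + 1]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

def quadratic(x):
    """Second-order (Lagrange) interpolation through all three calibration points."""
    total = 0.0
    for i, xi in enumerate(xs):
        term = ys[i]
        for j, xj in enumerate(xs):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

x = 5.0
print("true:", x * x, "linear:", linear(x), "quadratic:", quadratic(x))
# true: 25.0  linear: 50.0  quadratic: 25.0 -- the higher-order fit is exact here
```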
Su Song was best known for his horology treatise of 1092, which described and illustrated in great detail his hydraulic-powered astronomical clock tower built in Kaifeng. The clock tower featured large astronomical instruments, an armillary sphere and a celestial globe, both driven by an early, intermittently working escapement mechanism (functioning in a manner similar to the western verge escapement of true mechanical clocks, which appeared in medieval clockworks derived from ancient clockworks of classical times). Su's tower featured a rotating gear wheel with 133 clock-jack mannequins that were timed to rotate past shuttered windows while ringing gongs and bells, banging drums, and presenting announcement plaques. In his printed book, Su published a celestial atlas of five star charts. These star charts feature a cylindrical projection similar to the Mercator projection, the latter being a cartographic innovation of Gerardus Mercator in 1569.
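For readers unfamiliar with the term, a Mercator-type cylindrical projection spaces the vertical coordinate as ln(tan(pi/4 + phi/2)), so the gaps between parallels widen away from the equator. The snippet below merely illustrates that general formula; it is not a reconstruction of the projection actually used on Su Song's charts.

```python
import math

def mercator_y(declination_deg: float, radius: float = 1.0) -> float:
    """Mercator-style vertical coordinate for a given declination (degrees)."""
    phi = math.radians(declination_deg)
    return radius * math.log(math.tan(math.pi / 4 + phi / 2))

# The spacing between parallels grows away from the celestial equator:
for dec in (0, 20, 40, 60):
    print(dec, round(mercator_y(dec), 3))   # 0.0, 0.356, 0.763, 1.317
```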
The Song Chinese observed supernovae, including SN 1054, the remnants of which would form the Crab Nebula. Moreover, the Soochow Astronomical Chart on Chinese planispheres was prepared in 1193 for instructing the crown prince on astronomical findings. The planispheres were engraved in stone several decades later.
Mathematics and cartography
There were many notable improvements to Chinese mathematics during the Song era. Mathematician Yang Hui's 1261 book provided the earliest Chinese illustration of Pascal's triangle, although it had been described earlier by Jia Xian around 1100. Yang Hui also provided rules for constructing combinatorial arrangements in magic squares, provided theoretical proof for Euclid's forty-third proposition about parallelograms, and was the first to use negative coefficients of 'x' in quadratic equations. Yang's contemporary Qin Jiushao (died 1261) was the first to introduce the zero symbol into Chinese mathematics; before this, blank spaces were used instead of zeroes in the system of counting rods. He is also known for working with the Chinese remainder theorem, Heron's formula, and astronomical data used in determining the winter solstice. Qin's major work was the Mathematical Treatise in Nine Sections, published in 1247.
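Two of the results named above translate directly into modern notation. The short sketch below restates them and is offered only as an illustration of the mathematics involved, not of the methods or notation the Song mathematicians themselves used.

```python
import math

def pascal_row(n):
    """n-th row of the binomial-coefficient triangle described by Jia Xian and Yang Hui."""
    row = [1]
    for k in range(n):
        row.append(row[-1] * (n - k) // (k + 1))
    return row

def heron_area(a, b, c):
    """Triangle area from its three side lengths (Heron's formula)."""
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

print(pascal_row(4))        # [1, 4, 6, 4, 1]
print(heron_area(3, 4, 5))  # 6.0
```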
Geometry was essential to surveying and cartography. The earliest extant Chinese maps date to the 4th century BCE, yet it was not until the time of Pei Xiu (224–271) that topographical elevation, a formal rectangular grid system, and the use of a standard graduated scale of distances were applied to terrain maps. Following a long tradition, Shen Kuo created a raised-relief map, while his other maps featured a uniform graduated scale of 1:900,000. A squared map of 1137, carved into a stone block, followed a uniform grid scale of 100 li for each gridded square and accurately mapped the outline of the coasts and river systems of China, extending all the way to India. Furthermore, the world's oldest known terrain map in printed form comes from the edited encyclopedia of Yang Jia in 1155, which displayed western China without the formal grid system that was characteristic of more professionally made Chinese maps. Although gazetteers had existed since 52 CE during the Han dynasty, and gazetteers accompanied by illustrative maps since the Sui dynasty, the illustrated gazetteer became much more common in the Song dynasty, when the foremost concern was for illustrative gazetteers to serve political, administrative, and military purposes.
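The scale figures quoted above lend themselves to a quick worked example. In the sketch below, the 1:900,000 ratio and the 100-li grid come from the text, while the metric length assumed for a Song li is only an approximation used for illustration.

```python
# Illustrative scale arithmetic for the maps described above.
# The 1:900,000 ratio and the 100-li grid squares come from the text;
# the metre value assumed for a Song li is only an approximation.

SCALE = 900_000          # ground distance is 900,000 times the map distance
LI_IN_METRES = 560       # assumed approximate length of a Song li

def ground_distance_km(map_distance_cm: float) -> float:
    """Ground distance represented by a measurement on a 1:900,000 map."""
    return map_distance_cm * SCALE / 100_000   # convert cm to km

grid_square_km = 100 * LI_IN_METRES / 1000     # side of one 100-li grid square
print(ground_distance_km(10))  # 10 cm on the map -> 90.0 km on the ground
print(grid_square_km)          # 100 li -> roughly 56 km per grid square
```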
Movable type printing
The innovation of movable type printing was made by the artisan Bi Sheng (990–1051) and first described by the scientist and statesman Shen Kuo in his Dream Pool Essays of 1088. The collection of Bi Sheng's original clay-fired typeface was passed on to one of Shen Kuo's nephews and was carefully preserved. Movable type enhanced the already widespread use of woodblock methods of printing thousands of documents and volumes of written literature, consumed eagerly by an increasingly literate public. The advancement of printing deeply affected education and the scholar-official class, since more books could be made faster, while mass-produced, printed books were cheaper than laborious handwritten copies. The enhancement of widespread printing and print culture in the Song period was thus a direct catalyst in the rise of social mobility and in the expansion of the educated class of scholar elites, which grew dramatically in size from the 11th to the 13th centuries.
The movable type invented by Bi Sheng was ultimately eclipsed by woodblock printing, largely because of the enormous number of distinct Chinese characters a typesetter had to manage, yet movable type continued to be used and improved in later periods. The Yuan scholar-official Wang Zhen implemented a faster typesetting process, improved Bi's baked-clay movable type character set with a wooden one, and experimented with tin-metal movable type. The wealthy printing patron Hua Sui (1439–1513) of the Ming dynasty established China's first metal movable type (using bronze) in 1490. In 1638, the Peking Gazette switched its printing process from woodblock to movable type. Yet it was during the Qing dynasty that massive printing projects began to employ movable type printing. This includes the printing of sixty-six copies of a 5,020-volume encyclopedia in 1725, the Complete Classics Collection of Ancient China, which necessitated the crafting of 250,000 movable type characters cast in bronze. By the 19th century the European-style printing press replaced the old Chinese methods of movable type, while traditional woodblock printing in modern East Asia is used only sparsely, for aesthetic reasons.
Hydraulic and nautical engineering
The most important nautical innovation of the Song period seems to have been the introduction of the magnetic mariner's compass, which permitted accurate navigation on the open sea regardless of the weather. The magnetized compass needle known in Chinese as the "south-pointing needle" was first described by Shen Kuo in his 1088 Dream Pool Essays and first mentioned in active use by sailors in Zhu Yu's 1119 Pingzhou Table Talks.
Image: A plan and side view of a canal pound lock, a concept pioneered in 984 by the Assistant Commissioner of Transport for Huainan, the engineer Qiao Weiyo.
There were other considerable advancements in hydraulic engineering and nautical technology during the Song dynasty. The 10th-century invention of the pound lock for canal systems allowed different water levels to be raised and lowered for separated segments of a canal, which significantly aided the safety of canal traffic and allowed for larger barges. Another Song-era innovation was the watertight bulkhead compartment, which allowed a ship's hull to be damaged without the vessel sinking. If ships were damaged, the Chinese of the 11th century employed drydocks to repair them while suspended out of the water. The Song used crossbeams to brace the ribs of ships in order to strengthen them in a skeletal-like structure. Stern-mounted rudders had been mounted on Chinese ships since the 1st century, as evidenced by a preserved Han tomb model of a ship. In the Song period, the Chinese devised a way to mechanically raise and lower rudders so that ships could travel in a wider range of water depths. The Song also arranged the protruding teeth of anchors in a circular pattern instead of in one direction; David Graff and Robin Higham state that this arrangement "[made] them more reliable" for anchoring ships.
Structural engineering and architecture
Architecture during the Song period reached new heights of sophistication. Authors such as Yu Hao and Shen Kuo wrote books outlining the field of architectural layouts, craftsmanship, and structural engineering in the 10th and 11th centuries, respectively. Shen Kuo preserved the written dialogues of Yu Hao when describing technical issues such as slanting struts built into pagoda towers for diagonal wind bracing. Shen Kuo also preserved Yu's specified dimensions and units of measurement for various building types. The architect Li Jie (1065–1110), who published the Yingzao Fashi ('Treatise on Architectural Methods') in 1103, greatly expanded upon the works of Yu Hao and compiled the standard building codes used by the central government agencies and by craftsmen throughout the empire. He addressed the standard methods of construction, design, and applications of moats and fortifications, stonework, greater woodwork, lesser woodwork, wood-carving, turning and drilling, sawing, bamboo work, tiling, wall building, painting and decoration, brickwork, glazed tile making, and provided proportions for mortar formulas in masonry. In his book, Li provided detailed and vivid illustrations of architectural components and cross-sections of buildings. These illustrations displayed various applications of corbel brackets, cantilever arms, mortise and tenon work of tie beams and cross beams, and diagrams showing the various building types of halls in graded sizes. He also outlined the standard units of measurement and standard dimensional measurements of all building components described and illustrated in his book.
Grandiose building projects were supported by the government, including the erection of towering Buddhist Chinese pagodas and the construction of enormous bridges (wood or stone, trestle or segmental arch). Many of the pagoda towers built during the Song period rose higher than ten stories. Some of the most famous are the Iron Pagoda built in 1049 during the Northern Song and the Liuhe Pagoda built in 1165 during the Southern Song, though there were others; the tallest is the Liaodi Pagoda, built in 1055 in Hebei. Some of the bridges reached considerable lengths, with many being wide enough to allow two lanes of cart traffic simultaneously over a waterway or ravine. The government also oversaw the construction of its own administrative offices, palace apartments, city fortifications, ancestral temples, and Buddhist temples.
The professions of the architect, craftsman, carpenter, and structural engineer were not seen as professionally equal to that of a Confucian scholar-official. Architectural knowledge had been passed down orally for thousands of years in China, in many cases from a father craftsman to his son. Structural engineering and architecture schools were known to have existed during the Song period; one prestigious engineering school was headed by the renowned bridge-builder Cai Xiang (1012–1067) in medieval Fujian province.
Besides existing buildings and technical literature of building manuals, Song dynasty artwork portraying cityscapes and other buildings aid modern-day scholars in their attempts to reconstruct and realize the nuances of Song architecture. Song dynasty artists such as Li Cheng, Fan Kuan, Guo Xi, Zhang Zeduan, Emperor Huizong of Song, and Ma Lin painted close-up depictions of buildings as well as large expanses of cityscapes featuring arched bridges, halls and pavilions, pagoda towers, and distinct Chinese city walls. The scientist and statesman Shen Kuo was known for his criticism relating to architecture, saying that it was more important for an artist to capture a holistic view of a landscape than it was to focus on the angles and corners of buildings. For example, Shen criticized the work of the painter Li Cheng for failing to observe the principle of "seeing the small from the viewpoint of the large" in portraying buildings.
There were also pyramidal tomb structures in the Song era, such as the Song imperial tombs located in Gongxian, Henan. Another Song dynasty tomb, at Baisha, features "elaborate facsimiles in brick of Chinese timber frame construction, from door lintels to pillars and pedestals to bracket sets, that adorn interior walls." The two large chambers of the Baisha tomb also feature conical-shaped roofs. Flanking the avenues leading to these tombs are lines of Song dynasty stone statues of officials, tomb guardians, animals, and legendary creatures.
Archaeology
Image: Scholars of the Song dynasty claim to have collected ancient relics dating back as far as the Shang dynasty, such as this bronze ding vessel.
In addition to the Song gentry's antiquarian pursuits of art collecting, scholar-officials during the Song became highly interested in retrieving ancient relics from archaeological sites, in order to revive the use of ancient vessels in ceremonies of state ritual. Scholar-officials of the Song period claimed to have discovered ancient bronze vessels that were created as far back as the Shang dynasty (1600–1046 BCE), which bore the oracle bone script of the Shang era. Some attempted to recreate these bronze vessels by using imagination alone, not by observing tangible evidence of relics; this practice was criticized by Shen Kuo in his work of 1088. Yet Shen Kuo had much more to criticize than this practice alone. Shen objected to the idea of his peers that ancient relics were products created by famous "sages" in lore or the ancient aristocratic class; he rightly attributed the discovered handicrafts and vessels from ancient times to the work of artisans and commoners from previous eras. He also disapproved of his peers' pursuit of archaeology simply to enhance state ritual, since Shen not only took an interdisciplinary approach to the study of archaeology but also emphasized the study of functionality, investigating the ancient relics' original processes of manufacture. Shen used ancient texts and existing models of armillary spheres to create one based on ancient standards; he described ancient weaponry such as the use of a scaled sighting device on crossbows; and while experimenting with ancient musical measures, he suggested hanging an ancient bell by using a hollow handle.
Despite the gentry's overriding interest in archaeology simply for reviving ancient state rituals, some of Shen's peers took a similar approach to the study of archaeology. His contemporary Ouyang Xiu (1007–1072) compiled an analytical catalogue of ancient rubbings on stone and bronze which pioneered ideas in early epigraphy and archaeology. During the 11th century, Song scholars discovered the ancient shrine of Wu Liang (78–151 CE), a scholar of the Han dynasty; they produced rubbings of the carvings and bas-reliefs decorating the walls of his tomb so that they could be analyzed elsewhere. On the unreliability of historical works written after the fact, the epigrapher and poet Zhao Mingcheng (1081–1129) stated "... the inscriptions on stone and bronze are made at the time the events took place and can be trusted without reservation, and thus discrepancies may be discovered." Historian R.C. Rudolph states that Zhao's emphasis on consulting contemporary sources for accurate dating is parallel with the concern of the German historian Leopold von Ranke (1795–1886), and was in fact emphasized by many Song scholars. The Song scholar Hong Mai (1123–1202) heavily criticized what he called the court's "ridiculous" archaeological catalogue Bogutu compiled during the Huizong reign periods of Zheng He and Xuan He (1111–1125). Hong Mai obtained old vessels from the Han dynasty and compared them with the descriptions offered in the catalogue, which he found so inaccurate he stated he had to "hold my sides with laughter." Hong Mai pointed out that the erroneous material was the fault of Chancellor Cai Jing, who prohibited scholars from reading and consulting written histories.
See also
References
Citations
Sources
Further reading
External links
Song dynasty at China Heritage Quarterly
Song dynasty at bcps.org
Song and Liao artwork
Song dynasty art with video commentary
The Newly Compiled Overall Geographical Survey
Category:10th-century establishments in China
Category:1279 disestablishments in Asia
Category:13th-century disestablishments in China
Category:960 establishments
Category:Confucian dynasties
Category:Dynasties of China
Category:Former countries in Chinese history
Category:Medieval East Asia
Category:States and territories disestablished in 1279
Category:States and territories established in the 960s
Battle of Britain

https://en.wikipedia.org/wiki/Battle_of_Britain
The Battle of Britain was a military campaign of the Second World War, in which the Royal Air Force (RAF) and the Fleet Air Arm (FAA) of the Royal Navy defended the United Kingdom against large-scale attacks by Nazi Germany's air force, the Luftwaffe. It was the first major military campaign fought entirely by air forces."92 Squadron – Geoffrey Wellum." Battle of Britain Memorial Flight via raf.mod.uk.. Retrieved: 17 November 2010, archived 2 March 2009. It takes its name from the speech given by Prime Minister Winston Churchill to the House of Commons on 18 June 1940: "What General Weygand called the 'Battle of France' is over. I expect that the Battle of Britain is about to begin."
The Germans had rapidly overwhelmed France and the Low Countries in the Battle of France, leaving Britain to face the threat of invasion by sea. The German high command recognised the difficulties of a seaborne attack while the Royal Navy controlled the English Channel and the North Sea. The primary objective of the German forces was to compel Britain to agree to a negotiated peace settlement.
The British officially recognise the battle's duration as being from 10 July until 31 October 1940, which overlaps the period of large-scale night attacks known as the Blitz, which lasted from 7 September 1940 to 11 May 1941. German historians do not follow this subdivision and regard the battle as a single campaign lasting from July 1940 to May 1941, including the Blitz.
In July 1940, the air and sea blockade began, with the Luftwaffe mainly targeting coastal-shipping convoys, as well as ports and shipping centres such as Portsmouth. On 16 July, Hitler ordered the preparation of Operation Sea Lion as a potential amphibious and airborne assault on Britain, to follow once the Luftwaffe had air superiority over the Channel. On 1 August, the Luftwaffe was directed to achieve air superiority over the RAF, with the aim of incapacitating RAF Fighter Command; 12 days later, it shifted the attacks to RAF airfields and infrastructure. As the battle progressed, the Luftwaffe also targeted factories involved in aircraft production and strategic infrastructure. Eventually, it employed terror bombing on areas of political significance and on civilians. In September, RAF Bomber Command night raids disrupted the German preparation of converted barges, and the Luftwaffe's failure to overwhelm the RAF forced Hitler to postpone and eventually cancel Operation Sea Lion. The Luftwaffe proved unable to sustain daylight raids, but their continued night-bombing operations on Britain became known as the Blitz.
Germany's failure to destroy Britain's air defences and force it out of the conflict was the first major German defeat in the Second World War.
Background
Strategic bombing during World War I introduced air attacks intended to cause panic among civilian populations, and led in 1918 to the merger of the British army and navy air services into the Royal Air Force (RAF). Its first Chief of the Air Staff, Hugh Trenchard, was among the military strategists of the 1920s, like Giulio Douhet, who saw air warfare as a new way to overcome the bloody stalemate of trench warfare. Interception was expected to be nearly impossible, with fighter planes no faster than bombers. Their slogan was that the bomber will always get through, and that the only defence was a deterrent bomber force capable of matching retaliation. Predictions were made that a bomber offensive would quickly cause thousands of deaths and civilian hysteria leading to capitulation. However, widespread pacifism following the horrors of the First World War contributed to a reluctance to provide resources.
Developing air strategies
Germany was forbidden a military air force by the 1919 Treaty of Versailles, and therefore air crew were trained by means of civilian and sport flying. Following a 1923 memorandum, the Deutsche Luft Hansa airline developed designs for aircraft such as the Junkers Ju 52, which could carry passengers and freight, but also be readily adapted into a bomber. In 1926, the secret Lipetsk fighter-pilot school began training Germans in the Soviet Union. Erhard Milch organised rapid expansion, and following the 1933 Nazi seizure of power, his subordinate Robert Knauss formulated a deterrence theory incorporating Douhet's ideas and Tirpitz's "risk theory". This proposed a fleet of heavy bombers to deter a preventive attack by France and Poland before Germany could fully rearm. A 1933–34 war game indicated a need for fighters and anti-aircraft protection as well as bombers. On 1 March 1935, the Luftwaffe was formally announced, with Walther Wever as Chief of Staff. The 1935 Luftwaffe doctrine for "Conduct of Air War" (Luftkriegführung) set air power within the overall military strategy, with critical tasks of attaining (local and temporary) air superiority and providing battlefield support for army and naval forces. Strategic bombing of industries and transport could be decisive longer-term options, dependent on opportunity or preparations by the army and navy. It could be used to overcome a stalemate, or used when only destruction of the enemy's economy would be conclusive. The list excluded bombing civilians to destroy homes or undermine morale, as that was considered a waste of strategic effort, but the doctrine allowed revenge attacks if German civilians were bombed. A revised edition was issued in 1940, and the continuing central principle of Luftwaffe doctrine was that destruction of enemy armed forces was of primary importance.
The RAF responded to Luftwaffe developments with its 1934 Expansion Plan A rearmament scheme, and in 1936 it was restructured into Bomber Command, Coastal Command, Training Command and Fighter Command. The last was under Hugh Dowding, who opposed the doctrine that bombers were unstoppable: the invention of radar at that time could allow early detection, and prototype monoplane fighters were significantly faster. Priorities were disputed, but in December 1937, the Minister in charge of Defence Coordination, Sir Thomas Inskip, sided with Dowding that "The role of our air force is not an early knock-out blow" but rather was "to prevent the Germans from knocking us out" and fighter squadrons were just as necessary as bomber squadrons.
The Spanish Civil War (1936–1939) gave the Luftwaffe Condor Legion the opportunity to test air fighting tactics with their new aeroplanes. Wolfram von Richthofen became an exponent of air power providing ground support to other services. The difficulty of accurately hitting targets prompted Ernst Udet to require that all new bombers had to be dive bombers, and led to the development of the Knickebein system for night time navigation. Priority was given to producing large numbers of smaller aeroplanes, and plans for a long-range, four-engined strategic bomber were cancelled.
First stages of the Second World War
After the evacuation of British and French soldiers from Dunkirk and the French surrender on 22 June 1940, Hitler mainly focused his energies on the possibility of invading the Soviet Union. He believed that the British, defeated on the continent and without European allies, would quickly come to terms. The Germans were so convinced of an imminent armistice that they began constructing street decorations for the homecoming parades of victorious troops. Although the British Foreign Secretary, Lord Halifax, and certain elements of the British public favoured a negotiated peace with an ascendant Germany, Churchill and a majority of his Cabinet refused to consider an armistice. Instead, Churchill used his skilful rhetoric to harden public opinion against capitulation and prepare the British for a long war.
The Battle of Britain has the unusual distinction that it gained its name before being fought. The name derives from the "This was their finest hour" speech delivered by Winston Churchill in the House of Commons on 18 June, more than three weeks prior to the generally accepted date for the start of the battle.
German aims and directives
When war commenced, Hitler and the OKW (Oberkommando der Wehrmacht or "High Command of the Armed Forces") issued a series of directives ordering, planning and stating strategic objectives. "Directive No. 1 for the Conduct of the War", dated 31 August 1939, instructed the invasion of Poland on 1 September as planned, and also set out the circumstances under which the Luftwaffe might potentially conduct "operations against England".
Both France and the UK declared war on Germany; on 9 October, Hitler's "Directive No. 6" planned the offensive to defeat these allies and "win as much territory as possible in the Netherlands, Belgium, and northern France to serve as a base for the successful prosecution of the air and sea war against England"., Directive No. 6 for the Conduct of the War, Berlin, 9 October 1939 On 29 November, OKW "Directive No. 9 – Instructions For Warfare Against The Economy Of The Enemy" stated that once this coastline had been secured, the Luftwaffe together with the Kriegsmarine (German Navy) was to blockade UK ports with sea mines. They were to attack shipping and warships and make air attacks on shore installations and industrial production. This directive remained in force in the first phase of the Battle of Britain., Directive No. 9 – Instructions For Warfare Against The Economy Of The Enemy, Berlin, 29 November 1939. It was reinforced on 24 May during the Battle of France by "Directive No. 13", which authorised the Luftwaffe "to attack the English homeland in the fullest manner, as soon as sufficient forces are available. This attack will be opened by an annihilating reprisal for English attacks on the Ruhr Basin.", Directive No. 13, Headquarters, 24 May 1940
By the end of June 1940, Germany had defeated Britain's allies on the continent, and on 30 June the OKW Chief of Staff, Alfred Jodl, issued his review of options to increase pressure on Britain to agree to a negotiated peace. The first priority was to eliminate the RAF and gain air supremacy. Intensified air attacks against shipping and the economy could affect food supplies and civilian morale in the long term. Reprisal attacks of terror bombing had the potential to cause quicker capitulation, but the effect on morale was uncertain. On the same day, the Luftwaffe Commander-in-Chief, Hermann Göring issued his operational directive: to destroy the RAF, thus protecting German industry, and also to block overseas supplies to Britain. The German Supreme Command argued over the practicality of these options.
In "Directive No. 16 – On preparations for a landing operation against England" on 16 July, Hitler required readiness by mid-August for the possibility of an invasion he called Operation Sea Lion, unless the British agreed to negotiations. The Luftwaffe reported that it would be ready to launch its major attack early in August. The Kriegsmarine Commander-in-Chief, Grand Admiral Erich Raeder, continued to highlight the impracticality of these plans and said sea invasion could not take place before early 1941. Hitler now argued that Britain was holding out in hope of assistance from Russia, and the Soviet Union was to be invaded by mid 1941. Göring met his air fleet commanders, and on 24 July issued "Tasks and Goals" of firstly gaining air supremacy, secondly protecting invasion forces and attacking the Royal Navy's ships. Thirdly, they were to blockade imports, bombing harbours and stores of supplies.
Hitler's "Directive No. 17 – For the conduct of air and sea warfare against England" issued on 1 August attempted to keep all the options open. The Luftwaffe's Adlertag campaign was to start around 5 August, subject to weather, with the aim of gaining air superiority over southern England as a necessary precondition of invasion, to give credibility to the threat and give Hitler the option of ordering the invasion. The intention was to incapacitate the RAF so much that the UK would feel open to air attack, and would begin peace negotiations. It was also to isolate the UK and damage war production, beginning an effective blockade.Directive No. 17 – For the conduct of air and sea warfare against England , Führer Headquarters, 1 August 1940. Following severe Luftwaffe losses, Hitler agreed at a 14 September OKW conference that the air campaign was to intensify regardless of invasion plans. On 16 September, Göring gave the order for this change in strategy, to the first independent strategic bombing campaign.
Negotiated peace or neutrality
Hitler's 1925 book Mein Kampf mostly set out his hatreds: he only admired ordinary German World War I soldiers and Britain, which he saw as an ally against communism. In 1935 Hermann Göring welcomed news that Britain, as a potential ally, was rearming. In 1936 he promised assistance to defend the British Empire, asking only a free hand in Eastern Europe, and repeated this to Lord Halifax in 1937. That year, von Ribbentrop met Churchill with a similar proposal; when rebuffed, he told Churchill that interference with German domination would mean war. To Hitler's great annoyance, all his diplomacy failed to stop Britain from declaring war when he invaded Poland. During the fall of France, he repeatedly discussed peace efforts with his generals.
When Churchill came to power, there was still wide support for Halifax, who as Foreign Secretary openly argued for peace negotiations in the tradition of British diplomacy, to secure British independence without war. On 20 May, Halifax secretly requested a Swedish businessman to make contact with Göring to open negotiations. Shortly afterwards, in the May 1940 War Cabinet Crisis, Halifax argued for negotiations involving the Italians, but this was rejected by Churchill with majority support. An approach made through the Swedish ambassador on 22 June was reported to Hitler, making peace negotiations seem feasible. Throughout July, as the battle started, the Germans made wider attempts to find a diplomatic solution. On 2 July, the day the armed forces were asked to start preliminary planning for an invasion, Hitler got von Ribbentrop to draft a speech offering peace negotiations. On 19 July Hitler made this speech to the German Parliament in Berlin, appealing "to reason and common sense", and said he could "see no reason why this war should go on".Hitler 1940 My Last Appeal to Great Britain His sombre conclusion was received in silence, but he did not suggest negotiations and this was perceived as being effectively an ultimatum by the British government, which rejected the offer. Halifax kept trying to arrange peace until he was sent to Washington in December as ambassador, and in January 1941 Hitler expressed continued interest in negotiating peace with Britain.
Blockade and siege
A May 1939 planning exercise by Luftflotte 3 found that the Luftwaffe lacked the means to do much damage to Britain's war economy beyond laying naval mines. Joseph Schmid, in charge of Luftwaffe intelligence, presented a report on 22 November 1939, stating that, "Of all Germany's possible enemies, Britain is the most dangerous." This "Proposal for the Conduct of Air Warfare" argued for a counter to the British blockade and said "Key is to paralyse the British trade". Instead of the Wehrmacht attacking the French, the Luftwaffe with naval assistance was to block imports to Britain and attack seaports. "Should the enemy resort to terror measures, for example, to attack our towns in western Germany", they could retaliate by bombing industrial centres and London. Parts of this appeared on 29 November in "Directive No. 9" as future actions once the coast had been conquered. On 24 May 1940 "Directive No. 13" authorised attacks on the blockade targets, as well as retaliation for RAF bombing of industrial targets in the Ruhr.
After the defeat of France, the OKW felt they had won the war, and some more pressure would persuade Britain to give in. On 30 June, the OKW Chief of Staff Alfred Jodl issued his paper setting out options: the first was to increase attacks on shipping, economic targets and the RAF: air attacks and food shortages were expected to break morale and lead to capitulation. Destruction of the RAF was the first priority, and invasion would be a last resort. Göring's operational directive issued the same day ordered the destruction of the RAF to clear the way for attacks cutting off seaborne supplies to Britain. It made no mention of invasion.
Invasion plans
In November 1939, the OKW reviewed the potential for an air- and seaborne invasion of Britain: the Kriegsmarine was faced with the threat the Royal Navy's larger Home Fleet posed to a crossing of the English Channel, and together with the German Army viewed control of airspace as a necessary precondition. The German navy thought air superiority alone was insufficient; the German naval staff had already produced a study (in 1939) on the possibility of an invasion of Britain and concluded that it also required naval superiority.Operation Sea Lion – The German Invasion Plans section (David Shears) Thornton Cox 1975 – p. 156 The Luftwaffe said invasion could only be "the final act in an already victorious war."
Hitler first discussed the idea of an invasion at a 21 May 1940 meeting with Grand Admiral Erich Raeder, who stressed the difficulties and his own preference for a blockade. OKW Chief of Staff Jodl's 30 June report described invasion as a last resort once the British economy had been damaged and the Luftwaffe had full air superiority. On 2 July, OKW requested preliminary plans.
In Britain, Churchill described "the great invasion scare" as "serving a very useful purpose" by "keeping every man and woman tuned to a high pitch of readiness". Historian Len Deighton stated that on 10 July Churchill advised the War Cabinet that invasion could be ignored, as it "would be a most hazardous and suicidal operation".
On 11 July, Hitler agreed with Raeder that invasion would be a last resort, and the Luftwaffe advised that gaining air superiority would take 14 to 28 days. Hitler met his army chiefs, von Brauchitsch and Halder, at the Berchtesgaden Obersalzberg on 13 July where they presented detailed plans on the assumption that the navy would provide safe transport. Von Brauchitsch and Halder were surprised that Hitler took no interest in the invasion plans, unlike his usual attitude toward military operations, but on 16 July he issued Directive No. 16, ordering preparations for Operation Sea Lion.
The navy insisted on a narrow beachhead and an extended period for landing troops; the army rejected these plans: the Luftwaffe could begin an air attack in August. Hitler held a meeting of his army and navy chiefs on 31 July. The navy said 22 September was the earliest possible date and proposed postponement until the following year, but Hitler preferred September. He then told von Brauchitsch and Halder that he would decide on the landing operation eight to fourteen days after the air attack began. On 1 August, he issued Directive No. 17 for intensified air and sea warfare, to begin with Adlertag on or after 5 August, subject to weather, keeping options open for negotiated peace or blockade and siege.
Independent air attack
Under the continuing influence of the 1935 "Conduct of the Air War" doctrine, the main focus of the Luftwaffe command (including Göring) was in concentrating attacks to destroy enemy armed forces on the battlefield, and "blitzkrieg" close air support of the army succeeded brilliantly. They reserved strategic bombing for a stalemate situation or revenge attacks, but doubted if this could be decisive on its own and regarded bombing civilians to destroy homes or undermine morale as a waste of strategic effort.
The defeat of France in June 1940 introduced the prospect for the first time of independent air action against Britain. A July Fliegercorps I paper asserted that Germany was by definition an air power: "Its chief weapon against England is the Air Force, then the Navy, followed by the landing forces and the Army." In 1940, the Luftwaffe would undertake a "strategic offensive ... on its own and independent of the other services", according to an April 1944 German account of their military mission. Göring was convinced that strategic bombing could win objectives that were beyond the army and navy, and gain political advantages in the Third Reich for the Luftwaffe and himself. He expected air warfare to decisively force Britain to negotiate, as all in the OKW hoped, and the Luftwaffe took little interest in planning to support an invasion.
Opposing forces
The Luftwaffe faced a more capable opponent than any it had previously met: a sizeable, highly coordinated, well-supplied, modern air force.
Fighters
The Luftwaffe's Messerschmitt Bf 109E and Bf 110C fought against the RAF's workhorse Hurricane Mk I and the less numerous Spitfire Mk I; Hurricanes outnumbered Spitfires in RAF Fighter Command by about 2:1 when war broke out. The Bf 109E had a better climb rate and was up to 40 mph faster in level flight than the Rotol (constant speed propeller) equipped Hurricane Mk I, depending on altitude."Report on Comparative Trials of Hurricane versus Messerschmitt 109." wwiiaircraftperformance.org. Retrieved: 19 March 2015. The speed and climb disparity with the original non-Rotol Hurricane was even greater. By mid-1940, all RAF Spitfire and Hurricane fighter squadrons converted to 100 octane aviation fuel, which allowed their Merlin engines to generate significantly more power and an approximately 30 mph increase in speed at low altitudes"Calibration of Hurricane L1717 Merlin II Engine." wwiiaircraftperformance.org. Retrieved: 19 March 2015."RAE Chart of Spitfire I, Merlin III." wwiiaircraftperformance.org. Retrieved: 19 March 2015. through the use of an Emergency Boost Override. In September 1940, the more powerful Mk IIa series 1 Hurricanes started entering service in small numbers. This version was some 20 mph faster than the original (non-Rotol) Mk I, though it was still 15 to 20 mph slower than a Bf 109 (depending on altitude).
The performance of the Spitfire over Dunkirk came as a surprise to the Jagdwaffe, although the German pilots retained a strong belief that the 109 was the superior fighter. The British fighters were equipped with eight Browning .303 (7.7mm) machine guns, while most Bf 109Es had two 20mm cannons firing explosive shells, supplemented by two 7.92mm machine guns. The 20mm cannons were much more effective than the .303; during the Battle it was not unknown for damaged German bombers to limp home with up to two hundred .303 hits.RAF yearbook 1978 p61 At some altitudes, the Bf 109 could outclimb the British fighter. It could also engage in vertical-plane negative-g manoeuvres without the engine cutting out, because its DB 601 engine used fuel injection; this allowed the 109 to dive away from attackers more readily than the carburettor-equipped Merlin. On the other hand, the Bf 109E had the disadvantage of a much larger turning circle than its two foes. In general, though, as Alfred Price noted in The Spitfire Story, the performance differences between the opposing fighters were marginal, and in combat they were usually outweighed by tactical factors such as surprise, altitude, numbers, and pilot ability.
The Bf 109E was also used as a Jabo (Jagdbomber, fighter-bomber); the E-4/B and E-7 models could carry a 250 kg bomb underneath the fuselage, the latter model arriving during the battle. The Bf 109, unlike the Stuka, could fight on equal terms with RAF fighters after releasing its ordnance.
At the start of the battle, the twin-engined Messerschmitt Bf 110C long-range Zerstörer ("Destroyer") was also expected to engage in air-to-air combat while escorting the Luftwaffe bomber fleet. Although the 110 was faster than the Hurricane and almost as fast as the Spitfire, its lack of manoeuvrability and acceleration meant that it was a failure as a long-range escort fighter. On 13 and 15 August, thirteen and thirty aircraft were lost, the equivalent of an entire Gruppe, and the type's worst losses during the campaign. This trend continued with a further eight and fifteen lost on 16 and 17 August.
The most successful role of the Bf 110 during the battle was as a Schnellbomber (fast bomber). The Bf 110 usually used a shallow dive to bomb the target and escape at high speed. One unit, Erprobungsgruppe 210 – initially formed as the service test unit (Erprobungskommando) for the emerging successor to the 110, the Me 210 – proved that the Bf 110 could still be used to good effect in attacking small or "pinpoint" targets.
The RAF's Boulton Paul Defiant had some initial success over Dunkirk because of its resemblance to the Hurricane; Luftwaffe fighters attacking from the rear were surprised by its unusual gun turret, which could fire to the rear. During the Battle of Britain, it proved hopelessly outclassed. The Defiant, designed to attack bombers without fighter escort, lacked any form of forward-firing armament, and the heavy turret and second crewman meant it could not outrun or outmanoeuvre either the Bf 109 or Bf 110. By the end of August, after disastrous losses, the aircraft was withdrawn from daylight service.
Bombers
The Luftwaffe's primary bombers were the Heinkel He 111, Dornier Do 17, and Junkers Ju 88 for level bombing at medium to high altitudes, and the Junkers Ju 87 Stuka for dive-bombing. The He 111 was used in greater numbers than the others during the conflict, and was better known, partly due to its distinctive wing shape. Each level bomber also had a few reconnaissance versions accompanying them that were used during the battle.
Although it had been successful in previous Luftwaffe engagements, the Stuka suffered heavy losses in the Battle of Britain, particularly on 18 August, due to its slow speed and vulnerability to fighter interception after dive-bombing a target. As the losses went up Stuka units, with limited payload and range in addition to their vulnerability, were largely removed from operations over England and diverted to concentrate on shipping, until eventually re-deployed to the Eastern Front in 1941. For some raids they were called back, such as on 13 September to attack Tangmere airfield.
The remaining three bomber types differed in their capabilities; the Dornier Do 17 was both the slowest and had the smallest bomb load; the Ju 88 was the fastest once its mainly external bomb load was dropped; and the He 111 carried the largest, internal, bomb load. All three bomber types suffered heavy losses from the home-based British fighters, but the Ju 88 had significantly lower loss rates due to its greater speed and its ability to dive out of trouble (it was originally designed as a dive bomber). The German bombers required constant protection by the Luftwaffe's insufficiently numerous fighter force. Bf 109Es were ordered to support more than 300–400 bombers on any given day. Later in the conflict, when night bombing became more frequent, all three were used. Due to its smaller bomb load, the lighter Do 17 was used less than the He 111 and Ju 88 for this purpose.
On the British side, three bomber types were mostly used on night operations against targets such as factories, invasion ports and railway centres; the Armstrong Whitworth Whitley, the Handley-Page Hampden and the Vickers Wellington were classified as heavy bombers by the RAF, although the Hampden was a medium bomber comparable to the He 111. The twin-engined Bristol Blenheim and the obsolescent single-engined Fairey Battle were both light bombers; the Blenheim was the most numerous of the aircraft equipping RAF Bomber Command, and was used in attacks against shipping, ports, airfields and factories on the continent by day and by night. The Fairey Battle squadrons, which had suffered heavy losses in daylight attacks during the Battle of France, were brought up to strength with reserve aircraft and continued to operate at night in attacks against the invasion ports, until the Battle was withdrawn from UK front-line service in October 1940."Fairey Battle." airlandseaweapons.devhub.com, 16 August 2009. Retrieved: 3 November 2010.
Pilots
Before the war, the RAF's processes for selecting potential candidates were opened to men of all social classes through the creation in 1936 of the RAF Volunteer Reserve, which "... was designed to appeal, to ... young men ... without any class distinctions ..." The older squadrons of the Royal Auxiliary Air Force did retain some of their upper-class exclusiveness, but their numbers were soon swamped by the newcomers of the RAFVR; by 1 September 1939, 6,646 pilots had been trained through the RAFVR.
By mid-1940, there were about 9,000 pilots in the RAF to man about 5,000 aircraft, most of which were bombers. Fighter Command was never short of pilots, but the problem of finding sufficient numbers of fighter pilots became acute by mid-August 1940. With aircraft production running at 300 planes each week, only 200 pilots were trained in the same period. In addition, more pilots were allocated to squadrons than there were aircraft, as this allowed squadrons to maintain operational strength despite casualties and still provide for pilot leave. Another factor was that only about 30% of the 9,000 pilots were assigned to operational squadrons; 20% of the pilots were involved in conducting pilot training, and a further 20% were undergoing further instruction, like those offered in Canada and in Southern Rhodesia to the Commonwealth trainees, although already qualified. The rest were assigned to staff positions, since RAF policy dictated that only pilots could make many staff and operational command decisions, even in engineering matters. At the height of the fighting, and despite Churchill's insistence, only 30 pilots were released to the front line from administrative duties.The pilots occupying these administrative positions included such officers as Dowding, Park and Leigh-Mallory and the numbers actually fit to serve in front line fighter squadrons are open to question.
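The percentages and totals quoted in this paragraph imply a rough breakdown, made explicit in the short sketch below; the figures are those given above, and the arithmetic is purely illustrative.

```python
# Rough breakdown implied by the figures quoted above (mid-1940).
total_pilots = 9_000

operational    = 0.30 * total_pilots   # assigned to operational squadrons
instructing    = 0.20 * total_pilots   # conducting pilot training
under_training = 0.20 * total_pilots   # undergoing further instruction
staff_and_other = total_pilots - operational - instructing - under_training

print(operational, instructing, under_training, staff_and_other)
# -> 2700.0 1800.0 1800.0 2700.0

# Weekly gap between aircraft output and newly trained pilots:
print(300 - 200)   # 100 more fighters built per week than pilots trained
```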
For these reasons, and the permanent loss of 435 pilots during the Battle of France alone"A Short History of the Royal Air Force," pp. 99–100. RAF.. Retrieved: 10 July 2011. along with many more wounded, and others lost in Norway, the RAF had fewer experienced pilots at the start of the Battle of Britain than the Luftwaffe. It was the lack of trained pilots in the fighting squadrons, rather than the lack of aircraft, that became the greatest concern for Air Chief Marshal Hugh Dowding, commander of Fighter Command. Drawing from regular RAF forces, the Auxiliary Air Force and the Volunteer Reserve, the British were able to muster some 1,103 fighter pilots on 1 July. Replacement pilots, with little flight training and often no gunnery training, suffered high casualty rates, exacerbating the problem.
The Luftwaffe, on the other hand, were able to muster a large number (1,450) of experienced fighter pilots. Drawing from a cadre of Spanish Civil War veterans, these pilots already had comprehensive courses in aerial gunnery and instructions in tactics suited for fighter-versus-fighter combat. Training manuals discouraged heroism, stressing the importance of attacking only when the odds were in the pilot's favour. Despite the high levels of experience, German fighter formations did not provide a sufficient reserve of pilots to allow for losses and leave, and the Luftwaffe was unable to produce enough pilots to prevent a decline in operational strength as the battle progressed.
International participation
Allies
About 20% of pilots who took part in the battle were from non-British countries. The Royal Air Force roll of honour for the Battle of Britain recognises 595 non-British pilots (out of 2,936) as flying at least one authorised operational sortie with an eligible unit of the RAF or Fleet Air Arm between 10 July and 31 October 1940. These included 145 Poles, 127 New Zealanders, 112 Canadians, 88 Czechoslovaks, 10 Irish, 32 Australians, 28 Belgians, 25 South Africans, 13 French, 9 Americans, 3 Southern Rhodesians and individuals from Jamaica, Barbados and Newfoundland."The Airmen of the Battle of Britain" bbm.org.uk. Retrieved: 29 January 2017. "Altogether in the fighter battles, the bombing raids, and the various patrols flown between 10 July and 31 October 1940 by the Royal Air Force, 1495 aircrew were killed, of whom 449 were fighter pilots, 718 aircrew from Bomber Command, and 280 from Coastal Command. Among those killed were 47 airmen from Canada, 24 from Australia, 17 from South Africa, 30 from Poland, 20 from Czechoslovakia and six from Belgium. Forty-seven New Zealanders lost their lives, including 15 fighter pilots, 24 bomber and eight coastal aircrew. The names of these Allied and Commonwealth airmen are inscribed in a memorial book that rests in the Battle of Britain Chapel in Westminster Abbey. In the chapel is a stained glass window which contains the badges of the fighter squadrons which operated during the battle and the flags of the nations to which the pilots and aircrew belonged.Owen, R.E, New Zealanders with the Royal Air Force. Wellington, New Zealand: Government Printer, 1953, Volume 1, Chapter 4, p. 71. These pilots, some of whom had to flee their home countries because of German invasions, fought with distinction.
The No. 303 Polish Fighter Squadron was the highest-scoring fighter squadron of the Battle of Britain, even though it joined the fray two months after the battle had begun.Sikora, P. Poles in the Battle of Britain: A Photographic Album of the Polish 'Few' . Barnsley, Air World (Pen & Sword): 2020 "Had it not been for the magnificent material contributed by the Polish squadrons and their unsurpassed gallantry," wrote Air Chief Marshal Hugh Dowding, head of RAF Fighter Command, "I hesitate to say that the outcome of the Battle would have been the same."
Axis
At the urging of Italian dictator Benito Mussolini, an element of the Italian Royal Air Force (Regia Aeronautica) called the Italian Air Corps (Corpo Aereo Italiano or CAI) took part in the later stages of the Battle of Britain. It first saw action on 24 October 1940 when a force of Fiat BR.20 medium bombers attacked the port at Harwich. The CAI achieved limited success during this and subsequent raids. The unit was redeployed in January 1941, having claimed to have shot down at least nine British aircraft. This was inaccurate and their actual successes were much lower.
Luftwaffe strategy
The indecision of OKL over what to do was reflected in shifts in Luftwaffe strategy. The doctrine of concentrated close air support of the army at the battlefront succeeded against Poland, Denmark and Norway, the Low Countries and France but incurred significant losses. The Luftwaffe had to build or repair bases in the conquered territories, and rebuild their strength. In June 1940 they began regular armed reconnaissance flights and sporadic Störangriffe, nuisance raids of one or a few bombers by day and night. These gave crews practice in navigation and avoiding air defences and set off air raid alarms which disturbed civilian morale. Similar nuisance raids continued throughout the battle, into late 1940. Scattered naval mine-laying sorties began at the outset and increased gradually over the battle period.
Göring's operational directive of 30 June ordered the destruction of the RAF, including the aircraft industry, both to end RAF bombing raids on Germany and to facilitate attacks on ports and storage in the Luftwaffe blockade of Britain. Attacks on Channel shipping in the Kanalkampf began on 4 July, and were formalised on 11 July in an order by Hans Jeschonnek which added the arms industry as a target. On 16 July, Directive No. 16 ordered preparations for Operation Sea Lion and on the next day the Luftwaffe was ordered to stand by in full readiness. Göring met his air fleet commanders and on 24 July issued orders for gaining air supremacy, protecting the army and navy if the invasion went ahead, attacking Royal Navy ships and continuing the blockade. Once the RAF had been defeated, Luftwaffe bombers were to move forward beyond London without the need for fighter escort, destroying military and economic targets.
At a meeting on 1 August the command reviewed plans produced by each Fliegerkorps, which offered differing proposals for targets, including whether to bomb airfields, but failed to decide a priority. Intelligence reports gave Göring the impression that the RAF was almost defeated, so that raids would attract British fighters for the Luftwaffe to shoot down. On 6 August he finalised plans for Adlertag (Eagle Day) with Kesselring, Sperrle and Stumpff; the destruction of RAF Fighter Command in the south of England was to take four days, with lightly escorted small bomber raids leaving the main fighter force free to attack RAF fighters. Bombing of military and economic targets was then to systematically extend up to the Midlands until daylight attacks could proceed unhindered over the whole of Britain.
Bombing of London was to be held back while these night time "destroyer" attacks proceeded over other urban areas, then, in the culmination of the campaign, a major attack on the capital was intended to cause a crisis, with refugees fleeing London just as Operation Sea Lion was to begin. With hopes fading for the possibility of invasion, on 4 September Hitler authorised a main focus on day and night attacks on tactical targets, with London as the main target, which became known as the Blitz. With increasing difficulty in defending bombers in day raids, the Luftwaffe shifted to a strategic bombing campaign of night raids aiming to overcome British resistance by damaging infrastructure and food stocks, though intentional terror bombing of civilians was not sanctioned.
Regrouping of Luftwaffe in Luftflotten
The Luftwaffe regrouped after the Battle of France into three Luftflotten (Air Fleets) opposite Britain's southern and eastern coasts. Luftflotte 2 (Generalfeldmarschall Albert Kesselring) was responsible for the bombing of south-east England and the London area. Luftflotte 3 (Generalfeldmarschall Hugo Sperrle) concentrated on the West Country, Wales, the Midlands and north-west England. Luftflotte 5 (Generaloberst Hans-Jürgen Stumpff), from its headquarters in Norway, attacked the north of England and Scotland. As the battle progressed, command responsibility shifted, with Luftflotte 3 taking more responsibility for the night bombing while the main daylight operations fell upon Luftflotte 2.
Initial Luftwaffe estimates were that it would take four days to defeat RAF Fighter Command in southern England. This would be followed by a four-week offensive during which the bombers and long-range fighters would destroy all military installations throughout the country and wreck the British aircraft industry. The campaign was planned to begin with attacks on airfields near the coast, gradually moving inland to attack the ring of sector airfields defending London. Later reassessments gave the Luftwaffe five weeks, from 8 August to 15 September, to establish temporary air superiority over England. Fighter Command had to be destroyed, either on the ground or in the air, yet the Luftwaffe had to preserve its strength to be able to support the invasion; the Luftwaffe had to maintain a high "kill ratio" over the RAF fighters. The only alternative to the goal of air superiority was a terror bombing campaign aimed at the civilian population but this was considered a last resort and it was forbidden by Hitler. The Luftwaffe kept broadly to this scheme but its commanders had differences of opinion on strategy. Sperrle wanted to eradicate the air defence infrastructure by bombing it. Kesselring championed attacking London directly, either to bombard the British government into submission or to draw RAF fighters into a decisive battle. Göring did nothing to resolve this disagreement between his commanders and gave only vague directives during the initial stages of the battle, apparently unable to decide upon which strategy to pursue.
Tactics
Fighter formations
Luftwaffe formations employed a loose section of two (called a Rotte [pack]), based on a leader (Rottenführer) followed by his wingman (the Rottenhund, "pack dog", or Katschmarek) at a distance roughly equal to the turning radius of a Bf 109, enabling both aircraft to turn together at high speed. The Katschmarek flew slightly higher and was trained always to stay with his leader. With more room between them, both could spend less time maintaining formation and more time looking around and covering each other's blind spots. Attacking aircraft could be sandwiched between the two 109s. The formation was developed from principles formulated by the First World War ace Oswald Boelcke in 1916. In 1934 the Finnish Air Force adopted similar formations, called partio (patrol; two aircraft) and parvi (two patrols; four aircraft), for similar reasons, though Luftwaffe pilots during the Spanish Civil War (led by Günther Lützow and Werner Mölders, among others) are generally given credit.Nikunen, Heikki. "The Finnish Fighter Tactics and Training Before and During the WW II." FI: Saunalahti, January 2006. Retrieved: 26 April 2008. The Rotte allowed the Rottenführer to concentrate on shooting down aircraft but few wingmen had the chance, leading to some resentment in the lower ranks where it was felt that the high scores came at their expense. Two Rotten combined as a Schwarm, where all the pilots could watch what was happening around them. Each Schwarm in a Staffel flew at staggered heights and with a wide lateral spacing between them, making the formation difficult to spot at longer ranges and allowing for a great deal of flexibility. By using a tight "cross-over" turn, a Schwarm could quickly change direction.
The Bf 110s adopted the same Schwarm formation as the 109s but were seldom able to use this to the same advantage. The Bf 110's most successful method of attack was the "bounce" from above. When attacked, Zerstörergruppen increasingly resorted to forming large defensive circles, where each Bf 110 guarded the tail of the aircraft ahead of it. Göring ordered that they be renamed "offensive circles" in a vain bid to improve rapidly declining morale. These conspicuous formations were often successful in attracting RAF fighters that were sometimes "bounced" by high-flying Bf 109s. This led to the often repeated misconception that the Bf 110s were escorted by Bf 109s.
Higher-level dispositions
Luftwaffe tactics were influenced by their fighters. The Bf 110 proved too vulnerable against the nimble single-engined RAF fighters and the bulk of fighter escort duties devolved to the Bf 109. Fighter tactics were then complicated by bomber crews who demanded closer protection. After the hard-fought battles of 15 and 18 August, Göring met his unit leaders. The need for the fighters to meet up on time with the bombers was stressed. It was also decided that one bomber Gruppe could only be properly protected by several Gruppen of 109s. Göring stipulated that as many fighters as possible were to be left free for Freie Jagd ("Free Hunts"): free-roving fighter sweeps that preceded a raid to try to clear defenders out of the raid's path. The Ju 87 units, which had suffered heavy casualties, were only to be used under favourable circumstances. In early September, due to increasing complaints from the bomber crews about RAF fighters seemingly able to get through the escort screen, Göring ordered an increase in close escort duties. This decision shackled many of the Bf 109s to the bombers and, although they were more successful at protecting the bombers, casualties amongst the fighters mounted, primarily because they were forced to fly and manoeuvre at reduced speeds.
The Luftwaffe varied its tactics to break Fighter Command. It launched many Freie Jagd sweeps to draw up RAF fighters. RAF fighter controllers were often able to detect these and position squadrons to avoid them, keeping to Dowding's plan to preserve fighter strength for the bomber formations. The Luftwaffe also tried using small formations of bombers as bait, covering them with large numbers of escorts. This was more successful, but escort duty kept the fighters tied to the slower bombers, making them more vulnerable.
By September, standard tactics for raids had become an amalgam of techniques. A Freie Jagd would precede the main attack formations. The bombers would fly in at altitude, closely escorted by fighters. Escorts were divided into two parts (usually Gruppen), some operating close to the bombers and others a few hundred yards away and a little above. If the formation was attacked from the starboard side, the starboard section engaged the attackers, the top section moving to starboard and the port section to the top position. If the attack came from the port side the system was reversed. British fighters coming from the rear were engaged by the rear section and the two outside sections similarly moving to the rear. If the threat came from above, the top section went into action while the side sections gained height to be able to follow RAF fighters down as they broke away. If attacked, all sections flew in defensive circles. These tactics were skilfully evolved and carried out and were difficult to counter.
Adolf Galland noted that fighter protection for the bombers created many problems for the escorting pilots, who felt that whatever they did, they were bound to be wrong.
The biggest disadvantage faced by Bf 109 pilots was that without the benefit of long-range drop tanks (which were introduced in limited numbers in the late stages of the battle), usually of 300-litre capacity, the 109s had an endurance of just over an hour and, for the 109E, a range of about 660 km (410 mi) on internal fuel. Once over Britain, a 109 pilot had to keep an eye on a red "low fuel" light on the instrument panel: once this was illuminated, he was forced to turn back and head for France. With the prospect of two long flights over water and knowing their range was substantially reduced when escorting bombers or during combat, the Jagdflieger coined the term Kanalkrankheit or "Channel sickness".
Intelligence
The Luftwaffe was ill-served by its lack of military intelligence about the British defences. The German intelligence services were fractured and plagued by rivalry; their performance was "amateurish". By 1940, there were few German agents operating in Great Britain and a handful of attempts to insert spies into the country were foiled.
As a result of intercepted radio transmissions, the Germans began to realise that the RAF fighters were being controlled from ground facilities; in July and August 1939, for example, the airship Graf Zeppelin, which was packed with equipment for listening in on RAF radio and RDF transmissions, flew around the coasts of Britain. Although the Luftwaffe correctly interpreted these new ground control procedures, they were incorrectly assessed as being rigid and ineffectual. The British radar system was well known to the Luftwaffe from intelligence gathered before the war, but the highly developed "Dowding system" linked with fighter control had been a well-kept secret."Lt Col Earle Lund, USAF, p. 13." ProFTPd. Retrieved: 13 June 2008. Even when good information existed, such as a November 1939 Abwehr assessment of Fighter Command strengths and capabilities by Abteilung V, it was ignored if it did not match conventional preconceptions.
On 16 July 1940, Abteilung V, commanded by Oberstleutnant "Beppo" Schmid, produced a report on the RAF and on Britain's defensive capabilities which was adopted by the frontline commanders as a basis for their operational plans. One of the most conspicuous failures of the report was the lack of information on the RAF's RDF network and control systems capabilities; it was assumed that the system was rigid and inflexible, with the RAF fighters being "tied" to their home bases.Abteilung V Intelligence Appreciation of the RAF (see "Appendix 4"). ProFTPd. Retrieved: 13 June 2008. The report ended with an optimistic (and, as it turned out, erroneous) assessment of the Luftwaffe's superiority over Fighter Command.
Because of this assessment, reinforced by another, more detailed report issued on 10 August, there was a mindset in the ranks of the Luftwaffe that the RAF would run out of frontline fighters. The Luftwaffe believed it was weakening Fighter Command at three times the actual attrition rate. Many times, the leadership believed Fighter Command's strength had collapsed, only to discover that the RAF was able to send up defensive formations at will.
Throughout the battle, the Luftwaffe had to use numerous reconnaissance sorties to make up for poor intelligence. Reconnaissance aircraft (initially mostly Dornier Do 17s, but increasingly Bf 110s) proved easy prey for British fighters, as it was seldom possible for them to be escorted by Bf 109s. Thus, the Luftwaffe operated "blind" for much of the battle, unsure of its enemy's true strengths, capabilities, and deployments. Many of the Fighter Command airfields were never attacked, while raids against supposed fighter airfields fell instead on bomber or coastal defence stations. The results of bombing and air fighting were consistently exaggerated, due to inaccurate claims, over-enthusiastic reports and the difficulty of confirmation over enemy territory. In the euphoric atmosphere of perceived victory, the Luftwaffe leadership became increasingly disconnected from reality. This lack of leadership and solid intelligence meant the Germans did not adopt a consistent strategy. Moreover, there was never a systematic focus on one type of target (such as airbases, radar stations, or aircraft factories); consequently, the effectiveness of attacks, and their contribution to wider operational or strategic goals was further diluted.
Navigational aids
While the British were using radar for air defence more effectively than the Germans realised, the Luftwaffe attempted to press its own offensive with advanced radio navigation systems of which the British were initially not aware. One of these was Knickebein ("bent leg"); this system was used at night and for raids where precision was required. It was rarely used during the Battle of Britain.
Air-sea rescue
The Luftwaffe was much better prepared for the task of air-sea rescue than the RAF, specifically tasking the Seenotdienst unit, equipped with about 30 Heinkel He 59 floatplanes, with picking up downed aircrew from the North Sea, English Channel and the Dover Straits. In addition, Luftwaffe aircraft were equipped with life rafts and the aircrew were provided with sachets of a chemical called fluorescein which, on reacting with water, created a large, easy-to-see, bright green patch. In accordance with the Geneva Convention, the He 59s were unarmed and painted white with civilian registration markings and red crosses. Nevertheless, RAF aircraft attacked these aircraft, as some were escorted by Bf 109s.
After single He 59s were forced to land on the sea by RAF fighters on 1 and 9 July, a controversial order was issued to the RAF on 13 July; this stated that from 20 July, Seenotdienst aircraft were to be shot down. One of the reasons given by Churchill was that rescued German aircrew would be free to return and bomb the British population again.
The British also believed that their crews would report on convoys, the Air Ministry issuing a communiqué to the German government on 14 July that Britain was unable to grant immunity to such aircraft when they flew over areas in which operations were in progress.
The white He 59s were soon repainted in camouflage colours and armed with defensive machine guns. Although another four He 59s were shot down by RAF aircraft, the Seenotdienst continued to pick up downed Luftwaffe and Allied aircrew throughout the battle, earning praise from Adolf Galland for their bravery.
RAF strategy
The Dowding system
During early tests of the Chain Home system, the slow flow of information from the CH radars and observers to the aircraft often caused them to miss their "bandits". The solution, today known as the "Dowding system", was to create a set of reporting chains to move information from the various observation points to the pilots in their fighters. It was named after its chief architect, Air Chief Marshal Hugh "Stuffy" Dowding.
Reports from CH radars and the Observer Corps were sent directly to Fighter Command Headquarters (FCHQ) at Bentley Priory where they were "filtered" to combine multiple reports of the same formations into single tracks. Telephone operators would then forward only the information of interest to the Group headquarters, where the map would be re-created. This process was repeated to produce another version of the map at the Sector level, covering a much smaller area. Looking over their maps, Group level commanders could select squadrons to attack particular targets. From that point, the Sector operators would give commands to the fighters to arrange an interception, as well as return them to base. Sector stations also controlled the anti-aircraft batteries in their area; an army officer sat beside each fighter controller and directed the gun crews when to open and cease fire.
The Dowding system dramatically improved the speed and accuracy of the information that flowed to the pilots. During the early war period, it was expected that an average interception mission might have a 30% chance of ever seeing its target. During the battle, the Dowding system maintained an average rate over 75%, with several examples of 100% rates – every fighter dispatched found and intercepted its target. In contrast, Luftwaffe fighters attempting to intercept raids had to randomly seek their targets and often returned home having never seen enemy aircraft. The result was an example of what is now known as "force multiplication"; RAF fighters were as effective as two or more Luftwaffe fighters, greatly offsetting, or overturning, the disparity in actual numbers.
Intelligence
While Luftwaffe intelligence reports underestimated British fighter forces and aircraft production, the British intelligence estimates went the other way: they overestimated German aircraft production, numbers and range of aircraft available, and numbers of Luftwaffe pilots. In action, the Luftwaffe believed from their pilot claims and the impression given by aerial reconnaissance that the RAF was close to defeat, and the British made strenuous efforts to overcome the perceived advantages held by their opponents.
It is unclear how much the British intercepts of the Enigma cipher, used for high-security German radio communications, affected the battle. Ultra, the information obtained from Enigma intercepts, gave the highest echelons of the British command a view of German intentions. According to F. W. Winterbotham, who was the senior Air Staff representative in the Secret Intelligence Service, Ultra helped establish the strength and composition of the Luftwaffe's formations, the aims of the commanders and provided early warning of some raids. In early August it was decided that a small unit would be set up at FCHQ, which would process the flow of information from Bletchley and provide Dowding only with the most essential Ultra material; thus the Air Ministry did not have to send a continual flow of information to FCHQ, preserving secrecy, and Dowding was not inundated with non-essential information. Keith Park and his controllers were also told about Ultra. In a further attempt to camouflage the existence of Ultra, Dowding created a unit named No. 421 (Reconnaissance) Flight RAF. This unit (which later became No. 91 Squadron RAF) was equipped with Hurricanes and Spitfires and sent out aircraft to search for and report Luftwaffe formations approaching England. In addition, the radio listening service (known as Y Service), which monitored the patterns of Luftwaffe radio traffic, contributed considerably to the early warning of raids.
Tactics
Fighter formations
In the late 1930s, Fighter Command expected to face only bombers over Britain, not single-engined fighters. A series of "Fighting Area Tactics" were formulated and rigidly adhered to, involving a series of manoeuvres designed to concentrate a squadron's firepower to bring down bombers. RAF fighters flew in tight, v-shaped sections ("vics") of three aircraft, with four such sections in tight formation. Only the squadron leader at the front was free to watch for the enemy; the other pilots had to concentrate on keeping station. Training also emphasised by-the-book attacks by sections breaking away in sequence. Fighter Command recognised the weaknesses of this structure early in the battle, but it was felt too risky to change tactics during the battle, because replacement pilots, often with only minimal flying time, could not be readily retrained, and inexperienced pilots needed the firm leadership in the air that only rigid formations could provide. German pilots dubbed the RAF formations Idiotenreihen ("rows of idiots") because they left squadrons vulnerable to attack.
Front line RAF pilots were acutely aware of the inherent deficiencies of their own tactics. A compromise was adopted whereby squadron formations used much looser formations with one or two "weavers" flying independently above and behind to provide increased observation and rear protection; these tended to be the least experienced men and were often the first to be shot down without the other pilots even noticing that they were under attack. During the battle, 74 Squadron under Squadron Leader Adolph "Sailor" Malan adopted a variation of the German formation called the "fours in line astern", which was a vast improvement on the old three aircraft "vic". Malan's formation was later generally used by Fighter Command.
Squadron- and higher-level deployment
The weight of the battle fell upon 11 Group. Keith Park's tactics were to dispatch individual squadrons to intercept raids. The intention was to subject incoming bombers to continual attacks by relatively small numbers of fighters and try to break up the tight German formations. Once formations had fallen apart, stragglers could be picked off one by one. Where multiple squadrons intercepted a raid the intended procedure was for the slower Hurricanes to tackle the bombers while the more agile Spitfires engaged the fighter escort. This ideal was not always achieved, resulting in occasions when Spitfires and Hurricanes reversed roles. Park also issued instructions to his units to attack the bombers from the front, as they were more vulnerable to head-on approaches than to attacks from other angles. Again, in fast-moving, three-dimensional air battles, few RAF fighter units were able to attack the bombers head-on.
During the battle, some commanders, notably Leigh-Mallory, proposed squadrons be formed into "Big Wings," consisting of at least three squadrons, to attack the enemy en masse, a method pioneered by Douglas Bader.
Proponents of this tactic claimed interceptions in large numbers caused greater enemy losses while reducing their own casualties. Opponents pointed out the big wings would take too long to form up, and the strategy ran a greater risk of fighters being caught on the ground refuelling. The big wing idea also caused pilots to overclaim their kills, due to the confusion of a more intense battle zone. This led to considerable overestimation of the effectiveness of Big Wings.
The issue caused intense friction between Park and Leigh-Mallory, as 12 Group was assigned the task of protecting 11 Group's airfields while Park's squadrons intercepted incoming raids. The delay in forming up Big Wings meant the formations often did not arrive at all, or arrived only after German bombers had hit 11 Group's airfields. Dowding, to highlight the problem of the Big Wing's performance, submitted a report compiled by Park to the Air Ministry on 15 November. In the report, he noted that during the period 11 September – 31 October, the extensive use of the Big Wing had resulted in just 10 interceptions and one German aircraft destroyed, but his report was ignored. Post-war analysis agrees Dowding and Park's approach was best for 11 Group.
Dowding's removal from his post in November 1940 has been blamed on this struggle between Park's and Leigh-Mallory's daylight strategies. The intensive raids and destruction wrought during the Blitz damaged both Dowding and Park in particular, because of the failure to produce an effective night-fighter defence system, something for which the influential Leigh-Mallory had long criticised them.
Bomber and Coastal Command contributions
Bomber Command and Coastal Command aircraft flew offensive sorties against targets in Germany and France during the battle. An hour after the declaration of war, Bomber Command launched raids on warships and naval ports by day, and in night raids dropped leaflets as it was considered illegal to bomb targets which could affect civilians. After the initial disasters of the war, with Vickers Wellington bombers shot down in large numbers attacking Wilhelmshaven and the slaughter of the Fairey Battle squadrons sent to France, it became clear that they would have to operate mainly at night to avoid incurring very high losses. Churchill came to power on 10 May 1940, and the War Cabinet on 12 May agreed that German actions justified "unrestricted warfare", and on 14 May they authorised an attack on the night of 14/15 May against oil and rail targets in Germany. At the urging of Clement Attlee, the Cabinet on 15 May authorised a full bombing strategy against "suitable military objectives", even where there could be civilian casualties. That evening, a night time bomber campaign began against the German oil industry, communications, and forests/crops, mainly in the Ruhr area. The RAF lacked accurate night navigation and carried small bomb loads. As the threat mounted, Bomber Command changed targeting priority on 3 June 1940 to attack the German aircraft industry. On 4 July, the Air Ministry gave Bomber Command orders to attack ports and shipping. By September, the build-up of invasion barges in the Channel ports had become a top priority target.
On 7 September, the government issued a warning that the invasion could be expected within the next few days and, that night, Bomber Command attacked the Channel ports and supply dumps. On 13 September, they carried out another large raid on the Channel ports, sinking 80 large barges in the port of Ostend. 84 barges were sunk in Dunkirk after another raid on 17 September and by 19 September, almost 200 barges had been sunk. The loss of these barges may have contributed to Hitler's decision to postpone Operation Sea Lion indefinitely. The success of these raids was in part because the Germans had few Freya radar stations set up in France, so that air defences of the French harbours were not nearly as good as the air defences over Germany; Bomber Command had directed some 60% of its strength against the Channel ports.
The Bristol Blenheim units also raided German-occupied airfields throughout July to December 1940, both during daylight hours and at night. Although most of these raids were unproductive, there were some successes; on 1 August, five out of twelve Blenheims sent to attack Haamstede and Evere (Brussels) were able to destroy or heavily damage three Bf 109s of II./JG 27 and apparently kill a Staffelkapitän identified as a Hauptmann Albrecht von Ankum-Frank. Two other 109s were claimed by Blenheim gunners. Another successful raid on Haamstede was made by a single Blenheim on 7 August which destroyed one 109 of 4./JG 54, heavily damaged another and caused lighter damage to four more.
There were some missions that produced an almost 100% casualty rate amongst the Blenheims; one such operation was mounted on 13 August 1940 against a Luftwaffe airfield near Aalborg in north-eastern Denmark by 12 aircraft of 82 Squadron. One Blenheim returned early (the pilot was later charged and due to appear before a court martial, but was killed on another operation); the other eleven, which reached Denmark, were shot down, five by flak and six by Bf 109s. Of the 33 crewmen who took part in the attack, 20 were killed and 13 captured.
As well as the bombing operations, Blenheim-equipped units had been formed to carry out long-range strategic reconnaissance missions over Germany and German-occupied territories. In this role, the Blenheims again proved to be too slow and vulnerable against Luftwaffe fighters, and they took constant casualties.
Coastal Command directed its attention towards the protection of British shipping, and the destruction of enemy shipping. As invasion became more likely, it participated in the strikes on French harbours and airfields, laying mines, and mounting numerous reconnaissance missions over the enemy-held coast. In all, some 9,180 sorties were flown by bombers from July to October 1940. Although this was much less than the 80,000 sorties flown by fighters, bomber crews suffered about half the total casualties borne by their fighter colleagues. Bomber operations were therefore considerably more dangerous on a loss-per-sortie basis.
Bomber, reconnaissance, and antisubmarine patrol operations continued throughout these months with little respite and none of the publicity accorded to Fighter Command. In his famous 20 August speech about "The Few", praising Fighter Command, Churchill also made a point of mentioning Bomber Command's contribution, adding that bombers were even then striking back at Germany; this part of the speech is often overlooked, even today."Speech of 20 August 1940." Winston Churchill. Retrieved: 16 April 2008. The Battle of Britain Chapel in Westminster Abbey lists, in a roll of honour, 718 Bomber Command crew members and 280 from Coastal Command who were killed between 10 July and 31 October.
Bomber and Coastal Command attacks against invasion barge concentrations in Channel ports were widely reported by the British media during September and October 1940. In what became known as 'the Battle of the Barges', RAF attacks were claimed in British propaganda to have sunk large numbers of barges, and to have created widespread chaos and disruption to German invasion preparations. Given the volume of British propaganda interest in these bomber attacks during September and early October, it is striking how quickly they were overlooked once the Battle of Britain had been concluded. Even by mid-war, the bomber pilots' efforts had been largely eclipsed by a continuing focus on the Few, a result of the Air Ministry's continuing valorisation of the "fighter boys", beginning with the March 1941 Battle of Britain propaganda pamphlet.
Air-sea rescue
One of the biggest oversights of the entire system was the lack of an adequate air-sea rescue organisation. The RAF had started organising a system in 1940 with High Speed Launches (HSLs) based on flying boat bases and at some overseas locations, but it was still believed that the amount of cross-Channel traffic meant that there was no need for a rescue service to cover these areas. Downed pilots and aircrew, it was hoped, would be picked up by any boats or ships which happened to be passing by. Otherwise, the local lifeboat would be alerted, assuming someone had seen the pilot going into the water."RAF History: Air/Sea Search and Rescue – 60th Anniversary." UK: RAF. Retrieved: 24 May 2008.
RAF aircrew were issued with a life jacket, nicknamed the "Mae West," but in 1940 it still required manual inflation, which was almost impossible for someone who was injured or in shock. The waters of the English Channel and Dover Straits are cold, even in the middle of summer, and clothing issued to RAF aircrew did little to insulate them against these freezing conditions. The RAF also imitated the German practice of issuing fluorescein. A conference in 1939 had placed air-sea rescue under Coastal Command. Because pilots had been lost at sea during the "Channel Battle", on 22 August, control of RAF rescue launches was passed to the local naval authorities and 12 Lysanders were given to Fighter Command to help look for pilots at sea. In all, some 200 pilots and aircrew were lost at sea during the battle. No proper air-sea rescue service was formed until 1941.
Phases of the battle
The battle covered a shifting geographical area, and there have been differing opinions on significant dates: when the Air Ministry proposed 8 August as the start, Dowding responded that operations "merged into one another almost insensibly", and proposed 10 July as the onset of increased attacks. With the caution that phases drifted into each other and dates are not firm, the Royal Air Force Museum states that five main phases can be identified:
26 June – 16 July: Störangriffe ("nuisance raids"), scattered small scale probing attacks both day and night, armed reconnaissance and mine-laying sorties. From 4 July, daylight Kanalkampf ("the Channel battles") against shipping.
17 July – 12 August: daylight Kanalkampf attacks on shipping intensify through this period, increased attacks on ports and coastal airfields, night raids on RAF and aircraft manufacturing.
13 August – 6 September: Adlerangriff ("Eagle Attack"), the main assault; attempt to destroy the RAF in southern England, including massive daylight attacks on RAF airfields, followed from 19 August by heavy night bombing of ports and industrial cities, including suburbs of London.
7 September – 2 October: the Blitz commences, main focus day and night attacks on London.
3–31 October: large scale night bombing raids, mostly on London; daylight attacks now confined to small scale fighter-bomber Störangriffe raids luring RAF fighters into dogfights.
Small scale raids
Following Germany's rapid territorial gains in the Battle of France, the Luftwaffe had to reorganise its forces, set up bases along the coast, and rebuild after heavy losses. It began small scale bombing raids on Britain on the night of 5/6 June, and continued sporadic attacks throughout June and July. The first large-scale attack was at night, on 18/19 June, when small raids scattered between Yorkshire and Kent involved a total of 100 bombers. These Störangriffe ("nuisance raids"), which involved only a few aeroplanes, sometimes just one, were used to train bomber crews in both day and night attacks, to test defences and to try out methods, with most flights at night. Crews found that, rather than carrying small numbers of large high-explosive bombs, it was more effective to use a larger number of small bombs; similarly, incendiaries had to cover a large area to set effective fires. These training flights continued through August and into the first week of September. Against this, the raids also gave the British time to assess the German tactics, and invaluable time for the RAF fighters and anti-aircraft defences to prepare and gain practice.
The attacks were widespread: over the night of 30 June alarms were set off in 20 counties by just 20 bombers, and the next day, 1 July, the first daylight raids were carried out, on both Hull in Yorkshire and Wick, Caithness. On 3 July most flights were reconnaissance sorties, but 15 civilians were killed when bombs hit Guildford in Surrey. Numerous small Störangriffe raids, both day and night, were made daily through August, September and into the winter, with aims including bringing RAF fighters up to battle, destruction of specific military and economic targets, and setting off air-raid warnings to affect civilian morale: four major air-raids in August involved hundreds of bombers; in the same month 1,062 small raids were made, spread across the whole of Britain.
Channel battles
The Kanalkampf comprised a series of running fights over convoys in the English Channel. It was launched partly because Kesselring and Sperrle were not sure about what else to do, and partly because it gave German aircrews some training and a chance to probe the British defences. Dowding could provide only minimal shipping protection, and these battles off the coast tended to favour the Germans, whose bomber escorts had the advantage of altitude and outnumbered the RAF fighters. From 9 July reconnaissance probing by Dornier Do 17 bombers put a severe strain on RAF pilots and machines, with high RAF losses to Bf 109s. When nine 141 Squadron Defiants went into action on 19 July, six were lost to Bf 109s before a squadron of Hurricanes intervened. On 25 July a coal convoy and escorting destroyers suffered such heavy losses to attacks by Stuka dive bombers that the Admiralty decided convoys should travel at night: the RAF shot down 16 raiders but lost 7 aircraft. By 8 August, 18 coal ships and 4 destroyers had been sunk, but the Navy was determined to send a convoy of 20 ships through rather than move the coal by railway. After repeated Stuka attacks that day, six ships were badly damaged, four were sunk and only four reached their destination. The RAF lost 19 fighters and shot down 31 German aircraft. The Navy now cancelled all further convoys through the Channel and the cargo was sent by rail. Even so, these early combat encounters provided both sides with experience.
Main assault
The main attack upon the RAF's defences was code-named Adlerangriff ("Eagle Attack"). Intelligence reports gave Göring the impression that the RAF was almost defeated, and raids would attract British fighters for the Luftwaffe to shoot down. The strategy agreed on 6 August was to destroy RAF Fighter Command across the south of England in four days, then bombing of military and economic targets was to systematically extend up to the Midlands until daylight attacks could proceed unhindered over the whole of Britain, culminating in a major bombing attack on London.
Assault on RAF: radar and airfields
Poor weather delayed Adlertag ("Eagle Day") until 13 August 1940. On 12 August, the first attempt was made to blind the Dowding system, when aircraft from the specialist fighter-bomber unit Erprobungsgruppe 210 attacked four radar stations. Three were briefly taken off the air but were back working within six hours. The raids appeared to show that British radars were difficult to knock out. The failure to mount follow-up attacks allowed the RAF to get the stations back on the air, and the Luftwaffe neglected strikes on the supporting infrastructure, such as phone lines and power stations, which could have rendered the radars useless, even if the lattice-work towers themselves, which were very difficult to destroy, remained intact.
Adlertag opened with a series of attacks, led again by Erpro 210, on coastal airfields used as forward landing grounds for the RAF fighters, as well as "satellite airfields" including Manston and Hawkinge. (Satellite airfields were mostly fully equipped but did not have the sector control room which allowed "sector" airfields such as Biggin Hill to monitor and control RAF fighter formations; RAF units from sector airfields often flew into a satellite airfield for operations during the day, returning to their home airfield in the evenings.) As the week drew on, the airfield attacks moved further inland, and repeated raids were made on the radar chain. 15 August was "The Greatest Day" when the Luftwaffe mounted the largest number of sorties of the campaign. Luftflotte 5 attacked the north of England. Raiding forces from Denmark and Norway, which believed Fighter Command strength to be concentrated in the south, ran into resistance which was unexpectedly strong. Inadequately escorted by Bf 110s, the Bf 109 having insufficient range to escort raids from Norway, bombers were shot down in large numbers. North East England was attacked by 65 Heinkel 111s escorted by 34 Messerschmitt 110s, and RAF Great Driffield was attacked by 50 unescorted Junkers 88s. Out of 115 bombers and 35 fighters sent, 75 planes were destroyed and many others were damaged beyond repair. Furthermore, due to early engagement by RAF fighters, many of the bombers dropped their payloads early and to little effect."Document 32." Battle of Britain Historical Society. Retrieved: 19 March 2015. As a result of these casualties, Luftflotte 5 did not appear in strength again in the campaign.
18 August, which had the greatest number of casualties to both sides, has been dubbed "The Hardest Day". Following this grinding battle, exhaustion and the weather reduced operations for most of a week, allowing the Luftwaffe to review their performance. "The Hardest Day" had sounded the end for the Ju 87 in the campaign. This veteran of Blitzkrieg was too vulnerable to fighters to operate over Britain. Göring withdrew the Stuka from the fighting to preserve the Stuka force, removing the main Luftwaffe precision-bombing weapon and shifting the burden of pinpoint attacks onto the already-stretched Erpro 210. The Bf 110 proved too clumsy for dogfighting with single-engined fighters, and its participation was scaled back. It would be used only when range required it or when sufficient single-engined escort could not be provided for the bombers.
Göring made yet another important decision: to order more bomber escorts at the expense of free-hunting sweeps. To achieve this, the weight of the attack now fell on Luftflotte 2, and the bulk of the Bf 109s in Luftflotte 3 were transferred to Kesselring's command, reinforcing the fighter bases in the Pas-de-Calais. Stripped of its fighters, Luftflotte 3 would concentrate on the night bombing campaign. Göring, expressing disappointment with the fighter performance thus far in the campaign, also made sweeping changes in the command structure of the fighter units, replacing many Geschwaderkommodore with younger, more aggressive pilots such as Adolf Galland and Werner Mölders.
Finally, Göring stopped the attacks on the radar chain. These were seen as unsuccessful, and neither the Reichsmarschall nor his subordinates realised how vital the Chain Home stations were to the defence systems. It was known that radar provided some early warning of raids, but the belief among German fighter pilots was that anything bringing up the "Tommies" to fight was to be encouraged.
Raids on British cities
On the afternoon of 15 August, Hauptmann Walter Rubensdörffer leading Erprobungsgruppe 210 mistakenly bombed Croydon airfield (on the outskirts of London) instead of the intended target, RAF Kenley. German intelligence reports made the Luftwaffe optimistic that the RAF, thought to be dependent on local air control, was struggling with supply problems and pilot losses. After a raid on Biggin Hill on 18 August, Luftwaffe aircrew said they had been unopposed, the airfield was "completely destroyed", and asked, "Is England already finished?" In accordance with the strategy agreed on 6 August, defeat of the RAF was to be followed by bombing military and economic targets, systematically extending up to the Midlands.
Göring ordered attacks on aircraft factories on 19 August 1940. Sixty raids on the night of 19/20 August targeted the aircraft industry and harbours, and bombs fell on suburban areas around London: Croydon, Wimbledon and the Maldens. Night raids were made on 21/22 August on Aberdeen, Bristol and South Wales. That morning, bombs were dropped on Harrow and Wealdstone, on the outskirts of London. Overnight on 22/23 August, the output of an aircraft factory at Filton near Bristol was drastically affected by a raid in which Ju 88 bombers dropped a heavy load of high explosive bombs. On the night of 23/24 August over 200 bombers attacked the Fort Dunlop tyre factory in Birmingham, with a significant effect on production. A bombing campaign began on 24 August with the largest raid so far, killing 100 in Portsmouth, and that night, several areas of London were bombed; the East End was set ablaze and bombs landed on central London. Some historians believe that these bombs were dropped accidentally by a group of Heinkel He 111s which had failed to find their target and overshot Rochester and Thameshaven; this account has been contested as being three separate drops that night.Putland, Alan L. "19 August – 24 August 1940." Battle of Britain Historical Society. Retrieved: 12 August 2009.
More night raids were made around London on 24/25 August, when bombs fell on Croydon, Banstead, Lewisham, Uxbridge, Harrow and Hayes. London was on red alert over the night of 28/29 August, with bombs reported in Finchley, St Pancras, Wembley, Wood Green, Southgate, Old Kent Road, Mill Hill, Ilford, Chigwell and Hendon.
Attacks on airfields from 24 August
Göring's directive issued on 23 August 1940 ordered ceaseless attacks on the aircraft industry and on RAF ground organisation to force the RAF to use its fighters, continuing the tactic of luring them up to be destroyed, and added that focussed attacks were to be made on RAF airfields.
From 24 August onwards, the battle was a fight between Kesselring's Luftflotte 2 and Park's 11 Group. The Luftwaffe concentrated all their strength on knocking out Fighter Command and made repeated attacks on the airfields. Of the 33 heavy attacks in the following two weeks, 24 were against airfields. The key sector stations were hit repeatedly: Biggin Hill and Hornchurch four times each; Debden and North Weald twice each. Croydon, Gravesend, Rochford, Hawkinge and Manston were also attacked in strength. Coastal Command's Eastchurch was bombed at least seven times because it was believed to be a Fighter Command aerodrome. At times these raids caused some damage to the sector stations, threatening the integrity of the Dowding system.
To offset losses, some 58 Fleet Air Arm fighter pilot volunteers were seconded to RAF squadrons, and a similar number of former Fairey Battle pilots were used. Most replacements from Operational Training Units (OTUs) had as little as nine hours' flying time and no gunnery or air-to-air combat training. At this point, the multinational nature of Fighter Command came to the fore. Many squadrons and personnel from the air forces of the Dominions were already attached to the RAF, including top-level commanders – Australians, Canadians, New Zealanders, Rhodesians and South Africans. Other nationalities were also represented, including Free French and Belgian pilots, and a Jewish pilot from the British Mandate of Palestine.
They were bolstered by the arrival of fresh Czechoslovak and Polish squadrons. These had been held back by Dowding, who thought non-English speaking aircrew would have trouble working within his control system, but Polish and Czech fliers proved to be especially effective. The pre-war Polish Air Force had lengthy and extensive training, and high standards; with Poland conquered and under brutal German occupation, the pilots of No. 303 (Polish) Squadron, which became the highest-scoring Allied unit, were experienced and strongly motivated. Josef František, a Czech regular airman who had flown from the occupation of his own country to join the Polish and then French air forces before arriving in Britain, flew as a guest of 303 Squadron and was ultimately credited with the highest "RAF score" in the Battle of Britain.
The RAF had the advantage of fighting over home territory. Pilots who bailed out after being shot down could be back at their airfields within hours, and aircraft low on fuel or ammunition could be immediately re-equipped. One RAF pilot interviewed in late 1940 had been shot down five times during the Battle of Britain, but was able to crash-land in Britain or bail out each time. For Luftwaffe aircrews, a bailout or crash landing in England meant capture – in the critical August period, almost as many Luftwaffe pilots were taken prisoner as were killed – while parachuting into the English Channel often meant drowning. Morale began to suffer, and Kanalkrankheit ("Channel sickness") – a form of combat fatigue – began to appear among the German pilots. Their replacement problem became worse than that of the British.
Assessment of attempt to destroy the RAF
The effect of the German attacks on airfields is unclear. According to Stephen Bungay, Dowding, in a letter to Hugh Trenchard accompanying Park's report on the period 8 August – 10 September 1940, states that the Luftwaffe "achieved very little" in the last week of August and the first week of September.The PRO, AIR 19/60. The only Sector Station to be shut down operationally was Biggin Hill, and it was non-operational for just two hours. Dowding admitted that 11 Group's efficiency was impaired but, despite serious damage to some airfields, only two out of 13 heavily attacked airfields were down for more than a few hours. The German refocus on London was not critical.
Retired Air Vice-Marshal Peter Dye, head of the RAF Museum, discussed the logistics of the battle in 2000 and 2010, dealing specifically with the single-seat fighters.Dye, Air Vice Marshal Peter. Aeroplane, Issue July 2010, p. 33. He said that not only was British aircraft production replacing aircraft, but replacement pilots were keeping pace with losses. The number of pilots in RAF Fighter Command increased during July, August and September. The figures indicate the number of pilots available never decreased: from July, 1,200 were available; from 1 August, 1,400; in September, over 1,400; in October, nearly 1,600; by 1 November, 1,800. Throughout the battle, the RAF had more fighter pilots available than the Luftwaffe. Although the RAF's reserves of single-seat fighters fell during July, the wastage was made up for by an efficient Civilian Repair Organisation (CRO), which by December had repaired and put back into service some 4,955 aircraft, and by aircraft held at Air Servicing Unit (ASU) airfields.
Richard Overy agrees with Dye and Bungay. Overy says that only one airfield was temporarily put out of action and "only" 103 pilots were lost. British fighter production, not counting repaired aircraft, amounted to 496 new aircraft in July, 467 in August, and 467 in September, covering the losses of August and September. Overy indicates that the serviceable and total strength returns reveal an increase in fighters from 3 August to 7 September, from 1,061 on strength and 708 serviceable to 1,161 on strength and 746 serviceable. Moreover, Overy points out that the number of RAF fighter pilots grew by one-third between June and August 1940. Personnel records show a constant supply of around 1,400 pilots in the crucial weeks of the battle. In the second half of September it reached 1,500. The shortfall of pilots was never above 10%. The Germans never had more than 1,100 to 1,200 pilots, a deficiency of up to one-third. "If Fighter Command were 'the few', the German fighter pilots were fewer".
Other scholars assert that this period was the most dangerous of all. In The Narrow Margin, published in 1961, historians Derek Wood and Derek Dempster believed that the two weeks from 24 August to 6 September represented a real danger. According to them, from 24 August to 6 September 295 fighters had been totally destroyed and 171 badly damaged, against a total output of 269 new and repaired Spitfires and Hurricanes. They say that 103 pilots were killed or missing and 128 were wounded, a total wastage of 120 pilots per week out of a fighting strength of just under 1,000, and that during August no more than 260 fighter pilots were turned out by OTUs, while casualties were just over 300. A full squadron establishment was 26 pilots, whereas the average in August was 16. In their assessment, the RAF was losing the battle. Denis Richards, in his 1953 contribution to the official British account History of the Second World War, agreed that lack of pilots, especially experienced ones, was the RAF's greatest problem. He states that between 8 and 18 August 154 RAF pilots were killed, severely wounded, or missing, while only 63 new pilots were trained. Availability of aircraft was also a serious issue. While its reserves during the Battle of Britain never declined to a half dozen planes as some later claimed, Richards describes 24 August to 6 September as the critical period because during these two weeks Germany destroyed far more aircraft through its attacks on 11 Group's southeast bases than Britain was producing. Three more weeks of such a pace would indeed have exhausted aircraft reserves. Germany had also suffered heavy losses of pilots and aircraft, hence its shift to night-time attacks in September. On 7 September RAF aircraft losses fell below British production and remained so until the end of the war.
Day and night attacks on London: start of the Blitz
Hitler's "Directive No. 17 – For the conduct of air and sea warfare against England" issued on 1 August 1940, reserved to himself the right to decide on terror attacks as measures of reprisal. Hitler issued a directive that London was not to be bombed save on his sole instruction. In preparation, detailed target plans under the code name Operation Loge for raids on communications, power stations, armaments works and docks in the Port of London were distributed to the Fliegerkorps in July. The port areas were crowded next to residential housing and civilian casualties would be expected, but this would combine military and economic targets with indirect effects on morale. The strategy agreed on 6 August was for raids on military and economic targets in towns and cities to culminate in a major attack on London. In mid-August, raids were made on targets on the outskirts of London.
Luftwaffe doctrine included the possibility of retaliatory attacks on cities, and since 11 May small-scale night raids by RAF Bomber Command had frequently bombed residential areas. The Germans assumed this was deliberate, and as the raids increased in frequency and scale the population grew impatient for measures of revenge. On 25 August 1940, 81 bombers of Bomber Command were sent out to raid industrial and commercial targets in Berlin. Clouds prevented accurate identification and the bombs fell across the city, causing casualties among the civilian population as well as damage to residential areas. Continuing RAF raids on Berlin led to Hitler withdrawing his directive on 30 August, and giving the go-ahead to the planned bombing offensive. On 3 September Göring planned to bomb London daily, with General Albert Kesselring's enthusiastic support, having received reports that the average strength of RAF squadrons was down to five or seven fighters out of twelve and that their airfields in the area were out of action. Hitler issued a directive on 5 September to attack cities including London.OKW War diary, 6–9 September 1940. In a widely publicised speech delivered on 4 September 1940, Hitler condemned the bombing of Berlin and presented the planned attacks on London as reprisals. The first daylight raid was titled Vergeltungsangriff (revenge attack).
On 7 September, a massive series of raids involving nearly four hundred bombers and more than six hundred fighters targeted docks in the East End of London, day and night. The RAF anticipated attacks on airfields, and 11 Group rose to meet them, in greater numbers than the Luftwaffe expected. The first official deployment of Leigh-Mallory's 12 Group Big Wing took twenty minutes to form up, missing its intended target, but encountering another formation of bombers while still climbing. They returned, apologetic about their limited success, and blamed the delay on being scrambled too late.Putland, Alan L. "7 September 1940." Battle of Britain Historical Society. Retrieved: 12 August 2009.Putland, Alan L. "7 September 1940 – The Aftermath." Battle of Britain Historical Society. Retrieved: 12 August 2009.
The German press jubilantly announced that "one great cloud of smoke stretches tonight from the middle of London to the mouth of the Thames." Reports reflected the briefings given to crews before the raids – "Everyone knew about the last cowardly attacks on German cities, and thought about wives, mothers and children. And then came that word 'Vengeance!'" Pilots reported seeing ruined airfields as they flew towards London, appearances which gave intelligence reports the impression of devastated defences. Göring maintained that the RAF was close to defeat, making invasion feasible.
Fighter Command had been at its lowest ebb, short of men and machines, and the break from airfield attacks allowed it to recover. 11 Group had considerable success in breaking up daytime raids. 12 Group repeatedly disobeyed orders and failed to meet requests to protect 11 Group airfields, but their experiments with increasingly large Big Wings had some success. The Luftwaffe began to abandon their morning raids, with attacks on London starting late in the afternoon; these were followed by night raids on the capital for fifty-seven consecutive nights.Putland, Alan L. "8 September – 9 September 1940." Battle of Britain Historical Society. Retrieved: 12 August 2009.
The most damaging aspect to the Luftwaffe of targeting London was the increased distance. The Bf 109E escorts had a limited fuel capacity, giving them only a 660 km (410-mile) maximum range solely on internal fuel, and when they arrived had only 10 minutes of flying time before turning for home, leaving the bombers undefended. Its eventual stablemate, the Focke-Wulf Fw 190A, was flying only in prototype form in mid-1940; the first 28 Fw 190s were not delivered until November 1940. The Fw 190A-1 had a maximum range of 940 km (584 miles) on internal fuel, 40% greater than the Bf 109E. The Messerschmitt Bf 109E-7 corrected this deficiency by adding a ventral centre-line ordnance rack to take either an SC 250 bomb or a standard 300-litre Luftwaffe drop tank to double the range to 1,325 km (820 mi). The ordnance rack was not retrofitted to earlier Bf 109Es until October 1940.
On 14 September, Hitler chaired a meeting with the OKW staff. Göring was in France directing the decisive battle, so Erhard Milch deputised for him. Hitler asked "Should we call it off altogether?" General Hans Jeschonnek, Luftwaffe Chief of Staff, begged for a last chance to defeat the RAF and for permission to launch attacks on civilian residential areas to cause mass panic. Hitler refused the latter, perhaps unaware of how much damage had already been done to civilian targets. He reserved for himself the power to unleash the terror weapon. Instead, political will was to be broken by destroying the material infrastructure, the weapons industry, and stocks of fuel and food.
On 15 September, two massive waves of German attacks were decisively repulsed by the RAF by deploying every aircraft in 11 Group. Sixty German and twenty-six RAF aircraft were shot down. The action was the climax of the Battle of Britain.
Two days after this German defeat Hitler postponed preparations for the invasion of Britain. Henceforth, in the face of mounting losses in men, aircraft and the lack of adequate replacements, the Luftwaffe completed their gradual shift from daylight bomber raids and continued with nighttime bombing. 15 September is commemorated as Battle of Britain Day.
Night-time Blitz, fighter-bomber day raids
At the 14 September OKW conference, Hitler acknowledged that the Luftwaffe had still not gained the air superiority needed for the Operation Sea Lion invasion. In agreement with Raeder's written recommendation, Hitler said the campaign was to intensify regardless of invasion plans: "The decisive thing is the ceaseless continuation of air attacks." Jeschonnek proposed attacking residential areas to cause "mass panic", but Hitler turned this down: he reserved to himself the option of terror bombing. British morale was to be broken by destroying infrastructure, armaments manufacturing, fuel and food stocks. On 16 September, Göring gave the order for this change in strategy. This new phase was to be the first independent strategic bombing campaign, in hopes of a political success forcing the British to give up. Hitler hoped it might result in "eight million going mad" (referring to the population of London in 1940), which would "cause a catastrophe" for the British. In those circumstances, Hitler said, "even a small invasion might go a long way". Hitler was against cancelling the invasion as "the cancellation would reach the ears of the enemy and strengthen his resolve". On 19 September, Hitler ordered a reduction in work on Operation Sea Lion. He doubted if strategic bombing could achieve its aims, but ending the air war would be an open admission of defeat. He had to maintain the appearance of concentration on defeating Britain, to conceal from Joseph Stalin his covert aim to invade the Soviet Union.
Throughout the battle, most Luftwaffe bombing raids had been at night. They increasingly suffered unsustainable losses in daylight raids, and the last massive daytime attacks were on 15 September. A raid of 70 bombers on 18 September also suffered badly, and day raids were gradually phased out leaving the main attacks at night. Fighter Command still lacked any effective capacity to intercept night-time raiders. The night fighters, mostly Blenheims and Beaufighters, at this time lacked airborne radar and so could not find the bombers. Anti-aircraft guns were diverted to London's defences, but had a much-reduced success rate against night attacks.
From mid-September, Luftwaffe daylight bombing was gradually taken over by Bf 109 fighters adapted to carry a single 250 kg bomb. Small groups of fighter-bombers would carry out Störangriffe (nuisance) raids escorted by large formations of about 200 to 300 fighters. They flew at high altitudes where the Bf 109 had an advantage over RAF fighters, except the Spitfire.Steinhilper, op. cit., pp. 280, 282, 295–297. The raids disturbed civilians and continued the war of attrition against Fighter Command. They were intended to carry out precision bombing of military or economic targets, but it was hard to achieve sufficient accuracy with a single bomb, and when attacked the fighter-bombers sometimes had to jettison it to function as fighters. The RAF was at a disadvantage and changed its defensive tactics by introducing standing patrols of Spitfires at high altitude to monitor incoming raids; on a sighting, other patrols at lower altitude would fly up to join the battle.
A Junkers Ju 88 returning from a raid on London was shot down in Kent on 27 September resulting in the Battle of Graveney Marsh, the last action between British and foreign military forces on British mainland soil.Green, Ron and Mark Harrison. "Forgotten frontline exhibition tells how Luftwaffe fought with soldiers on Kent marshes." Kent Online, 30 September 2009. Retrieved: 21 August 2010.
German bombing of Britain reached its peak in October and November 1940. In post-war interrogation, Wilhelm Keitel described the aims as economic blockade, in conjunction with submarine warfare, and attrition of Britain's military and economic resources. The Luftwaffe wanted to achieve victory on its own and was reluctant to cooperate with the navy. Their strategy for the blockade was to destroy ports and storage facilities in towns and cities. Priorities were based on the pattern of trade and distribution, so for these months, London was the main target. In November their attention turned to other ports and industrial targets around Britain.
Hitler postponed the Sea Lion invasion on 13 October "until the spring of 1941". It was not until Hitler's Directive 21 was issued, on 18 December 1940, that the threat of invasion to Britain finally ended.
During the battle, and for the rest of the war, an important factor in keeping public morale high was the continued presence in London of King George VI and his wife Queen Elizabeth. When war broke out in 1939, the King and Queen decided to stay in London and not flee to Canada, as had been suggested.This proposal has since been confused, or conflated, with a possible flight by HMG in exile. George VI and Elizabeth officially stayed in Buckingham Palace throughout the war, although they often spent weekends at Windsor Castle to visit their daughters, Elizabeth (the future queen) and Margaret."George VI and Elizabeth during the war years." UK: Royal government. Retrieved: 30 June 2008. Buckingham Palace was damaged by bombs which landed in the grounds on 10 September and, on 13 September, more serious damage was caused by two bombs which destroyed the Royal Chapel. The royal couple were in a small sitting room about 80 yards from where the bombs exploded. On 24 September, in recognition of the bravery of civilians, King George VI inaugurated the award of the George Cross.
Attrition statistics
Overall, by 2 November the RAF fielded 1,796 pilots, an increase of over 40% on July 1940's count of 1,259 pilots. Based on German sources (a February 1944 account by Otto Bechtle, a Luftwaffe intelligence officer attached to KG 2) translated by the Air Historical Branch, Stephen Bungay asserts that German fighter and bomber strength declined without recovery, falling by 30 and 25 per cent respectively between August and December 1940. In contrast, Williamson Murray argues (using translations by the Air Historical Branch) that 1,380 German bombers were on strength on 29 June 1940, 1,420 bombers on 28 September, 1,423 level bombers on 2 November and 1,393 bombers on 30 November 1940. Between July and September the number of Luftwaffe pilots available fell by 136, while the number of operational pilots had shrunk by 171 by September; the Luftwaffe's training organisation was failing to replace losses. German fighter pilots, in contrast to popular perception, were not afforded training or rest rotations, unlike their British counterparts. The first week of September accounted for 25% of Fighter Command's and 24% of the Luftwaffe's overall losses. Between 26 August and 6 September, on only one day (1 September) did the Germans destroy more aircraft than they lost; losses were 325 German and 248 British.
Luftwaffe losses for August numbered 774 aircraft to all causes, representing 18.5% of all combat aircraft at the beginning of the month. Fighter Command's losses in August were 426 fighters destroyed, amounting to 40 per cent of 1,061 fighters available on 3 August. In addition, 99 German bombers and 27 other types were destroyed between 1 and 29 August.
From July to September, the Luftwaffe's loss records indicate the loss of 1,636 aircraft, 1,184 to enemy action. This represented 47% of the initial strength of single-engined fighters, 66% of twin-engined fighters, and 45% of bombers. This indicates the Germans were running out of aircrew as well as aircraft.
Throughout the battle, the Germans greatly underestimated the size of the RAF and the scale of British aircraft production. Across the Channel, the Air Intelligence division of the Air Ministry consistently overestimated the size of the German air enemy and the productive capacity of the German aviation industry. As the battle was fought, both sides exaggerated the losses inflicted on the other by an equally large margin. The intelligence picture formed before the battle encouraged the Luftwaffe to believe that such losses pushed Fighter Command to the very edge of defeat, while the exaggerated picture of German air strength persuaded the RAF that the threat it faced was larger and more dangerous than was the case. This led the British to the conclusion that another fortnight of attacks on airfields might force Fighter Command to withdraw their squadrons from the south of England. The German misconception, on the other hand, encouraged first complacency, then strategic misjudgement. The shift of targets from air bases to industry and communications was taken because it was assumed that Fighter Command was virtually eliminated.
Between 24 August and 4 September, German serviceability rates, which were acceptable at Stuka units, were running at 75% with Bf 109s, 70% with bombers and 65% with Bf 110s, indicating a shortage of spare parts. All units were well below established strength. The attrition was beginning to affect the fighters in particular. By 14 September, the Luftwaffe's Bf 109 Geschwader possessed only 67% of their operational crews against authorised aircraft. For Bf 110 units it was 46%, and for bombers 59%. A week later the figures had dropped to 64%, 52% and 52%. Serviceability rates in Fighter Command's fighter squadrons between 24 August and 7 September were listed as 64.8% on 24 August, 64.7% on 31 August and 64.25% on 7 September 1940.
Due to the failure of the Luftwaffe to establish air supremacy, a conference assembled on 14 September at Hitler's headquarters. Hitler concluded that air superiority had not yet been established and "promised to review the situation on 17 September for possible landings on 27 September or 8 October. Three days later, when the evidence was clear that the German Air Force had greatly exaggerated the extent of their successes against the RAF, Hitler postponed Sea Lion indefinitely."
Propaganda
Propaganda was an important element of the air war which began to develop over Britain from 18 June 1940 onwards, when the Luftwaffe began small, probing daylight raids to test RAF defences. One of many examples of these small-scale raids was the destruction of a school at Polruan in Cornwall, by a single raider. Into early July, the British media's focus on the air battles increased steadily, the press, magazines, BBC radio and newsreels daily conveying the contents of Air Ministry communiques. The German OKW communiques matched Britain's efforts in claiming the upper hand.
Central to the propaganda war on both sides of the Channel were aircraft claims, which are discussed under 'Attrition statistics' (above). These daily claims were important both for sustaining British home front morale and persuading America to support Britain, and were produced by the Air Ministry's Air Intelligence branch. Under pressure from American journalists and broadcasters to prove that the RAF's claims were genuine, RAF intelligence compared pilots' claims with actual aircraft wrecks and those seen to crash into the sea. It was soon realised that there was a discrepancy between the two, but the Air Ministry decided not to reveal this. In fact, it was not until May 1947 that the actual figures were released to the public, by which time it was no longer important. Many people refused to believe the revised figures, including Douglas Bader.
The place of the Battle of Britain in British popular memory partly stems from the Air Ministry's successful propaganda campaign from July to October 1940, and its praise of the defending fighter pilots from March 1941 onwards. The pamphlet The Battle of Britain sold in huge numbers internationally, leading even Goebbels to admire its propaganda value. Focusing only upon the fighter pilots, with no mention of RAF bomber attacks against invasion barges, the Battle of Britain was soon established as a major victory for Fighter Command. This inspired feature films, books, magazines, works of art, poetry, radio plays and MOI short films.
The Air Ministry also developed the Battle of Britain Sunday commemoration, supported a Battle of Britain clasp for issue to the pilots in 1945 and, from 1945, Battle of Britain Week. The Battle of Britain window in Westminster Abbey was also encouraged by the Air Ministry, with Trenchard and Dowding, now lords, on its committee. By July 1947 when the window was unveiled, the Battle of Britain had already attained central prominence as Fighter Command's most notable victory, the fighter pilots credited with preventing invasion in 1940. Although given widespread media coverage in September and October 1940, RAF Bomber and Coastal Command raids against invasion barge concentrations were less well-remembered.
Aftermath
The Battle of Britain marked the first major defeat of Germany's military forces, with air superiority seen as the key to victory. Pre-war theories had led to exaggerated fears of strategic bombing, and UK public opinion was buoyed by coming through the ordeal. For the RAF, Fighter Command had achieved a great victory in successfully carrying out Sir Thomas Inskip's 1937 air policy of preventing the Germans from knocking Britain out of the war.
The battle also significantly shifted American opinion. During the battle, many Americans accepted the view promoted by Joseph Kennedy, the American ambassador in London, who believed that the United Kingdom could not survive. Roosevelt wanted a second opinion, and sent William "Wild Bill" Donovan on a brief visit to the UK; he became convinced the UK would survive and should be supported in every possible way. Before the end of the year, American journalist Ralph Ingersoll, after returning from Britain, published a book concluding that "Adolf Hitler met his first defeat in eight years" in what might "go down in history as a battle as important as Waterloo or Gettysburg". The turning point was when the Germans reduced the intensity of daylight attacks after 15 September. According to Ingersoll, "[a] majority of responsible British officers who fought through this battle believe that if Hitler and Göring had had the courage and the resources to lose 200 planes a day for the next five days, nothing could have saved London"; instead, "[the Luftwaffe's] morale in combat is definitely broken, and the RAF has been gaining in strength each week."
Both sides in the battle made exaggerated claims of numbers of enemy aircraft shot down. In general, claims were two to three times the actual numbers. Postwar analysis of records has shown that between July and September, the RAF claimed 2,698 kills, while the Luftwaffe fighters claimed 3,198 RAF aircraft shot down. Total losses, and start and end dates for recorded losses, vary for both sides. Luftwaffe losses from 10 July to 30 October 1940 total 1,977 aircraft, including 243 twin- and 569 single-engined fighters, 822 bombers and 343 non-combat types. In the same period, RAF Fighter Command aircraft losses number 1,087, including 53 twin-engined fighters. To the RAF figure should be added 376 Bomber Command and 148 Coastal Command aircraft lost conducting bombing, mining, and reconnaissance operations in defence of the country.
Stephen Bungay describes Dowding and Park's strategy of choosing when to engage the enemy whilst maintaining a coherent force as vindicated. Their leadership, however, and the subsequent debates about strategy and tactics had created enmity among RAF senior commanders, and both were sacked from their posts in the immediate aftermath of the battle. All things considered, the RAF proved to be a robust and capable organisation that was to use all the modern resources available to it to maximum advantage. Richard Evans writes:
The Germans launched some spectacular attacks against important British industries, but they could not destroy the British industrial potential, and made little systematic effort to do so. Hindsight does not disguise that the threat to Fighter Command was very real, and for the participants it seemed as if there was a narrow margin between victory and defeat. Nevertheless, even if the German attacks on the 11 Group airfields which guarded southeast England and the approaches to London had continued, the RAF could have withdrawn to the Midlands out of German fighter range and continued the battle from there. The victory was as much psychological as physical. Writes Alfred Price:
The truth of the matter, borne out by the events of 18 August, is more prosaic: neither by attacking the airfields nor by attacking London, was the Luftwaffe likely to destroy Fighter Command. Given the size of the British fighter force and the general high quality of its equipment, training and morale, the Luftwaffe could have achieved no more than a Pyrrhic victory. During the action on 18 August, it had cost the Luftwaffe five trained aircrew killed, wounded or taken prisoner, for each British fighter pilot killed or wounded; the ratio was similar on other days in the battle. And this ratio of 5:1 was very close to that between the number of German aircrew involved in the battle and those in Fighter Command. In other words, the two sides were suffering almost the same losses in trained aircrew, in proportion to their overall strengths. In the Battle of Britain, for the first time during the Second World War, the German war machine had set itself a major task which it patently failed to achieve, and so demonstrated that it was not invincible. In stiffening the resolve of those determined to resist Hitler the battle was an important turning point in the conflict.
Some historians are more cautious in assessing the significance of Germany's failure to knock Britain out of the war. Bungay writes, "Victory in the air achieved a modest strategic goal, for it did not bring Britain any closer to victory in the war, but merely avoided her defeat." Overy says, "The Battle of Britain did not seriously weaken Germany and her allies, nor did it much reduce the scale of the threat facing Britain (and the Commonwealth) in 1940/41 until German and Japanese aggression brought the Soviet Union and the United States into the conflict."
The British victory in the Battle of Britain was achieved at a heavy cost. Total British civilian losses from July to December 1940 were 23,002 dead and 32,138 wounded, with one of the largest single raids on 19 December 1940, in which almost 3,000 civilians died. With the culmination of the concentrated daylight raids, Britain was able to rebuild its military forces and establish itself as an Allied stronghold, later serving as a base from which the liberation of Western Europe was launched.
Memorials and cultural impact
Winston Churchill summed up the battle with the words, "Never in the field of human conflict was so much owed by so many to so few".Speech to the House of Commons on 20 August 1940. Pilots who fought in the battle have been known as The Few ever since, at times being specially commemorated on 15 September, "Battle of Britain Day". On this day in 1940, the Luftwaffe embarked on their largest bombing attack yet, forcing the engagement of the entirety of the RAF in defence of London and the South East, which resulted in a decisive British victory that proved to mark a turning point in Britain's favour."Battle of Britain Day". BBC. Retrieved: 18 March 2015."Battle of Britain 70th Anniversary" . The Royal British Legion. Retrieved: 18 March 2015. Within the Commonwealth, Battle of Britain Day has been observed more usually on the third Sunday in September, and even on the 2nd Thursday in September in some areas in the British Channel Islands.
Plans for the Battle of Britain window in Westminster Abbey were begun during wartime, with the committee chaired by Lords Trenchard and Dowding. Public donations paid for the window itself, which replaced a window destroyed during the campaign and was officially opened by King George VI on 10 July 1947. Although not actually an 'official' memorial to the Battle of Britain in the sense that the government paid for it, the window and chapel have since been viewed as such. During the late 1950s and 1960s, various proposals were advanced for a national monument to the Battle of Britain, proposals which were also the focus of several letters in The Times. In 1960 the Conservative government decided against a further monument, taking the view that the credit should be shared more broadly than Fighter Command alone, and that there was little public appetite for one. All subsequent memorials are the result of private subscription and initiative, as discussed below.
There are numerous memorials to the battle. The most important ones are the Battle of Britain Monument in London and the Battle of Britain Memorial at Capel-le-Ferne in Kent. As well as Westminster Abbey, St James's Church, Paddington also has a memorial window to the battle, replacing a window destroyed during it. There is also a memorial at the former Croydon Airport, one of the RAF bases during the battle, and a memorial to the pilots at Armadale Castle on the Isle of Skye in Scotland, which is topped by a raven sculpture. The Polish pilots who served in the battle are among the names on the Polish War Memorial in west London.
There are also two museums to the battle: one at Hawkinge in Kent and one at Stanmore in London, at the former RAF Bentley Priory.
In 2015 the RAF created an online 'Battle of Britain 75th Anniversary Commemorative Mosaic' composed of pictures of "the few" – the pilots and aircrew who fought in the battle – and "the many" – 'the often unsung others whose contribution during the Battle of Britain was also vital to the RAF's victory in the skies above Britain', submitted by participants and their families.
Other post-war memorials include:
Battle of Britain Class steam locomotives of the Southern Railway
Battle of Britain Memorial Flight
Battle of Britain Memorial, Capel-le-Ferne
Battle of Britain Monument, London
Kent Battle of Britain Museum
Polish War Memorial
Spirit of the Few Monument
The battle was the subject of the film Battle of Britain (1969), starring Laurence Olivier as Hugh Dowding and Trevor Howard as Keith Park.Battle of Britain: Special Edition DVD (1969) BBC. Retrieved: 22 December 2011. It also starred Michael Caine, Christopher Plummer and Robert Shaw as squadron leaders. Former participants of the battle served as technical advisers, including Adolf Galland and Robert Stanford Tuck.
In the 2001 film Pearl Harbor, American participation in the Battle of Britain was exaggerated, as none of the "Eagle Squadrons" of American volunteers saw action in Europe before 1941.
A Hollywood film named The Few was in preparation for release in 2008, based on the story of real-life US pilot Billy Fiske, who ignored his country's neutrality rules and volunteered for the RAF. Bill Bond, who conceived the Battle of Britain Monument in London, described a Variety magazine outline of the film's historical contentFleming, Michael. "New flight plan for Cruise." Variety, 9 September 2003. Retrieved: 28 December 2007. as "Totally wrong. The whole bloody lot."Moreton, Cole. "Hollywood updates history of Battle of Britain: Tom Cruise won it all on his own." The Independent, 11 April 2004. Retrieved: 28 December 2007.
The 1941 Allied propaganda film Churchill's Island was the winner of the first Academy Award for Documentary Short Subject."Churchill's Island." NFB.ca, National Film Board of Canada. Retrieved: 17 February 2009.
See also
Notes
References
Bibliography
General
Buckley, John. Air Power in the Age of Total War. London: UCL Press, 1999. .
Buell, Thomas. The Second World War: Europe and the Mediterranean. New York: Square One Publishers, 2002. .
Collier, Basil. The Defence of the United Kingdom (1962, Official history)
Collier, Basil. The Battle of Britain (1962, Batsford's British Battles series)
Collier, Richard. Eagle Day: The Battle of Britain, 6 August – 15 September 1940. London: Pan Books, 1968.
Churchill, Winston S. The Second World War – The Grand Alliance (Volume 3). Bantam Books, 1962.
Ellis, John. Brute Force: Allied Strategy and Tactics in the Second World War. London: Andre Deutsch, 1990. .
Evans, Michael. "Never in the field of human conflict was so much owed by so many to ... the Navy." The Times, 24 August 2006. Retrieved: 3 March 2007.
Goodenough, Simon. War Maps: World War II, From September 1939 to August 1945, Air, Sea, and Land, Battle by Battle. New York: St. Martin's Press, 1982, .
Harding, Thomas. "Battle of Britain was won at sea." The Telegraph, 25 August 2006. Retrieved: 25 August 2006.
Keegan, John. The Second World War London: Pimlico, 1997. .
Owen, R.E. New Zealanders with the Royal Air Force. Government Printer, Wellington, New Zealand, 1953.
Pope, Stephan. "Across the Ether: Part One". Aeroplane, Vol. 23, No. 5, Issue No. 265, May 1995.
Robinson, Derek, Invasion, 1940: Did the Battle of Britain Alone Stop Hitler? New York: Carroll & Graf, 2005. .
Shulman, Milton. Defeat in the West. London: Cassell, 2004 (First edition 1947). .
Stacey, C P. (1970) Arms, Men and Governments: The War Policies of Canada, 1939–1945 Queen's Printer, Ottawa (Downloadable PDF)
Terraine, John, A Time for Courage: The Royal Air Force in the European War, 1939–1945. London: Macmillan, 1985. .
Luftwaffe
Corum, James. The Luftwaffe: Creating the Operational Air War, 1918–1940. Lawrence, Kansas: Kansas University Press, 1997. .
de Zeng, Henry L., Doug G. Stankey and Eddie J. Creek. Bomber Units of the Luftwaffe 1933–1945: A Reference Source, Volume 1. Hersham, Surrey, UK: Ian Allan Publishing, 2007. .
Dildy, Douglas C. "The Air Battle for England: The Truth Behind the Failure of the Luftwaffe's Counter-Air Campaign in 1940." Air Power History 63.2 (2016): 27.
Dönitz, Karl. Ten years and Twenty Days. New York: Da Capo Press, First Edition, 1997. .
Kieser, Egbert. Operation Sea Lion; The German Plan to Invade Britain 1940. London: Cassel Military Paperbacks, 1999. .
Macksey, Kenneth. Invasion: The German Invasion of England, July 1940. London: Greenhill Books, 1990. .
Mason, Francis K. Battle Over Britain: A History of the German Air Assaults on Great Britain, 1917–18 and July–December 1940, and the Development of Air Defences Between the World Wars. New York: Doubleday, 1969. .
Raeder, Erich. Erich Raeder, Grand Admiral. New York: Da Capo Press; United States Naval Institute, 2001.
Autobiographies and biographies
Brew, Steve. A Ruddy Awful Waste: Eric Lock DSO, DFC & Bar; The Brief Life of a Battle of Britain Fighter Ace. London: Fighting High, 2016.
Collier, Basil. Leader of the Few: the Authorised Biography of Air Chief Marshal Lord Dowding of Bentley Priory. London: Jarrolds, 1957.
Franks, Norman, Wings of Freedom: Twelve Battle of Britain Pilots. London: William Kimber, 1980. .
Halpenny, Bruce, Fight for the Sky: Stories of Wartime Fighter Pilots. Cambridge, UK: Patrick Stephens, 1986. .
Halpenny, Bruce, Fighter Pilots in World War II: True Stories of Frontline Air Combat (paperback). Barnsley, UK: Pen and Sword Books Ltd, 2004. .
Aircraft
de Zeng, Henry L., Doug G. Stankey and Eddie J. Creek, Bomber Units of the Luftwaffe 1933–1945: A Reference Source, Volume 2. Hersham, Surrey, UK: Ian Allan Publishing, 2007. .
Goss, Chris, Dornier 17: In Focus. Surrey, UK: Red Kite Books, 2005. .
Huntley, Ian D., Fairey Battle, Aviation Guide 1. Bedford, UK: SAM Publications, 2004. .
Mason, Francis K., Hawker Aircraft since 1920. London: Putnam, 1991. .
Molson, Kenneth M. et al., Canada's National Aviation Museum: Its History and Collections. Ottawa: National Aviation Museum, 1988. .
Moyes, Philip, J. R., "The Fairey Battle." Aircraft in Profile, Volume 2 (nos. 25–48). Windsor, Berkshire, UK: Profile Publications, 1971.
Parry, Simon W., Intruders over Britain: The Story of the Luftwaffe's Night Intruder Force, the Fernnachtjager. Washington, DC: Smithsonian Books, 1989. .
Scutts, Jerry, Messerschmitt Bf 109: The Operational Record. Sarasota, Florida: Crestline Publishers, 1996. .
Additional references
Addison, Paul and Jeremy Crang. The Burning Blue: A New History of the Battle of Britain. London: Pimlico, 2000. .
Bergström, Christer. Barbarossa – The Air Battle: July–December 1941. London: Chevron/Ian Allan, 2007. .
Bergström, Christer. The Battle of Britain – An Epic Battle Revisited. Eskilstuna: Vaktel Books/Casemate, 2010. .
Bishop, Patrick. Fighter Boys: The Battle of Britain, 1940. New York: Viking, 2003 (hardcover, ); Penguin Books, 2004. . As Fighter Boys: Saving Britain 1940. London: Harper Perennial, 2004. .
Brittain, Vera. England's Hour. London: Continuum International Publishing Group, 2005 (paperback, ); Obscure Press (paperback, ).
Cooper, Matthew. The German Air Force 1933–1945: An Anatomy of Failure. New York: Jane's Publishing Incorporated, 1981. .
Craig, Phil and Tim Clayton. Finest Hour: The Battle of Britain. New York: Simon & Schuster, 2000. (hardcover); 2006, (paperback).
Cumming, Anthony J. The Royal Navy and The Battle of Britain. Annapolis, Maryland: Naval Institute Press, 2010. .
Fiedler, Arkady. 303 Squadron: The Legendary Battle of Britain Fighter Squadron. Los Angeles: Aquila Polonica, 2010. .
Fisher, David E. A Summer Bright and Terrible: Winston Churchill, Lord Dowding, Radar and the Impossible Triumph of the Battle of Britain. Emeryville, CA: Shoemaker & Hoard, 2005. (hardcover, ); 2006, (paperback).
Gaskin, Margaret. Blitz: The Story of 29 December 1940. New York: Harcourt, 2006. .
Haining, Peter. Where the Eagle Landed: The Mystery of the German Invasion of Britain, 1940. London: Robson Books, 2004. .
Halpenny, Bruce Barrymore. Action Stations: Military Airfields of Greater London v. 8. Cambridge, UK: Patrick Stephens, 1984. .
Harding, Thomas. "It's baloney, say RAF aces". The Telegraph, 24 August 2006. Retrieved: 3 March 2007.
Hough, Richard. The Battle of Britain: The Greatest Air Battle of World War II. New York: W.W. Norton, 1989. (hardcover); 2005, (paperback).
James, T.C.G. The Battle of Britain (Air Defence of Great Britain; vol. 2). London/New York: Frank Cass Publishers, 2000. (hardcover); (paperback, ).
James, T.C.G. Growth of Fighter Command, 1936–1940 (Air Defence of Great Britain; vol. 1). London; New York: Frank Cass Publishers, 2000. .
James, T.C.G. Night Air Defence During the Blitz. London/New York: Frank Cass Publishers, 2003. .
McGlashan, Kenneth B. with Owen P. Zupp. Down to Earth: A Fighter Pilot Recounts His Experiences of Dunkirk, the Battle of Britain, Dieppe, D-Day and Beyond. London: Grub Street Publishing, 2007. .
March, Edgar J. British Destroyers: A History of Development 1892–1953. London: Seely Service & Co. Limited, 1966.
For Your Freedom and Ours: The Kościuszko Squadron – Forgotten Heroes of World War II.
Mason, Francis K. "Battle over Britain". McWhirter Twins Ltd. 1969 {A day by day accounting of RaF and Luftwaffe losses}
Prien, Jochen and Peter Rodeike. Messerschmitt Bf 109 F, G, and K: An Illustrated Study. Atglen, Pennsylvania: Schiffer Publishing, 1995.
Ray, John Philip. The Battle of Britain: New Perspectives: Behind the Scenes of the Great Air War. London: Arms & Armour Press, 1994 (hardcover, ); London: Orion Publishing, 1996 (paperback, ).
Rongers, Eppo H. De oorlog in mei '40, Utrecht/Antwerpen: Uitgeverij Het Spectrum N.V., 1969, No ISBN
Townsend, Peter. Duel of Eagles (new edition). London: Phoenix, 2000. .
Wellum, Geoffrey. First Light: The Story of the Boy Who Became a Man in the War-Torn Skies Above Britain. New York: Viking Books, 2002. (hardcover); Hoboken, NJ: Wiley & Sons, 2003. (hardcover); London: Penguin Books, 2003. (paperback).
External links
The Battle of Britain Historical Timeline
Day by Day blog charting the progress of the Battle by an ex-RAF veteran
Battle Of Britain Historical Society
Video: complete film documentary (52 min.) by Frank Capra made for the U.S. Army
The Battle of Britain "In Photos"
Royal Air Force history
Battle of Britain Memorial
BBC History Overview of Battle
Historical recording BBC: Churchill's "This Was Their Finest Hour" speech
Radio New Zealand 'Sounds Historical' ANZAC Day, 25 April 2008: Historical recording of Sir Keith Park describing the Battle of Britain. (Scroll down to 10:50 am).
Air Chief Marshal Hugh Dowding on the Battle of Britain (despatch to the Secretary of State, August 1941)
Royal Engineers Museum: Royal Engineers during the Second World War (airfield repair)
Shoreham Aircraft Museum
Tangmere Military Aviation Museum
Kent Battle of Britain Museum
ADLG Visits RAF Uxbridge Battle of Britain Operations Room
British Invasion Defences
The Falco and Regia Aeronautica in the Battle of Britain
History of North Weald Airfield
The Royal Mint Memorial website
New Zealanders in the Battle of Britain (NZHistory.net.nz)
New Zealanders in the Battle of Britain (official history)
Interactive map showing Battle of Britain airfields and squadrons by date
https://web.archive.org/web/20161220201254/http://garry-campion.com/
Guadalcanal campaign
The Guadalcanal campaign, also known as the Battle of Guadalcanal and codenamed Operation Watchtower by the United States, was an Allied offensive against forces of the Empire of Japan in the Solomon Islands during the Pacific Theater of World War II. It was fought between 7 August 1942 and 9 February 1943, and involved major land and naval battles on and surrounding the island of Guadalcanal. It was the first major Allied land offensive against Japan during the war.
In summer 1942, the Allies decided to mount major offensives in New Guinea and the Solomon Islands with the objectives of defending sea lines to Australia and eventually attacking the major Japanese base at Rabaul on New Britain. The Guadalcanal operation was under the command of Robert L. Ghormley, reporting to Chester W. Nimitz, while the Japanese defense consisted of the Combined Fleet under Isoroku Yamamoto and the Seventeenth Army under Harukichi Hyakutake.
On 7 August 1942, Allied forces, predominantly U.S. Marines, landed on Guadalcanal, Tulagi, and Florida Island in the southern Solomon Islands. The Japanese defenders, who had occupied the islands since May 1942, offered little initial resistance, but the capture of Guadalcanal soon turned into a lengthy campaign as both sides added reinforcements. The Allies captured and completed Henderson Field on Guadalcanal and established a defense perimeter. The Japanese made several attempts to retake the airfield, including in mid-September and in late October. The campaign also involved major naval battles, including the Battles of Savo Island, the Eastern Solomons, Cape Esperance, and the Santa Cruz Islands, culminating in a decisive Allied victory at the Naval Battle of Guadalcanal in mid-November. Further engagements took place at the Battle of Tassafaronga and Battle of Rennell Island. In December, the Japanese decided to abandon Guadalcanal to focus on the defense of the other Solomon Islands, and evacuated their last forces by 9 February 1943.
The campaign followed the successful Allied defensive actions at the Battle of the Coral Sea and the Battle of Midway in May and June 1942. Along with the battles at Milne Bay and Buna–Gona on New Guinea, the Guadalcanal campaign marked the Allies' transition from defensive operations to offensive ones, and effectively allowed them to seize the strategic initiative in the Pacific theater from the Japanese. The campaign was followed by other major Allied offensives in the Pacific, most notably: the Solomon Islands campaign, New Guinea campaign, the Gilbert and Marshall Islands campaign, the Mariana and Palau Islands campaign, the Philippines campaign of 1944 to 1945, and the Volcano and Ryukyu Islands campaign prior to the surrender of Japan in August 1945.
Background
Strategic considerations
On 7 December 1941, Japanese forces attacked the United States Pacific Fleet at Pearl Harbor, Hawaii. The attack killed almost 2,500 people and crippled much of the U.S. battleship fleet, precipitating formal declarations of war between the two nations the next day. The initial goals of the Japanese leadership were to neutralize the U.S. Navy, seize territories rich in natural resources, and establish strategic military bases with which to defend Japan's empire in the Pacific Ocean and Asia. Initially, Japanese forces captured the Philippines, Thailand, Malaya, Singapore, Burma, the Dutch East Indies, Wake Island, Gilbert Islands, New Britain and Guam. The U.S. was joined in the war against Japan by several of the Allied powers, including the British Empire and the Dutch government-in-exile, both of which had also been attacked by Japan.Murray pp. 169–195
The Japanese made two attempts to continue their offensive and extend their outer defensive perimeter in the south and central Pacific to a point at which they could threaten Australia, Hawaii, and the U.S. west coast. The first offensive was thwarted in the naval Battle of the Coral Sea, which was a tactical stalemate but a strategic Allied victory in retrospect. It was the Allies' first major victory against the Japanese and significantly reduced the offensive capability of Japan's carrier forces. However, the battle did not temper Japan's audacious offensive military posture for several crucial months, with Japanese forces attempting a failed attack on Port Moresby over the Kokoda track. The second major Japanese offensive was stopped at the Battle of Midway. Both sides suffered significant losses in carrier aircraft and aircrew during these engagements. Crucially, while the Americans were able to reconstitute their naval air strength in relatively short order, the Japanese ultimately proved unable to do so. These strategic victories allowed the Allies to transition to a more offensive stance in the Pacific theater, and attempt to seize the strategic initiative from Japan.Murray p. 196
The Allies chose the Solomon Islands (a protectorate of the United Kingdom), specifically the southern islands of Guadalcanal, Tulagi and Florida Island, as their first target, designated Task One (codename Pestilence), with the initial objectives of occupying the Santa Cruz Islands (codename Huddle), Tulagi (codename Watchtower), and "adjacent positions".Dyer v. 1, p. 261; Loxton, p. 3 Guadalcanal (codename Cactus), which eventually became the focus of the operation, was not even mentioned in the early directive, and only later took on the operation name Watchtower. Tulagi, although small, had a large natural harbor that was ideal for a float-plane base; Florida Island also had to be taken, as it dominated Tulagi. Guadalcanal, much larger than the other two islands and located to the south across the soon-to-be-named Ironbottom Sound, was added when it was discovered the Japanese were constructing an airbase there.
The Imperial Japanese Navy (IJN) had occupied Tulagi in May 1942 and had constructed a seaplane base nearby. Allied concern grew when, in early July, the IJN began constructing a large airfield at Lunga Point on nearby Guadalcanal. From such a base, Japanese long-range bombers could threaten the sea lines of communication and maritime trade & transportation routes from the west coast of the Americas to the populous east coast of Australia. By August, the Japanese had about 900 naval troops on Tulagi and nearby islands, and 2,800 personnel (including 2,200 Korean forced laborers and trustees, as well as Japanese construction specialists) on Guadalcanal. These bases were meant to protect Japan's major naval base at Rabaul, threaten Allied supply and communication lines, and establish a staging area for a planned offensive against Fiji, New Caledonia and Samoa (Operation FS). The Japanese planned to deploy 45 fighters and 60 bombers to Guadalcanal. In the overall strategy for 1942, these aircraft would provide ground-based air cover for Japanese naval forces advancing farther into the South Pacific.Alexander, p. 72; Frank, pp. 23–31, 129, 628; Smith, p. 5; Bullard, p. 119; Lundstrom, p. 39. The Japanese aircraft assigned to Guadalcanal were to come from the 26th Air Flotilla, then located at bases in the Central Pacific (Bullard, p. 127)
The Allied plan to invade the southern Solomons was conceived by U.S. Admiral Ernest King, Commander in Chief, United States Fleet. He proposed the offensive in order to deny the use of the islands to the Japanese as bases from which the supply routes between the United States and Australia could be threatened, and to use them as starting points for further Allied offensives in the South Pacific. With U.S. President Franklin D. Roosevelt's tacit consent, King also advocated for an invasion of Guadalcanal. Due to the Roosevelt administration's support for Great Britain's proposal that priority be given to defeating Germany before Japan, Allied commanders in the Pacific theater had to compete with the European theater for personnel and resources.See Morison, Breaking the Bismarcks Barrier pp. 3–5.
An early obstacle was the desire of both the U.S. Army and the Roosevelt administration to initiate offensive action in Europe prior to a large-scale operation in the Pacific.Dyer v. 1, p. 259 In addition, it was initially unclear who would command the campaign: Tulagi lay in the area under the command of General Douglas MacArthur, whereas the Santa Cruz Islands lay in Admiral Chester W. Nimitz's Pacific Ocean Area, which would also supply almost all Allied offensive forces that would be staged, supplied and covered from that area.Dyer v. 1, pp. 259–260 Both problems were overcome, and the Chief of Staff of the U.S. Army, General George C. Marshall, gave the operation his full support, despite MacArthur's command being unable to directly assist in the operation and the U.S. Navy taking full operational responsibility.Dyer v. 1, p. 260 As a result, and in order to preserve the unity of command, the boundary between MacArthur's South West Pacific Area and Nimitz's Pacific Ocean Area was shifted to the west, effective from 1 August 1942.
Chief of Staff to the Commander in Chief William D. Leahy established two goals for 1942–1943: first, that Guadalcanal would be taken, in conjunction with an Allied offensive in New Guinea under MacArthur; and second, that the Admiralty Islands and Bismarck Archipelago, including the major Japanese base at Rabaul, would be captured as well. The directive held that the eventual goal was the American reconquest of the Philippines, from which American forces had been evicted in early 1942.Morison, The Struggle for Guadalcanal p. 12; Frank, pp. 15–16; Miller, Cartwheel, p. 5. The U.S. Joint Chiefs of Staff created the South Pacific theater, with Vice Admiral Robert L. Ghormley taking command on 19 June, to direct the offensive in the Solomons. Nimitz, based at Pearl Harbor, was designated as overall Allied commander-in-chief for Allied forces in the Pacific.Murray, pp. 199–200; Jersey, p. 85; and Lundstrom, p. 5.
Task force
In preparation for the offensive in the Pacific in May 1942, U.S. Marine Major General Alexander Vandegrift was ordered to move his 1st Marine Division from the United States to New Zealand. Other Allied land, naval and air units were sent to establish or reinforce bases in Fiji, Samoa, New Hebrides and New Caledonia.Loxton, p. 5; Miller, p. 11.
The island of Espiritu Santo, in the New Hebrides, was selected as the headquarters and primary staging ground for the offensive, codenamed Operation Watchtower, with the commencement date set for 7 August. At first, only the seizure of Tulagi and the Santa Cruz Islands was planned, omitting a landing on Guadalcanal. After Allied reconnaissance discovered Japanese airfield construction efforts on Guadalcanal, its capture was added to the plan, and planned landings on the Santa Cruz islands were (eventually) abandoned.Frank pp. 35–37, 53 The Japanese were aware, via signals intelligence, of the large-scale movement of Allied forces in the South Pacific Area, but concluded that the Allies were reinforcing either Australia or Port Moresby on the southern coast of New Guinea.Bullard p. 122
The Watchtower force, numbering 75 warships and transports (including vessels from the U.S. and Australia), assembled near Fiji on 26 July and conducted a single rehearsal landing prior to leaving for Guadalcanal on 31 July.Morison, The Struggle for Guadalcanal p. 15; McGee, pp. 20–21. The commander of the Allied expeditionary force was U.S. Vice Admiral Frank Fletcher, Commander of Task Force 16, who flew his flag from an aircraft carrier. Commanding the amphibious forces was U.S. Rear Admiral Richmond K. Turner. Vandegrift led the 16,000 Allied (primarily U.S. Marine) infantry earmarked for the amphibious landings.Frank pp. 57, 619–621 The troops sent to Guadalcanal were fresh from military training, armed with legacy bolt-action M1903 Springfield rifles and a meager 10-day supply of ammunition. Because of the need to get the troops into battle quickly, the Allied planners had reduced their supplies from 90 days to only 60. The men of the 1st Marine Division began referring to the coming battle as "Operation Shoestring".Ken Burns: The War, Episode 1
Events
Landings
Bad weather allowed the Allied expeditionary force to arrive unseen by the Japanese on the night of 6 August and morning of 7 August, taking the defenders by surprise. This is occasionally referred to as the "Midnight Raid on Guadalcanal".McGee, p. 21, Bullard, pp. 125–126 A Japanese patrol aircraft from Tulagi had searched the general area that the Allied invasion fleet was moving through, but was unable to spot the Allied fleet due to severe storms and heavy clouds.Bullard; Masaichiro Miyagawa, a Japanese soldier on Tanambogo who was captured by American forces (one of only four of the 3,000 Japanese to survive the battle), wrote that every day four Japanese patrol planes were sent out from Florida Island in a fan-shaped pattern, flying northeast, east, southeast and south of Florida Island to look for enemy activity. Because of poor weather conditions, he said the invading fleet escaped detection, and that if the invasion fleet had been spotted a day or two prior to 7 August, the Allied convoy, with its slow-moving transports, probably would have been destroyed. Guadalcanal Echoes, Volume 21, No. 1 Winter 2009/2010 Edition, p. 8 (Publication of the Guadalcanal Campaign Veterans, [American veterans group]) The landing force split into two groups, with one group assaulting Guadalcanal and the other Tulagi, Florida, and other nearby islands.Frank, p. 60; Jersey, p. 95. The landing force, designated Task Force 62, included six heavy cruisers, two light cruisers, 15 destroyers, 13 transports, six cargo ships, four destroyer transports, and five minesweepers. Allied warships bombarded the invasion beaches, while U.S. carrier aircraft bombed Japanese positions on the target islands and destroyed 15 Japanese seaplanes at their base near Tulagi.Hammel, Carrier Clash, pp. 46–47; Lundstrom, p. 38.
Tulagi and two nearby small islands, Gavutu and Tanambogo, were assaulted by 3,000 U.S. Marines under the command of Brigadier General William Rupertus.Frank p. 51 The 886 IJN personnel manning the naval and seaplane bases on the three islands fiercely resisted the Marine landings.Frank, p. 50. The IJN personnel included Japanese and Korean construction specialists as well as trained combat troops. With some difficulty, the Marines secured all three islands: Tulagi on 8 August, and Gavutu and Tanambogo by 9 August.Shaw, pp. 8–9; McGee, pp. 32–34. The Japanese defenders were killed almost to the last man,Frank, p. 79. Approximately 80 Japanese personnel escaped to Florida Island, where they were found and killed by Marine patrols over the next two months. and the Marines suffered 248 casualties.
In contrast to Tulagi, Gavutu, and Tanambogo, the landings on Guadalcanal encountered much less resistance. At 09:10 on 7 August, Vandegrift and 11,000 U.S. Marines came ashore on Guadalcanal between Koli Point and Lunga Point. Advancing towards Lunga Point, they encountered scant Japanese resistance and secured the airfield by 16:00 on 8 August. The Japanese naval construction units and combat troops, under the command of Captain Kanae Monzen, had panicked after coming under naval bombardment and aerial bombing, and had abandoned the airfield and fled west to the Matanikau River and Point Cruz area. Japanese troops left behind food, supplies, intact construction equipment and vehicles, and 13 dead at the airfield and surrounding area.Jersey, pp. 113–115, 190, 350; Morison, The Struggle for Guadalcanal p. 15; and Frank, pp. 61–62, 81.
During the landing operations on 7 and 8 August, Rabaul-based Japanese naval aircraft under the command of Yamada Sadayoshi attacked the Allied amphibious forces several times, setting a transport on fire (it sank two days later) and heavily damaging a destroyer.Loxton pp. 90–103 Over the course of two days of air attacks, Japanese air units lost 36 aircraft, while the U.S. lost 19 (including 14 carrier aircraft), both in combat and to accidents.Frank p. 80
After these aerial clashes, Fletcher became concerned about the unexpectedly high losses to his carrier fighter aircraft strength, anxious about the threat to his carriers from further Japanese air attacks, and worried about his ships' remaining fuel supply. Fletcher withdrew from the Solomon Islands area with his carrier task force on the evening of 8 August.Hammel, Carrier Clash, pp. 99–100; Loxton, pp. 104–105. Loxton, Frank p. 94; and Morison (The Struggle for Guadalcanal p. 28) contend Fletcher's fuel situation was not at all critical, but Fletcher implied it was in order to provide further justification for his withdrawal from the battle area. In response to the loss of carrier-based air cover, Turner decided to withdraw his ships from Guadalcanal, even though less than half of the supplies and heavy equipment needed by the troops ashore had been unloaded.Hammel, Carrier Clash, p. 100 Turner planned to unload as many supplies as possible on Guadalcanal and Tulagi throughout the night of 8 August, and then depart with his ships early on 9 August.Morison The Struggle for Guadalcanal p. 31
Battle of Savo Island
As the transports continued to unload on the night of 8–9 August, two groups of screening Allied cruisers and destroyers, under the command of British Rear Admiral Victor Crutchley, were surprised and defeated by a Japanese force of seven cruisers and one destroyer from the 8th Fleet based at Rabaul and Kavieng, commanded by Japanese Vice Admiral Gunichi Mikawa. The 8th fleet had been sighted at least five times over the course of the previous days, both by Allied submarines and aerial reconnaissance, but a combination of misidentification of ships and the Allied leadership's dismissal of Japanese night fighting capability contributed to an air of complacence and ignorance among the Allied surface fleet that proved disastrous. Japanese submarine activity and air attack continued to be the main source of concern to Turner and his staff, not the threat of Japanese surface action.
As a result, during the Battle of Savo Island on the night of 9 August, Mikawa's force was able to surprise and sink one Australian and three American cruisers, as well as damage another American cruiser and two destroyers. The Japanese suffered only moderate damage to one cruiser.Hornfischer pp. 44–92 Despite this success, Mikawa was unaware that Fletcher was preparing to withdraw with the U.S. aircraft carriers, and immediately retired to Rabaul without attempting to attack the (now defenseless) Allied transports, fearing daytime air attacks on his vessels once the cover of darkness had been lost. Bereft of his carrier air cover and concerned about Japanese submarine and surface attacks against his degraded fleet, Turner withdrew his badly mauled naval forces from the area on the evening of 9 August. This left the Marines ashore understrength (as some transports in the Allied fleet had retreated without disembarking all of their troops), and without much of their heavy equipment and provisions. Mikawa's decision not to attempt to destroy the Allied transport ships when he had the opportunity proved to be a crucial strategic mistake.Morison The Struggle for Guadalcanal pp. 19–59
Initial ground operations
The 11,000 Marines on Guadalcanal initially concentrated on forming a loose defensive perimeter centered around Lunga Point and the airfield, moving what supplies had been brought ashore within the perimeter, and completing the construction of the airfield. Over four days of intense effort, the supplies were moved from the landing beaches to dispersed dumps within the defensive perimeter. Work began on the airfield immediately, mainly using captured Japanese equipment. On 12 August the airfield was named Henderson Field after Lofton R. Henderson, a Marine aviator who was killed during the Battle of Midway. By 18 August the airfield was ready for operation.Smith, pp. 14–15. At this time there were exactly 10,819 Marines on Guadalcanal (Frank, pp. 125–127). Five days' worth of food had been landed from the transports, which, along with captured Japanese provisions, gave the Marines a total of 14 days' supply of food.Smith pp. 16–17 To conserve supplies, the troops were limited to two meals per day.Shaw p. 13
Allied troops suffered from a severe strain of dysentery soon after the landings, with one in five Marines afflicted by mid-August.Smith p. 26 Although some of the Korean construction workers surrendered to the Marines, most of the remaining Japanese and Korean personnel gathered just west of the Lunga perimeter on the west bank of the Matanikau River and subsisted mainly on coconuts. A Japanese naval outpost was also located at Taivu Point, about 35 kilometers (22 mi) east of the Lunga perimeter. On 8 August, a Japanese destroyer from Rabaul delivered 113 naval reinforcement troops to the Matanikau position.Smith pp. 20, 35–36
Goettge patrol
On the evening of 12 August, a 25-man U.S. Marine patrol, led by the division intelligence officer (D-2), Lieutenant Colonel Frank Goettge, and consisting primarily of intelligence personnel, landed by boat west of the U.S. Marine Lunga perimeter, east of Point Cruz and west of the Japanese perimeter at the Matanikau River, on a reconnaissance mission with a secondary objective of contacting a group of Japanese troops that U.S. forces believed might be willing to surrender. Soon after the patrol landed, a nearby platoon of Japanese naval troops attacked it and almost completely wiped it out.Zimmerman, pp. 58–60; Smith, p. 35; and Jersey, pp. 196–199. Goettge was one of the first killed. Only three members of the patrol made it back to the Lunga Point perimeter. Seven Japanese were killed in the skirmish. More details of the event are at Clark, Jack, "Goettge Patrol", Pacific Wreck Database and Broderson, Ben, "Franklin native recalls key WWII battle".
In response, on 19 August, Vandegrift sent three companies of the U.S. 5th Marine Regiment to attack the Japanese troop concentration west of the Matanikau. One company attacked across the sandbar at the mouth of the Matanikau River while another crossed the river inland and attacked the Japanese forces located in Matanikau village. The third landed by boat further west and attacked Kokumbona village. After briefly occupying the two villages, the three Marine companies returned to the Lunga perimeter, having killed about 65 Japanese soldiers while losing four Marines. This action, sometimes referred to as the "First Battle of the Matanikau", was the first of several major actions around the Matanikau River during the campaign.Frank, pp. 132–133; Jersey, p. 203; and Smith, pp. 36–42. The 500 Japanese involved were from the 84th Guard Unit, 11th and 13th Construction Units, and the recently arrived 1st Camp Relief Unit. After this engagement the Japanese naval personnel relocated deeper into the hills in the interior of the island.
On 20 August, the escort carrier Long Island delivered a squadron of 19 Grumman F4F Wildcats and a squadron of 12 Douglas SBD Dauntlesses to Henderson Field. The airfield's rudimentary nature meant that carrier aircraft, designed for rough landings on flight decks at sea, were more suited for use on Henderson Field than ground-based planes. The aircraft based at Henderson became known as the "Cactus Air Force", after the Allied codename for Guadalcanal, Cactus. The Marine fighters went into action the next day, which also saw the first of what would become almost-daily Japanese bomber air raids on the airfield. On 22 August five U.S. Army Bell P-400 Airacobras and their pilots arrived at Henderson Field.Shaw p. 18
Battle of the Tenaru
In response to the Allied landings, the Japanese Imperial General Headquarters assigned the task of retaking Guadalcanal to the Imperial Japanese Army's (IJA) 17th Army, a corps-sized command based at Rabaul under the command of Lieutenant General Harukichi Hyakutake. The army was to be supported by Japanese naval units, including the Combined Fleet under the command of Isoroku Yamamoto, which was headquartered at Truk. The 17th Army, at that time heavily involved in the Japanese campaign in New Guinea, had only a few units available to allocate to Guadalcanal. Of these, the 35th Infantry Brigade under Major General Kiyotake Kawaguchi was at Palau, the 4th (Aoba) Infantry Regiment under Major General Yumio Nasu was in the Philippines and the 28th (Ichiki) Infantry Regiment, under the command of Colonel Kiyonao Ichiki, was berthed on transport ships near Guam. These units began to move towards Guadalcanal via Truk and Rabaul immediately, but Ichiki's regiment, being the closest, arrived in the area first. A "First Element" of Ichiki's unit, consisting of about 917 soldiers, was landed by IJN destroyers at Taivu Point, east of the Lunga perimeter, after midnight on 19 August, then conducted a night march west toward the Marine perimeter.Frank p. 147Smith, p. 88; Evans, p. 158; and Frank, pp. 141–143. The Ichiki regiment was named after its commanding officer and was part of the 7th Division from Hokkaido. The Aoba regiment, from the 2nd Division, took its name from Aoba Castle in Sendai, because most of the soldiers in the regiment were from Miyagi Prefecture (Rottman, Japanese Army, p. 52). Ichiki's regiment had been assigned to invade and occupy Midway, but were on their way back to Japan after the invasion was cancelled following the Japanese defeat in the Battle of Midway. Although some histories state that Ichiki's regiment was at Truk, Raizō Tanaka, in Evans' book, states that he dropped off Ichiki's regiment at Guam after the Battle of Midway. Ichiki's regiment was subsequently loaded on ships for transport elsewhere but were rerouted to Truk after the Allied landings on Guadalcanal. Robert Leckie, who was at Guadalcanal, remembers the events of the Battle of the Tenaru in his book Helmet for My Pillow: "Everyone had forgotten the fight and was watching the carnage, when shouting swept up the line. A group of Japanese dashed along the opposite river edge, racing in our direction. Their appearance so surprised everyone that there were no shots." Leckie, pp. 82–83
Underestimating the strength of Allied forces on Guadalcanal, Ichiki's unit conducted a nighttime frontal assault on Marine positions at Alligator Creek (often called the "Ilu River" on U.S. Marine maps) on the east side of the Lunga perimeter in the early morning hours of 21 August. Jacob Vouza, a Solomon Islands Coastwatcher scout, warned the Americans of the impending attack minutes before it started; the attack was defeated with heavy losses to the Japanese. After daybreak, the Marine units counterattacked Ichiki's surviving troops, killing many more of them. The dead included Ichiki; it has been reported that he died by seppuku after realizing the magnitude of his defeat.Steinberg, Rafael (1978). Island Fighting. Time Life Books. p. 30 In total, 789 of the original 917 members of the Ichiki Regiment's First Element were killed in the battle. About 30 survived the battle and joined Ichiki's rear guard of about 100, and these 128 Japanese returned to Taivu Point, notified 17th Army headquarters of their defeat and awaited further reinforcements and orders from Rabaul.Frank, pp. 156–158, 681; and Smith, p. 43.
Battle of the Eastern Solomons
As the Tenaru battle was ending, more Japanese reinforcements were already on their way to Guadalcanal. Yamamoto had organized a powerful naval expeditionary force, with the goal of destroying any American fleet units in the Solomons and subsequently eliminating Allied ground forces at Henderson Field. This force sortied from Truk on 23 August. Several other IJN units carrying reinforcements and supplies, and ships tasked with naval bombardment of the island, sortied from both Truk and Rabaul. Three slow transport ships departed from Truk on 16 August, carrying the remaining 1,400 soldiers from Ichiki's (28th) Infantry Regiment plus 500 naval marines from the 5th Yokosuka Special Naval Landing Force.Smith pp. 33–34 The transports were guarded by 13 warships commanded by Japanese Rear Admiral Raizō Tanaka, who planned to land the troops on Guadalcanal on 24 August.Zimmerman, p. 70; Frank, p. 159. To cover the landing of these troops and provide support for the operation to retake Henderson Field from Allied forces, Yamamoto directed Chūichi Nagumo to sortie with a carrier force from Truk on 21 August and sail toward the southern Solomon Islands. Nagumo's force included three carriers and 30 other warships.Hammel, Carrier Clash, pp. 124–125, 157 Yamamoto would send the light carrier Ryūjō ahead of the rest of the Japanese fleet to act as bait to draw the American aircraft into combat. The aircraft from the two fleet carriers would then attack the American fleet while it lacked air cover.
Simultaneously, the U.S. carrier task force under Fletcher approached Guadalcanal to counter the Japanese offensive efforts.Hammel, Carrier Clash, p. 147. On 24 August, the two carrier forces located and launched strikes against each other. The Japanese had two fleet carriers, Shōkaku and Zuikaku, as well as the light carrier Ryūjō, with a total of 177 carrier-based aircraft. The American forces had two carriers, the Saratoga and Enterprise, and their 176 aircraft. The Japanese light carrier Ryūjō, offered as bait to Allied naval aircraft, was hit by several bombs and an aerial torpedo; she was abandoned by her crew and sank that night. The two Japanese fleet carriers were not attacked, but Japanese aircraft successfully attacked Enterprise, badly damaging her flight deck. Both fleets subsequently retreated from the area. The Japanese lost the Ryūjō, along with dozens of carrier aircraft and most of their aircrew; the Americans lost a handful of planes and suffered damage to Enterprise requiring two months to repair in Hawaii.Frank, pp. 166–174; Lundstrom, p. 106 Unable to safely land on Enterprise's ruined flight deck, many of her remaining aircraft flew to Guadalcanal and reinforced the beleaguered American air units at Henderson Field.
Concurrently with the carrier air battle, on 25 August, Tanaka's convoy, headed by his flagship, the light cruiser Jintsū, was attacked near Taivu Point by Cactus Air Force aircraft based at Henderson Field. After suffering heavy damage in the attack, the convoy was forced to divert to the Shortland Islands in the northern Solomons in order to transfer the surviving troops to destroyers for later delivery to Guadalcanal.Hara, pp. 118–119; and Hough, p. 293. One of the transports was sunk; though the exact number of 5th Yokosuka troops killed in her sinking is unknown, the losses were considered substantial. The older destroyer Mutsuki was so badly damaged that she had to be scuttled, and several other Japanese warships were damaged, including Jintsū herself. At this point, Tanaka withdrew and rescheduled the supply run for the night of 28 August, to be carried out by the remaining destroyers. Japanese air raids against the Allied positions on Guadalcanal continued largely unabated during this time.
On 25 August, the American carrier Wasp, having refueled, took up position east of Guadalcanal in anticipation of a Japanese move into the area. No Japanese forces approached, however, and Wasp was left idle.
The Americans had won a modest tactical victory with the destruction of the Ryūjō, destroying some 75 Japanese aircraft while losing 25 of their own. The forced withdrawal of Tanaka's troop convoy also bought valuable breathing room for the embattled Allied troops on Guadalcanal. While Enterprise was taken out of action for repairs, she was able to return to sea later in the campaign. The temporary loss of Enterprise was offset by the timely arrival of the carrier Hornet. Additionally, the reinforcement of Henderson Field by Enterprise's orphaned carrier aircraft bolstered ground-based Allied air strength on the island, while Japanese pilots based at Rabaul were forced to undertake a grueling day-long round-trip flight in order to make their attacks. These factors combined to render daylight supply runs to Guadalcanal impossible for the Japanese. Only weeks before this, the Japanese had total control of the sea in the region; now they were forced to make supply runs only under the cover of darkness. Japanese naval commanders began to recognize the reality that their ships could not safely operate in the Solomons in the daytime without first suppressing Allied airpower at Henderson Field.
Transport Division 12
For six weeks, from early August to the end of September, the U.S. Navy largely avoided the waters off Tulagi and Guadalcanal, and was ordered not to resupply the Marines or provide escort duty for slow transport ships, as American naval commanders feared a repeat of the disastrous defeat at Savo Island suffered by Australian and American surface vessels on 9 August. Transport Division 12 (Trans Div 12), consisting of six obsolete World War I-era destroyers converted to high-speed transports, provided the most heavily armed U.S. surface ships operating in Ironbottom Sound during this time. Their torpedo tubes had been removed to make room for landing craft, allowing each ship to carry over 100 extra Marines for rapid transport. They landed the first Marines onto Tulagi and later on Guadalcanal, conducted special operations missions with Marine Raiders, participated in anti-submarine warfare, and provided covering fire for the Marines on Guadalcanal. They also directly delivered crucial supplies to the Marines that helped to construct Henderson Field and to maintain the aircraft stationed there.
On 30 August, the destroyer-transport Colhoun was bombed by Japanese high-altitude horizontal bombers and sank with the loss of 51 men. On the night of 4–5 September, Little and Gregory had finished landing a complement of Marine Raiders back onto Guadalcanal and proceeded to patrol the area for submarines, which had been surfacing and shelling the Marines nightly. Three Japanese destroyers, unaware that enemy surface ships were patrolling the area, positioned themselves to attack Henderson Field. The two American destroyer-transports spotted them and initially identified them as a submarine. A U.S. patrol plane also misidentified the destroyers as an enemy submarine in the darkness and dropped flares over the area, inadvertently silhouetting Little and Gregory. The Japanese destroyers immediately fired on and sank the overmatched American ships. Sixty-five men from Little and 24 from Gregory were killed, including the commanding officer of Transport Division 12 and the commanding officers of both ships.
Air battles over Henderson Field and strengthening of the Lunga defenses
Throughout August, small numbers of American aircraft and their crews continued to arrive at Guadalcanal. By the end of August, 64 planes of various types were stationed at Henderson Field.Zimmerman p. 74 On 3 September, the commander of the 1st Marine Aircraft Wing, U.S. Marine Brigadier General Roy Geiger, arrived with his staff and took command of all air operations at Henderson Field.Hough p. 297 Air battles between the Allied aircraft at Henderson and Japanese bombers and fighters from Rabaul continued almost daily. Between 26 August and 5 September, the U.S. lost about 15 aircraft while the Japanese lost approximately 19. More than half of the U.S. aircrews shot down were rescued; most of the Japanese aircrews were not. The eight-hour round-trip flight from Rabaul to Guadalcanal seriously hampered Japanese efforts to establish air superiority over Henderson Field. Throughout the campaign, Rabaul-based Japanese aircrew had to fly almost 600 miles before combat with Allied pilots operating in the immediate area of Henderson Field. The Japanese navy also did not systematically rotate their veteran pilots out of combat zones. This steadily exhausted and depleted Japanese air power in the region. From a strategic standpoint, the overall quality of Japanese aviation in the Solomons deteriorated as worn-out veteran pilots were replaced by inexperienced aircrew with minimal combat experience. Australian coastwatchers on Bougainville and New Georgia islands were often able to provide Allied forces on Guadalcanal with advance notice of approaching Japanese air strikes, allowing the U.S. fighters time to take off and position themselves to attack the Japanese aircraft as they approached. The Japanese air forces were slowly losing a war of attrition in the skies above Guadalcanal.Frank, pp. 194–213; and Lundstrom, p. 45. By comparison, the distance separating Lunga Point from Rabaul was roughly the same as that separating Berlin from Allied air bases in eastern England. William F. Halsey, later a Fleet Admiral in the United States Navy, paid tribute to the Australian coastwatchers: "The Coastwatchers saved Guadalcanal, and Guadalcanal saved the South Pacific."
During this time, Vandegrift continued to direct efforts to strengthen and improve the defenses of the Lunga perimeter. Between 21 August and 3 September, he relocated three Marine battalions, including the 1st Raider Battalion, under Merritt A. Edson (Edson's Raiders), and the 1st Parachute Battalion from Tulagi and Gavutu to Guadalcanal. These units added about 1,500 troops to Vandegrift's original 11,000 men defending Henderson Field.Morison, The Struggle for Guadalcanal p. 15; and Hough, p. 298. The 1st Parachute Battalion, which had suffered heavy casualties in the Battle of Tulagi and Gavutu–Tanambogo in August, was placed under Edson's command.Smith, p. 103; Hough, p. 298.
The other relocated battalion, the 1st Battalion, 5th Marine Regiment, was landed by boat west of the Matanikau near Kokumbona village on 27 August with the mission of attacking Japanese units in the area, much as in the first Matanikau action of 19 August. The Marines were impeded by difficult terrain, hot sun, and well-emplaced Japanese defenses. The next morning, the Marines found that the Japanese defenders had departed during the night, and the battalion returned to the Lunga perimeter by boat.Zimmerman, pp. 78–79 These actions resulted in the loss of 20 Japanese and 3 Marines.Frank, Guadalcanal, p. 197.
Small Allied naval convoys arrived at Guadalcanal on 23 and 29 August, and 1 and 8 September to provide the Marines at Lunga with more food, ammunition, aircraft fuel, aircraft technicians, and other supplies. The convoy on 1 September also brought 392 Seabees to maintain and improve Henderson Field.Smith, pp. 79, 91–92, 94–95. In addition, on 3 September, Marine Aircraft Group 25 began airlifting high-priority cargo, including personnel, aviation gasoline, munitions, and other supplies, to Henderson Field.Armstrong, Marine Air Group 25 and SCAT, pp. 23–26.
Tokyo Express
By 23 August, Kawaguchi's 35th Infantry Brigade reached Truk and was loaded onto slow transport ships for the rest of the trip to Guadalcanal. The damage done to Tanaka's convoy during the Battle of the Eastern Solomons caused the Japanese to reconsider trying to deliver more troops to Guadalcanal via slow transport. Instead, the ships carrying Kawaguchi's soldiers were rerouted to Rabaul. From there, the Japanese planned to deliver Kawaguchi's unit to Guadalcanal using fast destroyers at night, staging through a Japanese naval base in the Shortland Islands. The Japanese destroyers were usually able to make round trips down "The Slot" (New Georgia Sound) to Guadalcanal and back in a single night throughout the campaign, which minimized their exposure to daytime Allied air attack. These runs became known as the "Tokyo Express" to Allied forces, and were labeled "rat transportation" by the Japanese.Griffith, p. 113; Frank, pp. 198–199, 205, 266. The term "rat transportation" was used because, like a rat, the Japanese ships were active only at night. The 35th Infantry Brigade, from the 18th Division, contained 3,880 troops and was centered on the 124th Infantry Regiment with various attached supporting units (Alexander, p. 139). While troops could be transported in this manner, most of the heavy equipment such as heavy artillery and vehicles, and supplies such as food and ammunition, could not. In addition, this activity tied up destroyers that the IJN desperately needed to escort convoys elsewhere in the Pacific. The Byzantine nature of the Japanese navy's command setup in the region exacerbated these logistical problems; Tanaka was receiving contradictory orders from the Combined Fleet headquarters and two rival subordinate naval commands at Rabaul, the Eleventh Air Fleet and the Eighth Fleet. Regardless, Tanaka's persistent destroyer operations gradually increased the strength of the forces available to Kawaguchi on the island. A combination of inability and unwillingness prevented Allied naval commanders from frequently challenging Japanese naval forces at night, so the Japanese effectively controlled the seas around the Solomon Islands after sunset. Conversely, the growing Allied airpower at Henderson Field (which was further reinforced on 11–12 September by 24 Wildcats that had been made homeless by the torpedoing of the carrier Saratoga by IJN submarine I-26 at the end of August) meant that any Japanese vessel within range of Guadalcanal in daylight was at great risk from air attack. This tactical situation, wherein Japanese naval forces operated freely at night and Allied aircraft enjoyed local air superiority during the day, persisted for the next several months of the campaign.Morison The Struggle for Guadalcanal pp. 113–114
Between 29 August and 4 September, Japanese light cruisers, destroyers, and patrol boats were able to land almost 5,000 troops at Taivu Point, including most of the 35th Infantry Brigade, much of the Aoba (4th) Regiment, and the rest of Ichiki's regiment. General Kawaguchi, who landed at Taivu Point on 31 August, was placed in command of all Japanese forces on Guadalcanal.Frank, pp. 201–203; Griffith, pp. 116–124; and Smith, pp. 87–112. A barge convoy took another 1,000 soldiers of Kawaguchi's brigade, under the command of Colonel Akinosuke Oka, to Kamimbo, west of the Lunga perimeter.Frank pp. 218–219
Battle of Edson's Ridge
On 7 September, Kawaguchi issued his attack plan to "rout and annihilate the enemy in the vicinity of the Guadalcanal Island airfield". Kawaguchi's plan called for the forces under his command, split into three divisions, to approach the Lunga perimeter inland, culminating with a surprise night attack. Oka's forces would attack the perimeter from the west, while Ichiki's Second Echelon, renamed the Kuma Battalion, would attack from the east. The main attack would be conducted from the jungle south of the Lunga perimeter by Kawaguchi's "Center Body", numbering 3,000 men in three battalions.Frank, pp. 219–220; and Smith, pp. 113–115, 243. Most of the men in Ichiki's second echelon were from Asahikawa, Hokkaidō. "Kuma" refers to the brown bears that lived in that area. By 7 September, most of Kawaguchi's troops had departed Taivu to begin marching towards Lunga Point along the coastline. About 250 Japanese troops remained behind to guard the brigade's supply base at Taivu.Frank, p. 220; Smith, p. 121.
Meanwhile, native scouts under the direction of Martin Clemens, a coastwatcher officer in the British Solomon Islands Protectorate Defence Force and the British district officer for Guadalcanal, brought reports to the U.S. Marines of Japanese troops at Taivu near the village of Tasimboko. Edson subsequently planned a raid on the Japanese troop concentration at Taivu.Zimmerman, p. 80; Griffith, p. 125. On 8 September, after being dropped off near Taivu by boat, Edson's men captured Tasimboko and forced the Japanese defenders to retreat into the jungle.Hough, pp. 298–299; Frank, pp. 221–222; Smith, p. 129; Griffith, pp. 129–130. In Tasimboko, Edson's troops discovered Kawaguchi's main supply depot, including large stockpiles of food, ammunition, medical supplies, and a powerful shortwave radio. After destroying everything in sight, aside from some documents and equipment that were carried back with them, the Marines returned to the Lunga perimeter. Intelligence gathered from the captured documents indicated that at least 3,000 Japanese troops were on the island, planning to initiate a large-scale ground assault on the airfield in short order.Griffith, pp. 130–132; Frank, pp. 221–222; and Smith, p. 130.
Edson, along with Colonel Gerald C. Thomas, Vandegrift's operations officer, correctly anticipated that the main Japanese attack would fall upon Lunga Ridge, a narrow, grassy, coral ridge that ran parallel to the Lunga River, just south of Henderson Field. The ridge offered a natural avenue of approach to the airfield, commanded the surrounding area, and was almost undefended. On 11 September, the 840 men of Edson's battalion were deployed onto and around the ridge and began digging in.Frank, pp. 223, 225–226; Griffith, pp. 132, 134–135; and Smith, pp. 130–131, 138.
On the night of 12 September, Kawaguchi's 1st Battalion attacked the Raiders between the Lunga River and ridge, forcing one Marine company to fall back to the ridge before the Japanese halted their attack for the night. The next night Kawaguchi faced Edson's 840 Raiders with 3,000 troops of his brigade, reinforced by an assortment of light artillery. The Japanese began their attack just after nightfall, with Kawaguchi's 1st Battalion assaulting Edson's right flank just to the west of the ridge. After breaking through the Marine lines, the battalion's assault was eventually stopped by Marine units occupying the northern section of the ridge.Smith, pp. 161–167. The Marine defenders that finally defeated Kokusho's charge were most likely from the 11th Marines with assistance from the 1st Pioneer Battalion (Smith, p. 167; and Frank, p. 235).
Two companies from Kawaguchi's 2nd Battalion charged up the southern edge of the ridge and pushed Edson's troops back to Hill 123, in the center section of the ridge. Throughout the night the Marines at this position, supported by a battery of howitzers brought up from Lunga Point, turned back wave after wave of frontal Japanese infantry attacks, several of which devolved into hand-to-hand combat. The weight of these repeated assaults eventually pressed the Marines back to within a quarter mile of the airfield. At this stage, as the intensity of the battle reached its apex, small groups of Japanese soldiers managed to break through Edson's lines, with some reaching the edge of the airfield itself. Several Japanese soldiers were killed as they attempted to climb onto and destroy parked aircraft, and General Vandegrift's command post even came under direct attack at dawn, with several Japanese infiltrators killed within sight of the general. Nonetheless, Kawaguchi's units were spent, and the main Japanese attack on Edson's positions ground to a halt. The supporting attacks by the Kuma Battalion and Oka's unit at other locations on the Lunga perimeter were likewise defeated. On 14 September, Kawaguchi led the survivors of his shattered brigade on a five-day march west to the Matanikau Valley to join with Oka's unit.Smith, pp. 162–193; Frank, pp. 237–246; and Griffith, pp. 141–147. In total Kawaguchi's forces lost about 850 killed, with the Marines suffering 104.Griffith, p. 144; and Smith, pp. 184–194.
On 15 September at Rabaul, Hyakutake learned of Kawaguchi's defeat and forwarded the news to Imperial General Headquarters in Japan. In an emergency meeting, the senior Japanese IJA and IJN command staffs concluded that "Guadalcanal might develop into the decisive battle of the war". The results of the battle now began to exert significant strategic impact on Japanese operations in other areas of the Pacific. Hyakutake realized that he could not send sufficient men and materiel to defeat the Allied forces on Guadalcanal while simultaneously supporting the major ongoing Japanese offensive on the Kokoda Track in New Guinea. Hyakutake, with the concurrence of General Headquarters, ordered his troops on New Guinea, who had advanced to within striking distance of their objective of Port Moresby, to withdraw until the "Guadalcanal matter" was resolved. Hyakutake prepared to send more troops to Guadalcanal for another attempt to recapture Henderson Field.Smith pp. 197–198
Allied reinforcement
As the Japanese regrouped west of the Matanikau, the U.S. forces concentrated on shoring up and strengthening their Lunga defenses. On 14 September Vandegrift moved another battalion (3rd Battalion, 2nd Marine Regiment) from Tulagi to Guadalcanal. On 18 September, an Allied naval convoy delivered 4,157 men from the 3rd Provisional Marine Brigade (the 7th Marine Regiment plus a battalion from the 11th Marine Regiment and some additional support units), 137 vehicles, tents, aviation fuel, ammunition, rations, and engineering equipment to Guadalcanal. These crucial reinforcements allowed Vandegrift, beginning on 19 September, to establish an unbroken line of defense around the Lunga perimeter. While covering this convoy, the aircraft carrier Wasp was struck by torpedoes from the Japanese submarine I-19 southeast of Guadalcanal and had to be scuttled.Evans, Japanese Navy, pp. 179–180; Hammel, Carrier Strike, pp. 24–41. This stretched Allied naval airpower thin, with only one aircraft carrier (Hornet) remaining in operation in the entire South Pacific Area.Evans, pp. 179–180; Frank, pp. 247–252; Griffith, p. 156; and Smith, pp. 198–200. Vandegrift also made some changes in the senior leadership of his combat units, transferring several officers who did not meet his performance standards off the island and promoting junior officers who had proven themselves to take their place. One of these was the recently promoted Colonel Merritt Edson, who was placed in command of the 5th Marine Regiment.Frank p. 263
A lull occurred in the air war over Guadalcanal, with no Japanese air raids between 14 and 27 September because of bad weather, during which both sides reinforced their respective air units. The Japanese delivered 85 fighters and bombers to their air units at Rabaul, while the U.S. sent a further 23 fighters and attack aircraft to Henderson Field. By 20 September the Japanese had 117 total aircraft at Rabaul, while the Allies tallied 71 aircraft at Henderson Field.Frank pp. 264–265 The air war resumed with a Japanese air raid on Guadalcanal on 27 September, which was contested by U.S. Navy and Marine fighters from Henderson Field.Frank p. 272
The Japanese immediately began to prepare for their next attempt to recapture Henderson Field. The 3rd Battalion, 4th (Aoba) Infantry Regiment had landed at Kamimbo Bay on the western end of Guadalcanal on 11 September, too late to join Kawaguchi's attack but in time to join Oka's forces near the Matanikau. Tokyo Express runs by IJN destroyers on 14, 20, 21 and 24 September brought food and ammunition as well as 280 men from the 1st Battalion, Aoba Regiment, to Kamimbo Bay. Meanwhile, the Japanese 2nd and 38th Infantry Divisions were transported from the Dutch East Indies to Rabaul, beginning on 13 September. The Japanese planned to transport a total of 17,500 troops from these two divisions to Guadalcanal to take part in the next major attack on the Lunga perimeter by late October.Griffith, p. 152; Frank, pp. 224, 251–254, 266; Jersey, pp. 248–249; and Smith, pp. 132, 158.
Actions along the Matanikau
Vandegrift and his staff were aware that Kawaguchi's troops had retreated to the area west of the Matanikau, and that numerous groups of Japanese stragglers were scattered throughout the area between the Lunga perimeter and the Matanikau River. Vandegrift therefore decided to conduct another series of small unit operations around the Matanikau Valley. Their purpose was to mop up scattered groups of Japanese troops east of the Matanikau and to keep the main body of Japanese soldiers off-balance, preventing them from consolidating positions so close to the main Marine defenses at Lunga Point.Smith, p. 204; and Frank, p. 270.
An attack on Japanese forces west of the Matanikau was conducted between 23 and 27 September by elements of three U.S. Marine battalions. The attack was repulsed by Kawaguchi's troops under Akinosuke Oka's local command. During the action three Marine companies were surrounded by Japanese forces near Point Cruz west of the Matanikau, took heavy losses, and escaped only with the assistance of a supporting destroyer and of landing craft crewed by U.S. Coast Guard personnel. One of these craft was piloted by Douglas Munro, who was killed as he maneuvered his boat to protect the escaping Marines and became the only Coast Guardsman to be awarded the Medal of Honor.Smith, pp. 204–215; Frank, pp. 269–274; Zimmerman, pp. 96–101.
Between 6 and 9 October a larger force of Marines successfully crossed the Matanikau River, attacked newly landed Japanese forces from the 2nd Infantry Division under the command of Generals Masao Maruyama and Yumio Nasu, and inflicted heavy losses on the Japanese 4th Infantry Regiment. This action forced the Japanese to retreat from their positions east of the Matanikau and hindered Japanese preparations for their planned major offensive on the U.S. Lunga defenses.Griffith, pp. 169–176; Frank, pp. 282–290; and Hough, pp. 318–322. Between 9 and 11 October the U.S. 1st Battalion, 2nd Marines raided two small Japanese outposts east of the Lunga perimeter at Gurabusu and Koilotumaria near Aola Bay. These raids killed 35 Japanese at a cost of 17 Marines and three U.S. Navy personnel.Frank, pp. 290–291. Fifteen of the Marines and the three U.S. Navy sailors were killed when the Higgins boat carrying them from Tulagi to Aola Bay on Guadalcanal was lost. One of the Japanese killed in the raid was "Ishimoto", a Japanese intelligence agent and interpreter who had worked in the Solomon Islands area prior to the war and was alleged to have participated in the murder of two Catholic priests and two nuns at Tasimboko on 3 September 1942. (The Mysterious Mr. Moto on Guadalcanal)
Battle of Cape Esperance
Throughout the last week of September and the first week of October, Tokyo Express runs continually delivered troops from the Japanese 2nd Infantry Division to Guadalcanal. The Japanese Navy promised to support the IJA's planned offensive by delivering the necessary troops, equipment, and supplies to the island, and also by stepping up air attacks on Henderson Field and sending warships to bombard the airfield.Rottman, p. 61; Griffith, p. 152; Frank, pp. 224, 251–254, 266–268, 289–290; Dull, pp. 225–226; and Smith, pp. 132, 158.
In the meantime, Millard F. Harmon, commander of U.S. Army forces in the South Pacific, convinced Ghormley that U.S. Marine forces on Guadalcanal needed to be reinforced immediately if the Allies were to successfully defend the island from the next expected Japanese offensive. Thus, on 8 October, the 2,837 men of the 164th Infantry Regiment from the Americal Division boarded ships at New Caledonia for the trip to Guadalcanal with a projected arrival date of 13 October. To protect the transports carrying the 164th to Guadalcanal, Ghormley ordered Task Force 64, consisting of four cruisers and five destroyers under U.S. Rear Admiral Norman Scott, to intercept and combat any Japanese ships that approached Guadalcanal and threatened the arrival of the transport convoy.Frank, pp. 293–297; Morison, The Struggle for Guadalcanal pp. 147–149; and Dull, p. 225. Since not all of the Task Force 64 warships were available, Scott's force was designated as Task Group 64.2. The U.S. destroyers were from Squadron 12, commanded by Captain Robert G. Tobin in Farenholt.
Mikawa's 8th Fleet staff scheduled a substantial Express run for the night of 11 October. Two seaplane tenders and six destroyers were ordered to put 728 soldiers, along with artillery and ammunition, ashore on Guadalcanal. At the same time, in a separate operation, three heavy cruisers and two destroyers under the command of Rear Admiral Aritomo Gotō were to bombard Henderson Field with special explosive shells with the objective of destroying the Cactus Air Force and the airfield's facilities. Because U.S. Navy warships had not yet attempted to interdict any Tokyo Express missions to Guadalcanal, the Japanese were not expecting any opposition from Allied naval surface forces that night.Frank, pp. 295–296; Hackett, IJN Aoba: Tabular Record of Movement; Morison, The Struggle for Guadalcanal pp. 149–151; D'Albas, p. 183; and Dull, p. 226.
Just before midnight, Scott's warships detected Gotō's force on radar near the entrance to the strait between Savo Island and Guadalcanal. Scott's force was in a position to cross the T on Gotō's unsuspecting formation. Opening fire, Scott's warships sank a cruiser and a destroyer, heavily damaged another cruiser, mortally wounded Gotō, and forced the rest of Gotō's warships to abandon their bombardment mission and retreat. During the exchange of gunfire, one of Scott's destroyers was sunk, and one cruiser and another destroyer were heavily damaged. In the meantime, the Japanese supply convoy successfully completed unloading at Guadalcanal and began its return journey without being discovered by Scott's force.Hornfischer, p. 157–188
Later on the morning of 12 October, four Japanese destroyers from the supply convoy turned back to assist Gotō's retreating, damaged warships. Air attacks by CAF aircraft from Henderson Field sank two of these destroyers later that day. Meanwhile, the convoy of U.S. Army troops reached Guadalcanal as scheduled on 13 October, successfully delivering its cargo and passengers to the island.Frank, pp. 299–324; Morison, The Struggle for Guadalcanal pp. 154–171; and Dull, pp. 226–230.
Henderson Field
Battleship bombardment
Despite the U.S. victory off Cape Esperance, the Japanese continued with plans and preparations for their large offensive scheduled for later in October. The Japanese decided to risk a rare departure from their usual practice of only using fast warships to deliver men and materiel to the island. On 13 October, a convoy comprising six cargo ships escorted by eight screening destroyers departed the Shortland Islands for Guadalcanal. The convoy carried 4,500 troops from the 16th and 230th Infantry Regiments, some naval marines, two batteries of heavy artillery, and one company of tanks.Frank, pp. 313–315. The 16th was from the 2nd Division and the 230th from the 38th Division.
To protect the approaching convoy from attack by CAF aircraft, Yamamoto sent the 3rd Battleship Division, under the command of Takeo Kurita, from Truk to bombard Henderson Field. At 01:33 on 14 October, the IJN battleships Kongō and Haruna, escorted by one light cruiser and nine destroyers, reached Guadalcanal and opened fire on Henderson Field from a distance of over ten miles (16 km). At this range, Allied shore batteries had no prospect of returning effective fire. Over the next one hour and 23 minutes, the two battleships fired 973 shells into the Lunga perimeter, most of which fell in and around the area of the airfield. Many of the shells were fragmentation shells, specifically designed to destroy land targets. The bombardment heavily damaged both runways, burned almost all of the available aviation fuel, and destroyed 48 of the CAF's 90 aircraft. Forty-one men were killed, including six CAF pilots. Few CAF aircraft survived entirely unscathed, and only about a dozen remained in flyable condition the next day. Wrecked and damaged planes were lined up wingtip to wingtip in the hope of diverting Japanese attention from the few surviving aircraft. After expending their ammunition around 03:00, the Japanese battleship force immediately returned to Truk.Evans, pp. 181–182; Frank, pp. 315–320; Morison, The Struggle for Guadalcanal pp. 171–175. Raizō Tanaka commanded Destroyer Squadron 2, which was part of the battleships' screen. Allied troops stationed at Henderson Field colloquially referred to this bombardment, the heaviest they had endured thus far in the campaign, as "The Night".
Despite the heavy damage, Henderson personnel were able to restore one of the runways to operational condition within a few hours. Seventeen SBD-3 Dauntless dive bombers and 20 F4F Wildcats at Espiritu Santo were quickly flown to Henderson, and U.S. Army and Marine transport aircraft shuttled aviation gasoline from Espiritu Santo to Guadalcanal. Aware of the approach of the large Japanese reinforcement convoy, the U.S. desperately sought a way to interdict the convoy before it could reach Guadalcanal. Using fuel drained from destroyed aircraft and from a cache in the nearby jungle, the CAF attacked the convoy twice on 14 October but caused no damage.Frank pp. 319–321
The Japanese convoy reached Tassafaronga Point at midnight on 14 October and began unloading. Throughout the day of 15 October, a string of CAF aircraft from Henderson bombed and strafed the unloading convoy, destroying three of the cargo ships. The remainder of the convoy departed that night, having unloaded all of the troops and about two-thirds of the supplies and equipment. Several Japanese heavy cruisers also bombarded Henderson on the nights of 14 and 15 October, destroying a few additional CAF aircraft but failing to inflict further significant damage to the airfield.Frank, pp. 321–326; Hough, pp. 327–328.
Battle for Henderson Field
Between 1 and 17 October, the Japanese delivered 15,000 troops to Guadalcanal, giving Hyakutake 20,000 total troops to employ for his planned offensive. Because of the loss of their positions on the east side of the Matanikau, the Japanese decided that an attack on the U.S. defenses along the coast would be prohibitively difficult. Therefore, Hyakutake decided that the main thrust of his planned attack would be from south of Henderson Field. His 2nd Division (augmented by troops from the 38th Division), under Maruyama and comprising 7,000 soldiers in three infantry regiments of three battalions each, was ordered to march through the jungle and attack the American defenses from the south, near the east bank of the Lunga River.Shaw, p. 34; and Rottman, p. 63. The date of the attack was set for 22 October, then changed to 23 October. To distract the Americans from the planned attack from the south, Hyakutake's heavy artillery plus five battalions of infantry (about 2,900 men) under Major General Tadashi Sumiyoshi were to attack the American defenses from the west along the coastal corridor. The Japanese estimated that there were 10,000 American troops on the island, when in fact there were about 23,000.Rottman, p. 61; Frank, pp. 289–340; Hough, pp. 322–330; Griffith, pp. 186–187; Dull, pp. 226–230; Morison, The Struggle for Guadalcanal pp. 149–171. The Japanese troops delivered to Guadalcanal during this time comprised the entire 2nd (Sendai) Infantry Division, two battalions from the 38th Infantry Division, and various artillery, tank, engineer, and other support units. Kawaguchi's forces also included what remained of the 3rd Battalion, 124th Infantry Regiment, which was originally part of the 35th Infantry Brigade commanded by Kawaguchi during the Battle of Edson's Ridge. Despite their numerical advantage, American commanders were pessimistic about their ability to repulse another concerted Japanese attack on the airfield. Units were given orders to fight as guerrillas should they be overrun by the Japanese, and the 1st Marine Division's intelligence staff began burning their classified records. Speaking to reporters in Washington, D.C., Secretary of the Navy Frank Knox refused to publicly guarantee that Guadalcanal could be held.
On 12 October, a company of Japanese engineers began to break a trail, called the "Maruyama Road", from the Matanikau towards the southern portion of the U.S. Lunga perimeter. The trail traversed some of the most difficult terrain on Guadalcanal, including numerous rivers and streams, deep, muddy ravines, steep ridges, and dense jungle. Between 16 and 18 October, the 2nd Division began their march along the Maruyama Road.Miller, p. 155; Frank, pp. 339–341; Hough, p. 330; Rottman, p. 62; Griffith, pp. 187–188. Hyakutake sent Colonel Masanobu Tsuji, a member of his staff, to monitor the 2nd Division's progress along the trail and to report to him on whether the attack could begin on 22 October as scheduled. Masanobu Tsuji has been identified by some historians as the most likely culprit behind the Bataan death march.
By 23 October, Maruyama's forces were still struggling through the jungle to reach the American lines. That evening, after learning that his forces had yet to reach their attack positions, Hyakutake postponed the attack to 19:00 on 24 October. The Americans remained unaware of the approach of Maruyama's forces.Griffith, p. 193; Frank, pp. 346–348; Rottman, p. 62.
Sumiyoshi was informed by Hyakutake's staff of the postponement of the offensive to 24 October, but he was unable to contact his troops to inform them of the delay. Thus, at dusk on 23 October, two battalions of the 4th Infantry Regiment and the nine tanks of the 1st Independent Tank Company launched attacks on the U.S. Marine defenses at the mouth of the Matanikau. U.S. Marine artillery, cannon, and small arms fire repulsed the attacks, destroying all the tanks and killing many of the Japanese soldiers while suffering only light casualties.Hough, pp. 332–333; Frank, pp. 349–350; Rottman, pp. 62–63; Griffith, pp. 195–196; Miller, pp. 157–158. The Marines lost 2 killed in the action. Japanese infantry losses are not recorded but were, according to Frank, "unquestionably severe." Griffith says that 600 Japanese soldiers were killed. Only 17 of the 44 members of the 1st Independent Tank Company survived the battle.
Finally, late on 24 October, Maruyama's forces reached the Lunga perimeter. Over two consecutive nights Maruyama's forces conducted numerous frontal assaults on positions defended by troops of the 1st Battalion, 7th Marines under Lieutenant Colonel Chesty Puller and the U.S. Army's 3rd Battalion, 164th Infantry Regiment, commanded by Lieutenant Colonel Robert Hall. U.S. Marine and Army units armed with rifles, machine guns, mortars, and artillery, including direct canister fire from 37 mm anti-tank guns, "wrought terrible carnage" on the Japanese.Frank pp. 361–362 A few small groups of Japanese broke through the American defenses, but were hunted down and killed over the next several days. More than 1,500 of Maruyama's troops were killed in the attacks, while the Americans lost about 60 killed. Over the same two days American aircraft from Henderson Field defended against attacks by Japanese aircraft and warships, destroying 14 aircraft and sinking the light cruiser Yura.Hough, p. 336; Frank, pp. 353–362; Griffith, pp. 197–204; Miller, pp. 147–151, 160–162; Lundstrom, pp. 343–352. The 164th became the first U.S. Army unit to conduct an offensive operation against the enemy during World War II and was later awarded the Presidential Unit Citation.
Further Japanese attacks near the Matanikau on 26 October were also repulsed with heavy losses for the Japanese. As a result, by 08:00 on 26 October, Hyakutake called off any further attacks and ordered his forces to retreat. About half of Maruyama's survivors were ordered to retreat back to the upper Matanikau Valley while the 230th Infantry Regiment under Colonel Toshinari Shōji was told to head for Koli Point, east of the Lunga perimeter. Leading elements of the 2nd Division reached the 17th Army headquarters area at Kokumbona, west of the Matanikau on 4 November. The same day, Shōji's unit reached Koli Point and made camp. Decimated by combat losses, malnutrition, and tropical diseases, the 2nd Division was incapable of further offensive action and fought as a defensive force along the coast for the rest of the campaign. In total, the Japanese lost 2,200–3,000 troops in the battle while the Americans lost around 80 killed.Frank, pp. 63–406, 418, 424, and 553; Zimmerman, pp. 122–123; Griffith, p. 204; Hough, p. 337; Rottman, p. 63. Silver Star medals were awarded to Sgt. Norman Greber of Ohio, Pvt. Don Reno of Texas, Pvt. Jack Bando of Oregon, Pvt. Stan Ralph of New York, and Cpl. Michael Randall of New York for their actions during the battle.
Battle of the Santa Cruz Islands
At the same time that Hyakutake's troops were attacking the Lunga perimeter, a large Japanese naval force consisting of two fleet carriers (Shōkaku and Zuikaku), two light carriers, four battleships and various supporting vessels moved into a position near the southern Solomon Islands. Under the overall command of Yamamoto, this fleet was the largest that the Japanese had assembled since the Battle of Midway. Yamamoto's goal was to draw the bulk of Allied naval strength in the region, specifically the American aircraft carriers, into a decisive sea battle at the same time that Japanese troops on Guadalcanal were attacking the airfield in force. Allied naval carrier forces in the area, under the overall command of William Halsey Jr., also hoped to meet the Japanese naval forces in battle. Nimitz had replaced Ghormley with Admiral Halsey on 18 October after concluding that Ghormley had become too pessimistic and myopic to effectively continue leading Allied forces in the South Pacific Area.Morison, The Struggle for Guadalcanal pp. 199–207; Frank, pp. 368–378; Dull, pp. 235–237. Due to faulty reports from Hyakutake that his ground forces had seized the airfield on the night of 25 October, Yamamoto ordered his task force to sail south and seek out the American fleet.
The two opposing carrier forces confronted each other on the morning of 26 October, in what became known as the Battle of the Santa Cruz Islands, the last carrier battle of the Pacific war until the Battle of the Philippine Sea nearly two years later. After an exchange of carrier air attacks, Allied surface ships were forced to retreat from the battle area with one carrier sunk (Hornet) and another (Enterprise) heavily damaged. The participating Japanese carrier forces, however, also retired because of high aircraft and aircrew losses and significant damage to two carriers. Although the Japanese had apparently secured a tactical victory in terms of ships sunk and damaged, their loss of almost 150 veteran carrier pilots provided a long-term strategic advantage for the Allies, whose aircrew losses in the battle were relatively low. Throughout the Guadalcanal campaign, Allied forces were far more successful in recovering downed pilots (both ground-based and carrier-based) than the Japanese. Japanese carriers would play no further significant role in the campaign.Dull, pp. 237–244; Frank, pp. 379–403; Morison, The Struggle for Guadalcanal pp. 207–224.
November land actions
In order to exploit the victory in the Battle for Henderson Field, Vandegrift sent six Marine battalions, later joined by one Army battalion, on an offensive west of the Matanikau. The operation was commanded by Merritt Edson and its goal was to capture Kokumbona, headquarters of the 17th Army, west of Point Cruz. Defending the Point Cruz area were Japanese army troops from the 4th Infantry Regiment commanded by Nomasu Nakaguma. The 4th Infantry was severely understrength because of battle damage, tropical disease, and malnutrition.Hough, p. 343; Hammel, Carrier Clash p. 135; Griffith, pp. 214–215; Frank, p. 411; Anderson; Shaw, pp. 40–41; Zimmerman, pp. 130–131.
The American offensive began on 1 November and succeeded in destroying Japanese forces defending the Point Cruz area by 3 November, including troops sent to reinforce Nakaguma's battered regiment. The Americans appeared to be on the verge of breaking through the Japanese defenses and capturing Kokumbona. At this time, however, other American forces discovered and engaged newly landed Japanese troops near Koli Point on the eastern side of the Lunga perimeter. To counter this new threat, Vandegrift temporarily halted the Matanikau offensive on 4 November. The Americans suffered 71 killed and the Japanese around 400 killed in the offensive.Shaw, pp. 40–41; Griffith, pp. 215–218; Hough, pp. 344–345; Zimmerman, pp. 131–133; Frank, pp. 412–420; Hammel, Carrier Clash pp. 138–139.
At Koli Point, early on the morning of 3 November, five Japanese destroyers delivered 300 army troops to support Shōji and his men, who were en route to Koli Point after the Battle for Henderson Field. Having learned of the planned landing, Vandegrift sent a battalion of Marines under Herman H. Hanneken to intercept the Japanese at Koli. Soon after landing, the Japanese soldiers encountered Hanneken's battalion and drove it back towards the Lunga perimeter. In response, Vandegrift ordered Puller's Marine battalion plus two of the 164th Infantry's battalions, along with Hanneken's battalion, to move towards Koli Point to attack the Japanese forces there.Zimmerman, pp. 133–138; Griffith, pp. 217–219; Hough, pp. 347–348; Frank, pp. 414–418; Miller, pp. 195–197; Hammel, Carrier Clash p. 141; Shaw, pp. 41–42; Jersey, p. 297. Jersey states that the troops landed were from the 2nd Company, 230th Infantry, commanded by 1st Lt Tamotsu Shinno, plus the 6th Battery, 28th Mountain Artillery Regiment, with two guns.
As the American troops began to move, Shōji and his soldiers began to arrive at Koli Point. Beginning on 8 November, the American troops attempted to encircle Shōji's forces at Gavaga Creek near Koli Point. Meanwhile, Hyakutake ordered Shōji to abandon his positions at Koli and rejoin Japanese forces at Kokumbona in the Matanikau area. A gap existed by way of a swampy creek in the southern side of the American lines. Between 9 and 11 November, Shōji and between 2,000 and 3,000 of his men escaped into the jungle to the south. On 12 November, the Americans completely overran and killed all the remaining Japanese soldiers left in the pocket. The Americans counted the bodies of 450–475 Japanese dead in the Koli Point area and captured most of Shōji's heavy weapons and provisions. The American forces suffered 40 killed and 120 wounded in the operation.Zimmerman, pp. 133–141; Griffith, pp. 217–23; Hough, pp. 347–350; Frank, pp. 414–423; Miller, pp. 195–200; Hammel, Carrier Clash pp. 141–44; Shaw, pp. 41–42; Jersey, pp. 297–305.
Meanwhile, on 4 November, two companies from the 2nd Marine Raider Battalion, commanded by Lieutenant Colonel Evans Carlson landed by boat at Aola Bay, east of Lunga Point. Carlson's raiders, along with troops from the Army's 147th Infantry Regiment, were to provide security for 500 Seabees as they attempted to construct an airfield at that location. Halsey, acting on a recommendation by Turner, had approved the Aola Bay airfield construction effort; however it was abandoned at the end of November because of unsuitable terrain.Peatross, pp. 132–133; Frank, pp. 420–421; Hoffman. The two 2nd Raider companies sent to Aola were Companies C and E. The Aola construction units moved to Koli Point where they successfully built an auxiliary airfield beginning on 3 December 1942. (Miller, p. 174.)
On 5 November, Vandegrift ordered Carlson and his raiders to march overland from Aola and attack any of Shōji's forces that had escaped from Koli Point. With the rest of the companies from his battalion, which arrived a few days later, Carlson and his troops set off on a 29-day patrol from Aola to the Lunga perimeter. During the patrol, the raiders fought several battles with Shōji's retreating forces, killing almost 500 of them, while suffering 16 killed. Tropical diseases and a lack of food felled more of Shōji's men. By the time Shōji's forces reached the Lunga River in mid-November, about halfway to the Matanikau, only 1,300 men remained with the main body. When Shōji reached the 17th Army positions west of the Matanikau, only 700 to 800 survivors were still with him. Most of the survivors from Shōji's force joined other Japanese units defending the Mount Austen and upper Matanikau River area.Hough, pp. 348–350; Shaw, pp. 42–43; Frank, pp. 420–424; Griffith, p. 246; Miller, pp. 197–200; Zimmerman, pp. 136–145, Jersey, p. 361.
Tokyo Express runs on 5, 7, and 9 November delivered additional troops from the Japanese 38th Infantry Division, including most of the 228th Infantry Regiment. These fresh troops were quickly placed in the Point Cruz and Matanikau area and helped successfully resist further attacks by American forces on 10 and 18 November. The Americans and Japanese remained facing each other along a line just west of Point Cruz for the next six weeks.Frank, pp. 420–421, 424–25, 493–497; Anderson; Hough, pp. 350–358; Zimmerman, pp. 150–152.
Naval Battle of Guadalcanal
After the defeat in the Battle for Henderson Field, the IJA planned to try again to retake the airfield in November 1942, but further reinforcements were needed before the operation could proceed. The IJA requested assistance from Yamamoto to deliver the needed reinforcements to the island and to support the next offensive. Yamamoto provided 11 large transport ships to carry the remaining 7,000 troops from the 38th Infantry Division, their ammunition, food, and heavy equipment from Rabaul to Guadalcanal. He also provided a warship support force that included the two battleships Hiei and Kirishima, equipped with special fragmentation shells, which were to bombard Henderson Field on the night of 12–13 November. The goal of the bombardment was to destroy the airfield and the aircraft stationed there, to allow the slow transports to reach Guadalcanal and unload safely the next day.Hammel, Guadalcanal: Decision at Sea, 41–46 The warship force was commanded from Hiei by recently promoted Vice Admiral Hiroaki Abe.Hammel, Guadalcanal: Decision at Sea, p. 93
In early November, Allied intelligence learned that the Japanese were preparing again to try to retake Henderson Field.Hammel, Guadalcanal: Decision at Sea, p. 37 In response, on 11 November the U.S. sent Turner's Task Force 67 to Guadalcanal, carrying Marine replacements, two U.S. Army infantry battalions, ammunition and food. The supply ships were protected by two task groups, commanded by Rear Admirals Daniel J. Callaghan and Norman Scott, as well as by aircraft from Henderson Field.Hammel, Guadalcanal: Decision at Sea, pp. 38–39; Frank, pp. 429–430. The American reinforcements totaled 5,500 men and included the 1st Marine Aviation Engineer Battalion, replacements for ground and air units, the 4th Marine Replacement Battalion, two battalions of the U.S. Army's 182nd Infantry Regiment, and ammunition and supplies. The ships were attacked several times on 11 and 12 November by Japanese aircraft from Rabaul staging through an air base at Buin, Bougainville, but most unloaded their cargo without serious damage.Frank, p. 432; Hammel, Guadalcanal: Decision at Sea, pp. 50–90.
U.S. reconnaissance aircraft spotted the approach of Abe's bombardment force and passed a warning to Allied commanders.Hara p. 137 Thus warned, Turner detached all usable combat ships under Callaghan to protect the troops ashore from the expected Japanese naval attack and troop landing, and ordered the supply ships at Guadalcanal to depart by early evening 12 November.Hammel, Guadalcanal: Decision at Sea, p. 92 Callaghan's force comprised two heavy cruisers, three light cruisers, and eight destroyers.Hammel, Guadalcanal: Decision at Sea, pp. 99–107
Around 01:30 on 13 November, Callaghan's force intercepted Abe's bombardment group between Guadalcanal and Savo Island. In addition to the two battleships, Abe's force included one light cruiser and 11 destroyers. In the pitch darkness (the new moon had occurred on 8 November 1942, 15:19 hours: Fred Espenak, Phases of the Moon: 1901 to 2000), the two warship forces became intermingled before opening fire at unusually close range. In the resulting mêlée, Abe's warships sank or severely damaged all but one cruiser and one destroyer in Callaghan's force; both Callaghan and Scott were killed. Two Japanese destroyers were sunk, and another destroyer and the battleship Hiei were heavily damaged. Despite his defeat of Callaghan's force, Abe ordered his warships to retire without bombarding Henderson Field. The Hiei sank later that day after repeated air attacks by aircraft from Henderson Field and the carrier Enterprise. Because of Abe's failure to neutralize Henderson Field, Yamamoto ordered Tanaka's troop transport convoy, located near the Shortland Islands, to wait an additional day before heading towards Guadalcanal. Yamamoto ordered Nobutake Kondō to assemble another bombardment force using warships from Truk and Abe's force to attack Henderson Field on 15 November.Frank, pp. 428–461; Hammel, Guadalcanal: Decision at Sea, pp. 103–401; Hara, pp. 137–156.
In the meantime, around 02:00 on 14 November, a cruiser and destroyer force under Gunichi Mikawa from Rabaul conducted an unopposed bombardment of Henderson Field. The bombardment caused some damage, but failed to put the airfield or most of its aircraft out of operation. As Mikawa's force retired towards Rabaul, Tanaka's transport convoy, trusting that Henderson Field was destroyed or heavily damaged, began its run down "the Slot" towards Guadalcanal. Throughout the day of 14 November, aircraft from Henderson Field and the Enterprise attacked Mikawa and Tanaka's ships, sinking one heavy cruiser and seven of the transports. Most of the troops were rescued from the transports by Tanaka's escorting destroyers and returned to the Shortlands. After dark, Tanaka and the remaining four transports continued towards Guadalcanal as Kondō's force approached to bombard Henderson Field.Frank, pp. 465–474; Hammel, Guadalcanal: Decision at Sea, pp. 298–345. The American air sorties were possible due to a supply of 488 55-gallon drums of 100-octane gas that was hidden in a secluded area under the jungle canopy by Cub-1 sailor, August Martello.
In order to intercept Kondō's force, Halsey, who was low on undamaged ships, detached the two battleships Washington and South Dakota, along with four destroyers, from the Enterprise task force. This force, under the command of Willis A. Lee aboard the Washington, reached Guadalcanal and Savo Island just before midnight on 14 November, shortly before Kondō's bombardment force arrived. Kondō's force consisted of the battleship Kirishima, two heavy cruisers, two light cruisers, and nine destroyers. After the two forces made contact, Kondō's force quickly sank three of the U.S. destroyers and heavily damaged the fourth. The Japanese warships then sighted, opened fire, and damaged the South Dakota. As Kondō's warships concentrated on the South Dakota, the Washington approached the Japanese ships unobserved and opened fire on the Kirishima, inflicting severe damage upon the Japanese battleship. After fruitlessly chasing the Washington towards the Russell Islands, Kondō ordered his warships to retire without bombarding Henderson Field. One of Kondō's destroyers was also sunk during the engagement.Hammel, Guadalcanal: Decision at Sea, pp. 349–395; Frank, pp. 469–486.
As Kondō's ships retired, the four Japanese transports beached near Tassafaronga Point on Guadalcanal at 04:00. At 05:55, U.S. aircraft and artillery began attacking the beached transports, destroying all four, along with most of the supplies that they carried. Only 2,000–3,000 of the IJA troops reached the shore. Because of the failure to deliver most of the troops and supplies, the Japanese were forced to cancel their planned November offensive on Henderson Field, making the battle a significant strategic victory for the Allies and marking the beginning of the end of Japanese attempts to retake Henderson Field.Frank, pp. 484–488, 527; Hammel, Guadalcanal: Decision at Sea, pp. 391–395.
On 26 November, Japanese Lieutenant General Hitoshi Imamura took command of the newly formed Eighth Area Army at Rabaul. The new command encompassed both Hyakutake's 17th Army and the 18th Army in New Guinea. One of Imamura's first priorities upon assuming command was the continuation of the attempts to retake Henderson Field and Guadalcanal. The Allied offensive at Buna in New Guinea, however, changed Imamura's priorities. Because the Allied attempt to take Buna was considered a more severe threat to Rabaul, Imamura postponed further major reinforcement efforts to Guadalcanal, in order to concentrate on the situation in New Guinea.Dull, p. 261, Frank, pp. 497–499. On 24 December, the 8th Fleet, 11th Air Fleet, and all other Japanese naval units in the New Guinea and Solomon Islands areas were combined under one command, designated the Southeast Area Fleet with Jinichi Kusaka in command.
Battle of Tassafaronga
By this stage of the campaign, the Japanese were experiencing severe difficulty delivering sufficient supplies to sustain their troops on Guadalcanal. Attempts to use submarines for resupply runs in the last two weeks in November failed to provide sufficient food for Hyakutake's forces. A separate attempt to establish bases in the central Solomon Islands, which could facilitate barge convoys to Guadalcanal, also failed because of destructive and frequent Allied air attacks. On 26 November, the 17th Army notified Imamura that it faced an acute food crisis. Some front-line units had not been resupplied for six days, and even the rear-area troops were on one-third rations. The situation forced the Japanese to return to delivering supplies with destroyers, which were unable to bring the amounts required by the beleaguered IJA troops on Guadalcanal.Evans, pp. 197–198, Crenshaw, p. 136, Frank, pp. 499–502.
The staff of the IJN Eighth Fleet devised a plan to help reduce the exposure of destroyers delivering supplies to Guadalcanal. Large oil or gas drums were cleaned and filled with medical supplies and food, with enough air space to provide buoyancy, and strung together with rope. When the destroyers arrived at Guadalcanal they would make a sharp turn and the drums would be cut loose. A swimmer or boat from shore was to pick up the buoyed end of a rope attached to the drums and return it to the beach, where teams of soldiers could haul in the supplies.Hara, pp. 160–161; Roscoe, p. 206; Dull, p. 262; Evans, pp. 197–198; Crenshaw, p. 137; Toland, p. 419; Frank, p. 502; Morison, The Struggle for Guadalcanal p. 295.
The Eighth Fleet's Guadalcanal Reinforcement Unit (the Tokyo Express), commanded by Tanaka, was tasked by Mikawa with making the first of five scheduled runs to Tassafaronga using the drum method on the night of 30 November. Tanaka's unit was centered on eight destroyers, with six destroyers assigned to carry between 200 and 240 drums of supplies apiece.Dull, pp. 262–263; Evans, pp. 198–199; Crenshaw, p. 137; Morison, The Struggle for Guadalcanal p. 297; Frank, pp. 502–504. Notified by intelligence sources of the Japanese supply attempt, Halsey ordered the newly formed Task Force 67, comprising four cruisers and four destroyers under the command of Rear Admiral Carleton H. Wright, to intercept Tanaka's force off Guadalcanal. Two additional destroyers joined Wright's force en route to Guadalcanal from Espiritu Santo during the day of 30 November.Brown, pp. 124–125; USSBS, p. 139; Roscoe, p. 206; Dull; p. 262; Crenshaw, pp. 26–33; Kilpatrick, pp. 139–142; Morison, The Struggle for Guadalcanal pp. 294–296; Frank, p. 504.
At 22:40 on 30 November, Tanaka's force arrived off Guadalcanal and prepared to unload the supply barrels. Meanwhile, Wright's warships were approaching through Ironbottom Sound from the opposite direction. Wright's destroyers detected Tanaka's force on radar, and the destroyer commander requested permission to attack with torpedoes. Wright waited four minutes before giving permission, allowing Tanaka's force to escape from an optimum firing setup. All of the American torpedoes missed their targets. At the same time, Wright's cruisers opened fire, hitting and destroying one of the Japanese guard destroyers. The rest of Tanaka's warships abandoned the supply mission, increased speed, turned, and launched a total of 44 torpedoes in the direction of Wright's cruisers.Hara, pp. 161–164; Dull, p. 265; Evans, pp. 199–202; Crenshaw, pp. 34, 63, 139–151; Morison, The Struggle for Guadalcanal pp. 297–305; Frank, pp. 507–510. The Japanese torpedoes hit and sank the U.S. cruiser Northampton and heavily damaged the cruisers Minneapolis, New Orleans, and Pensacola. The rest of Tanaka's destroyers escaped without damage but failed to deliver any of the provisions to Guadalcanal.Dull, p. 265; Crenshaw, pp. 56–66; Morison, The Struggle for Guadalcanal pp. 303–312; Frank, pp. 510–515.
By 7 December 1942, Hyakutake's forces were losing about 50 men each day from malnutrition, disease, and Allied ground or air attacks.Frank, Guadalcanal, p. 527. Further attempts by Tanaka's destroyer forces to deliver provisions on 3, 7 and 11 December failed to alleviate the crisis, and one of Tanaka's destroyers was sunk by a U.S. PT boat torpedo.Dull, pp. 266–267; Evans, pp. 203–205; Morison, The Struggle for Guadalcanal pp. 318–319; Frank, pp. 518–521. Tanaka privately informed Admiral Mikawa that the Japanese forces on Guadalcanal could no longer be supplied by sea, and advised that they be withdrawn from the island. Tanaka was subsequently transferred to an administrative post in Singapore.
Japanese decision to withdraw
On 12 December, the Japanese Navy proposed that Guadalcanal be abandoned. At the same time, several army staff officers at the Imperial General Headquarters (IGH) also suggested that further efforts to retake Guadalcanal would be impossible. A delegation led by Colonel Joichiro Sanada, chief of the IGH's operations section, visited Rabaul on 19 December and consulted Imamura and his staff. Upon the delegation's return to Tokyo, Sanada recommended that Guadalcanal be abandoned. The IGH's top leaders agreed with Sanada's recommendation on 26 December and ordered their staffs to begin drafting plans for a withdrawal from Guadalcanal, establishment of a new defense line in the central Solomons, and shifting priorities and resources to the campaign in New Guinea.Jersey, p. 384; Frank, pp. 536–538; Griffith, p. 268; Hayashi, pp. 62–64; Toland, p. 426.
On 28 December, General Hajime Sugiyama and Admiral Osami Nagano personally informed Emperor Hirohito of the decision to withdraw from Guadalcanal. On 31 December, Hirohito formally endorsed the decision. The Japanese secretly began to prepare for the evacuation, called Operation Ke, scheduled to begin during the latter part of January 1943.Hayashi, pp. 62–64; Griffith, p. 268; Frank, pp. 534–539; Toland, pp. 424–426; Dull, p. 261; Morison, The Struggle for Guadalcanal pp. 318–321. During the conference with Sugiyama and Nagano, the Emperor asked Nagano, "Why was it that it took the Americans just a few days to build an air base and the Japanese more than a month or so?" (The IJN originally occupied Guadalcanal and began constructing the airfield). Nagano apologized and replied that the Americans had used machines while the Japanese had to rely on manpower. (Toland, p. 426). By now, Japanese forces on the island had dwindled to fewer than 15,000 men.
Battle of Mount Austen, the Galloping Horse, and the Sea Horse
By December, the weary 1st Marine Division was withdrawn for recuperation, and over the course of the next month the U.S. XIV Corps took over operations on the island. This corps consisted of the 2nd Marine Division and the U.S. Army's 25th Infantry and 23rd "Americal" Divisions. U.S. Army Major General Alexander Patch replaced Vandegrift as commander of Allied forces on Guadalcanal, which by January totaled just over 50,000 men.Frank, pp. 247–252, 293, 417–420, 430–431, 521–522, 529; Griffith, pp. 156, 257–259, 270; Miller, pp. 143, 173–177, 183, 189, 213–219; Jersey, pp. 304–305, 345–346, 363, 365; Hough, pp. 360–362; Shaw, pp. 46–47; Zimmerman, pp. 156–157, 164. The Americal Division infantry regiments were National Guard units. The 164th was from North Dakota, the 182nd from Massachusetts, and the 132nd from Illinois. The 147th had previously been part of the 37th Infantry Division. During its time on Guadalcanal, the 1st Marine Division suffered 650 killed, 31 missing, 1,278 wounded, and 8,580 who contracted some type of disease, mainly malaria. The 2nd Marine Regiment had arrived at Guadalcanal with most of the 1st Marine Division, but remained behind to rejoin its parent unit, the 2nd Marine Division. The U.S. Army's 25th Infantry Division's 35th Regiment arrived at Guadalcanal on 17 December, the 27th Regiment on 1 January, and the 161st Regiment on 4 January. The 2nd Marine Division's headquarters units, the 6th Marine Regiment, and various Marine weapons and support units also arrived on 4 and 6 January. U.S. Major General John Marston, commander of the 2nd Marine Division, remained in New Zealand because he was senior to Patch in date of rank. Instead, Brigadier General Alphonse DeCarre commanded the 2nd Marine Division on Guadalcanal. The total number of Marines on Guadalcanal and Tulagi on 6 January 1943 was 18,383.
On 18 December, Allied (mainly U.S. Army) forces began attacking Japanese positions on Mount Austen. A strong Japanese fortified position, called the Gifu, stymied the attacks and the Americans were forced to temporarily halt their offensive on 4 January.Frank, pp. 529–534; Miller, pp. 231–237, 244, 249–252; Jersey, pp. 350–351; Anderson, Hough, pp. 363–364; Griffith, pp. 263–265. The Allies renewed the offensive on 10 January, attacking the Japanese on Mount Austen as well as on two nearby ridges called the Sea Horse and the Galloping Horse. After some difficulty, the Allies captured all three by 23 January. At the same time, U.S. Marines advanced along the north coast of the island, making significant gains. The Americans lost about 250 killed in the operation while the Japanese suffered around 3,000 killed, about 12 to 1 in the Americans' favor.Frank, pp. 563–567; Miller, pp. 290–305; Jersey, pp. 367–371.
Ke evacuation
On 14 January, a Tokyo Express run delivered a battalion of troops to act as a rear guard for the Ke evacuation. A staff officer from Rabaul accompanied the troops to notify Hyakutake of the decision to withdraw. At the same time, Japanese warships and aircraft moved into position around the Rabaul and Bougainville areas in preparation to execute the withdrawal operation. Allied intelligence detected the Japanese movements but misinterpreted them as preparations for another attempt to retake Henderson Field and Guadalcanal.Miller, p. 338; Frank, pp. 540–560; Morison, The Struggle for Guadalcanal pp. 333–339; Rottman, p. 64; Griffith, pp. 269–279; Jersey, pp. 384–388; Hayashi, p. 64.
Patch, wary of what he thought to be an imminent Japanese offensive, committed only a relatively small portion of his troops to continue a slow-moving offensive against Hyakutake's forces. On 29 January, Halsey, acting on the same intelligence, sent a resupply convoy to Guadalcanal screened by a cruiser task force. Sighting the cruisers, Japanese naval torpedo bombers attacked that same evening and heavily damaged the cruiser Chicago. The next day, more torpedo aircraft attacked and sank Chicago. Halsey ordered the remainder of the task force to return to base and directed the rest of his naval forces to take station in the Coral Sea, south of Guadalcanal, to be ready to counter a Japanese offensive.Hough, pp. 367–368; Frank, pp. 568–576; Miller, pp. 319–342; Morison, The Struggle for Guadalcanal pp. 342–350. After unloading their cargo, the U.S. transports evacuated the 2nd Marine Regiment, which had been on Guadalcanal since the beginning of the campaign, from the island.
In the meantime, the Japanese 17th Army withdrew to the west coast of Guadalcanal while rear guard units checked the American offensive. On the night of 1 February, a force of 20 destroyers from Mikawa's 8th Fleet under Shintarō Hashimoto successfully extracted 4,935 soldiers, mainly from the 38th Division, from the island. The Japanese and Americans each lost a destroyer from an air and naval attack related to the evacuation mission.Frank, pp. 582–588, 757–758; Jersey, pp. 376–378; Morison, The Struggle for Guadalcanal pp. 364–368; Miller, pp. 343–345; Zimmerman, p. 162; Dull, p. 268.
On the nights of 4 and 7 February, Hashimoto and his destroyers evacuated the remaining Japanese forces from Guadalcanal. Still anticipating a large Japanese offensive, Allied forces made no attempt to interdict Hashimoto's evacuation runs beyond some air attacks. In total, the Japanese successfully evacuated 10,652 men from Guadalcanal. Their last troops left the island on the evening of 7 February, six months to the day from when the U.S. forces first landed.Jersey, pp. 397–400. Two days later, on 9 February, Patch realized that the Japanese were gone and declared Guadalcanal secure.Frank, pp. 589–597; Jersey, pp. 378–383, 400–401; Miller, pp. 342–348.
Aftermath
After the Japanese defeat, Guadalcanal and Tulagi were developed into major bases supporting the Allied advance further up the Solomon Islands chain. Besides Henderson Field, two additional fighter runways were constructed at Lunga Point, and a bomber airfield was built at Koli Point. Extensive naval port and logistics facilities were established at Guadalcanal, Tulagi, and Florida. The anchorage around Tulagi became an important forward base for Allied warships and transport ships supporting the Solomon Islands campaign. Major ground units were staged through large encampments and barracks on Guadalcanal before deployment further up the Solomons.U.S. Navy, Building the Navy's Bases in World War II, pp. 246–256.
After Guadalcanal the Japanese were clearly on the defensive in the Pacific. The constant pressure to reinforce Guadalcanal had weakened Japanese efforts in other theaters, contributing to a successful Australian and American counteroffensive in New Guinea which culminated in the capture of the key bases of Buna and Gona in early 1943. The Allies had gained a strategic initiative which they never relinquished. In June, the Allies launched Operation Cartwheel which, after modification in August 1943, formalized the strategy of isolating Rabaul and cutting its sea lines of communication. The subsequent successful neutralization of Rabaul and the forces centered there facilitated the South West Pacific campaign under MacArthur and Central Pacific island-hopping campaign under Nimitz, with both efforts successfully advancing toward Japan. The remaining Japanese defenses in the South Pacific Area were then either destroyed or bypassed by Allied forces as the war progressed.Hough, p. 374; Zimmerman, p. 166.
Medal of Honor recipients
Marine Corps
Kenneth D. Bailey, Major – 12–13 September 1942 (posth.)
Merritt A. Edson, Colonel – 13–14 September 1942
John Basilone, Sergeant – 24–25 October 1942
Mitchell Paige, Platoon Sergeant – 26 October 1942
Joseph J. Foss, Captain (pilot) – 9 October – 19 November 1942, January 1943
Alexander A. Vandegrift, Major General – 7 August – 9 December 1942
Army
William G. Fournier, Sergeant – 10 January 1943 (posth.)
Lewis Hall, Technician 5th Grade – 10 January 1943 (posth.)
Charles W. Davis, Captain – 12 January 1943
Navy
Daniel J. Callaghan, Rear Admiral – 12–13 November 1942 (posth.)
Coast Guard
Douglas A. Munro, Signalman First Class – 27 September 1942 (posth.)
Significance
Resources
The Guadalcanal campaign was one of the first prolonged campaigns in the Pacific Ocean theater of World War II, and it strained the logistical capabilities of the combatant nations. For the U.S., the need to keep its forces on the island supplied prompted the development of effective combat air transport for the first time. A failure to achieve air supremacy forced Japan to rely on reinforcement by barges, destroyers, and submarines, with very uneven results. Early in the campaign, the Americans were hindered by a lack of resources, as they suffered heavy losses in cruisers and carriers, with replacements from ramped-up shipbuilding programs still months away from materializing.Murray, p. 215; Hough, p. 372.
The U.S. Navy suffered such high personnel losses during the campaign that it refused to publicly release total casualty figures for years. However, as the campaign continued, and the American public became more and more aware of the plight and perceived heroism of the American forces on Guadalcanal, more forces were dispatched to the area. This spelled trouble for Japan as its military-industrial complex was unable to match the output of American industry and manpower. Thus, as the campaign wore on the Japanese were losing irreplaceable units while the Americans were rapidly replacing and even augmenting their forces.Murray, p. 215, Hough, p. 372
The Guadalcanal campaign was costly to Japan strategically and in matériel and manpower. Roughly 30,000 personnel, including 25,000 experienced ground troops, died during the campaign. As many as three-quarters of the deaths were from non-combat causes such as starvation and various tropical diseases. The drain on resources directly contributed to Japan's failure to achieve its objectives in the New Guinea campaign. Japan also lost control of the southern Solomons and the ability to interdict Allied shipping to Australia. Japan's major base at Rabaul was now directly threatened by Allied air power. Most importantly, scarce Japanese land, air, and naval forces had disappeared forever into the Guadalcanal jungle and surrounding sea. The Japanese could not replace the aircraft destroyed and ships sunk in this campaign, or their highly trained and veteran crews, especially the naval aircrews, nearly as quickly as the Allies.Hough, p. 350
Strategy
While the Battle of Midway is viewed as a turning point in the Pacific War, Japan remained on the offensive, as shown by its advances down the Solomon Islands. Only after the Allied victories at Guadalcanal and in New Guinea (at Milne Bay and Buna–Gona)Dean 2013, p. 236; Keogh 1965, p. 249; James 2012, p. 213. were these large-scale Japanese offensive actions stopped. The strategic initiative passed to the Allies, permanently as it proved. The Guadalcanal campaign ended all Japanese expansion attempts in the Pacific and placed the Allies in a position of clear supremacy.Willmott, Barrier and the Javelin, pp. 522–523; Parshall and Tully, Shattered Sword, pp. 416–430. The Allied victory at Guadalcanal was the first step in a long string of successes that eventually led to the surrender and occupation of Japan.Hough, p. 350; Hough, p. 372; Miller, p. 350; Zimmerman, p. 166.
The "Europe first" policy agreed to by the Allies had initially only allowed for defensive actions against Japanese expansion in order to focus resources on defeating Germany. However, Admiral King's argument for the Guadalcanal invasion, as well as its successful implementation, convinced Roosevelt that the Pacific Theater could be pursued offensively as well.Hornfischer, Neptune's Inferno, pp. 11–15 By the end of 1942, it was clear that Japan had lost the Guadalcanal campaign, a serious blow to Japan's strategic plans for the defense of their empire and an unanticipated defeat at the hands of the Americans.; Miller, p. 350; Shaw, p. 52; Alexander, p. 81.
Perhaps as important as the military victory for the Allies was the psychological victory. On a level playing field, the Allies had beaten Japan's best land, air, and naval forces. After Guadalcanal, Allied personnel regarded the Japanese military with much less fear and awe than previously. In addition, the Allies viewed the eventual outcome of the Pacific War with greatly increased optimism.Murray p. 215
Tokyo Express no longer has terminus on Guadalcanal.
—Major General Alexander Patch, USA, Commander, U.S. Forces on Guadalcanal
Guadalcanal is no longer merely a name of an island in Japanese military history. It is the name of the graveyard of the Japanese army.
— Major General Kiyotake Kawaguchi, IJA, Commander, 35th Infantry Brigade at GuadalcanalQuoted in Leckie (1999) p. 9 and others
Beyond Kawaguchi, several Japanese political and military leaders, including Naoki Hoshino, Nagano, and Torashirō Kawabe, stated shortly after the war that Guadalcanal was the decisive turning point in the conflict. Said Kawabe, "As for the turning point [of the war], when the positive action ceased or even became negative, it was, I feel, at Guadalcanal."Zimmerman p. 167
Vilu War Museum and Guadalcanal American Memorial
The Vilu War Museum is on Guadalcanal, west of Honiara, the capital of the Solomon Islands. The remains of military equipment and of several aircraft can be seen in the open-air museum. Several memorials to the American, Australian, Fijian, New Zealand and Japanese soldiers who died have been erected there.Michael Brillat: Südsee, p. 40. Munich 2011
To mark the 50th anniversary of the Red Beach landings, the Guadalcanal American Memorial was dedicated in Honiara on 7 August 1992.
Remaining ordnance
An unknown number of unexploded bombs from the battle remain on the island, and residents have been killed or severely injured by unexpected explosions from hidden ordnance. The threat to people's lives from unexploded bombs remains high. The Solomon Islands police force has disposed of most of the discovered bombs; however, clearance work is expensive, and the island does not have sufficient resources to clear the remaining explosives. The Solomon Islands has urged both the U.S. and Japanese governments to clear the remaining bombs from the island. In 2012, 18 years after the U.S. ended its aid program in the South Pacific, the U.S. provided funds to assist efforts to find and remove unexploded bombs. Australia and Norway also established programs to help the Solomon Islands remove unexploded bombs.
News reporting
The Guadalcanal campaign was the subject of a large amount of high-quality reporting. News agencies sent some of their most talented writers, as it was the first major American offensive combat operation of the war. Richard Tregaskis, who wrote for International News Service, gained fame with the publication of his bestselling Guadalcanal Diary in 1943.Tregaskis, Richard. Guadalcanal Diary. New York: Modern Library, 2000. Hanson Baldwin, a Navy correspondent, filed stories for The New York Times and won a Pulitzer Prize for his coverage of the early days of World War II. Tom Yarbrough wrote for the Associated Press, Bob Miller for the United Press, John Hersey for Time and Life, Ira Wolfert for the North American Newspaper Alliance (his series of articles about the November 1942 Naval Battle of Guadalcanal won him a Pulitzer Prize), Sergeant James Hurlbut for the Marine Corps, and Mack Morriss for Yank magazine. Vandegrift placed few restrictions on the reporters, who were generally allowed to go wherever they wanted and write what they wanted.
Notes
References
Books
Alexander, Joseph H. Edson's Raiders: The 1st Marine Raider Battalion in World War II. Annapolis, MD: Naval Institute Press, 2000.
Armstrong, William M. Marine Air Group 25 and SCAT (Images of Aviation). Charleston, SC: Arcadia, 2017.
Bergerud, Eric M. Touched with Fire: The Land War in the South Pacific. New York: Penguin Books, 1997.
Clemens, Martin. Alone on Guadalcanal: A Coastwatcher's Story. Annapolis, MD: Naval Institute Press, 2004.
Crenshaw, Russell Sydnor. South Pacific Destroyer: The Battle for the Solomons from Savo Island to Vella Gulf. Annapolis, MD: Naval Institute Press, 1998.
D'Albas, Andrieu. Death of a Navy: Japanese Naval Action in World War II. New York: Devin-Adair Co., 1957.
Dull, Paul S. A Battle History of the Imperial Japanese Navy, 1941–1945. Annapolis, MD: Naval Institute Press, 1978.
Evans, David C. The Japanese Navy in World War II: In the Words of Former Japanese Naval Officers. Annapolis, MD: Naval Institute Press, 1986.
Frank, Richard. Guadalcanal: The Definitive Account of the Landmark Battle. New York: Random House, 1990.
Gilbert, Oscar E. Marine Tank Battles of the Pacific. Conshohocken, PA: Combined Pub., 2001.
Griffith, Samuel B. The Battle for Guadalcanal. Champaign, IL: University of Illinois Press, 2000.
Hadden, Robert Lee. The Geology of Guadalcanal: A Selected Bibliography of the Geology, Natural History, and the History of Guadalcanal. Alexandria, VA: Topographic Engineering Center, 2007. 360 pages. Lists sources of information regarding the bodies of the US Marines of the Lt Col. Frank B. Goettge reconnaissance patrol that was ambushed in August 1942.
Hammel, Eric. Carrier Clash: The Invasion of Guadalcanal & The Battle of the Eastern Solomons August 1942. St. Paul, MN: Zenith Press, 2004.
Hammel, Eric. Carrier Strike: The Battle of the Santa Cruz Islands, October 1942. Pacifica, CA: Pacifica Press, 2000.
Hammel, Eric. Guadalcanal: Decision at Sea: The Naval Battle of Guadalcanal, November 13–15, 1942. New York: Crown, 1988.
Hara, Tameichi. Japanese Destroyer Captain. New York: Ballantine Books, 1961.
Hayashi, Saburo. Kogun: The Japanese Army in the Pacific War. Quantico: Marine Corps Association, 1959.
Hornfischer, James D. Neptune's Inferno: The U.S. Navy at Guadalcanal. New York: Bantam Books, 2011.
Jersey, Stanley Coleman. Hell's Islands: The Untold Story of Guadalcanal. College Station: Texas A&M University Press, 2008.
Kilpatrick, C. W. Naval Night Battles of the Solomons. Pompano Beach, FL: Exposition Press of Florida, 1987.
Leckie, Robert. Helmet for my Pillow. [S.l.]: Ibooks, 2006.
Loxton, Bruce and Chris Coulthard-Clark. The Shame of Savo: Anatomy of a Naval Disaster. St. Leonards, N.S.W.: Allen & Unwin, 1997.
Lundstrom, John B. The First Team and the Guadalcanal Campaign: Naval Fighter Combat from August to November 1942. Annapolis, MD: Naval Institute Press, 2005.
Manchester, William. Goodbye, Darkness A Memoir of the Pacific. Boston: Little, Brown and Company, 1980.
McGee, William L. The Solomons Campaigns, 1942–1943: From Guadalcanal to Bougainville – Pacific War Turning Point, Volume 2. Santa Barbara, CA: BMC Publications, 2002.
Miller, Thomas G. The Cactus Air Force. Fredericksburg, TX: Admiral Nimitz Foundation, 1969.
Morison, Samuel Eliot. The Struggle for Guadalcanal, August 1942 – February 1943, vol. V of History of United States Naval Operations in World War II. Boston: Little, Brown and Company, 1969.
Morison, Samuel Eliot. Breaking the Bismarcks Barrier, 22 July 1942 – 1 May 1944, vol. VI of History of United States Naval Operations in World War II. Boston: Little, Brown and Company, 1950.
Murray, Williamson and Allan R. Millett. A War To Be Won: Fighting the Second World War. Cambridge, MA: Belknap Press of Harvard University Press, 2000.
Peatross, Oscar F. Bless 'em All: The Raider Marines of World War II. Irvine, CA: ReView Publications, 1995.
Rottman, Gordon L. Japanese Army in World War II: The South Pacific and New Guinea, 1942–43. Oxford: Osprey, 2005.
Smith, Michael T. Bloody Ridge: The Battle That Saved Guadalcanal. Novato, CA: Pocket Books, 2003.
Toland, John. The Rising Sun: The Decline and Fall of the Japanese Empire, 1936–1945. New York: Modern Library, 2003.
External links
Presentation by James Hornfischer on his book Neptune's Inferno: The U.S. Navy at Guadalcanal at the Colby Military Writers' Symposium, 11 April 2012
Boxer Rebellion
https://en.wikipedia.org/wiki/Boxer_Rebellion
The Boxer Rebellion, also known as the Boxer Uprising, Boxer Movement, or Yihetuan Movement, was an anti-foreign, anti-imperialist, and anti-Christian uprising in North China between 1899 and 1901, towards the end of the Qing dynasty, by the Society of Righteous and Harmonious Fists. Its members were known as the "Boxers" in English, owing to many of them practicing Chinese martial arts, which at the time were referred to as "Chinese boxing". It was defeated by the Eight-Nation Alliance of foreign powers.
Following the First Sino-Japanese War, villagers in North China feared the expansion of foreign spheres of influence and resented Christian missionaries who ignored local customs and used their power to protect their followers in court. In 1898, North China experienced natural disasters, including the Yellow River flooding and droughts, which Boxers blamed on foreign and Christian influence. Beginning in 1899, the movement spread across Shandong and the North China Plain, destroying foreign property such as railroads, and attacking or murdering Chinese Christians and missionaries. The events came to a head in June 1900, when Boxer fighters, convinced they were invulnerable to foreign weapons, converged on Beijing with the slogan "Support the Qing government and exterminate the foreigners".
Diplomats, missionaries, soldiers, and some Chinese Christians took refuge in the Legation Quarter, which the Boxers besieged. The Eight-Nation Alliance—comprising American, Austro-Hungarian, British, French, German, Italian, Japanese, and Russian troops—invaded China to lift the siege and on 17 June stormed the Dagu Fort at Tianjin. Empress Dowager Cixi, who had initially been hesitant, supported the Boxers and on 21 June issued an imperial decree that was a de facto declaration of war on the invading powers. Chinese officialdom was split between those supporting the Boxers and those favouring conciliation, led by Prince Qing. The supreme commander of the Chinese forces, the Manchu general Ronglu, later claimed he acted to protect the foreigners. Officials in the southern provinces ignored the imperial order to fight against foreigners.
The Eight-Nation Alliance, after initially being turned back by the Imperial Chinese military and Boxer militia, brought 20,000 armed troops to China. They defeated the Imperial Army in Tianjin and arrived in Beijing on 14 August, relieving the 55-day Siege of the International Legations. Plunder and looting of the capital and the surrounding countryside ensued, along with summary execution of those suspected of being Boxers in retribution. The Boxer Protocol of 7 September 1901 provided for the execution of government officials who had supported the Boxers, for foreign troops to be stationed in Beijing, and for 450 million taels of silver—more than the government's annual tax revenue—to be paid as indemnity over the course of the next 39 years to the eight invading nations. The Qing dynasty's handling of the Boxer Rebellion further weakened both their credibility and control over China, and led to the Late Qing reforms, and to a greater extent the Xinhai Revolution.
Background
Origin of the Boxers
The Righteous and Harmonious Fists arose in the inland sections of the northern coastal province of Shandong, a region which had long been plagued by social unrest, religious sects, and martial societies. American Christian missionaries were probably the first people who referred to the well-trained, athletic young men as the "Boxers", because of the martial arts which they practised and the weapons training which they underwent. Their primary practice was a type of spiritual possession which involved the whirling of swords, violent prostrations, and incantations to deities.
The opportunities to fight against Western encroachment were especially attractive to unemployed village men, many of whom were teenagers. The tradition of possession and invulnerability went back several hundred years but took on special meaning against the powerful new weapons of the West. The Boxers, armed with rifles and swords, claimed supernatural invulnerability against cannons, rifle shots, and knife attacks. The Boxer groups popularly claimed that millions of soldiers would descend out of heaven to assist them in purifying China of foreign oppression. Members demonstrated their claimed invulnerability to new initiates by firing guns loaded with blank rounds at one another.
In 1895, despite ambivalence toward their heterodox practices, Yuxian, a Manchu who was the then prefect of Cao Prefecture and would later become provincial governor, cooperated with the Big Swords Society, whose original purpose was to fight bandits. The German Catholic missionaries of the Society of the Divine Word had built up their presence in the area, partially by taking in a significant portion of converts who were "in need of protection from the law". On one occasion in 1895, a large bandit gang defeated by the Big Swords Society claimed to be Catholics to avoid prosecution. "The line between Christians and bandits became increasingly indistinct", remarks historian Paul Cohen.
Some missionaries such as Georg Maria Stenz also used their privileges to intervene in lawsuits. The Big Swords responded by attacking Catholic properties and burning them. As a result of diplomatic pressure in the capital, Yuxian executed several Big Sword leaders but did not punish anyone else. More martial secret societies started emerging after this.
The early years saw a variety of village activities, not a broad movement with a united purpose. Martial folk-religious societies such as the Baguadao ('Eight Trigrams') prepared the way for the Boxers. Like the Red Boxing school or the Plum Flower tradition, the Boxers of Shandong were more concerned with traditional social and moral values, such as filial piety, than with foreign influences. One leader, Zhu Hongdeng (Red Lantern Zhu), started as a wandering healer, specialising in skin ulcers, and gained wide respect by refusing payment for his treatments. Zhu claimed descent from Ming dynasty emperors, since his surname was the surname of the Ming imperial family. He announced that his goal was to "Revive the Qing and destroy the foreigners".
The enemy was seen as foreign influence. The Boxers decided that the "primary devils" were the Christian missionaries and the "secondary devils" were the Chinese converts to Christianity; both had either to repent, be driven out, or be killed.
Causes
The movement had multiple causes, both domestic and international. Escalating tensions caused many Chinese to turn against the "foreign devils" who engaged in the Scramble for China in the late 19th century. The Western success at controlling China, growing anti-imperialist sentiment, and extreme weather conditions sparked the movement. A drought followed by floods in Shandong province in 1897–98 forced farmers to flee to cities and seek food.
A major source of discontent in northern China was missionary activity. The Boxers opposed German missionaries in Shandong and in the German concession in Qingdao. The Treaty of Tientsin and the Convention of Peking, signed in 1860 after the Second Opium War, had granted foreign missionaries the freedom to preach anywhere in China and to buy land on which to build churches. There was strong public indignation over the dispossession of Chinese temples that were replaced by Catholic churches, which were viewed as deliberately anti-feng shui. A further cause of discontent among Chinese people was the destruction of Chinese burial sites to make way for German railroads and telegraph lines. In response to Chinese protests against German railroads, Germans shot the protestors.
Economic conditions in Shandong also contributed to rebellion. Northern Shandong's economy focused significantly on cotton production and was hampered by the importation of foreign cotton. Traffic along the Grand Canal was also decreasing, further eroding the economy. The area had also experienced periods of drought and flood.
A major precipitating incident was anger at the German Catholic priest Georg Stenz, who had allegedly serially raped Chinese women in Juye County, Shandong. In an attack known as the Juye Incident, Chinese rebels attempted to kill Stenz in his missionary quarters, but failed to find him and killed two other missionaries. The German Navy's East Asia Squadron was dispatched to occupy Jiaozhou Bay on the southern coast of the Shandong peninsula.
In December 1897, the German Emperor Wilhelm II declared his intent to seize territory in China, which triggered a "scramble for concessions" in which Britain, France, Russia and Japan also secured their own spheres of influence in China. Germany gained exclusive control of developmental loans, mining, and railway ownership in Shandong province. Russia gained influence over all territory north of the Great Wall, retention of its previous tax exemption for trade in Mongolia and Xinjiang, and economic powers similar to Germany's over Fengtian, Jilin and Heilongjiang. France gained influence over Yunnan and most of Guangxi and Guangdong, and Japan over Fujian. Britain gained influence over the whole Yangtze valley (defined as all provinces adjoining the Yangtze, as well as Henan and Zhejiang), parts of Guangdong and Guangxi provinces, and part of Tibet. Only Italy's request for Zhejiang was declined by the Chinese government. These spheres did not include the lease and concession territories, where the foreign powers had full authority. The Russian government militarily occupied its zone, imposed its law and schools, seized mining and logging privileges, settled its citizens, and even established its municipal administration in several cities.
In October 1898, a group of Boxers attacked the Christian community of Liyuantun village where a temple to the Jade Emperor had been converted into a Catholic church. Disputes had surrounded the church since 1869, when the temple had been granted to the Christian residents of the village. This incident marked the first time the Boxers used the slogan "Support the Qing, destroy the foreigners" that later characterised them.
The Boxers called themselves the "Militia United in Righteousness" for the first time in October 1899, at the Battle of Senluo Temple, a clash between Boxers and Qing government troops. By using the word "Militia" rather than "Boxers", they distanced themselves from forbidden martial arts sects and tried to give their movement the legitimacy of a group that defended orthodoxy.
Violence toward missionaries and Christians drew sharp responses from diplomats protecting their nationals, including Western seizure of harbors and forts and the moving in of troops in preparation for all-out war, as well as taking control of more land by force or by coerced long-term leases from the Qing. In 1899, the French minister in Beijing helped the missionaries to obtain an edict granting official status to every order in the Roman Catholic hierarchy, enabling local priests to support their people in legal or family disputes and bypass the local officials. After the German government took over Shandong, many Chinese feared that the foreign missionaries and possibly all Christian activities were imperialist attempts at "carving the melon", i.e., to colonise China piece by piece. A Chinese official expressed the animosity towards foreigners succinctly, "Take away your missionaries and your opium and you will be welcome."
In 1899, the Boxer Rebellion developed into a mass movement. The previous year, the Hundred Days' Reform, in which progressive Chinese reformers persuaded the Guangxu Emperor to engage in modernizing efforts, was suppressed by Empress Dowager Cixi and Yuan Shikai. The Qing political elite struggled with the question of how to retain its power. The Qing government came to view the Boxers as a means to help oppose foreign powers. The national crisis was widely perceived within China as having been caused by "foreign aggression", even though afterwards a majority of Chinese were grateful for the actions of the alliance. The Qing government was corrupt; common people often faced extortion from government officials, and the government offered no protection from the violent actions of the Boxers.
Qing forces
The military of the Qing dynasty had been dealt a severe blow by the First Sino-Japanese War, which prompted a military reform program that was still in its early stages when the Boxer Rebellion occurred and the army was expected to fight. The bulk of the fighting was conducted by the forces already around Zhili, with troops from other provinces only arriving after the main fighting had ended.
Estimates of Qing strength, 1898–1900 (totals):
The Boards of War/Revenue (field troops only): 360,000
Russian General Staff (field troops only): 205,000
E. H. Parker (Zhili alone): 125,000–130,000
The London Times (Zhili alone): 110,000–140,000
The failure of the Qing forces to withstand the Allied forces was not surprising given the limited time for reform and the fact that the best troops of China were not committed to the fight, remaining instead in Huguang and Shandong. The officer corps was particularly deficient; many lacked basic knowledge of strategy and tactics, and even those with training had not actively commanded troops in the field. In addition, the regular soldiers were noted for their poor marksmanship and inaccuracy, while cavalry was ill-organised and was not used to its full extent. Tactically, the Chinese still retained their belief in the superiority of defence, often withdrawing as soon as they were flanked, a tendency attributable to their lack of combat experience and training as well as a lack of initiative from commanders who would rather retreat than counterattack. However, accusations of cowardice were minimal; this was a marked improvement from the Sino-Japanese War of 1894–1895, as Chinese troops did not flee en masse as before. If led by courageous officers, the troops would often fight to the death as occurred under Nie Shicheng and Ma Yukun.
On the other hand, Chinese artillery was well-regarded, and caused far more casualties than the infantry at Tientsin, proving themselves superior to Allied artillery in counter-battery fire. The infantry, for their part, were commended for their good usage of cover and concealment in addition to their tenacity in resistance.
The Boxers also targeted Jewish groups in the region, destroying their reputation and leading Britain to temporarily withdraw its civilian workers from the front lines.
Boxer War
Intensifying crisis
In January 1900, with a majority of conservatives in the imperial court, Cixi changed her position on the Boxers and issued edicts in their defence, causing protests from foreign powers. Cixi urged provincial authorities to support the Boxers, although few did so. In the spring of 1900, the Boxer movement spread rapidly north from Shandong into the countryside near Beijing. Boxers burned Christian churches, killed Chinese Christians and intimidated Chinese officials who stood in their way. American Minister Edwin H. Conger cabled Washington, "the whole country is swarming with hungry, discontented, hopeless idlers".
On 30 May the diplomats, led by British Minister Claude Maxwell MacDonald, requested that foreign soldiers come to Beijing to defend the legations. The Chinese government reluctantly acquiesced, and the next day a multinational force of 435 navy troops from eight countries debarked from warships and travelled by train from the Taku Forts to Beijing. They set up defensive perimeters around their respective missions.
On 5 June 1900, the railway line to Tianjin was cut by Boxers in the countryside, and Beijing was isolated. On 11 June, at Yongdingmen, the secretary of the Japanese legation, Sugiyama Akira, was attacked and killed by the forces of General Dong Fuxiang, who were guarding the southern part of the Beijing walled city. Armed with Mauser rifles but wearing traditional uniforms, Dong's troops had threatened the foreign legations in the fall of 1898 soon after arriving in Beijing, so much that United States Marines had been called to Beijing to guard the legations.
Wilhelm was so alarmed by the Chinese Muslim troops that he asked the Ottoman caliph, Abdul Hamid II, to find a way to stop them from fighting. Abdul Hamid agreed to the Kaiser's request and sent Enver Pasha (not to be confused with the later Young Turk leader) to China in 1901, but the rebellion was over by that time.
On 11 June, the first Boxer was seen in the Peking Legation Quarter. The German Minister Clemens von Ketteler and German soldiers captured a Boxer boy and inexplicably executed him.Weale, B. L. (Bertram Lenox Simpson), Indiscreet Letters from Peking. New York: Dodd, Mead, 1907, pp. 50–51. In response, thousands of Boxers burst into the walled city of Beijing that afternoon and burned many of the Christian churches and cathedrals in the city, burning some victims alive. American and British missionaries took refuge in the Methodist Mission, and an attack there was repulsed by US Marines. The soldiers at the British Embassy and German legations shot and killed several Boxers. The Kansu Braves and Boxers, along with other Chinese, then attacked and killed Chinese Christians around the legations in revenge for foreign attacks on Chinese.
Seymour Expedition
As the situation grew more violent, the Eight Powers authorities at Dagu dispatched a second multinational force to Beijing on 10 June 1900. This force of 2,000 sailors and marines was under the command of Vice Admiral Edward Hobart Seymour, the largest contingent being British. The force moved by train from Dagu to Tianjin with the agreement of the Chinese government, but the railway had been severed between Tianjin and Beijing. Seymour resolved to continue forward by rail to the break and repair the railway, or progress on foot from there, if necessary, as it was only 120 km from Tianjin to Beijing. The court then replaced Prince Qing at the Zongli Yamen with Manchu Prince Duan, a member of the imperial Aisin Gioro clan (foreigners called him a "Blood Royal"), who was anti-foreigner and pro-Boxer. He soon ordered the Imperial army to attack the foreign forces. Confused by conflicting orders from Beijing, General Nie Shicheng let Seymour's army pass by in their trains.
After leaving Tianjin, the force quickly reached Langfang, but the railway was destroyed there. Seymour's engineers tried to repair the line, but the force found itself surrounded, as the railway had been destroyed both ahead of and behind them. They were attacked from all sides by Chinese irregulars and imperial troops. Five thousand of Dong Fuxiang's Gansu Braves and an unknown number of Boxers won a costly but major victory over Seymour's troops at the Battle of Langfang on 18 June. The Seymour force could not locate the Chinese artillery, which was raining shells upon their positions. Chinese troops employed mining, engineering, flooding, and simultaneous attacks. The Chinese also employed pincer movements, ambushes, and sniping with some success.
On 18 June, Seymour learned of attacks on the Legation Quarter in Beijing, and decided to continue advancing, this time along the Beihe River, toward Tongzhou, near Beijing. By 19 June, the force was halted by progressively stiffening resistance and started to retreat southward along the river with over 200 wounded. The force was now very low on food, ammunition, and medical supplies. They happened upon the Great Hsi-Ku Arsenal, a hidden Qing munitions cache of which the Eight Powers had had no knowledge until then.
There they dug in and awaited rescue. A Chinese servant slipped through the Boxer and Imperial lines, reached Tianjin, and informed the Eight Powers of Seymour's predicament. His force was surrounded by Imperial troops and Boxers, attacked nearly around the clock, and at the point of being overrun. The Eight Powers sent a relief column from Tianjin of 1,800 men (900 Russian troops from Port Arthur, 500 British seamen, and other assorted troops). On 25 June the relief column reached Seymour. The Seymour force destroyed the Arsenal: they spiked the captured field guns and set fire to any munitions that they could not take (an estimated £3 million worth). The Seymour force and the relief column marched back to Tientsin, unopposed, on 26 June. Seymour's casualties during the expedition were 62 killed and 228 wounded.
Conflict within the Qing imperial court
Meanwhile, in Beijing, on 16 June, Empress Dowager Cixi summoned the imperial court for a mass audience and addressed the choice between using the Boxers to evict the foreigners from the city, and seeking a diplomatic solution. In response to a high official who doubted the efficacy of the Boxers, Cixi replied that both sides of the debate at the imperial court realised that popular support for the Boxers in the countryside was almost universal and that suppression would be both difficult and unpopular, especially when foreign troops were on the march.
Siege of the Beijing legations
On 15 June, Qing imperial forces deployed electric naval mines in the Beihe River to prevent the Eight-Nation Alliance from sending ships to attack. With a difficult military situation in Tianjin and a total breakdown of communications between Tianjin and Beijing, the allied nations took steps to reinforce their military presence significantly. On 17 June, Allied forces under Russian Admiral Yevgeni Alekseyev took the Dagu Forts commanding the approaches to Tianjin, and from there brought increasing numbers of troops on shore. When Cixi received an ultimatum that same day demanding that China surrender total control over all its military and financial affairs to foreigners, she defiantly stated before the entire Grand Council, "Now they [the Powers] have started the aggression, and the extinction of our nation is imminent. If we just fold our arms and yield to them, I would have no face to see our ancestors after death. If we must perish, why don't we fight to the death?" It was at this point that Cixi began to blockade the legations with the armies of the Peking Field Force, which began the siege. Cixi stated that "I have always been of the opinion, that the allied armies had been permitted to escape too easily in 1860. Only a united effort was then necessary to have given China the victory. Today, at last, the opportunity for revenge has come", and said that millions of Chinese would join the cause of fighting the foreigners since the Manchus had provided "great benefits" on China. On receipt of the news of the attack on the Dagu Forts on 19 June, Empress Dowager Cixi immediately sent an order to the legations that the diplomats and other foreigners depart Beijing under escort of the Chinese army within 24 hours.Tan, p. 75
The next morning, diplomats from the besieged legations met to discuss the Empress's offer. The majority quickly agreed that they could not trust the Chinese army. Fearing that they would be killed, they agreed to refuse the Empress's demand. The German Imperial Envoy, Baron Clemens von Ketteler, was infuriated with the actions of the Chinese army troops and determined to take his complaints to the royal court. Against the advice of the fellow foreigners, the baron left the legations with a single aide and a team of porters to carry his sedan chair. On his way to the palace, von Ketteler was killed on the streets of Beijing by a Manchu captain. His aide managed to escape the attack and carried word of the baron's death back to the diplomatic compound. At this news, the other diplomats feared they also would be murdered if they left the legation quarter and they chose to continue to defy the Chinese order to depart Beijing. The legations were hurriedly fortified. Most of the foreign civilians, which included a large number of missionaries and businessmen, took refuge in the British legation, the largest of the diplomatic compounds. Chinese Christians were primarily housed in the adjacent palace (Fu) of Prince Su, who was forced to abandon his property by the foreign soldiers.
On 21 June, Cixi issued an imperial decree stating that hostilities had begun and ordering the regular Chinese army to join the Boxers in their attacks on the invading troops. This was in effect a declaration of war, though the Allies never issued a formal declaration of their own. Regional governors in the south, who commanded substantial modernised armies, such as Li Hongzhang at Guangzhou, Yuan Shikai in Shandong, Zhang Zhidong at Wuhan, and Liu Kunyi at Nanjing, formed the Mutual Defense Pact of the Southeastern Provinces. They refused to recognise the imperial court's declaration of war, which they declared an illegitimate order, and withheld knowledge of it from the public in the south. Yuan Shikai used his own forces to suppress Boxers in Shandong, and Zhang entered into negotiations with the foreigners in Shanghai to keep his army out of the conflict. The neutrality of these provincial and regional governors left the majority of Chinese military forces out of the conflict. The republican revolutionary Sun Yat-sen even took the opportunity to submit a proposal to Li Hongzhang to declare an independent democratic republic, although nothing came of the suggestion.
The legations of the United Kingdom, France, Germany, Italy, Austria-Hungary, Spain, Belgium, the Netherlands, the United States, Russia and Japan were located in the Beijing Legation Quarter south of the Forbidden City. The Chinese army and Boxer irregulars besieged the Legation Quarter from 20 June to 14 August 1900. A total of 473 foreign civilians, 409 soldiers, marines and sailors from eight countries, and about 3,000 Chinese Christians took refuge there. Under the command of the British minister to China, Claude Maxwell MacDonald, the legation staff and military guards defended the compound with small arms, three machine guns, and one old muzzle-loading cannon, which was nicknamed the International Gun because the barrel was British, the carriage Italian, the shells Russian and the crew American. Chinese Christians in the legations led the foreigners to the cannon and it proved important in the defence. Also under siege in Beijing was the Northern Cathedral (Beitang) of the Catholic Church. The cathedral was defended by 43 French and Italian soldiers, 33 Catholic foreign priests and nuns, and about 3,200 Chinese Catholics. The defenders suffered heavy casualties from lack of food and from mines which the Chinese exploded in tunnels dug beneath the compound. The number of Chinese soldiers and Boxers besieging the Legation Quarter and the Beitang is unknown. Zaiyi's bannermen in the Tiger and Divine Corps led attacks against the Catholic cathedral.
On 22 and 23 June, Chinese soldiers and Boxers set fire to areas north and west of the British Legation, using fire as a "frightening tactic" against the defenders. The nearby Hanlin Academy, a complex of courtyards and buildings that housed "the quintessence of Chinese scholarship ... the oldest and richest library in the world", caught fire. Each side blamed the other for the destruction of the invaluable books it contained.
After the failure to burn out the foreigners, the Chinese army adopted an anaconda-like strategy. The Chinese built barricades surrounding the Legation Quarter and advanced, brick by brick, on the foreign lines, forcing the foreign legation guards to retreat a few feet at a time. This tactic was especially used in the Fu, defended by Japanese and Italian sailors and soldiers, and inhabited by most of the Chinese Christians. Fusillades of bullets, artillery and firecrackers were directed against the Legations almost every night—but did little damage. Sniper fire took its toll among the foreign defenders. Despite their numerical advantage, the Chinese did not attempt a direct assault on the Legation Quarter although in the words of one of the besieged, "it would have been easy by a strong, swift movement on the part of the numerous Chinese troops to have annihilated the whole body of foreigners ... in an hour". American missionary Francis Dunlap Gamewell and his crew of "fighting parsons" fortified the Legation Quarter,Weale, Putnam. Indiscreet Letters from Peking. New York: Dodd, Mead, 1907, pp. 142–143 but impressed Chinese Christians to do most of the physical labour of building defences.Payen, Cecile E. "Besieged in Peking". The Century Magazine, January 1901, pp. 458–460
The Germans and the Americans occupied perhaps the most crucial of all defensive positions: the Tartar Wall. Holding the top of the tall, wide wall was vital. The German barricades on top of the wall faced east, while the American positions faced west. The Chinese advanced toward both positions by building barricades ever closer. "The men all feel they are in a trap", said the US commander Capt. John Twiggs Myers, "and simply await the hour of execution".Myers, Captain John T. "Military Operations and Defenses of the Siege of Peking". Proceedings of the U.S. Naval Institute, September 1902, pp. 542–550. On 30 June, the Chinese forced the Germans off the Wall, leaving the American Marines alone in its defence. In June 1900, one American described the scene of 20,000 Boxers storming the walls:
At the same time, a Chinese barricade was advanced to within a few feet of the American positions, and it became clear that the Americans had to abandon the wall or force the Chinese to retreat. At 2 am on 3 July, 56 British, Russian and American marines and sailors, under the command of Myers, launched an assault against the Chinese barricade on the wall. The attack caught the Chinese sleeping, killed about 20 of them, and expelled the rest of them from the barricades.Oliphant, Nigel, A Diary of the Siege of the Legations in Peking. London: Longman, Greens, 1901, pp 78–80 The Chinese did not attempt to advance their positions on the Tartar Wall for the remainder of the siege.Martin, W. A. P. The Siege in Peking. New York: Fleming H. Revell, 1900, p. 83
Sir Claude MacDonald said 13 July was the "most harassing day" of the siege. The Japanese and Italians in the Fu were driven back to their last defence line. The Chinese detonated a mine beneath the French Legation, pushing the French and Austrians out of most of the French Legation. On 16 July, the most capable British officer was killed and the journalist George Ernest Morrison was wounded. American Minister Edwin H. Conger established contact with the Chinese government, and on 17 July an armistice was declared by the Chinese.
Infighting among officials and commanders
General Ronglu concluded that it was futile to fight all of the powers simultaneously and declined to press home the siege. Zaiyi wanted artillery for Dong's troops to destroy the legations. Ronglu blocked the transfer of artillery to Zaiyi and Dong, preventing them from attacking. Ronglu forced Dong Fuxiang and his troops to pull back from completing the siege and destroying the legations, thereby saving the foreigners and making diplomatic concessions. Ronglu and Prince Qing sent food to the legations and used their bannermen to attack the Gansu Braves of Dong Fuxiang and the Boxers who were besieging the foreigners. They issued edicts ordering the foreigners to be protected, but the Gansu warriors ignored them and fought against bannermen who tried to force them away from the legations. The Boxers also took commands from Dong Fuxiang. Ronglu also deliberately hid an Imperial Decree from Nie Shicheng. The Decree ordered him to stop fighting the Boxers because of the foreign invasion, and also because the population was suffering. Due to Ronglu's actions, Nie continued to fight the Boxers and killed many of them even as the foreign troops were making their way into China. Ronglu also ordered Nie to protect foreigners and save the railway from the Boxers. Because parts of the railway were saved under Ronglu's orders, the foreign invasion army was able to transport itself into China quickly. Nie committed thousands of troops against the Boxers instead of against the foreigners, but was already outnumbered by the Allies by 4,000 men. He was blamed for attacking the Boxers, and decided to sacrifice his life at Tianjin by walking into the range of Allied guns.
Xu Jingcheng, who had served as the envoy to many of the same states under siege in the Legation Quarter, argued that "the evasion of extraterritorial rights and the killing of foreign diplomats are unprecedented in China and abroad". Xu and five other officials urged Empress Dowager Cixi to order the repression of Boxers, the execution of their leaders, and a diplomatic settlement with foreign armies. The Empress Dowager was outraged, and sentenced Xu and the five others to death for "willfully and absurdly petitioning the imperial court" and "building subversive thought". They were executed on 28 July 1900 and their severed heads placed on display at Caishikou Execution Grounds in Beijing.
Reflecting this vacillation, some Chinese soldiers were quite liberally firing at the foreigners from the very onset of the siege. Cixi did not personally order imperial troops to conduct a siege, and on the contrary had ordered them to protect the foreigners in the legations. Prince Duan led the Boxers in looting his enemies within the imperial court as well as the foreigners, although imperial authorities expelled the Boxers after they were let into the city and went on a looting rampage against both the foreigners and the Qing imperial forces. Older Boxers were sent outside Beijing to halt the approaching foreign armies, while younger men were absorbed into the Muslim Gansu army.
With conflicting allegiances and priorities motivating the various forces inside Beijing, the situation in the city became increasingly confused. The foreign legations continued to be surrounded by both Qing imperial and Gansu forces. While Dong's Gansu army, now swollen by the addition of the Boxers, wished to press the siege, Ronglu's imperial forces seem to have largely attempted to follow Cixi's decree and protect the legations. However, to satisfy the conservatives in the imperial court, Ronglu's men also fired on the legations and let off firecrackers to give the impression that they, too, were attacking the foreigners. Inside the legations and out of communication with the outside world, the foreigners simply fired on any targets that presented themselves, including messengers from the imperial court, civilians and besiegers of all persuasions. Dong Fuxiang was denied artillery held by Ronglu which stopped him from levelling the legations, and when he complained to Empress Dowager Cixi on 23 June, she dismissively said that "Your tail is becoming too heavy to wag." The Alliance discovered large amounts of unused Chinese Krupp guns and shells after the siege was lifted.
Gaselee Expedition
Foreign navies started building up their presence along the northern China coast from the end of April 1900. Several international forces were sent to the capital, with varying success, and the Chinese forces were ultimately defeated by the Alliance. Independently, the Netherlands dispatched three cruisers in July to protect its citizens in Shanghai.
British Lieutenant-General Alfred Gaselee acted as the commanding officer of the Eight-Nation Alliance, which eventually numbered 55,000. Japanese forces, led by Fukushima Yasumasa and Yamaguchi Motomi and numbering over 20,840 men, made up the majority of the expeditionary force. French forces in the campaign, led by general Henri-Nicolas Frey, consisted mostly of inexperienced Vietnamese and Cambodian conscripts from French Indochina. The "First Chinese Regiment" (Weihaiwei Regiment) which was praised for its performance, consisted of Chinese collaborators serving in the British military. Notable events included the seizure of the Dagu Forts commanding the approaches to Tianjin and the boarding and capture of four Chinese destroyers by British Commander Roger Keyes. Among the foreigners besieged in Tianjin was a young American mining engineer named Herbert Hoover, who would go on to become the 31st President of the United States.
The international force captured Tianjin on 14 July. The international force suffered its heaviest casualties of the Boxer Rebellion in the Battle of Tientsin. With Tianjin as a base, the international force of 20,000 allied troops marched towards Beijing. On 4 August, there were approximately 70,000 Qing imperial troops and anywhere from 50,000 to 100,000 Boxers along the way. The allies only encountered minor resistance, fighting battles at Beicang and Yangcun. At Yangcun, Russian general Nikolai Linevich led the US 14th Infantry Regiment and British troops in the assault. The weather was a major obstacle. Conditions were extremely humid, with very high temperatures, and the heat and insects plagued the Allies. Soldiers became dehydrated and horses died. Chinese villagers killed Allied troops who searched for wells.
The heat killed Allied soldiers, who foamed at the mouth. The tactics along the way were gruesome on both sides. Allied soldiers beheaded already dead Chinese corpses, bayoneted or beheaded live Chinese civilians, and raped Chinese girls and women. Cossacks were reported to have killed Chinese civilians almost automatically, and Japanese troops kicked a Chinese soldier to death. The Chinese responded to the Alliance's atrocities with similar acts of violence and cruelty, especially towards captured Russians. Lieutenant Smedley Butler saw the remains of two Japanese soldiers nailed to a wall, who had their tongues cut off and their eyes gouged out. Lieutenant Butler was wounded in the leg and chest during the expedition, later receiving the Brevet Medal in recognition of his actions.
The international force reached Beijing on 14 August. Following the Beiyang army's defeat in the First Sino-Japanese War, the Chinese government had invested heavily in modernising the imperial army, which was equipped with modern Mauser repeater rifles and Krupp artillery. Three modernised divisions consisting of Manchu bannermen protected the Beijing Metropolitan region. Two of them were under the command of the anti-Boxer Prince Qing and Ronglu, while the anti-foreign Prince Duan commanded the ten-thousand-strong Hushenying, or "Tiger Spirit Division", which had joined the Gansu Braves and Boxers in attacking the foreigners. It was a Hushenying captain who had assassinated the German diplomat, Ketteler. The Tenacious Army under Nie Shicheng received Western-style training under German and Russian officers in addition to their modernised weapons and uniforms. They effectively resisted the Alliance at the Battle of Tientsin before retreating and astounded the Alliance forces with the accuracy of their artillery during the siege of the Tianjin concessions (although the shells failed to explode upon impact owing to corruption in their manufacture). The Gansu Braves under Dong Fuxiang, which some sources described as "ill disciplined", were armed with modern weapons but were not trained according to Western drill and wore traditional Chinese uniforms. They led the defeat of the Alliance at Langfang in the Seymour Expedition and were the most ferocious in besieging the Legations in Beijing. The British won the race among the international forces to be the first to reach the besieged Legation Quarter. The US was able to play a role due to the presence of US ships and troops stationed in Manila since the US conquest of the Philippines during the Spanish–American War and the subsequent Philippine–American War. The US military refers to this as the China Relief Expedition. The image of United States Marines scaling the walls of Beijing became an iconic representation of the Boxer Rebellion.
The British contingent reached the Legation Quarter on the afternoon of 14 August and relieved the siege. The Beitang was relieved on 16 August, first by Japanese soldiers and then, officially, by the French.
Qing court flight to Xi'an
As the foreign armies reached Beijing, the Qing court fled to Xi'an, with Cixi disguised as a Buddhist nun. The journey was made all the more arduous by the lack of preparation, but the Empress Dowager insisted this was not a retreat, rather a "tour of inspection". After weeks of travel, the party arrived in Xi'an, beyond protective mountain passes that the foreigners could not reach, deep in Chinese Muslim territory and protected by the Gansu Braves. The foreigners had no orders to pursue Cixi, so they decided to stay put.
Russian invasion of Manchuria
The Russian Empire and the Qing dynasty had maintained a long peace, starting with the Treaty of Nerchinsk in 1689, but Russian forces took advantage of Chinese defeats to impose the Aigun Treaty of 1858 and the Treaty of Peking of 1860 which ceded formerly Chinese territory in Manchuria to Russia, much of which is held by Russia to the present day (Primorye). The Russians aimed for control over the Amur River for navigation, and the all-weather ports of Dairen and Port Arthur in the Liaodong peninsula. The rise of Japan as an Asian power provoked Russia's anxiety, especially in light of expanding Japanese influence in Korea. Following Japan's victory in the First Sino-Japanese War of 1895, the Triple Intervention of Russia, Germany and France forced Japan to return the territory won in Liaodong, leading to a de facto Sino-Russian alliance.
Local Chinese in Manchuria were incensed at these Russian advances and began to harass Russians and Russian institutions, such as the Chinese Eastern Railway, which was guarded by Russian troops under Pavel Mishchenko. In June 1900, the Chinese bombarded the town of Blagoveshchensk on the Russian side of the Amur. The Russian government, at the insistence of war minister Aleksey Kuropatkin, used the pretext of Boxer activity to move some 200,000 troops led by Paul von Rennenkampf into the area to crush the Boxers. On 27 July, the Chinese used arson to destroy a railway bridge and a barracks. The Boxers attacked the Chinese Eastern Railway and burned the Yantai mines.
Massacre of missionaries and Chinese Christians
A total of 136 Protestant missionaries, 53 children, 47 Catholic priests and nuns, 30,000 Chinese Catholics, 2,000 Chinese Protestants, and 200–400 of the 700 Russian Orthodox Christians in Beijing are estimated to have been killed during the uprising. The Protestant dead were collectively termed the China Martyrs of 1900.
Orthodox, Protestant, and Catholic missionaries and their Chinese parishioners were massacred throughout northern China, some by Boxers and others by government troops and authorities. After the declaration of war on Western powers in June 1900, Yuxian, who had been named governor of Shanxi in March of that year, implemented a brutal anti-foreign and anti-Christian policy. On 9 July, reports circulated that he had executed forty-four foreigners (including women and children) from missionary families whom he had invited to the provincial capital Taiyuan under the promise to protect them. Although the purported eyewitness accounts have recently been questioned as improbable, this event became a notorious symbol of Chinese anger, known as the Taiyuan massacre. Those questioning the accounts point out that the widely circulated versions were written by people who could not have seen the events, and that they closely followed (often word for word) well-known earlier martyr literature.
The England-based Baptist Missionary Society opened its mission in Shanxi in 1877. In 1900, all its missionaries there were killed, along with all 120 converts. By the summer's end, more foreigners and as many as 2,000 Chinese Christians had been put to death in the province. Journalist and historical writer Nat Brandt has called the massacre of Christians in Shanxi "the greatest single tragedy in the history of Christian evangelicalism".
Some 222 Russian–Chinese martyrs, including Chi Sung as St. Metrophanes, were locally canonised as New Martyrs on 22 April 1902, after Archimandrite Innocent (Fugurovsky), head of the Russian Orthodox Mission in China, solicited the Most Holy Synod to perpetuate their memory. This was the first local canonisation for more than two centuries.
Aftermath
Allied occupation and atrocities
The Eight-Nation Alliance occupied Zhili province while Russia occupied Manchuria, but the rest of China was not occupied because several Han governors had formed the Mutual Defense Pact of the Southeastern Provinces, refused to obey the declaration of war, and kept their armies and provinces out of the war. Zhang Zhidong told Everard Fraser, the Hankou-based British consul general, that he despised the Manchus, in an effort to ensure that the Eight-Nation Alliance would not occupy the provinces covered by the pact.
Beijing, Tianjin and Zhili province were occupied for more than one year by the international expeditionary force under the command of German Field Marshal Alfred von Waldersee, who had initially been appointed commander of the Eight-Nation Alliance during the rebellion but did not arrive in China until after most of the fighting had ended. The Americans and British paid General Yuan Shikai and his army (the Right Division) to help the Eight-Nation Alliance suppress the Boxers. Yuan Shikai's forces killed tens of thousands of people in their anti-Boxer campaign in Zhili province and Shandong after the Alliance captured Beijing. The majority of the hundreds of thousands of people living in inner Beijing during the Qing were Manchu and Mongol bannermen of the Eight Banners, who had been moved there in 1644 when Han Chinese were expelled. Sawara Tokusuke, a Japanese journalist, wrote in "Miscellaneous Notes about the Boxers" about the rapes of Manchu and Mongol banner girls. He alleged that soldiers of the Eight-Nation Alliance raped a large number of women in Peking, including all seven daughters of Viceroy Yulu of the Hitara clan. Likewise, a daughter and a wife of the Mongol banner noble Chongqi of the Alute clan were allegedly gang-raped by soldiers of the Eight-Nation Alliance.Sawara Tokusuke, Miscellaneous Notes about the Boxers (Quanshi zaji), in Compiled Materials on the Boxers (Yihetuan wenxian huibian), ed. Zhongguo shixue hui (Taipei: Dingwen, 1973), 1: 266–268. Chongqi killed himself on 26 August 1900, and some other relatives, including his son, Baochu, did likewise shortly afterward.Chao-ying Fang. "Chongqi". In Eminent Chinese of the Qing Period: (1644–1911/2), 74–75. Great Barrington, Massachusetts: Berkshire Publishing Group. 2018.
During attacks on suspected Boxer areas from September 1900 to March 1901, European and American forces engaged in tactics which included public decapitations of Chinese with suspected Boxer sympathies, systematic looting, routine shooting of farm animals and crop destruction, destruction of religious buildings and public buildings, burning of religious texts, and widespread rape of Chinese women and girls.
Contemporary British and American observers levelled their greatest criticism at German, Russian, and Japanese troops for their ruthlessness and willingness to execute Chinese of all ages and backgrounds, sometimes burning villages and killing their entire populations. The German force arrived too late to take part in the fighting but undertook punitive expeditions to villages in the countryside. According to missionary Arthur Henderson Smith, in addition to burning and looting, Germans "cut off the heads of many Chinese within their jurisdiction, many of them for absolutely trivial offenses". US Army Lieutenant C. D. Rhodes reported that German and French soldiers set fire to buildings where innocent peasants were sheltering and would shoot and bayonet peasants who fled the burning buildings. According to Australian soldiers, Germans extorted ransom payments from villages in exchange for not torching their homes and crops. British journalist George Lynch wrote that German and Italian soldiers engaged in a practice of raping Chinese women and girls before burning their villages. According to Lynch, German soldiers would attempt to cover up these atrocities by throwing rape victims into wells as staged suicides. Lynch said, "There are things that I must not write, and that may not be printed in England, which would seem to show that this Western civilisation of ours is merely a veneer over savagery".
On 27 July, during departure ceremonies for the German relief force, Kaiser Wilhelm II included an impromptu but intemperate reference to the Hun invaders of continental Europe:
One newspaper called the aftermath of the siege a "carnival of ancient loot", and others called it "an orgy of looting" by soldiers, civilians and missionaries. These characterisations called to mind the sacking of the Summer Palace in 1860. Each nationality accused the others of being the worst looters. An American diplomat, Herbert G. Squiers, filled several railway carriages with loot and artefacts. The British Legation held loot auctions every afternoon and proclaimed, "Looting on the part of British troops was carried out in the most orderly manner." However, one British officer noted, "It is one of the unwritten laws of war that a city which does not surrender at the last and is taken by storm is looted." For the rest of 1900 and 1901, the British held loot auctions every day except Sunday in front of the main-gate to the British Legation. Many foreigners, including Claude Maxwell MacDonald and Lady Ethel MacDonald and George Ernest Morrison of The Times, were active bidders among the crowd. Many of these looted items ended up in Europe. The Catholic Beitang or North Cathedral was a "salesroom for stolen property".Chamberlin, Wilbur J. letter to his wife (11 December 1900), in Ordered to China: Letters of Wilbur J. Chamberlin: Written from China While Under Commission from the New York Sun During the Boxer Uprising of 1900 and the International Complications Which Followed, (New York: Frederick A. Stokes, 1903), p. 191 The American general Adna Chaffee banned looting by American soldiers, but the ban was ineffectual. According to Chaffee, "it is safe to say that where one real Boxer has been killed, fifty harmless coolies or laborers, including not a few women and children, have been slain".
A few Western missionaries took an active part in calling for retribution. To provide restitution to missionaries and Chinese Christian families whose property had been destroyed, William Scott Ament, a missionary of American Board of Commissioners for Foreign Missions, guided American troops through villages to punish those he suspected of being Boxers and confiscate their property. When Mark Twain read of this expedition, he wrote a scathing essay, "To the Person Sitting in Darkness", that attacked the "Reverend bandits of the American Board", especially targeting Ament, one of the most respected missionaries in China. The controversy was front-page news during much of 1901. Ament's counterpart on the distaff side was British missionary Georgina Smith, who presided over a neighbourhood in Beijing as judge and jury.
While one historical account reported that Japanese troops were astonished by other Alliance troops raping civilians, others noted that Japanese troops were "looting and burning without mercy", and that Chinese "women and girls by hundreds have committed suicide to escape a worse fate at the hands of Russian and Japanese brutes". Roger Keyes, who commanded the British destroyer Fame and accompanied the Gaselee Expedition, noted that the Japanese had brought their own "regimental wives" (prostitutes) to the front to keep their soldiers from raping Chinese civilians.
The Daily Telegraph journalist E. J. Dillon stated that he witnessed the mutilated corpses of Chinese women who were raped and killed by the Alliance troops. The French commander dismissed the rapes, attributing them to "gallantry of the French soldier". According to U.S. Captain Grote Hutcheson, French forces burned each village they encountered during a 99-mile march and planted the French flag in the ruins.
Many bannermen supported the Boxers and shared their anti-foreign sentiment. The banner armies had been devastated in the First Sino-Japanese War of 1895, and, in the words of historian Pamela Crossley, their living conditions went "from desperate poverty to true misery". When thousands of Manchus fled south from Aigun during the fighting in 1900, their cattle and horses were stolen by Russian Cossacks, who then burned their villages and homes to ashes. Manchu banner armies were destroyed while resisting the invasion, many of them annihilated by the Russians. The Manchu official Shoufu killed himself during the battle of Peking, and the father of the Manchu writer Lao She was killed by Western soldiers in the battle, as the Manchu banner armies of the Center Division of the Guards Army, the Tiger Spirit Division and the Peking Field Force in the Metropolitan banners were slaughtered. The Inner-city Legation Quarter and the Catholic cathedral (Church of the Saviour, Beijing) had both been attacked by Manchu bannermen, and bannermen were killed by the Eight-Nation Alliance all over Manchuria and Beijing because most of them had supported the Boxers. The clan system of the Manchus in Aigun was obliterated by the despoliation of the area at the hands of the Russian invaders. There were 1,266 households, including 900 Daurs and 4,500 Manchus, in the Sixty-Four Villages East of the River and Blagoveshchensk until the Blagoveshchensk massacre and the Sixty-Four Villages East of the River massacre committed by Russian Cossack soldiers.Russian Imperial General Staff, Compilation of Geographical, Topographical and Statistical Materials on Asia (St. Petersburg, 1886), vol. 31, p. 185 (in Russian). Many Manchu villages were burned by Cossacks in the massacre, according to Victor Zatsepine.
Manchu royals, officials and officers such as Yuxian, Qixiu, Zaixun, Prince Zhuang, and Captain Enhai were executed or forced to commit suicide at the demand of the Eight-Nation Alliance. The execution of the Manchu official Gangyi was also demanded, but he had already died. (Draft History of Qing, Volume 465) Japanese soldiers arrested Qixiu before he was executed.Sawara Tokusuke, Quanluan jiwen (Notes on the Boxer Disturbance): "Qixiu, Minister of the Board of War, had vigorously aided the old party, had memorialised vouching for the Mount Wutai monk Pujing as a holy monk, and had directed the attack on Xishiku; on the 27th day of the eighth month he was detained by Japanese soldiers." Zaixun, Prince Zhuang, was forced to commit suicide on 21 February 1901, and Yuxian was executed on 22 February 1901. On 31 December 1900, German soldiers beheaded the Manchu captain Enhai for killing Clemens von Ketteler.
Indemnity
After the capture of Peking by the foreign armies, some of Cixi's advisers advocated that the war be carried on, arguing that China could have defeated the foreigners because it was disloyal and traitorous people within China who had allowed Beijing and Tianjin to be captured by the Allies, and that the interior of China was impenetrable. They also recommended that Dong Fuxiang continue fighting. The Empress Dowager Cixi was practical, however, and decided that the terms were generous enough for her to acquiesce when she was assured of her continued reign after the war and that China would not be forced to cede any territory.
On 7 September 1901, the Qing imperial court agreed to sign the Boxer Protocol, also known as the Peace Agreement between the Eight-Nation Alliance and China. The protocol ordered the execution of 10 high-ranking officials linked to the outbreak and of other officials who were found guilty of the slaughter of foreigners in China. Alfons Mumm, Ernest Satow, and Komura Jutaro signed on behalf of Germany, Britain, and Japan, respectively.
China was fined war reparations of 450,000,000 taels of fine silver for the losses it had caused. The reparation was to be paid by 1940, within 39 years, and would amount to 982,238,150 taels with interest (4 per cent per year) included. To help meet these indemnity demands, the existing tariff was increased from 3.18 to 5 per cent, and formerly duty-free merchandise was newly taxed. The sum of the reparations was calculated on the basis of the Chinese population (roughly 450 million in 1900), at one tael per person. Chinese customs income and salt taxes guaranteed the payment of the reparations. China paid 668,661,220 taels of silver from 1901 to 1939 – equivalent in 2010 to US$61 billion on a purchasing-power-parity basis.
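The figures quoted above can be sanity-checked with simple arithmetic. The sketch below is only an illustration of the quoted numbers; the flat-interest comparison at the end is an assumption introduced here for contrast, not the actual amortisation schedule laid out in the Boxer Protocol.

```python
# Minimal arithmetic check of the indemnity figures quoted above.
# The flat-interest comparison is an illustrative assumption, not the
# actual Boxer Protocol payment schedule.

principal = 450_000_000        # indemnity principal, in taels
quoted_total = 982_238_150     # total with 4% annual interest, paid by 1940
population = 450_000_000       # rough Chinese population in 1900
years = 39
rate = 0.04

# The assessment worked out to roughly one tael per person.
print(principal / population)           # 1.0

# Interest implied by the quoted total.
print(quoted_total - principal)         # 532,238,150 taels

# If 4% had been charged on the full principal for all 39 years, the total
# would have been larger, so interest must have accrued on a balance that
# declined as instalments were paid.
print(principal * (1 + rate * years))   # 1,152,000,000 taels
```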
A large portion of the reparations paid to the United States was diverted to pay for the education of Chinese students in US universities under the Boxer Indemnity Scholarship Program. To prepare the students chosen for this program, an institute was established to teach the English language and to serve as a preparatory school. When the first of these students returned to China, they undertook the teaching of subsequent students; from this institute was born Tsinghua University.
The China Inland Mission lost more members than any other missionary agency: 58 adults and 21 children were killed. However, in 1901, when the allied nations were demanding compensation from the Chinese government, Hudson Taylor refused to accept payment for loss of property or life, in order to demonstrate the meekness and gentleness of Christ to the Chinese.
The Belgian Catholic vicar apostolic of Ordos, Bermyn, wanted foreign troops garrisoned in Inner Mongolia, but the governor refused. Bermyn petitioned the Manchu Enming to send troops to Hetao, where Prince Duan's Mongol troops and General Dong Fuxiang's Muslim troops allegedly threatened Catholics. It turned out that Bermyn had fabricated the incident as a hoax. According to the Mongol historian Shirnut Sodbilig, Western Catholic missionaries forced Mongols to give up their land to Han Chinese Catholics as part of the Boxer indemnities. Mongols had participated in attacks against Catholic missions during the Boxer Rebellion.
The Qing government did not capitulate to all the foreign demands. The Manchu governor Yuxian was executed, but the imperial court refused to execute the Han Chinese General Dong Fuxiang, although he had also encouraged the killing of foreigners during the rebellion. Empress Dowager Cixi intervened when the Alliance demanded his execution, and Dong was only cashiered and sent back home. Instead, Dong lived a life of luxury and power in "exile" in his home province of Gansu. Upon Dong's death in 1908, all honours which had been stripped from him were restored and he was given a full military burial. The indemnity was never fully paid and was lifted during World War II.
Long-term consequences
The occupation of Beijing by foreign powers and the failure of the rebellion further eroded support for the Qing state. Support for reforms decreased, while support for revolution increased. In the ten years after the Boxer Rebellion, uprisings in China increased, particularly in the south. Support grew for the Tongmenghui, an alliance of anti-Qing groups which later became the Kuomintang.
Cixi returned to Beijing, as the foreign powers believed that maintaining the Qing government was the best way to control China. The Qing state made further efforts to reform. It abolished the imperial examinations in 1905 and sought to gradually introduce consultative assemblies. Along with the formation of new military and police organisations, the reforms also simplified the central bureaucracy and made a start at revamping taxation policies. These efforts failed to save the Qing dynasty, which was overthrown in the 1911 Xinhai Revolution.
In October 1900, Russia occupied the provinces of Manchuria, a move that threatened Anglo-American hopes of maintaining the country's openness to commerce under the Open Door Policy.
The historian Walter LaFeber has argued that President William McKinley's decision to send 5,000 American troops to quell the rebellion marks "the origins of modern presidential war powers":Woods, Thomas (7 July 2005) Presidential War Powers, LewRockwell.com
Arthur M. Schlesinger Jr. concurred and wrote:Schlesinger, Arthur. The Imperial Presidency (Popular Library 1974), p. 96.
Analysis of the Boxers
From the beginning, views differed as to whether the Boxers were better seen as anti-imperialist, patriotic and proto-nationalist, or as backward, irrational, and futile opponents of what was inevitable change. The historian Joseph W. Esherick comments that "confusion about the Boxer Uprising is not simply a matter of popular misconceptions" since "there is no major incident in China's modern history on which the range of professional interpretation is as great".
The Boxers drew condemnation from those who wanted to modernise China according to a Western model of civilisation. Sun Yat-sen, considered the founding father of modern China, was at the time working to overthrow the Qing but believed that the government had spread rumours that "caused confusion among the populace" and stirred up the Boxer Movement. He delivered "scathing criticism" of the Boxers' "anti-foreignism and obscurantism". Sun praised the Boxers for their "spirit of resistance" but called them "bandits". Students studying in Japan were ambivalent. Some stated that while the uprising originated from the ignorant and stubborn people, their beliefs were brave and righteous and could be transformed into a force for independence. After the fall of the Qing dynasty in 1911, nationalistic Chinese became more sympathetic to the Boxers. In 1918, Sun praised their fighting spirit and said that the Boxers had been courageous and fearless in fighting to the death against the Alliance armies, citing specifically the Battle of Yangcun.Sun Yat-sen, "A Letter to the Governor of Hong Kong", quoted in Li Weichao, "Modern Chinese Nationalism and the Boxer Movement". Chinese liberals such as Hu Shih, who called on China to modernise, still condemned the Boxers for their irrationality and barbarity.Gu Zexu, "An Overview of Evaluations of the Boxers by Intellectual Leaders of the Late Qing and Early Republic" (in Chinese). The leader of the New Culture Movement, Chen Duxiu, forgave the "barbarism of the Boxer ... given the crime foreigners committed in China", and contended that it was those "subservient to the foreigners" who truly "deserved our resentment".
In other countries, views of the Boxers were complex and contentious. Mark Twain said that "the Boxer is a patriot. He loves his country better than he does the countries of other people. I wish him success." The Russian writer Leo Tolstoy also praised the Boxers and accused Nicholas II of Russia and Wilhelm II of Germany of being chiefly responsible for the lootings, rapes, murders, and "Christian brutality" of the Russian and Western troops. The Russian revolutionary Vladimir Lenin mocked the Russian government's claim that it was protecting Christian civilisation: "Poor Imperial Government! So Christianly unselfish, and yet so unjustly maligned! Several years ago it unselfishly seized Port Arthur, and now it is unselfishly seizing Manchuria; it has unselfishly flooded the frontier provinces of China with hordes of contractors, engineers, and officers, who, by their conduct, have roused to indignation even the Chinese, known for their docility."V. I. Lenin, "The War in China", Iskra, No. 1 (December 1900), in Lenin Collected Works (Moscow: Progress Publishers, 1964), Volume 4, pp. 372–377, online Marxists Internet Archive. The Russian newspaper Amurskii Krai criticised the killing of innocent civilians and charged that restraint would have been more becoming of a "civilized Christian nation", asking: "What shall we tell civilized people? We shall have to say to them: 'Do not consider us as brothers anymore. We are mean and terrible people; we have killed those who hid at our place, who sought our protection.'" Lenin saw the Boxers as an avant-garde proletarian force fighting against imperialism.
Some American churchmen spoke out in support of the Boxers. In 1912, the evangelist George F. Pentecost said that the Boxer uprising was a:
The Bengali poet Rabindranath Tagore attacked the European colonialists. A number of Indian soldiers in the British Indian Army sympathised with the cause of the Boxers, and in 1994 the Indian military returned to China a bell that British soldiers had looted from the Temple of Heaven.
The events also had a longer-term impact. The historian Robert Bickers noted that for the British government the Boxer Rebellion served as an equivalent of the Indian Rebellion of 1857, and that it stoked fears of the "Yellow Peril" among the British public. He adds that later events, such as the Northern Expedition of the 1920s and even the activities of the Red Guards during the 1960s, were perceived as standing in the shadow of the Boxers.
History textbooks in Taiwan and Hong Kong often present the Boxers as irrational, but textbooks issued by the central government in mainland China have described the Boxer movement as an anti-imperialist, patriotic peasant movement that failed because it lacked leadership from the modern working class, and they describe the international army as an invading force. In recent decades, however, large-scale projects of village interviews and explorations of archival sources have led historians in China to take a more nuanced view. Some non-Chinese scholars, such as Joseph Esherick, have seen the movement as anti-imperialist, but others hold that the concept "nationalistic" is anachronistic because the Chinese nation had not yet been formed and the Boxers were more concerned with regional issues. Paul Cohen's recent study includes a survey of "the Boxers as myth", which shows how their memory was used in changing ways in 20th-century China, from the New Culture Movement to the Cultural Revolution.
In recent years, the Boxer question has been debated in the People's Republic of China. In 1998, the critical scholar Wang Yi argued that the Boxers had features in common with the extremism of the Cultural Revolution: both events had the external goal of "liquidating all harmful pests" and the domestic goal of "eliminating bad elements of all descriptions", and Wang argued that this relation was rooted in "cultural obscurantism". Wang explained to his readers the changes in attitudes towards the Boxers, from the condemnation of the May Fourth Movement to the approval expressed by Mao Zedong during the Cultural Revolution.Wang Yi, "The Cultural Origins of the Boxer Movement's Obscurantism and Its Influence on the Cultural Revolution", in Douglas Kerr, ed., Critical Zone Three. (Hong Kong University Press), 155. In 2006, Yuan Weishi, a professor of philosophy at Zhongshan University in Guangzhou, wrote that the Boxers by their "criminal actions brought unspeakable suffering to the nation and its people! These are all facts that everybody knows, and it is a national shame that the Chinese people cannot forget." Yuan charged that history textbooks had been lacking in neutrality by presenting the Boxer Uprising as a "magnificent feat of patriotism" while omitting the view that most Boxer rebels were violent. In response, some labelled Yuan Weishi a "traitor" (Hanjian).
Terminology
The name "Boxer Rebellion", concludes Joseph W. Esherick, a contemporary historian, is truly a "misnomer", for the Boxers "never rebelled against the Manchu rulers of China and their Qing dynasty" and the "most common Boxer slogan, throughout the history of the movement, was 'support the Qing, destroy the Foreign,' where 'foreign' clearly meant the foreign religion, Christianity, and its Chinese converts as much as the foreigners themselves". He adds that only after the movement was suppressed by the Allied Intervention did the foreign powers and influential Chinese officials both realise that the Qing would have to remain as the government of China to maintain order and collect taxes to pay the indemnity. Therefore, to save face for the Empress Dowager and the members of the imperial court, all argued that the Boxers were rebels and that the only support which the Boxers received from the imperial court came from a few Manchu princes. Esherick concludes that the origin of the term "rebellion" was "purely political and opportunistic", but it has had a remarkable staying power, particularly in popular accounts.Esherick p. xiv. Esherick notes that many textbooks and secondary accounts followed Victor Purcell, The Boxer Uprising: A Background Study (1963) in seeing a shift from an early anti-dynastic movement to pro-dynastic, but that the "flood of publications" from Taiwan and the People's Republic (including both documents from the time and oral histories conducted in the 1950s) has shown this not to be the case. xv–xvi.
On 6 June 1900, The Times of London used the term "rebellion" in quotation marks, presumably to indicate its view that the rising was actually instigated by Empress Dowager Cixi.Jane Elliot, "Some Did It for Civilisation", p. 9, 1. The historian Lanxin Xiang refers to the uprising as the "so-called 'Boxer Rebellion'", and he also states that "while peasant rebellion was nothing new in Chinese history, a war against the world's most powerful states was." Other recent Western works refer to the uprising as the "Boxer Movement", the "Boxer War" or the Yihetuan Movement, while Chinese studies refer to it as the "Yihetuan Movement". In his discussion of the general and legal implications of the terminology involved, the German scholar Thoralf Klein notes that all of the terms, including the Chinese terms, are "posthumous interpretations of the conflict". He argues that each term, whether it be "uprising", "rebellion" or "movement", implies a different definition of the conflict. Even the term "Boxer War", which has frequently been used by scholars in the West, raises questions. Neither side made a formal declaration of war. The imperial edicts of 21 June said that hostilities had begun and directed the regular Chinese army to join the Boxers against the Allied armies; this was a de facto declaration of war. The Allied troops behaved like soldiers mounting a punitive expedition in colonial style rather than soldiers waging a declared war with legal constraints. The Allies took advantage of the fact that China had not signed "The Laws and Customs of War on Land", a key document signed at the 1899 Hague Peace Conference, and they argued that China had violated provisions that they themselves ignored.
There is also a difference in the terms referring to the combatants. The first reports which came from China in 1898 referred to the village activists as the "Yihequan" (Wade–Giles: I Ho Ch'uan). The earliest use of the term "Boxer" is contained in a letter which was written in Shandong in September 1899 by the missionary Grace Newton. The context of the letter makes it clear that when it was written, "Boxer" was already a well-known term, probably coined by Arthur Henderson Smith or Henry Porter, two missionaries who were also residing in Shandong. Smith wrote in his 1902 book that the name:
Media portrayal
By 1900, many new forms of media had matured, including illustrated newspapers and magazines, postcards, broadsides, and advertisements, all of which presented images of the Boxers and the invading armies. The rebellion was covered in the foreign illustrated press by artists and photographers. Paintings and prints were also published including Japanese woodblocks. In the following decades, the Boxers were a constant subject of comment. A sampling includes:
Liu E, The Travels of Lao Can, sympathetically shows an honest official trying to carry out reforms and depicts the Boxers as sectarian rebels. (A 1983 translation by Yang Xianyi and Gladys Yang is abridged and omits some of the Boxer-related material.)
Wu Jianren, Sea of Regret, deals with the disintegration of a young couple's relationship against the background of the Boxer Rebellion.
Lin Yutang, Moment in Peking covers events in China from 1900 to 1938, including the Boxer Rebellion.
The 1963 film 55 Days at Peking directed by Nicholas Ray and starring Charlton Heston, Ava Gardner, and David Niven.
In 1976, Hong Kong's Shaw Brothers studio produced the film Boxer Rebellion under director Chang Cheh.
The Last Empress (Boston, 2007), by Anchee Min, describes the long reign of the Empress Dowager Cixi, in which the siege of the legations is one of the climactic events.
Sandalwood Death by Mo Yan. The novel is written from the viewpoint of villagers during the Boxer Uprising.
Gene Luen Yang's Boxers, a piece of historical fiction written around the event in the form of a graphic novel.
See also
Gengzi Guobian Tanci
Imperial Decree on events leading to the signing of Boxer Protocol
List of 1900–1930 publications on the Boxer Rebellion
Xishiku Cathedral
References
Citations
Sources
Volume I; volume II. An account of the Boxers and the siege by a missionary who had lived in a North China village.
Further reading
General accounts and analysis
Introduction to a special issue of the journal devoted to translations of recent research on the Boxers in the People's Republic.
Missionary experience and personal accounts
The story of the Xinzhou martyrs, Shanxi Province.
Allied intervention, the Boxer War, and the aftermath
Contemporary accounts and sources
Includes interviews and selections from newspaper and magazine first person accounts.
An account by the Italian Minister in Peking.
External links
Lost in the Gobi Desert: Hart retraces great-grandfather's footsteps, William & Mary News Story, 3 January 2005.
200 Photographs in Library of Congress online Collection
University of Washington Library's Digital Collections – Robert Henry Chandless Photographs
Proceedings of the Tenth Universal Peace Congress, 1901
Pictures from the Siege of Peking, from the Caldwell Kvaran archives
Eyewitness account: When the Allies Entered Peking, 1900 , an excerpt of Pierre Loti's Les Derniers Jours de Pékin (1902).
Documents of the Boxer Rebellion (China Relief Expedition), 1900–1901 National Museum of the U.S. Navy (Selected Naval Documents).
Socialist realism
Socialist realism, also known as socrealism, was the official cultural doctrine of the Soviet Union that mandated an idealized representation of life under socialism in literature and the visual arts. The doctrine was first proclaimed by the First Congress of Soviet Writers in 1934 and approved as the only acceptable method for Soviet cultural production in all media.
The primary official objective of socialist realism was "to depict reality in its revolutionary development", although no formal guidelines concerning style or subject matter were provided. Works of socialist realism were usually characterized by unambiguous narratives or iconography relating to Marxist–Leninist ideology, such as the emancipation of the proletariat.Korin, Pavel, "Thoughts on Art", Socialist Realism in Literature and Art. Progress Publishers, Moscow, 1971, p. 95. In the visual arts, socialist realism often relied on the conventions of academic art and classical sculpture. Socialist realism was usually devoid of complex artistic meaning or interpretation, although other sources offer differing interpretations.
In the aftermath of World War II, socialist realism was adopted as official policy by the communist states that were politically aligned with the Soviet Union. Socialist realism was the predominant form of approved art in the Soviet Union from its development in the early 1920s until its gradual fall from official status, which began in the late 1960s and continued until the collapse of the Soviet Union in 1991.Encyclopedia Britannica on-line definition of Socialist RealismEllis, Andrew. Socialist Realisms: Soviet Painting 1920–1970. Skira Editore S.p.A., 2012, p. 20 While other countries have employed a prescribed canon of art, socialist realism in the Soviet Union persisted longer and was more restrictive than elsewhere in Europe.Valkenier, Elizabeth. Russian Realist Art. Ardis, 1977, p. 3. The doctrine of socialist realism should not be confused with social realism, a type of art that realistically depicts subjects of social concern and was popularized in the United States during the 1930s, or with any other form of artistic "realism".In part available at Grove Art Online, accessed Sep 2025.
History
Development
Socialist realism was developed by many thousands of artists, across a diverse society, over several decades.Ellis, Andrew. Socialist Realisms: Soviet Painting 1920–1970. Skira Editore S.p.A., 2012, p. 17 Early examples of realism in Russian art include the work of the Peredvizhniki and Ilya Yefimovich Repin. While these works do not have the same political connotation, they exhibit the techniques exercised by their successors. After the Bolsheviks took control of Russia on October 25, 1917, there was a marked shift in artistic styles. There had been a short period of artistic exploration in the time between the fall of the Tsar and the rise of the Bolsheviks.
Shortly after the Bolsheviks took control, Anatoly Lunacharsky was appointed as head of Narkompros, the People's Commissariat for Enlightenment. This put Lunacharsky in the position of deciding the direction of art in the newly created Soviet state. Although Lunacharsky did not dictate a single aesthetic model for Soviet artists to follow, he developed a system of aesthetics based on the human body that would later help to influence socialist realism. He believed that "the sight of a healthy body, intelligent face or friendly smile was essentially life-enhancing."Ellis, Andrew. Socialist Realisms: Soviet Painting 1920–1970. Skira Editore S.p.A., 2012, p. 21 He concluded that art had a direct effect on the human organism and that, under the right circumstances, the effect could be positive. Lunacharsky believed that by depicting "the perfect person" (the New Soviet man), art could educate citizens on how to be perfect Soviets.
Debate within Soviet art
There were two main groups debating the fate of Soviet art: futurists and traditionalists. Russian Futurists, many of whom had been creating abstract or leftist art before the Bolsheviks, believed communism required a complete rupture from the past and, therefore, so did Soviet art. Traditionalists believed in the importance of realistic representations of everyday life. Under Lenin's rule and the New Economic Policy, there was a certain amount of private commercial enterprise, allowing both the futurists and the traditionalists to produce their art for individuals with capital.Ellis, Andrew. Socialist Realisms: Soviet Painting 1920–1970. Skira Editore S.p.A., 2012, p. 22 By 1928, the Soviet government had enough strength and authority to end private enterprises, thus ending support for fringe groups such as the futurists. At this point, although the term "socialist realism" was not being used, its defining characteristics became the norm.Ellis, Andrew. Socialist Realisms: Soviet Painting 1920–1970. Skira Editore S.p.A., 2012, p. 23
According to the Great Russian Encyclopedia, the term was first used in the press by the chairman of the organizing committee of the Union of Soviet Writers, Ivan Gronsky, in Literaturnaya Gazeta on May 23, 1932."Socialist realism", in Great Russian Encyclopedia (Bolshaya rossiyskaya entsiklopediya), 2015, pp. 75–753 The term was approved in meetings that included politicians of the highest level, including Joseph Stalin.Ellis, Andrew. Socialist Realisms: Soviet Painting 1920–1970. Skira Editore S.p.A., 2012, p. 37 Maxim Gorky, a proponent of literary socialist realism, published a famous article titled "Socialist Realism" in 1933. During the Congress of 1934, four guidelines were laid out for socialist realism.Juraga, Dubravka and Booker, Keith M. Socialist Cultures East and West. Praeger, 2002, p. 68 The work must be:
Proletarian: art relevant to the workers and understandable to them.
Typical: scenes of everyday life of the people.
Realistic: in the representational sense.
Partisan: supportive of the aims of the State and the Party.
Characteristics
The purpose of socialist realism was to limit popular culture to a specific, highly regulated faction of emotional expression that promoted Soviet ideals.Nelson, Cary and Lawrence, Grossberg. Marxism and the Interpretation of Culture. University of Illinois Press, 1988, p. 5 The party was of the utmost importance and was always to be favorably featured. The key concepts that developed to assure loyalty to the party were partiinost' (party-mindedness), ideinost (idea or ideological content), klassovost (class content), and pravdivost (truthfulness).Ellis, Andrew. Socialist Realisms: Soviet Painting 1920–1970. Skira Editore S.p.A., 2012, p. 38 Ideinost was an important concept: not only was the work to embody an approved idea, but its content was more important than its form. This allowed the identification of formalism, in which the formal aspects of a work of art commanded more importance than the subject matter, or content.
There was a prevailing sense of optimism, as socialist realism's function was to show the ideal Soviet society. Not only was the present glorified, but the future was also supposed to be depicted in an agreeable fashion. Because the present and the future were constantly idealized, socialist realism had a sense of forced optimism. Tragedy and negativity were not permitted, unless they were shown in a different time or place. This sentiment created what would later be dubbed "revolutionary romanticism".
Revolutionary romanticism elevated the common worker, whether factory or agricultural, by presenting his life, work, and recreation as admirable. Its purpose was to show how much the standard of living had improved thanks to the revolution, to teach Soviet citizens how they should act, and to improve morale. The ultimate aim was to create what Lenin called "an entirely new type of human being": the New Soviet Man. Art (especially posters and murals) was a way to instill party values on a massive scale. Stalin described the socialist realist artists as "engineers of souls".Overy, Richard. The Dictators: Hitler's Germany, Stalin's Russia. W.W. Norton & Company, 2004, p. 354
Common images used in socialist realism were flowers, sunlight, the body, youth, flight, industry, and new technology. These poetic images were used to show the utopianism of communism and the Soviet state. Art became more than an aesthetic pleasure; instead it served a very specific function. Soviet ideals placed functionality and work above all else; therefore, for art to be admired, it must serve a purpose. Georgi Plekhanov, a Marxist theoretician, states that art is useful if it serves society: "There can be no doubt that art acquired a social significance only in so far as it depicts, evokes, or conveys actions, emotions and events that are of significance to society."Schwartz, Lawrence H. Marxism and Culture. Kennikat Press, 1980, p. 110
The themes depicted would feature the beauty of work, the achievements of the collective and the individual for the good of the whole. The artwork would often feature an easily discernible educational message.
The artist could not, however, portray life just as they saw it, because anything that reflected poorly on Communism had to be omitted. People who could not be shown as either wholly good or wholly evil could not be used as characters.Frankel, Tobia. The Russian Artist. Macmillan Company, 1972, p. 125 Art was filled with health and happiness: paintings showed busy industrial and agricultural scenes; sculptures depicted workers, sentries, and schoolchildren.Stegelbaum, Lewis and Sokolov, Andrei. Stalinism As A Way Of Life. Yale University Press, 2004, p. 220
Creativity was an important part of socialist realism. The styles used in creating art during this period were those that would produce the most realistic results, grounded in material realism. Painters depicted happy, muscular peasants and workers in factories and collective farms. During the Stalin period, they produced numerous heroic portraits of Stalin to serve his cult of personality, all in the most realistic fashion possible.Juraga, Dubravka and Booker, Keith M. Socialist Cultures East and West. Praeger, 2002, p. 45 For the socialist realist artist, political conformity mattered more than artistic integrity, producing a singular, materialist realist aesthetic.
Important groups
The Merriam-Webster Dictionary defines socialist realism as "a Marxist aesthetic theory calling for the didactic use of literature, art, and music to develop social consciousness in an evolving socialist state". Socialist realism compelled artists of all forms to create positive or uplifting reflections of socialist utopian life by utilizing any visual media, such as posters, movies, newspapers, theater and radio, beginning during the Communist Revolution of 1917 and escalating during the reign of Stalin until the early 1980s.
Vladimir Lenin, head of the Russian government from 1917 to 1924, laid the foundation for this new wave of art, suggesting that art is for the people, that the people should love and understand it, and that it should unite the masses. The artists Naum Gabo and Antoine Pevsner attempted to define the lines of art under Lenin by writing "The Realist Manifesto" in 1920, suggesting that artists should be given free rein to create as their muse desired. Lenin, however, had a different purpose for art: he wanted it to be functional, and Stalin built on that belief, holding that art should serve as agitation.
The term Socialist Realism was proclaimed in 1934 at the Soviet Writers' Congress, although it was not precisely defined. This turned individual artists and their works into state-controlled propaganda.
After Stalin's death in 1953, he was succeeded by Nikita Khrushchev, who allowed less draconian state controls and openly condemned Stalin's artistic demands in his 1956 "Secret Speech", beginning a reversal in policy known as the Khrushchev Thaw. In 1964, Khrushchev was removed and replaced by Leonid Brezhnev, who reintroduced Stalin's ideas and reversed the artistic decisions made by Khrushchev. By the early 1980s, however, the Socialist Realist movement had begun to fade. Artists today regard the socialist realist movement as among the most oppressive and shunned periods of Soviet art.
Association of Artists of Revolutionary Russia (AKhRR)
The Association of Artists of Revolutionary Russia (AKhRR) was established in 1922 and was one of the most influential artist groups in the USSR. The AKhRR worked to document contemporary life in Russia truthfully by utilizing "heroic realism". The term "heroic realism" was the beginning of the socialist realism archetype. AKhRR was sponsored by influential government officials such as Leon Trotsky and found favor with the Red Army.
In 1928, the AKhRR was renamed the Association of Artists of the Revolution (AKhR) in order to include the rest of the Soviet states. At this point the group had begun participating in state-promoted mass forms of art such as murals, jointly made paintings, advertisement production and textile design.Ellis, Andrew. Socialist Realisms: Soviet Painting 1920–1970. Skira Editore S.p.A., 2012, p. 35 The group was disbanded on April 23, 1932, by the decree "On the Reorganization of Literary and Artistic Organizations", and it served as the nucleus for the Stalinist USSR Union of Artists.
Studio of military artists named after M. B. Grekov
The studio of military artists was created in 1934.
The Union of Soviet Writers (USW)
The creation of the Union of Soviet Writers was partially initiated by Maxim Gorky to unite Soviet writers of different methods, such as the "proletarian" writers (such as Fyodor Panfyorov) praised by the Communist Party, and the poputchiks ("fellow travellers", such as Boris Pasternak and Andrei Bely). In August 1934, the union held its first congress, at which Gorky spoke.
One of the most famous authors during this time was Alexander Fadeyev. Fadeyev was a close personal friend of Stalin and called Stalin "one of the greatest humanists the world has ever seen." His most famous works include The Rout and The Young Guard.
Reception and impact
Stalin's adversary, Leon Trotsky, was highly critical of this rigid approach towards the arts. He viewed cultural conformity as an expression of Stalinism in which "the literary schools were strangled one after the other" and the method of command extended across areas from scientific agriculture to music. Overall, he regarded socialist realism as an arbitrary construct of the Stalinist bureaucracy.
The impact of socialist realist art can still be seen decades after it ceased being the only state-supported style. Even before the end of the USSR in 1991, the government had been reducing its practices of censorship. After Stalin's death in 1953, Nikita Khrushchev began to condemn the previous regime's practice of excessive restrictions. This freedom allowed artists to begin experimenting with new techniques, but the shift was not immediate. It was not until the ultimate fall of Soviet rule that artists were no longer restricted by the deposed Communist Party. Many socialist realist tendencies prevailed until the mid-to-late 1990s and early 2000s.Evangeli, Aleksandr. "Echoes of Socialist Realism in Post-Soviet Art", Socialist Realisms: Soviet Painting 1920–1970. Skira Editore S.p.A., 2012, p. 218
In the 1990s, many Russian artists used the characteristics of socialist realism in an ironic fashion. This was completely different from what existed only a couple of decades before. Once artists broke from the socialist realist mould, there was a significant power shift. Artists began including subjects that could not exist according to Soviet ideals. Now that the power over appearances was taken away from the government, artists achieved a level of authority that had not existed since the early 20th century.Evangeli, Aleksandr. "Echoes of Socialist Realism in Post-Soviet Art", Socialist Realisms: Soviet Painting 1920–1970. Skira Editore S.p.A., 2012, p. 221 In the decade immediately after the fall of the USSR, artists represented socialist realism and the Soviet legacy as a traumatic event. By the next decade, there was a unique sense of detachment.Evangeli, Aleksandr. "Echoes of Socialist Realism in Post-Soviet Art", Socialist Realisms: Soviet Painting 1920–1970. Skira Editore S.p.A., 2012, p. 223
Western cultures often do not look at socialist realism positively. Democratic countries have tended to view the art produced during this period of repression as a lie, a judgement shaped by their own capitalist realism.Juraga, Dubravka and Booker, Keith M. Socialist Cultures East and West. Praeger, 2002, p. 12 Non-Marxist art historians tend to view communism as a form of totalitarianism that smothers artistic expression and therefore retards the progress of culture.Schwartz, Lawrence H. Marxism and Culture. Kennikat Press, 1980, p. 4 In recent years there has been a reclamation of the movement in Moscow with the opening of the Institute of Russian Realist Art (IRRA), a three-story museum dedicated to preserving 20th-century Russian realist painting.
Notable works and artists
Music
Hanns Eisler composed many workers' songs, marches, and ballads on current political topics such as Song of Solidarity, Song of the United Front, and Song of the Comintern. He was a founder of a new style of revolutionary song for the masses. He also composed works in larger forms such as Requiem for Lenin. Eisler's most important works include the cantatas German Symphony, Serenade of the Age and Song of Peace. Eisler combines features of revolutionary songs with varied expression. His symphonic music is known for its complex and subtle orchestration.
Closely associated with the rise of the labor movement was the development of the revolutionary song, which was performed at demonstrations and meetings. Among the most famous of the revolutionary songs are The Internationale and Whirlwinds of Danger. Notable songs from Russia include Boldly, Comrades, in Step, Workers' Marseillaise, and Rage, Tyrants. Folk and revolutionary songs influenced the Soviet mass songs. The mass song was a leading genre in Soviet music, especially during the 1930s and the war. The mass song influenced other genres, including the art song, opera, and film music. The most popular mass songs include Dunaevsky's Song of the Homeland, Isaakovsky's Katiusha, Novikov's Hymn of Democratic Youth of the World, and Aleksandrov's Sacred War.
Film
Discussions of film as a tool of the Soviet state began in the early twentieth century. Leon Trotsky argued that cinema could be used to supplant the influence of the Orthodox Church in Russia. In the early 1930s, Soviet filmmakers applied socialist realism in their work. Notable films include Chapaev, which shows the role of the people in the history-making process. The theme of revolutionary history was developed in films such as The Youth of Maxim by Grigori Kozintsev and Leonid Trauberg, Shchors by Dovzhenko, and We are from Kronstadt by E. Dzigan. The shaping of the new man under socialism was a theme of films such as A Start in Life by N. Ekk, Ivan by Dovzhenko, Valerii Chkalov by M. Kalatozov and the film version of Tanker "Derbent" (1941). Some films depicted the part played by the peoples of the Soviet Union in resisting foreign invaders: Alexander Nevsky by Eisenstein, Minin and Pozharsky by Pudovkin, and Bogdan Khmelnitsky by Savchenko. Soviet politicians were the subjects of films such as Yutkevich's trilogy of movies about Lenin. Socialist realism was also applied to Hindi films of the 1940s and 1950s. These include Chetan Anand's Neecha Nagar (1946), which won the Grand Prize at the 1st Cannes Film Festival, and Bimal Roy's Two Acres of Land (1953), which won the International Prize at the 7th Cannes Film Festival.
Paintings
The painter Aleksandr Deineka provides a notable example with his expressionist and patriotic scenes of the Second World War, collective farms, and sports. Yuriy Ivanovich Pimenov, Boris Ioganson, Isaak Brodsky and Geli Korzev have also been described as "unappreciated masters of twentieth-century realism". Another well-known practitioner was Fyodor Pavlovich Reshetnikov. Socialist realist art found acceptance in the Baltic nations, inspiring many artists. One such artist was Czeslaw Znamierowski (23 May 1890 – 9 August 1977), a Soviet Lithuanian painter known for his large panoramic landscapes and love of nature. Znamierowski combined these two passions to create notable paintings in the Soviet Union, earning the prestigious title of Honorable Artist of the LSSR in 1965.Alekna, Romas (24 May 1975). "Česlovui Znamierovskiui – 85" [Česlovas Znamierovskis Celebrates his 85th Birthday]. Literatūra ir menas [Literature and Art] (in Lithuanian) (Vilnius: Lithuanian Creative Unions Weekly) Born in Latvia, then part of the Russian Empire, Znamierowski was of Polish descent and held Lithuanian citizenship; he lived most of his life in Lithuania and died there. He excelled in landscapes and social realism, and held many exhibitions. Znamierowski was also widely published in national newspapers, magazines and books. His more notable paintings include Before Rain (1930), Panorama of Vilnius City (1950), The Green Lake (1955), and In Klaipeda Fishing Port (1959). A large collection of his art is located in the Lithuanian Art Museum.
Gallery of Socialist realism paintings
Literature
Martin Andersen Nexø developed socialist realism in his own way. His creative method featured a combination of publicistic passion, a critical view of capitalist society, and a steadfast striving to bring reality into accord with socialist ideals. The novel Pelle the Conqueror is considered a classic of socialist realism.
Bruno Apitz's novel Nackt unter Wölfen (Naked Among Wolves), a story that culminates in a vivid description of the detainees' self-liberation, was deliberately timed to coincide with the formal opening of the Buchenwald Monument in September 1958. The novels of Louis Aragon, such as The Real World, depict the working class as a rising force of the nation. He published two books of documentary prose under the title The Communist Man. In the collection of poems A Knife in the Heart Again, Aragon criticizes the penetration of American imperialism into Europe. The novel The Holy Week depicts the artist's path toward the people against a broad social and historical background.
Maxim Gorky's novel Mother (1906) is usually considered to have been the first socialist-realist novel.Andrei Sinyavsky. Maxim Gorky's Mother as the first Socrealist novel Gorky was also a major factor in the school's rapid rise, and his pamphlet On Socialist Realism essentially lays out the needs of Soviet art. Other important works of literature include Fyodor Gladkov's Cement (1925), Nikolai Ostrovsky's How the Steel Was Tempered (1936) and Aleksey Tolstoy's epic trilogy The Road to Calvary (1922–1941). Yury Krymov's novel Tanker "Derbent" (1938) portrays Soviet merchant seafarers being transformed by the Stakhanovite movement. Thol, a Tamil novel by D. Selvaraj, is a standing example of Marxist realism in India; it won the Sahitya Akademi Award for 2012.
Sculptures
Sculptor Fritz Cremer created a series of monuments commemorating the victims of the Nazi regime in the former concentration camps Auschwitz, Buchenwald, Mauthausen and Ravensbrück. His bronze monument in Buchenwald, depicting the liberation of this concentration camp by detainees in April 1945, is considered one of the most striking examples of socialist realism in GDR sculpture for its representation of communist liberation. Each figure in the monument, erected outside the campsite, has symbolic significance according to the orthodox communist interpretation of the event. Communists were portrayed as the driving force behind the self-liberation, symbolized by a figure in the foreground sacrificing himself for his fellow sufferers, followed by a central group of determined comrades whose courage and fearlessness encourage the others. The German Democratic Republic used these sculptures to reaffirm its claim to the historical and political legacy of the anti-fascist struggle for freedom.
Claudia Cobizev was a Moldovan sculptor whose work was known for its sensitive portrayals of women and children; she has been featured on a postage stamp of Moldova.Marian, Ana. "Particularităţile portretului în creaţia Claudiei Cobizev." Arta 1 (AV) (2015): 150–156. Her most notable work is Cap de moldoveancă, which was exhibited at the Paris International Exhibition to wide acclaim.Malcoci, Vitalie. "115 ani de la nașterea celebrei sculptoriţe Claudia Cobizev." Arta 1 (AV) (2020): 175–176.
Theater
Theater was a realm in which socialist realism took root as a way to reach and appeal to the masses. This occurred both within the Soviet bloc and outside it, with China being another hotbed for socialist realism in theater.
Soviet Union
Countries within the Soviet Union were heavily influenced by socialist realism when it came to theater. Soon after the 1917 revolution, a movement arose to redefine what theater was, with the theorist Platon Kerzhentsev seeking to break down the barriers between actors and the public and create unity between the two.
With the revolution came the opportunity to change existing theatrical institutions to fit the new ideas in circulation. The early 1920s saw an explosion of creativity, with organizations such as the TEO Narkompros (the Department of Fine Arts) working to incorporate new types of theater. These movements were later brought under control and consolidated by the Soviet government, as individual theatrical troupes were organized and transformed through governmental support.
A part of these movements involved the reinvention of classic shows, including those in the Western canon. Hamlet particularly had a draw for Russians, and was seen to provide insight into the workings and complexities of Russian life after the 1917 revolution. Playwrights attempted to express their feelings about life around them while additionally following the guidelines of socialist realism, a way of reinventing old shows. Hamlet was re-imagined by Nikolay Akimov, for example, as a show that was more materialist in nature, coming at the end of this era of experimentation.
These movements were not merely localized to Russia, but spread throughout the USSR, with Poland being a notable location where socialist realism was implemented in theater. In order to make theater more accessible to the average person (for both entertainment and educational purposes), an emphasis was put on creating a network of smaller, independent theaters, including those in rural communities and traveling companies.
By making theater available to everyone, not simply those with the time and money to view it, officials hoped to educate the public both on theater itself and the various ideologies they wanted to promote. Beliefs that were more heavily promoted included those seen to be educational (with the idea of “teaching through entertaining” springing up), those upholding the values of nature and the countryside, and those that generally had a positive quality, especially when looking at children’s theater.
Reinvention of old forms took place, along with the creation of new theatrical movements. Opera as a theatrical form was reinterpreted and reinvented throughout the Soviet Union, moving away from its aristocratic roots and towards the support of the new state. By the 1930s, the Bolshoi Theater in particular became a symbol of Bolshevik power, and the question became how to best integrate socialist realism into an opera that could be performed there. The Union of Soviet Composers, established 1932, played a role towards creating these new operas, and spoke about the importance of socialist realism in opposition to modernistic art.
China
Theater in China fell under the state's purview after the Chinese Communist Revolution. The reform effort was led partly by the poet and playwright Tian Han, President of the China Theater Association (among other honors). He pushed for theatrical reform in a socialist manner, focused primarily on transferring ownership from private troupes to state ones, but also on the subject matter of the plays themselves.
In the midst of these reforms, ideas around feminism and how it tied into socialism emerged, specifically with regards to theater. Bai Wei, inspired by Tian Han, developed a style of theater in the 1920s that focused specifically on women within a patriarchal society, and the struggle to break free of it. She additionally incorporated ideas of socialist realism within her work, though did break from it in some ways, including the fact that her characters were more individualized and less collective. Strong female characters were, however, idealized and put forward in Chinese socialist realism, with these women often shown making some sort of sacrifice or grand action in service of a greater cause.
Socialist realism in Chinese theater can be seen to home in on the ideas that it is more valuable to take action as a group, together, than individually. This is evident from plays put on during the Cultural Revolution, where common themes included a large group standing up to imperialist forces (such as a Japanese invasion, for example), with the individual characters within the play being less important than the overarching power struggle occurring.
Soviet Union
In conjunction with the Socialist Classical style of architecture, socialist realism was the officially approved type of art in the Soviet Union for more than fifty years. In the early years of the Soviet Union, Russian and Soviet artists embraced a wide variety of art forms under the auspices of Proletkult. Revolutionary politics and radical non-traditional art forms were seen as complementary.Werner Haftmann, Painting in the 20th century, London 1965, vol. 1, p. 196.
These styles of art were later rejected by members of the Communist Party who did not appreciate modern styles such as Impressionism and Cubism. Socialist realism was, to some extent, a reaction against the adoption of these "decadent" styles. It was thought by Lenin that the non-representative forms of art were not understood by the proletariat and could therefore not be used by the state for propaganda.Haftman, p. 196
Alexander Bogdanov argued that the radical reformation of society to communist principles meant that little, if any, bourgeois art would prove useful; some of his more radical followers advocated the destruction of libraries and museums.Richard Pipes, Russia Under the Bolshevik Regime, p. 288 Lenin rejected this philosophy,Richard Pipes, Russia Under the Bolshevik Regime, p. 289 deplored the rejection of the beautiful simply because it was old, and explicitly described art as needing to call on its heritage: "Proletarian culture must be the logical development of the store of knowledge mankind has accumulated under the yoke of capitalist, landowner, and bureaucratic society."Oleg Sopontsinsky, Art in the Soviet Union: Painting, Sculpture, Graphic Arts, p. 6 Aurora Art Publishers, Leningrad, 1978
Modern art styles appeared to refuse to draw upon this heritage, thus clashing with the long realist tradition in Russia and rendering the art scene complex.Oleg Sopontsinsky, Art in the Soviet Union: Painting, Sculpture, Graphic Arts, p. 21 Aurora Art Publishers, Leningrad, 1978 Even in Lenin's time, a cultural bureaucracy began to restrain art to fit propaganda purposes.Richard Pipes, Russia Under the Bolshevik Regime, p. 283, Leon Trotsky's arguments that a "proletarian literature" was un-Marxist because the proletariat would lose its class characteristics in the transition to a classless society, however, did not prevail.R. H. Stacy, Russian Literary Criticism p. 191
Socialist realism became state policy in 1934 when the First Congress of Soviet Writers met and Stalin's representative Andrei Zhdanov gave a speech strongly endorsing it as "the official style of Soviet culture". Other styles were suppressed, either because they were "decadent", unintelligible to the proletariat, or counter-revolutionary. A great number of landscapes, portraits, and genre paintings exhibited at the time pursued purely technical purposes and were thus ostensibly free from any ideology; genre painting was approached in a similar way.Sergei V. Ivanov, Unknown Socialist Realism. The Leningrad School, pp. 29, 32–340.
The era and its contemporaries, with all their images, ideas, and dispositions, found full expression in portraits by Vladimir Gorb, Boris Korneev, Engels Kozlov, Felix Lembersky, Oleg Lomakin, Samuil Nevelshtein, Victor Oreshnikov, Semion Rotnitsky, Lev Russov, and Leonid Steele; in landscapes by Nikolai Galakhov, Vasily Golubev, Dmitry Maevsky, Sergei Osipov, Vladimir Ovchinnikov, Alexander Semionov, Arseny Semionov, and Nikolai Timkov; and in genre paintings by Andrey Milnikov, Yevsey Moiseenko, Mikhail Natarevich, Yuri Neprintsev, Nikolai Pozdneev, Mikhail Trufanov, Yuri Tulin, Nina Veselova, and others.
Unofficial art continued to be suppressed; in 1974, for instance, a show of unofficial art in a field near Moscow was broken up and the artwork destroyed with water cannons and bulldozers (see Bulldozer Exhibition). Mikhail Gorbachev's policies of glasnost and perestroika facilitated an explosion of interest in alternative art styles in the late 1980s, but socialist realism remained in limited force as the official state art style until as late as 1991. It was not until after the fall of the Soviet Union that artists were finally freed from state censorship.
Other countries
After the Russian Revolution, socialist realism became an international literary movement. Socialist trends in literature were established in the 1920s in Germany, France, Czechoslovakia, and Poland. Writers who helped develop socialist realism in the West included Louis Aragon, Johannes Becher, and Pablo Neruda.
In the massive rebuilding programs of the 1950s, a crucial role fell to architects, perceived not merely as engineers creating streets and edifices but as "engineers of the human soul" who, in addition to extending simple aesthetics into urban design, were to express grandiose ideas and arouse feelings of stability, persistence and political power. In art, from the mid-1960s more relaxed and decorative styles became acceptable even in large public works in the Warsaw Pact bloc, the style mostly deriving from popular posters, illustrations and other works on paper, with discreet influence from their Western equivalents. Today, arguably the only countries still focused on these aesthetic principles are North Korea, Laos, and to some extent Vietnam. Socialist realism had little mainstream impact in the non-Communist world, where it was widely seen as a totalitarian means of imposing state control on artists.
The former Socialist Federal Republic of Yugoslavia was an important exception among the communist countries, because after the Tito–Stalin split in 1948, it abandoned socialist realism along with other elements previously imported from the Soviet system and allowed greater artistic freedom.Library of Congress Country Studies – Yugoslavia: Introduction of Socialist Self-Management Socialist realism was the main art current in the People's Socialist Republic of Albania. In 2017, three works by Albanian artists from the socialist era were exhibited at documenta 14.
Indonesia
Lembaga Kebudajaan Rakjat (often abbreviated Lekra, meaning Institute for the People's Culture) was a prolific cultural and social movement associated with the Communist Party of Indonesia. Founded in 1950, Lekra pushed for artists, writers and teachers to follow the doctrine of socialist realism. Increasingly vocal against non-Lekra members, the group rallied against the Manifes Kebudayaan (Cultural Manifesto), eventually leading President Sukarno, after some hesitation, to ban the manifesto. After the 30 September Movement, Lekra was banned together with the Communist Party.
China
Academics typically view China's socialist literature as existing within the trend of Stalinist-influenced socialist realism, shaped in particular by major Soviet works such as Mikhail Sholokhov's Virgin Soil Upturned and Galina Nikolaeva's Harvest, which were widely translated and disseminated in China. Other academics, including Cai Xiang, Rebecca E. Karl, and Xueping Zhong, place greater weight on the influence of Mao Zedong's 1942 lectures, Talks at the Yan'an Forum on Art and Literature.Cai (2016), pp. xiii–xviii. During the years 1952 to 1954, the architectural style of socialist realism from the Soviet Union influenced Chinese architecture. Socialist realism was introduced into Chinese oil painting through a class held by Konstantin Maksimov in Beijing. Feng Fasi's The Heroic Death of Liu Hulan is regarded as a classic socialist realist painting.
East Germany
Overview
The earliest ideas of socialist realism in the German Democratic Republic (East Germany) came about directly after the end of World War II, when the state was formed. While planning to establish a national East German culture, cultural leaders wanted to move away from fascist ideas, including those of Nazi and militaristic doctrines. Cultural leaders first started clarifying what "realism" entailed. The SED determined that realism was to act as a "fundamental artistic approach that is attuned to contemporary social reality."
The characteristics of realism became more specified in East German cultural policy as the GDR defined its identity as a state. As the head of the SMAD's cultural division, Aleksandr Dymshits asserted that the "negation of reality" and "unbridled fantasy" was a "bourgeois and decadent attitude of the mind" that rejects "the truth of life."
Cultural officials looked back at historical events in Germany that could have acted as the origin points of the eventual creation of the GDR. The works and legacy of Albrecht Dürer became a point of reference for the early development of socialist realism in East Germany. Dürer created many artworks about the Great Peasants' War. His "support for the 'revolutionary forces'" in his illustrations made him an appealing figure to East German officials, while they searched for a starting point of a new German socialist state. In Heinz Lüdecke and Susanne Heiland's anthology Dürer und die Nachwelt, they described Dürer as being "inseparably associated with the two great currents of bourgeois antifeudal progress, namely humanism and the Reformation..." The authors also stated that Dürer came to mind "both by bourgeois self-awareness and by the then awakening German national sense of identity." The legacies of Dürer and the Great Peasants' War continued as artists produced their works in the GDR. Thomas Müntzer was another key figure of historical interest and artistic inspiration for socialist realism in East Germany. Friedrich Engels revered Müntzer for arousing the peasantry to confront the feudal elite.
Visual art
Socialist realist visual art in East Germany was unique in its various historical influences. It also stood out in how the art style at times transcended the boundaries of the doctrine, yet still maintained the state's goal of communicating early forms of German revolutionary history. Werner Tübke was one of East Germany's most prominent painters and demonstrated this expansive nature of socialist realist art in his country. Though his paintings did not always conform to the socialist realism doctrine, he was still "able to portray the Socialist utopia, and in particular the understanding of history as held by the Socialist Unity Party of Germany...".
Tübke's style drew from the Renaissance art movement, as the GDR also emphasized this style in the creation of artwork, which they referred to as Erbe, or "heritage" art. He cited various Renaissance-era German painters whom he referenced in developing his art style in his Methodisches Handbuch, Dürer being one of them. He made several paintings depicting the lives of the working class and revolutionary struggle, in styles and compositions that resemble the historical German Renaissance paintings. His series of four triptychs called History of the German Working Class Movement was an example of this. Each painting was filled with action taking place on every part of the panel, along with several people in one scene, two common characteristics of German Renaissance artwork.
The GDR aimed to use socialist realism to educate the German people about their history, through the lens of working-class struggle, and to evoke a sense of pride for their socialist state. The SED commissioned East German artists "to produce paintings affirming the 'victors of history.'" Werner Tübke was tasked to create his Early Bourgeois Revolution in Germany. The state wanted to have a visual reminder of the German Peasants' War and the leadership of Müntzer in the revolt. The highly detailed mural includes many different scenes and key figures of the revolution. Dürer is included at the bottom of the painting at the fountain. Edith Brandt, the Secretary for Science, Education, and Culture, believed that the mural "would enhance the historical awareness of the population, especially the young, and serve the cause of patriotic education."
East German socialist realism started to shift in later decades, especially after the Basic Treaty of 1972 was signed by East and West Germany. The treaty allowed East German artists to travel to West Germany and beyond to other European countries. Artistic exchanges between artists in both states introduced new practices to the GDR, while socialist realism gained more attention from those outside East Germany. Two exhibitions featuring artwork from both East and West Germany were curated at the Musée d'Art Moderne de la Ville de Paris in 1981. The exhibition of East German art presented itself as "the good founded by socialist realism to better embody a possible alternative to the crisis of values experienced by the West."
Film
Film was used as a teaching tool for East German cultural values. DEFA was the GDR's official film studio, which created such films. DEFA's socialist realist films were especially geared towards East German youth, as the next generation of the GDR. Sergei Tiulpanov, leader of the SMAD's propaganda wing, asserted that the primary goal of DEFA was "the struggle to re-educate the German people–especially the young–to a true understanding of genuine democracy and humanism." The studio produced children's films because it believed them to be effective in emphasizing good citizenship and showing children how to emulate it.
Gerhard Lamprecht's Somewhere in Berlin (German: Irgendwo in Berlin) was one of DEFA's most notable films. Though the film was produced in 1946, three years before the GDR was established, it was a foundation point for the broader development of East German socialist realist film. In this antifascist film, Lamprecht emphasizes the necessity of "reconstructing the nation" after World War II. Preliminary East German films like Somewhere in Berlin "laid the groundwork for a national film culture based in pedagogical intent."
Some DEFA films were also derived from earlier German fairytales that predated the GDR. Paul Verhoeven's The Cold Heart (German: Das kalte Herz), based on the Wilhelm Hauff story of the same title, was one such film. It was produced to serve as a good example of how a person should treat others; its main messages centered on the pitfalls of greed and the value of loving personal relationships.
DEFA also employed films as history lessons for the people of East Germany, namely those about the German Peasants' War. Martin Hellberg's film Thomas Muentzer told the story of Müntzer's leadership and the revolution in a heroic and idealistic portrayal. DEFA saw Hellberg's film proposal as an opportunity to teach German revolutionary history as a means of preventing a descent into fascism. The producers gave the actor portraying Müntzer lines that embrace Marxist thought, to clearly communicate the ideals of socialism and the roles of the working class to viewers. Ideas about property redistribution and a proletarian victory over the ruling classes are conveyed in the film's depiction of the revolutionary leader.
Literature
Many of East Germany's renowned writers lived through the Nazi regime, which influenced their craft and their works of socialist realism. Anna Seghers' 1949 novel The Dead Stay Young (German: Die Toten Bleiben Jung) was considered "a foundational literary work for the young GDR." Critics commented on the pessimistic plot and message of the novel, as it was centered on the unsuccessful Spartacist uprising. Though the novel did not depict an ideal or optimistic view of socialism, critic Günther Cwojdrak stated that Seghers still communicated reality by fulfilling "the task of transforming the working people and educating them in the spirit of socialism..."
East German literature that followed Seghers' novel focused on including heroes as protagonists to communicate optimistic messages of the prospects of socialism. Journalist Heinrich Goeres suggested that writers should use Soviet literature as an example to write more positive stories. Early works of socialist literature in the GDR were produced in 1949 "to promote the new socialist man." In later years, stories about women's lives under socialism were written, and Christa Wolf and Brigitte Reimann were some of the authors who were involved in these widening developments. In the 1960s, the SED introduced the Bitterfelder Weg, a part of Aufbauliteratur, which was a plan to send writers to industrial centers to generate "cultural production" between the writers and workers.
Gender in socialist realism
USSR
Early Soviet period
In the poster propaganda produced during the Russian Civil War (1917–1922), men were overrepresented as workers, peasants, and combat heroes, and when women were shown, it was often either to symbolize an abstract concept (e.g., Mother Russia, "freedom") or as nurses and victims. The symbolic women would be depicted as feminine, wearing long dresses, with long hair and bare breasts. The image of the urban proletariat, the group which brought the Bolsheviks to power, was characterized by masculinity, physical strength, and dignity; such workers were usually shown as blacksmiths.
In 1920, Soviet artists began to produce the first images of women proletarians. These women differed from the symbolic women of the 1910s in that they most closely resembled the male workers in their attributes: dignity, masculinity, and even, in the case of blacksmiths, supernatural power. In many paintings of the 1920s, the men and women were almost indistinguishable in stature and clothing, but the women would often be depicted taking subservient roles to the men, such as being his assistant ("rabotnitsa"). These women blacksmith figures were less common, but significant, since it was the first time women were represented as proletarians. The introduction of women workers in propaganda coincided with a series of government policies which allowed for divorce, abortion, and more sexual freedom.
Peasant women were also rarely depicted in socialist propaganda art in the period before 1920, when socialist realism was still in its infancy. The typical image of a peasant was a bearded, sandal-shod man in shoddy clothes with a scythe, until 1920, when artists began to depict peasant women, who were usually buxom and full-bodied, with a scarf tied around the head. The image of peasant women was not always positive; they often evoked the derogatory caricature "baba", which was used against peasant women and women in general, as some older stereotypes from the Tsarist aristocracy were still present.
As is discussed above, the art style during the early period of the Soviet Union (1917–1930) differed from the socialist realist art created during the Stalinist period. Artists were able to experiment more freely with the message of the revolution. Many Soviet artists during this period were part of the constructivist movement and used abstract forms for propaganda posters, while some chose to use a realist style. Women artists were significantly represented in the revolutionary avant garde movement, which began before 1917Lavery, Rena, Ivan Lindsay, and Katia Kapushesky. 2019. Soviet women and their art: the spirit of equality. and some of the most famous were Alexandra Exter, Natalia Goncharova, Liubov Popova, Varvara Stepanova, Olga Rozanova and Nadezhda Udaltsova. These women challenged some of the historical precedents of male dominance in art. Art historian Christina Kiaer has argued that the post-revolutionary shift away from market-based art production was beneficial to female artists' careers, especially before 1930, when the Association of Artists of Revolutionary Russia (AKhRR) was still relatively egalitarian.Kiaer, C. H. (2012). Fairy Tales of the Proletariat, or, Is Socialist Realism Kitsch? In Socialist Realisms: Soviet Painting 1920–1970 (pp. 183–189). Skira. Instead of an elite, individualistic group of disproportionately male "geniuses" produced by the market, artists shared creation of a common vision.
Stalin era
The style of socialist realism began to dominate the Soviet artistic community starting when Stalin rose to power in 1930, and the government took a more active role in regulating art creation. The AKhRR became more hierarchical and the association privileged realist style oil paintings, a field dominated by men, over posters and other mediums in which women had primarily worked. The task of Soviet artists was to create visualisations of the "New Soviet Man"the idealized icon of humanity living under socialism. This heroic figure encapsulated both men and women, per the Russian word "chelovek", a masculine term meaning "person". While the new Soviet person could be male or female, the figure of man was often used to represent gender neutrality.
Because the government had declared the "woman question" resolved in 1930, there was little explicit discourse about how women should be uniquely represented in art. Discussions of gender difference and sexuality were generally taboo and viewed as a distraction from people's duties to the construction of socialism, and the objectifying treatment of women associated with Western culture was not tolerated. Accordingly, nudes of both men and women were rare, and some art critics have pointed out that Socialist Realist painting largely escaped the sexual objectification of women common in capitalist forms of art production, which made it comparatively progressive for its time. But the declaration of women's equality also made it difficult to talk about the gender inequality that did exist: Stalin's government had simultaneously banned abortion and homosexuality (the latter also criminalized in many Western capitalist countries at the time), made divorce more difficult, partly to encourage population growth in the pre- and post-war years, and dismantled the women's associations in government (the Zhenotdels). The "New Soviet Woman" was often shown working in traditionally male jobs, such as aviation, engineering, tractor-driving, and politics, fields from which women in many Western countries were still largely excluded until the war years. The point of this was to encourage women to join the workforce and to show off the strides the USSR had made for women, especially in comparison with the United States. Indeed, women had expanded opportunities to take up traditionally male jobs in comparison with the US: in 1950, women made up 51.8% of the Soviet labor force, compared with just 28.3% in North America.
However, there were also many patriarchal depictions of women in the pre- and post-World War II periods. Historian Susan Reid has argued that the cult of personality around male Soviet leaders created an atmosphere of patriarchy in Socialist Realist art, in which both male and female workers looked up to the "father" figures of Lenin and Stalin as historical inspirations. Furthermore, the policies of the 1930s ended up forcing many women to be solely responsible for childcare, leaving them with the famous "double burden" of childcare and work duties. The government encouraged women to have children by creating portraits of the "housewife-activist": wives and mothers who supported their husbands and the socialist state by taking on unpaid housework and childcare.
Women were also more often shown as peasants than as workers, which some scholars see as evidence of their perceived inferiority. Art depicting peasant women in the Stalin era was far more positive than in the 1920s, and often explicitly pushed back against the "baba" stereotype. However, the peasantry was generally seen as backward and did not hold the same heroic status as the revolutionary urban proletariat. An example of this gendered distinction between the male proletariat and the female peasantry is Vera Mukhina's statue Worker and Kolkhoz Woman (1937), where the industrial worker is shown as male while the collective farm worker is female.
Painting
Sculpture
Reliefs
See also
Brutalist architecture
Capitalist realism
Censorship of images in the Soviet Union
Communist symbolism
Derussification in Ukraine
Demolition of monuments to Alexander Pushkin in Ukraine
Demolition of monuments to Vladimir Lenin in Ukraine
Fine Art of Leningrad
Heroic realism
Lenin's Mausoleum
Museum of the Chinese Communist Party
New Moscow (painting)
Propaganda in the Soviet Union
Socialist realism in Poland
Socialist realism in Romania
Soviet-era statues
Vanguardism
Zhdanov Doctrine
References
Further reading
Bek, Mikuláš; Chew, Geoffrey; and Macel, Petr (eds.). Socialist Realism and Music. Musicological Colloquium at the Brno International Music Festival 36. Prague: KLP; Brno: Institute of Musicology, Masaryk University, 2004.
Golomstock, Igor. Totalitarian Art in the Soviet Union, the Third Reich, Fascist Italy and the People's Republic of China, HarperCollins, 1990.
James, C. Vaughan. Soviet Socialist Realism: Origins and Theory. New York: St. Martin's Press, 1973.
Ivanov, Sergei. Unknown Socialist Realism. The Leningrad School. Saint Petersburg, NP-Print, 2007
Lin Jung-hua. Post-Soviet Aestheticians Rethinking Russianization and Chinization of Marxism (Russian Language and Literature Studies. Serial No. 33) Beijing, Capital Normal University, 2011, No.3. pp. 46–53.
Prokhorov, Gleb. Art under Socialist Realism: Soviet Painting, 1930–1950. East Roseville, NSW, Australia: Craftsman House; G + B Arts International, 1995.
Rideout, Walter B. The Radical Novel in the United States: 1900–1954. Some Interrelations of Literature and Society. New York: Hill and Wang, 1966.
Saehrendt, Christian. Kunst als Botschafter einer künstlichen Nation ("Art from an artificial nation – about modern art as a tool of the GDR's propaganda"), Stuttgart 2009
Sinyavsky, Andrei [writing as Abram Tertz]. "The Trial Begins" and "On Socialist Realism", translated by Max Hayward and George Dennis, with an introduction by Czesław Miłosz. Berkeley: University of California Press, 1960–1982.
The Leningrad School of Painting. Essays on the History. St Petersburg, ARKA Gallery Publishing, 2019.
Origin of Socialist Realism in Russia and China. Translation and revised version of "Las noches rusas y el origen del realismo socialista."
External links
Moderna Museet in Stockholm, Sweden: Socialist Realist Art Conference
Marxists.org Socialist Realism page
Virtual Museum of Political Art – Socialist Realism
Research Guide to Russian Art
Socialist realism: Socialist in content, capitalist in price
Second Sino-Japanese War
The Second Sino-Japanese War was fought between the Republic of China and the Empire of Japan between 1937 and 1945, following a period of war localized to Manchuria that started in 1931. It is considered part of World War II, and often regarded as the beginning of World War II in Asia. It was the largest Asian war in the 20th century. It is known in China as the War of Resistance Against Japanese Aggression.
On 18 September 1931, the Japanese staged the Mukden incident, a false flag event fabricated to justify their invasion of Manchuria and establishment of the puppet state of Manchukuo. This is sometimes marked as the beginning of the war. From 1931 to 1937, China and Japan engaged in skirmishes, including in Shanghai and in Northern China. Nationalist and Chinese Communist Party (CCP) forces, respectively led by Chiang Kai-shek and Mao Zedong, had fought each other in the Chinese Civil War since 1927. In late 1933, Chiang Kai-shek encircled the Chinese Communists in an attempt to finally destroy them, forcing them into the Long March, during which they lost around 90% of their men. As a Japanese invasion became imminent, Chiang still refused to form a united front until he was placed under house arrest by his subordinates, who forced him to form the Second United Front in late 1936 in order to resist the Japanese invasion together.
The full-scale war began on 7 July 1937 with the Marco Polo Bridge incident near Beijing, which prompted a full-scale Japanese invasion of the rest of China. The Japanese captured the capital of Nanjing in 1937 and perpetrated the Nanjing Massacre. After failing to stop the Japanese capture of Wuhan, then China's de facto capital, in 1938, the Nationalist government relocated to Chongqing in the Chinese interior. After the Sino-Soviet Non-Aggression Pact, Soviet aid bolstered the National Revolutionary Army and Air Force. By 1939, after Chinese victories at Changsha and with Japan's lines of communications stretched deep into the interior, the war reached a stalemate. The Japanese were unable to defeat CCP forces in Shaanxi, who waged a campaign of sabotage and guerrilla warfare. In November 1939, Nationalist forces launched a large-scale winter offensive, and in August 1940, CCP forces launched the Hundred Regiments Offensive in central China. In April 1941, Soviet aid was halted with the Soviet–Japanese Neutrality Pact.
In December 1941, Japan launched a surprise attack on Pearl Harbor and declared war on the United States. The US increased its aid to China under the Lend-Lease Act, becoming its main financial and military supporter. With Burma cut off, the United States Army Air Forces airlifted material over the Himalayas. In 1944, Japan launched Operation Ichi-Go, the invasion of Henan and Changsha. In 1945, the Chinese Expeditionary Force resumed its advance in Burma and completed the Ledo Road linking India to China. China launched large counteroffensives in South China, repulsed a failed Japanese invasion of West Hunan, and recaptured Japanese occupied regions of Guangxi.
Japan formally surrendered on 2 September 1945, following the atomic bombings of Hiroshima and Nagasaki and the Soviet declaration of war and subsequent invasions of Manchukuo and Korea. The war resulted in the deaths of around 20 million people, mostly Chinese civilians. China was recognized as one of the Big Four Allied powers in World War II and one of the "Four Policemen", which formed the foundation of the United Nations. It regained all lost territories and became one of the five permanent members of the United Nations Security Council. The Chinese Civil War resumed in 1946, ending with a communist victory and the proclamation of the People's Republic of China in 1949, while the government of the Republic of China relocated to Taiwan.
Names
The term "Japanese invasion of China", a term used mainly in non-Japanese narratives.
Chinese
In China, the war is most commonly known as the "War of Resistance against Japanese Aggression" (), often shortened to "Resistance against Japanese Aggression" () or the "War of Resistance" (). It is also referred to by the PRC as part of the "Global Anti-Fascist War", as Imperial Japan was then allied with Nazi Germany and Fascist Italy.
The war has often been termed the "Eight Years' War of Resistance" (), a traditional view which dates the war's beginning to the Marco Polo Bridge incident in 1937. In an alternative view of Chinese historiography, the 18 September 1931 Japanese invasion of Manchuria marks the start of the "Fourteen Years' War of Resistance" (十四年抗战; 十四年抗戰). In 2017, the Chinese government officially announced that it would adopt this view. Under this interpretation, the 1931–1937 period is viewed as the "partial" war, while 1937–1945 is a period of "total" war. This view of a fourteen-year war has political significance because it provides more recognition for the role of northeast China in the War of Resistance.
Japanese
In contemporary Japan, the name "Japan–China War" () is most commonly used because of its perceived objectivity. Dating the beginning of the war may also vary in Japanese context, with one Japanese historiographical view regarding the war as a "Fifteen-Year War" (Jyugonen Sensô), covering the period beginning with the invasion of Manchuria through the atomic bombings, and including both the war in China and the Pacific war.
When the invasion of China proper began in earnest in July 1937 near Beijing, the Empire of Japan used "The North China Incident" (), and with the outbreak of the Battle of Shanghai the following month, it was changed to "The China Incident" ().
The word "incident" () was used by Japan, as neither country had made a formal declaration of war at the outbreak of hostility. From the Japanese perspective, localizing these conflicts was beneficial in preventing intervention from other countries, particularly the United Kingdom and the United States, which were its primary source of petroleum and steel respectively. A formal expression of these conflicts would potentially lead to an American embargo in accordance with the Neutrality Acts of the 1930s.Jerald A. Combs. Embargoes and Sanctions. Encyclopedia of American Foreign Policy, 2002 In addition, due to China's fractured political status, Japan often claimed that China was no longer a recognizable political entity on which war could be declared.Rea, George Bronson. The Case for Manchoukuo. New York: D. Appleton-Century Company, 1935. Pp 164.
In Japanese propaganda, the invasion of China became a holy war (), the first step of the "eight corners of the world under one roof" slogan (). In 1940, Japanese prime minister Fumimaro Konoe launched the Taisei Yokusankai. When both sides formally declared war in December 1941, the name was replaced by "Greater East Asia War" ().
Although the Japanese government still uses the term "China Incident" in formal documents, the word Shina is considered derogatory by China and therefore the media in Japan often paraphrase with other expressions like "The Japan–China Incident" (), which were used by media as early as the 1930s.
The name "Second Sino-Japanese War" is not commonly used in Japan as the China it fought a war against in 1894 to 1895 was led by the Qing dynasty, and thus is called the Qing-Japanese War (), rather than the First Sino-Japanese War.
Background
The origins of the Second Sino-Japanese War can be traced to the First Sino-Japanese War (1894–1895), in which China, then under the rule of the Qing dynasty, was defeated by Japan and forced to cede Taiwan and recognize the full and complete independence of Korea in the Treaty of Shimonoseki. Japan also annexed the Senkaku Islands, which Japan claims were uninhabited, in early 1895 as a result of its victory at the end of the war. Japan had also attempted to annex the Liaodong Peninsula following the war, though was forced to return it to China following the Triple Intervention by France, Germany, and Russia. The Qing dynasty was on the brink of collapse due to internal revolts and the imposition of the unequal treaties, while Japan had emerged as a great power through its efforts to modernize. In 1905, Japan defeated the Russian Empire in the Russo-Japanese War, gaining Dalian and southern Sakhalin and establishing a protectorate over Korea.
Warlords in the Republic of China
In 1911, factions of the Qing Army rose up against the government, staging a revolution that swept across China's southern provinces. The Qing responded by appointing Yuan Shikai, commander of the loyalist Beiyang Army, as temporary prime minister in order to subdue the revolution. Yuan, wanting to remain in power, compromised with the revolutionaries and agreed to abolish the monarchy and establish a new republican government, on the condition that he be appointed president of China. The new Beiyang government of China was proclaimed in March 1912, after which Yuan Shikai began to amass power for himself. In 1913, the parliamentary political leader Song Jiaoren was assassinated; it is generally believed that Yuan Shikai ordered the assassination. Yuan Shikai then forced the parliament to pass a bill strengthening the power of the president and sought to restore the imperial system, declaring himself the new emperor of China.
However, there was little support for an imperial restoration among the general population, and protests and demonstrations soon broke out across the country. Yuan's attempts at restoring the monarchy triggered the National Protection War, and Yuan Shikai was overthrown after only a few months. In the aftermath of Yuan's death in June 1916, control of China fell into the hands of the Beiyang Army leadership. The Beiyang government was a civilian government in name, but in practice it was a military dictatorship《时局未宁之内阁问题》, 《满洲报》1922年7月27日, "论说" with a different warlord controlling each province of the country. China was reduced to a fractured state. As a result, China's prosperity began to wither and its economy declined. This instability presented an opportunity for nationalistic politicians in Japan to press for territorial expansion.
Twenty-One Demands
In 1915, Japan issued the Twenty-One Demands to extort further political and commercial privileges from China, which were accepted by the regime of Yuan Shikai.Hoyt, Edwin P., Japan's War: The Great Pacific Conflict, p. 45 Following World War I, Japan acquired the German Empire's sphere of influence in Shandong province,Palmer and Colton, A History of Modern World, p. 725 leading to nationwide anti-Japanese protests and mass demonstrations in China. The country remained fragmented under the Beiyang Government and was unable to resist foreign incursions. For the purpose of unifying China and defeating the regional warlords, the Kuomintang (KMT) in Guangzhou launched the Northern Expedition from 1926 to 1928 with limited assistance from the Soviet Union.
Jinan incident
The National Revolutionary Army (NRA) formed by the Kuomintang swept through southern and central China until it was checked in Shandong, where confrontations with the Japanese garrison escalated into armed conflict. The conflicts were collectively known as the Jinan incident of 1928, during which the Japanese military killed several Chinese officials and fired artillery shells into Jinan. According to the investigation by the Association of the Families of the Victims of the Jinan Massacre, 6,123 Chinese civilians were killed and 1,701 injured.Zhen Jiali, Ji Nan Can An (Jinan Massacre) (China University of Political Science and Law Press, 1987), pp. 238. Relations between the Chinese Nationalist government and Japan severely worsened as a result of the Jinan incident.
Reunification of China (1928)
As the National Revolutionary Army approached Beijing, Zhang Zuolin decided to retreat back to Manchuria, before he was assassinated by the Kwantung Army in 1928.Boorman, Biographical Dictionary, vol. 1, p. 121 His son, Zhang Xueliang, took over as the leader of the Fengtian clique in Manchuria. Later in the same year, Zhang declared his allegiance to the Nationalist government in Nanjing under Chiang Kai-shek, and consequently, China was nominally reunified under one government.
1929 Sino-Soviet war
The July–November 1929 conflict over the Chinese Eastern Railroad (CER) further increased the tensions in the Northeast that led to the Mukden Incident and eventually the Second Sino-Japanese War. The Soviet Red Army victory over Zhang Xueliang's forces not only reasserted Soviet control over the CER in Manchuria but also revealed Chinese military weaknesses that Japanese Kwantung Army officers were quick to note.Michael M. Walker, The 1929 Sino-Soviet War: The War Nobody Knew (Lawrence: University Press of Kansas, 2017), p. 290.
The Soviet Red Army performance also stunned the Japanese. Manchuria was central to Japan's East Asia policy. Both the 1921 and 1927 Imperial Eastern Region Conferences reconfirmed Japan's commitment to be the dominant power in the Northeast. The 1929 Red Army victory shook that policy to the core and reopened the Manchurian problem. By 1930, the Kwantung Army realized they faced a Red Army that was only growing stronger. The time to act was drawing near and Japanese plans to conquer the Northeast were accelerated.Michael M. Walker, The 1929 Sino-Soviet War: The War Nobody Knew (Lawrence: University Press of Kansas, 2017), pp. 290–291.
Chinese Communist Party conflict with the Kuomintang
In 1930, the Central Plains War broke out across China, involving regional commanders who had fought in alliance with the Kuomintang during the Northern Expedition, and the Nanjing government under Chiang. The Chinese Communist Party (CCP) previously fought openly against the Nanjing government after the Shanghai massacre of 1927, and they continued to expand during this protracted civil war. The Kuomintang government focused its efforts on suppressing the Chinese Communists instead of opposing the Japanese, following the policy of "first internal pacification, then external resistance" () through its encirclement campaigns against the Communists.
After the defeat of the Chinese Soviet Republic by the Nationalists, the Communists retreated on the Long March to Yan'an. The Nationalist government ordered local warlords to continue the campaign against the Communists rather than focus on the Japanese threat.
On 1 August 1935, the Communist Party issued the August First Declaration. It called for the creation of a United Front of all Chinese parties, organizations, and people of all circles, including overseas Chinese and ethnic minorities, to oppose the Japanese.
A December 1936 coup by two Nationalist Generals, the Xi'an Incident, forced Chiang Kai-shek to accept a United Front with the Communists to oppose Japan.
Invasion of Manchuria and Northern China
The internecine warfare in China provided excellent opportunities for Japan, which saw Manchuria as a limitless supply of raw materials, a market for its manufactured goods (now excluded from the markets of many Western countries as a result of Depression-era tariffs), and a protective buffer state against the Soviet Union in Siberia. Consequently, the Japanese Army maintained a strong presence in Manchuria from the time of Japan's victory in the Russo-Japanese War in 1905, in which Japan gained significant territory in the region. Building on this strengthened position, by 1915 Japan had extracted significant economic privileges in the region by pressuring Yuan Shikai, the president of the Republic of China at the time. With a widened range of economic privileges in Manchuria, Japan began focusing on developing and protecting its economic interests, including railroads, businesses, natural resources, and general control of the territory.
With its influence growing, the Japanese Army began to justify its presence by stating that it was simply protecting its own economic interests. However, militarists in the Japanese Army began pushing for an expansion of influence, leading the army to assassinate the warlord of Manchuria, Zhang Zuolin, in the hope of starting a crisis that would allow Japan to expand its power and influence in the region. When this was not as successful as they desired, Japan decided to invade Manchuria outright after the Mukden incident in September 1931. Japanese soldiers set off a bomb on the Southern Manchurian Railroad in order to create a pretext to act in "self defense" and invade outright. Japan charged that its rights in Manchuria, which had been established as a result of its victory in 1905 at the end of the Russo-Japanese War, had been systematically violated and that there were "more than 120 cases of infringement of rights and interests, interference with business, boycott of Japanese goods, unreasonable taxation, detention of individuals, confiscation of properties, eviction, demand for cessation of business, assault and battery, and the oppression of Korean residents".Political Strategy Prior to Outbreak of War Part I Japanese monograph No. 144
After five months of fighting, Japan established the puppet state of Manchukuo in 1932, and installed the last Emperor of China, Puyi, as its puppet ruler. Militarily too weak to challenge Japan directly, China appealed to the League of Nations for help. The League's investigation led to the publication of the Lytton Report, condemning Japan for its incursion into Manchuria, causing Japan to withdraw from the League of Nations. No country took action against Japan beyond tepid censure. From 1931 until summer 1937, the Nationalist Army under Chiang Kai-shek did little to oppose Japanese encroachment into China.
Incessant fighting followed the Mukden Incident. In 1932, Chinese and Japanese troops fought the 28 January battle. This resulted in the demilitarization of Shanghai, which forbade the Chinese to deploy troops in their own city. In Manchukuo there was an ongoing campaign to pacify the Anti-Japanese Volunteer Armies that arose from widespread outrage over the policy of non-resistance to Japan. On 15 April 1932, the Chinese Soviet Republic led by the Communists declared war on Japan.
Under Chi Shi-ying and his protégé Lo Ta-yu's leadership, the Kuomintang also established the Northeast Anti-Manchukuo and Anti-Japanese Association as well as the September 18th Alliance. These organizations developed an extensive underground intelligence network and coordinated anti-Japanese activities in Manchuria.
In 1933, the Japanese attacked the Great Wall region. The Tanggu Truce, established in its aftermath, gave Japan control of Rehe Province as well as a demilitarized zone between the Great Wall and the Beijing–Tianjin region. Japan aimed to create another buffer zone between Manchukuo and the Chinese Nationalist government in Nanjing.
Japan increasingly exploited China's internal conflicts to reduce the strength of its fractious opponents. Even years after the Northern Expedition, the political power of the Nationalist government was limited to just the area of the Yangtze River Delta. Other sections of China were essentially in the hands of local Chinese warlords. Japan sought various Chinese collaborators and helped them establish governments friendly to Japan. This policy was called the Specialization of North China (), more commonly known as the North China Autonomous Movement. The northern provinces affected by this policy were Chahar, Suiyuan, Hebei, Shanxi, and Shandong.
This Japanese policy was most effective in the area of what is now Inner Mongolia and Hebei. In 1935, under Japanese pressure, China signed the He–Umezu Agreement, which forbade the KMT to conduct party operations in Hebei. In the same year, the Chin–Doihara Agreement was signed, expelling the KMT from Chahar. Thus, by the end of 1935 the Chinese government had essentially abandoned northern China. In its place, the Japanese-backed East Hebei Autonomous Council and the Hebei–Chahar Political Council were established. In the resulting vacuum in Chahar, the Mongol military government was formed on 12 May 1936, with Japan providing all the necessary military and economic aid. Afterwards, Chinese volunteer forces continued to resist Japanese aggression in Manchuria, Chahar, and Suiyuan.
1937: Full-scale invasion of China
On the night of 7 July 1937, Chinese and Japanese troops exchanged fire in the vicinity of the Marco Polo (or Lugou) Bridge about 16 km from Beijing. The initial confused and sporadic skirmishing soon escalated into a full-scale battle.
Unlike Japan, China was unprepared for total war and had little military-industrial strength, no mechanized divisions, and few armoured forces.
Within the first year of full-scale war, Japanese forces obtained victories in most major Chinese cities.
Battle of Beiping–Tianjin
On 11 July, in accordance with the Goso conference, the Imperial Japanese Army General Staff authorized the deployment of an infantry division from the Chōsen Army, two combined brigades from the Kwantung Army and an air regiment composed of 18 squadrons as reinforcements to Northern China. By 20 July, total Japanese military strength in the Beijing-Tianjin area exceeded 180,000 personnel.
The Japanese gave General Song Zheyuan (Sung Che-yuan) and his troops "free passage" before moving in to pacify resistance in areas surrounding Beijing (then Beiping) and Tianjin. After 24 days of combat, the Chinese 29th Army was forced to withdraw. The Japanese captured Beijing and the Taku Forts at Tianjin on 29 and 30 July respectively, concluding the Beijing–Tianjin campaign; by August 1937, Japan had occupied both cities.
However, the Japanese Army had been given orders not to advance further than the Yongding River. In a sudden volte-face, the Konoe government's foreign minister opened negotiations with Chiang Kai-shek's government in Nanjing and stated: "Japan wants Chinese cooperation, not Chinese land." Nevertheless, negotiations failed to move further. The Ōyama Incident on 9 August escalated the skirmishes and battles into full scale warfare.
The 29th Army's resistance (and poor equipment) inspired the 1937 "Sword March", which—with slightly reworked lyrics—became the National Revolutionary Army's standard marching cadence and popularized the racial epithet guizi to describe the Japanese invaders.Lei, Bryant. "New Songs of the Battlefield": Songs and Memories of the Chinese Cultural Revolution, p. 85. University of Pittsburgh (Pittsburgh), 2004.
Battle of Shanghai
The Imperial General Headquarters (GHQ) in Tokyo, content with the gains acquired in northern China following the Marco Polo Bridge Incident, initially showed reluctance to escalate the conflict into a full-scale war. Following the shooting of two Japanese officers who were attempting to enter the Hongqiao military airport on 9 August 1937, the Japanese demanded that all Chinese forces withdraw from Shanghai; the Chinese outright refused to meet this demand. In response, both the Chinese and the Japanese marched reinforcements into the Shanghai area. Chiang concentrated his best troops north of Shanghai in an effort to impress the city's large foreign community and increase China's foreign support.
On 13 August 1937, Kuomintang soldiers attacked Japanese Marine positions in Shanghai, with Japanese army troops and marines in turn crossing into the city with naval gunfire support at Zhabei, leading to the Battle of Shanghai. On 14 August, Chinese forces under the command of Zhang Zhizhong were ordered to capture or destroy the Japanese strongholds in Shanghai, leading to bitter street fighting. In an attack on the Japanese cruiser Izumo, Kuomintang planes accidentally bombed the Shanghai International Settlement, which led to more than 3,000 civilian deaths.
In the three days from 14 to 16 August 1937, the Imperial Japanese Navy (IJN) sent many sorties of the then-advanced long-ranged G3M medium-heavy land-based bombers and assorted carrier-based aircraft with the expectation of destroying the Chinese Air Force. However, the Imperial Japanese Navy encountered unexpected resistance from the defending Chinese Curtiss Hawk II/Hawk III and P-26/281 Peashooter fighter squadrons, suffering heavy (50%) losses to the Chinese pilots (14 August was subsequently commemorated by the KMT as China's Air Force Day).
The skies of China had become a testing zone for advanced biplane and new-generation monoplane combat-aircraft designs. The introduction of the advanced A5M "Claude" fighters into the Shanghai-Nanjing theater of operations, beginning on 18 September 1937, helped the Japanese achieve a certain level of air superiority. However, the few experienced Chinese veteran pilots, as well as several Chinese-American volunteer fighter pilots, including Maj. Art Chin, Maj. John Wong Pan-yang, and Capt. Chan Kee-Wong, proved more than able to hold their own against the sleek A5Ms in dogfights even in their older and slower biplanes, though the campaign became a battle of attrition against the Chinese Air Force.
At the start of the battle, the local strength of the NRA was around five divisions, or about 70,000 troops, while local Japanese forces comprised about 6,300 marines. On 23 August, the Chinese Air Force attacked Japanese troop landings at Wusongkou in northern Shanghai with Hawk III fighter-attack planes and P-26/281 fighter escorts, and the Japanese intercepted most of the attack with A2N and A4N fighters from the aircraft carriers Hosho and Ryujo, shooting down several of the Chinese planes while losing a single A4N in the dogfight with Lt. Huang Xinrui in his P-26/281; the Japanese Army reinforcements succeeded in landing in northern Shanghai. The Imperial Japanese Army (IJA) ultimately committed over 300,000 troops, along with numerous naval vessels and aircraft, to capture the city. After more than three months of intense fighting, their casualties far exceeded initial expectations.Fu Jing-hui, An Introduction of Chinese and Foreign History of War, 2003, pp. 109–111 On 26 October, the IJA captured Dachang, a key strong-point within Shanghai, and on 5 November, additional reinforcements from Japan landed in Hangzhou Bay. Finally, on 9 November, the NRA began a general retreat.
Japan did not immediately occupy the Shanghai International Settlement or the Shanghai French Concession, areas which were outside of China's control due to the treaty port system. Japan moved into these areas after its 1941 declaration of war against the United States and the United Kingdom.
Battle of Nanjing and massacre
In November 1937, the Japanese concentrated 220,000 soldiers and began a campaign against Nanjing. Building on the hard-won victory in Shanghai, the IJA advanced on and captured the KMT capital city of Nanjing (December 1937) and Northern Shanxi (September–November 1937).
Japanese forces inflicted heavy casualties on the Chinese soldiers defending the city, killing approximately 50,000 of them, including 17 Chinese generals. Upon the capture of Nanjing, the Japanese committed massive war atrocities, including the mass murder and rape of Chinese civilians after 13 December 1937, which have been referred to as the Nanjing Massacre. Over the next several weeks, Japanese troops perpetrated numerous mass executions and tens of thousands of rapes. The army looted and burned the surrounding towns and the city, destroying more than a third of the buildings.
The number of Chinese killed in the massacre has been subject to much debate, with estimates ranging from 100,000 to more than 300,000.Daqing Yang, "A Sino-Japanese Controversy: The Nanjing Atrocity As History", Sino-Japanese Studies, November 1990, 16. The figures most widely accepted by scholars are those of the International Military Tribunal for the Far East, which estimated at least 200,000 murders and 20,000 rapes.
The Japanese atrocities in Nanjing, especially following the Chinese defense of Shanghai, increased international goodwill for the Chinese people and the Chinese government.
The Nationalist government re-established itself in Chongqing, which became the wartime seat of government until 1945.
1938
By January 1938, most conventional Kuomintang forces had either been defeated or no longer offered major resistance to Japanese advances. KMT forces won a few victories in 1938 (the Battle of Taierzhuang and the Battle of Wanjialing) but were generally ineffective that year. By March 1938, the Japanese controlled almost all of North China. Communist-led rural resistance to the Japanese remained active, however.
Battles of Xuzhou and Taierzhuang
With many victories achieved, Japanese field generals escalated the war in Jiangsu in an attempt to wipe out the Chinese forces in the area. The Japanese managed to overcome Chinese resistance around Bengbu and Teng County (Tengxian), but were fought to a halt at Linyi.
The Japanese were then decisively defeated at the Battle of Taierzhuang (March–April 1938), where the Chinese used night attacks and close-quarters combat to overcome Japanese advantages in firepower. The Chinese also severed Japanese supply lines from the rear, forcing the Japanese to retreat in the first Chinese victory of the war.
The Japanese then attempted to surround and destroy the Chinese armies in the Xuzhou region with an enormous pincer movement. However, the majority of the Chinese forces, some 200,000–300,000 troops in 40 divisions, managed to break out of the encirclement and retreat to defend Wuhan, the next Japanese target.
Battle of Wuhan
Following Xuzhou, the IJA changed its strategy and deployed almost all of its existing armies in China to attack the city of Wuhan, which had become the political, economic and military center of China, in hopes of destroying the fighting strength of the NRA and forcing the KMT government to negotiate for peace. On 6 June, they captured Kaifeng, the capital of Henan, and threatened to take Zhengzhou, the junction of the Pinghan and Longhai railways.
The Japanese forces, numbering some 400,000 men, were faced by over 1 million NRA troops in the Central Yangtze region. Having learned from their defeats at Shanghai and Nanjing, the Chinese had adapted themselves to fight the Japanese and managed to check their forces on many fronts, slowing and sometimes reversing the Japanese advances, as in the case of Wanjialing.
To overcome Chinese resistance, Japanese forces frequently deployed poison gas and committed atrocities against civilians, such as a "mini-Nanjing Massacre" in the city of Jiujiang upon its capture. After four months of intense combat, the Nationalists were forced to abandon Wuhan by October, and its government and armies retreated to Chongqing. Both sides had suffered tremendous casualties in the battle, with the Chinese losing up to 500,000 soldiers killed or wounded, and the Japanese up to 200,000.
Communist resistance
After their victory at Wuhan, Japan advanced deep into Communist territory and redeployed 50,000 troops to the Shanxi-Chahar-Hebei Border Region. Elements of the Eighth Route Army soon attacked the advancing Japanese, inflicting between 3,000 and 5,000 casualties and forcing a Japanese retreat. The Eighth Route Army carried out guerilla operations and established military and political bases. As the Japanese military came to understand that the Communists avoided conventional attacks and defense, it altered its tactics. The Japanese military built more roads to quicken movement between strongpoints and cities, blockaded rivers and roads in an effort to disrupt Communist supply lines, sought to expand militias drawn from its puppet regime to conserve manpower, and used systematic violence against civilians in the Border Region in an effort to destroy its economy. The Japanese military also mandated the confiscation of the Eighth Route Army's goods and used this directive as a pretext to seize property more broadly, including engaging in grave robbery in the Border Region.
Air raid at Chongqing
With Japanese casualties and costs mounting, the Imperial General Headquarters attempted to break Chinese resistance by ordering the Imperial Japanese Navy Air Service and Imperial Japanese Army Air Service to launch the war's first massive air raids on civilian targets. Japanese raiders hit the Kuomintang's newly established provisional capital of Chongqing and most other major cities in unoccupied China, leaving many people either dead, injured, or homeless.
Yellow River flood
In June 1938, Nationalist forces breached the Yellow River dikes at Huayuankou, near Zhengzhou, in an attempt to halt the Japanese advance. The resulting flood inundated large areas of Henan, Anhui, and Jiangsu, killed hundreds of thousands of Chinese civilians, and displaced millions more.
1939–1943
By 1939, the Nationalist army had withdrawn to the southwest and northwest of China and the Japanese controlled the coastal cities that had been centres of Nationalist power. From 1939 to 1945, China was divided into three regions: Japanese-occupied territories (Lunxianqu), the Nationalist-controlled region (Guotongqu), and the Communist-controlled regions (Jiefangqu, or liberated areas).
From the beginning of 1939, the war entered a new phase with the unprecedented defeat of the Japanese at the Battle of Suixian–Zaoyang and the First Battle of Changsha. General Ma Biao also led Hui, Salar and Dongxiang cavalry to defeat the Japanese at the Battle of Huaiyang in the summer of 1939. Ma Biao had fought against the Japanese in the Boxer Rebellion.
In 1939, Mao Zedong wrote The Greatest Crisis under Current Conditions, calling for more active resistance against Japan and for the strengthening of the Second United Front.
The Chinese launched their first large-scale counter-offensive against the IJA in December 1939; however, due to its low military-industrial capacity and limited experience in modern warfare, this offensive was defeated. Afterwards Chiang could not risk any more all-out offensive campaigns given the poorly trained, under-equipped, and disorganized state of his armies and opposition to his leadership both within the Kuomintang and in China in general. He had lost a substantial portion of his best trained and equipped troops in the Battle of Shanghai and was at times at the mercy of his generals, who maintained a high degree of autonomy from the central KMT government.
During the offensive, Hui forces in Suiyuan under generals Ma Hongbin and Ma Buqing routed the Imperial Japanese Army and their puppet Inner Mongol forces and prevented the planned Japanese advance into northwest China. Ma Hongbin's father Ma Fulu had fought against Japanese in the Boxer Rebellion.
After 1940, the Japanese encountered tremendous difficulties in administering and garrisoning the seized territories, and tried to solve their occupation problems by implementing a strategy of creating friendly puppet governments favourable to Japanese interests in the territories conquered. This included prominently the regime headed by Wang Jingwei, one of Chiang's rivals in the KMT. However, atrocities committed by the Imperial Japanese Army, as well as Japanese refusal to delegate any real power, left the puppets very unpopular and largely ineffective. The only success the Japanese had was to recruit a large Collaborationist Chinese Army to maintain public security in the occupied areas.
Japanese expansion
By 1941, Japan held most of the eastern coastal areas of China and Vietnam, but guerrilla fighting continued in these occupied areas. Japan had suffered high casualties which resulted from unexpectedly stubborn Chinese resistance, and neither side could make any swift progress in the manner of Nazi Germany in Western Europe.
By 1943, Guangdong had experienced famine. As the situation worsened, Chinese communities in New York received a letter stating that 600,000 people in Siyi had been killed by starvation.
Second phase: October 1938 – December 1941
During this period, the main Chinese objective was to drag out the war for as long as possible in a war of attrition, thereby exhausting Japanese resources while it was building up China's military capacity. American general Joseph Stilwell called this strategy "winning by outlasting". The NRA adopted the concept of "magnetic warfare" to attract advancing Japanese troops to definite points where they were subjected to ambush, flanking attacks, and encirclements in major engagements. The most prominent example of this tactic was the successful defense of Changsha in 1939, and again in the 1941 battle, in which heavy casualties were inflicted on the IJA.
Local Chinese resistance forces, organized separately by both the CCP and the KMT, continued their resistance in occupied areas to make Japanese administration over the vast land area of China difficult. In 1940, the Communist-led Eighth Route Army launched a major offensive in north China, destroying railways and a major coal mine. These constant guerilla and sabotage operations deeply frustrated the Imperial Japanese Army and led it to employ the Three Alls policy—kill all, loot all, burn all. It was during this period that the bulk of Japanese war crimes were committed. In April 1941, Soviet aid to China halted with the Soviet–Japanese Neutrality Pact. The CCP formally stated that the pact was "a great victory for Soviet diplomacy" and "was beneficial to liberation throughout China."
Japan had occupied much of north and coastal China by the end of 1941, but the KMT central government and military had retreated to the western interior to continue their resistance, while the Chinese communists remained in control of base areas in Shaanxi. From 1941 to 1942, Japan concentrated most of its forces in China in an effort to defeat the CCP bases behind Japan's lines. To reduce the guerillas' human and material resources, the Japanese military implemented its Three Alls policy ("Kill all, loot all, burn all"). In response, the CCP forces increased their role in production activities, including farming, raising hogs, and cloth-making.
Relationship between the Nationalists and the Communists
After the Mukden Incident in 1931, Chinese public opinion was strongly critical of Manchuria's leader, the "young marshal" Zhang Xueliang, for his non-resistance to the Japanese invasion, even though the Kuomintang central government was also responsible for this policy, giving Zhang an order to improvise while not offering support. After losing Manchuria to the Japanese, Zhang and his Northeast Army were given the duty of suppressing the Red Army in Shaanxi after their Long March. This resulted in great casualties for his Northeast Army, which received no support in manpower or weaponry from Chiang Kai-shek.
In the Xi'an Incident that took place on 12 December 1936, Zhang Xueliang kidnapped Chiang Kai-shek in Xi'an, hoping to force an end to KMT–CCP conflict. To secure the release of Chiang, the KMT agreed to a temporary ceasefire with the Communists. On 24 December, the two parties agreed to a United Front against Japan; this had salutary effects for the beleaguered Communists, who agreed to form the New Fourth Army and the 8th Route Army under the nominal control of the NRA. In addition, Shaan-Gan-Ning and Shanxi-Chahar-Hebei border regions were created, under the control of the CCP. In Shaan-Gan-Ning, Communists in the Shaan-Gan-Ning Base Area fostered opium production, taxed it, and engaged in its trade—including selling to Japanese-occupied and KMT-controlled provinces. The Red Army fought alongside KMT forces during the Battle of Taiyuan, and the high point of their cooperation came in 1938 during the Battle of Wuhan.
The formation of a united front added to the legality of the CCP, but what kind of support the central government would provide to the communists was not settled. When compromise with the CCP failed to incentivize the Soviet Union to engage in an open conflict against Japan, the KMT withheld further support for the Communists. To strengthen their legitimacy, Communist forces actively engaged the Japanese early on. These operations weakened Japanese forces in Shanxi and other areas in the North. Mao Zedong was distrustful of Chiang Kai-shek, however, and shifted strategy to guerrilla warfare in order to preserve the CCP's military strength.
Despite Japan's steady territorial gains in northern China, the coastal regions, and the rich Yangtze River Valley in central China, the distrust between the two antagonists was scarcely veiled. The uneasy alliance began to break down by late 1938, partially due to the Communists' aggressive efforts to expand their military strength by absorbing Chinese guerrilla forces behind Japanese lines. Chinese militia who refused to switch their allegiance were often labelled "collaborators" and attacked by CCP forces. For example, the Red Army led by He Long attacked and wiped out a brigade of Chinese militia led by Zhang Yin-wu in Hebei in June 1939. Starting in 1940, open conflict between Nationalists and Communists became more frequent in the occupied areas outside of Japanese control, culminating in the New Fourth Army Incident in January 1941.
Afterwards, the Second United Front completely broke down and Chinese Communists leader Mao Zedong outlined the preliminary plan for the CCP's eventual seizure of power from Chiang Kai-shek. Mao himself is quoted outlining the "721" policy, saying "We are fighting 70 percent for self development, 20 percent for compromise, and 10 percent against Japan". Mao began his final push for consolidation of CCP power under his authority, and his teachings became the central tenets of the CCP doctrine that came to be formalized as Mao Zedong Thought. The Communists also began to focus most of their energy on building up their sphere of influence wherever opportunities were presented, mainly through rural mass organizations, administrative, land and tax reform measures favouring poor peasants; while the Nationalists attempted to neutralize the spread of Communist influence by military blockade of areas controlled by CCP and fighting the Japanese at the same time.
Entrance of the Western Allies
Japan had expected to extract economic benefits of its invasions of China and elsewhere, including in the form of fuel and raw material resources. As Japanese aggression continued, however, the United States responded with trade embargoes on various goods, including oil and petroleum (beginning December 1939) and scrap iron and munitions (beginning July 1940). The United States demanded that Japan withdraw from China and also refused to recognize Japan's occupations of the Indochinese countries. In spring 1941, trade negotiations between the United States and Japan failed. In July 1941, the United States froze Japanese financial assets and obtained Dutch and British agreements to also cut those countries' oil exports to Japan. This in turn prompted the Japanese decision to attack Pearl Harbor.
Following the attack on Pearl Harbor, the United States declared war against Japan, and within days China joined the Allies in formal declaration of war against Japan, Germany and Italy. As the Western Allies entered the war against Japan, the Sino-Japanese War would become part of a greater conflict, the Pacific theatre of World War II. Japan's military action against the United States also restrained its capacity to conduct further offensive operations in China.
After the Lend-Lease Act was passed in 1941, American financial and military aid began to trickle in.Tai-Chun Kuo, "A Strong Diplomat in a Weak Polity: TV Soong and wartime US–China relations, 1940–1943." Journal of Contemporary China 18.59 (2009): 219–231. Claire Lee Chennault commanded the 1st American Volunteer Group (nicknamed the Flying Tigers), with American pilots flying American warplanes which were painted with the Chinese flag to attack the Japanese. He headed both the volunteer group and the uniformed U.S. Army Air Forces units that replaced it in 1942.Daniel Ford, Flying Tigers: Claire Chennault and His American Volunteers, 1941–1942 (2007). However, it was the Soviets that provided the greatest material help for China from 1937 into 1941, with fighter aircraft for the Nationalist Chinese Air Force and artillery and armour for the Chinese Army through the Sino-Soviet Treaty; Operation Zet also provided for a group of Soviet volunteer combat aviators to join the Chinese Air Force in the fight against the Japanese occupation from late 1937 through 1939.
The United States embargoed Japan in 1941, depriving it of shipments of oil and various other resources necessary to continue the war in China. This pressure, which was intended to discourage a continuation of the war and bring Japan into negotiation, instead resulted in the attack on Pearl Harbor and Japan's drive south to seize by force, from the resource-rich European colonies in Southeast Asia, the resources that the United States had denied it.
Almost immediately, Chinese troops achieved another decisive victory in the Battle of Changsha, which earned the Chinese government much prestige from the Western Allies. China was one of the "Big Four" Allied Powers during the war. President Franklin D. Roosevelt referred to the United States, United Kingdom, Soviet Union and China as the world's "Four Policemen"; his primary reason for elevating China to such a status was the belief that after the war it would serve as a bulwark against the Soviet Union.
Knowledge of Japanese naval movements in the Pacific was provided to the American Navy by the Sino-American Cooperative Organization (SACO), which was run by the Chinese intelligence head Dai Li. Weather over the Philippines and Japanese home waters was influenced by weather systems originating near northern China. The base of SACO was located in Yangjiashan.
Chiang Kai-shek continued to receive supplies from the United States. However, in contrast to the Arctic supply route to the Soviet Union which stayed open through most of the war, sea routes to China and the Yunnan–Vietnam Railway had been closed since 1940. Therefore, between the closing of the Burma Road in 1942 and its re-opening as the Ledo Road in 1945, foreign aid was largely limited to what could be flown in over "The Hump". In Burma, on 16 April 1942, 7,000 British soldiers were encircled by the Japanese 33rd Division during the Battle of Yenangyaung and rescued by the Chinese 38th Division. After the Doolittle Raid, the Imperial Japanese Army conducted a massive sweep through Zhejiang and Jiangxi, now known as the Zhejiang-Jiangxi Campaign, with the goal of finding the surviving American airmen, exacting retribution on the Chinese who aided them, and destroying air bases. The operation started on 15 May 1942 with 40 infantry battalions and 15–16 artillery battalions, but was repelled by Chinese forces in September. During this campaign, the Imperial Japanese Army left behind a trail of devastation and also spread cholera, typhoid, plague and dysentery pathogens. Chinese estimates allege that as many as 250,000 civilians, the vast majority of whom were destitute Tanka boat people and other pariah ethnicities unable to flee, may have died of disease.Yuki Tanaka, Hidden Horrors, Westview Press, 1996, p. 138 The campaign also drove more than 16 million civilians to flee deep into China's interior. Ninety percent of Ningbo's population had already fled before the battle started.
Most of China's industry had already been captured or destroyed by Japan, and the Soviet Union refused to allow the United States to supply China through Kazakhstan into Xinjiang, as the Xinjiang warlord Sheng Shicai had turned anti-Soviet in 1942 with Chiang's approval. For these reasons, the Chinese government never had the supplies and equipment needed to mount major counter-offensives. Despite the severe shortage of matériel, in 1943, the Chinese were successful in repelling major Japanese offensives in Hubei and Changde.
Chiang was named Allied commander-in-chief in the China theater in 1942. American general Joseph Stilwell served for a time as Chiang's chief of staff, while simultaneously commanding American forces in the China-Burma-India Theater. For many reasons, relations between Stilwell and Chiang soon broke down. Many historians (such as Barbara W. Tuchman) have suggested it was largely due to the corruption and inefficiency of the Kuomintang government, while others (such as Ray Huang and Hans van de Ven) have depicted it as a more complicated situation. Stilwell had a strong desire to assume total control of Chinese troops and pursue an aggressive strategy, while Chiang preferred a patient and less expensive strategy of out-waiting the Japanese. Chiang continued to maintain a defensive posture despite Allied pleas to actively break the Japanese blockade, because China had already suffered tens of millions of war casualties and believed that Japan would eventually capitulate in the face of America's overwhelming industrial output. For these reasons the other Allies gradually began to lose confidence in the Chinese ability to conduct offensive operations from the Asian mainland, and instead concentrated their efforts against the Japanese in the Pacific Ocean Areas and South West Pacific Area, employing an island hopping strategy.Hans Van de Ven, "Stilwell in the Stocks: The Chinese Nationalists and the Allied Powers in the Second World War", Asian Affairs 34.3 (November 2003): 243–259.
Long-standing differences in national interest and political stance among China, the United States, and the United Kingdom remained in place. British Prime Minister Winston Churchill was reluctant to devote British troops, many of whom had been routed by the Japanese in earlier campaigns, to the reopening of the Burma Road; Stilwell, on the other hand, believed that reopening the road was vital, as all China's mainland ports were under Japanese control. The Allies' "Europe first" policy did not sit well with Chiang, while the later British insistence that China send more and more troops to Indochina for use in the Burma Campaign was seen by Chiang as an attempt to use Chinese manpower to defend British colonial possessions. Chiang also believed that China should divert its crack army divisions from Burma to eastern China to defend the airbases of the American bombers that he hoped would defeat Japan through bombing, a strategy that American general Claire Lee Chennault supported but which Stilwell strongly opposed. In addition, Chiang voiced his support of the Indian independence movement in a 1942 meeting with Mohandas Gandhi, which further soured the relationship between China and the United Kingdom.
American and Canadian-born Chinese were recruited to act as covert operatives in Japanese-occupied China. Employing their racial background as a disguise, their mandate was to blend in with local citizens and wage a campaign of sabotage. Activities focused on disrupting Japanese transportation of supplies, for example by signaling bombers to destroy railroads and bridges. Chinese forces advanced into northern Burma in late 1943, besieged Japanese troops in Myitkyina, and captured Mount Song. The British and Commonwealth forces ran their own operation, Mission 204, which attempted to provide assistance to the Chinese Nationalist Army. The first phase in 1942, under the command of SOE, achieved very little, but lessons were learned and a second, more successful phase, commenced in February 1943 under British Military command, was conducted before the Japanese Operation Ichi-Go offensive in 1944 compelled evacuation.
1944–1945 and Operation Ichi-Go
In 1944, the Communists launched counteroffensives from the liberated areas against Japanese forces.
Japan's 1944 Operation Ichi-Go was the largest military campaign of the Second Sino-Japanese War. The campaign mobilized 500,000 Japanese troops, 100,000 horses, 1,500 artillery pieces, and 800 tanks. Nationalist Chinese forces suffered some 750,000 casualties during Ichi-Go, though not all of these were killed or captured; Cox's figure also includes soldiers who simply "melted away" or were otherwise rendered combat ineffective.Cox, 1980 pp. 2 Retrieved 9 March 2016
In late November 1944, the Japanese advance slowed approximately 300 miles from Chongqing as it experienced shortages of trained soldiers and materiel. Although Operation Ichi-Go achieved its goals of seizing United States air bases and establishing a potential railway corridor from Manchukuo to Hanoi, it did so too late to impact the result of the broader war. American bombers in Chengdu were moved to the Mariana Islands where, along with bombers from bases in Saipan and Tinian, they could still bomb the Japanese home islands.
After Operation Ichigo, Chiang Kai-shek began planning to withdraw Chinese troops from the Burma theatre in Southeast Asia for counter-offensives, codenamed "White Tower" and "Iceman", against Japanese forces in China in 1945.
The poor performance of Chiang Kai-shek's forces in opposing the Japanese advance during Operation Ichigo became widely viewed as demonstrating Chiang's incompetence. It irreparably damaged the Roosevelt administration's view of Chiang and the KMT. The campaign further weakened the Nationalist economy and government revenues. Because of the Nationalists' increasing inability to fund the military, Nationalist authorities overlooked military corruption and smuggling. The Nationalist army increasingly turned to raiding villages to press-gang peasants into service and force marching them to assigned units. Approximately 10% of these peasants died before reaching their units.
By the end of 1944, Chinese troops under the command of Sun Li-jen attacking from India, and those under Wei Lihuang attacking from Yunnan, joined forces in Mong-Yu, successfully driving the Japanese out of North Burma and securing the Ledo Road, China's vital supply artery. In spring 1945, the Chinese launched offensives that retook Hunan and Guangxi. With the Chinese army progressing well in training and equipment, American general Albert Wedemeyer planned to launch Operation Carbonado in summer 1945 to retake Guangdong, thus obtaining a coastal port, and from there drive northwards toward Shanghai. However, the atomic bombings of Hiroshima and Nagasaki and the Soviet invasion of Manchuria hastened the Japanese surrender, and these plans were not put into action.
Chinese industrial base and the CIC
The Second Sino-Japanese War quickly harmed China's economy, with one of the earliest blows being the Battle of Shanghai in 1937. With Shanghai, a major industrial centre and foreign trade port, now under Japanese control, Chinese industry took a severe hit. In an effort to rectify this, the Chinese Industrial Cooperatives (CIC) were created in 1937 under the "Gung Ho" movement, before becoming formalized in 1938. The Chinese Industrial Cooperatives allowed the Chinese people to establish smaller industrial centres in small towns across China, enabling economic and industrial production away from the fighting and areas at risk of Japanese invasion. In addition to supporting the economy, more immediate efforts were placed on supporting the Chinese military by producing whatever materials were needed for the war. Chinese refugees and those displaced by the war were hired by the cooperatives to help production. The CICs relied on foreign aid and contributions, which drew mixed reactions. Overall, the CIC program fell short of its goals: it aimed to create 30,000 cooperatives but succeeded in establishing only about 2,000. The name "Gung Ho" comes from the Americanization of the Chinese name for the Chinese Industrial Cooperatives. The full name, "工業合作社" (gōng yè hé zuò shè), was often shortened to "工合" (gōng hé), which was mistaken by U.S. Marine Evans Fordyce Carlson to mean "work together". Carlson went on to use this supposed motto as his slogan throughout the war, bringing the phrase "Gung Ho" into the English language.
Foreign aid
Before the start of full-scale warfare in the Second Sino-Japanese War, Germany had, since the time of the Weimar Republic, provided much equipment and training to crack units of the National Revolutionary Army of China, including some aerial-combat training with the Luftwaffe for some pilots of the pre-Nationalist Air Force of China. A number of foreign powers, including the Americans, Italians and Japanese, provided training and equipment to different air force units of pre-war China. With the outbreak of full-scale war between China and the Empire of Japan, the Soviet Union became the primary supporter for China's war of resistance through the Sino-Soviet Non-Aggression Pact from 1937 to 1941. When Imperial Japan invaded French Indochina, the United States enacted an oil and steel embargo against Japan and froze all Japanese assets in 1941, and with it came the Lend-Lease Act, of which China became a beneficiary on 6 May 1941; from there, China's main diplomatic, financial and military support came from the U.S., particularly following the attack on Pearl Harbor.
Overseas Chinese
Over 3,200 overseas Chinese drivers and motor vehicle mechanics embarked to wartime China to support military and logistics supply lines, especially through Indo-China, which became of paramount importance when the Japanese cut off all ocean access to China's interior with the capture of Nanning after the Battle of South Guangxi. Overseas Chinese communities in the U.S. raised money and nurtured talent in response to Imperial Japan's aggression in China, which helped to fund an entire squadron of Boeing P-26 fighter planes purchased for the looming war between China and the Empire of Japan; over a dozen Chinese-American aviators, including John "Buffalo" Huang, Arthur Chin, Hazel Ying Lee, Chan Kee-Wong et al., formed the original contingent of foreign volunteer aviators to join the Chinese air forces (some provincial or warlord air forces, but ultimately all integrating into the centralized Chinese Air Force, often called the Nationalist Air Force of China) in the "patriotic call to duty for the motherland" to fight against the Imperial Japanese invasion. Several of the original Chinese-American volunteer pilots were sent to Lagerlechfeld Air Base in Germany for aerial-gunnery training by the Chinese Air Force in 1936. Throughout the course of the war, hundreds of overseas Chinese pilots and aircraft maintenance technicians from the United States, the Philippines, Thailand, Indonesia, Malaysia, Vietnam, Canada, and other countries fought in China and made up a significant portion of the Chinese Air Force. At least 13 were confirmed to have died in the line of duty or from illness.
Korea
The exiled Provisional Government of the Republic of Korea (KPG) based in Chongqing allied with Chiang Kai-shek and the Nationalist Army against the Japanese. The KPG established the Korean Liberation Army (KLA) to fight against the Japanese in China.
Germany
Prior to the war, Germany and China were in close economic and military cooperation, with Germany helping China modernize its industry and military in exchange for raw materials. Germany sent military advisers such as Alexander von Falkenhausen to China to help the KMT government reform its armed forces. Some divisions began training to German standards and were to form a relatively small but well trained Chinese Central Army. By the mid-1930s about 80,000 soldiers had received German-style training. After the KMT lost Nanjing and retreated to Wuhan, Hitler's government decided to withdraw its support of China in 1938 in favour of an alliance with Japan as its main anti-Communist partner in East Asia.
Soviet Union
After Germany and Japan signed the anti-communist Anti-Comintern Pact, the Soviet Union hoped to keep China fighting, in order to deter a Japanese invasion of Siberia and save itself from a two-front war. In September 1937, they signed the Sino-Soviet Non-Aggression Pact and approved Operation Zet, the formation of a secret Soviet volunteer air force, in which Soviet technicians upgraded and ran some of China's transportation systems. Bombers, fighters, supplies and advisors arrived, headed by Aleksandr Cherepanov. Prior to the Western Allies, the Soviets provided the most foreign aid to China: some $250 million in credits for munitions and other supplies. The Soviet Union defeated Japan in the Battles of Khalkhin Gol in May – September 1939, leaving the Japanese reluctant to fight the Soviets again.Douglas Varner, To the Banks of the Halha: The Nomohan Incident and the Northern Limits of the Japanese Empire (2008) In April 1941, Soviet aid to China ended with the Soviet–Japanese Neutrality Pact and the beginning of the Great Patriotic War. This pact enabled the Soviet Union to avoid fighting against Germany and Japan at the same time. In August 1945, the Soviet Union annulled the neutrality pact with Japan and invaded Manchuria, Inner Mongolia, the Kuril Islands, and northern Korea. The Soviets also continued to support the Chinese Communist Party. In total, 3,665 Soviet advisors and pilots served in China, and 227 of them died fighting there.
The Soviet Union provided financial aid to both the Communists and the Nationalists.
United States
The United States generally avoided taking sides between Japan and China until 1940, providing virtually no aid to China in this period. For instance, the 1934 Silver Purchase Act signed by President Roosevelt caused chaos in China's economy, which helped the Japanese war effort. The 1933 Wheat and Cotton Loan mainly benefited American producers, while aiding both Chinese and Japanese to a smaller extent. This policy was due to US fear of breaking off profitable trade ties with Japan, in addition to US officials' and businesses' perception of China as a potential source of massive profit for the US by absorbing surplus American products, as William Appleman Williams states.
From December 1937, events such as the Japanese attack on USS Panay and the Nanjing Massacre swung public opinion in the West sharply against Japan and increased their fear of Japanese expansion, which prompted the United States, the United Kingdom, and France to provide loan assistance for war supply contracts to China. Australia also prevented a Japanese government-owned company from taking over an iron mine in Australia, and banned iron ore exports in 1938. However, in July 1939, negotiations between Japanese Foreign Minister Arita Hachirō and the British Ambassador in Tokyo, Robert Craigie, led to an agreement by which the United Kingdom recognized Japanese conquests in China. At the same time, the US government extended a trade agreement with Japan for six months, then fully restored it. Under the agreement, Japan purchased trucks for the Kwantung Army,US Congress. Investigation of Concentration of Economic Power. Hearings before the Temporary National Economic Committee. 76th Congress, 2nd Session, Pt. 21. Washington, 1940, p. 11241 machine tools for aircraft factories, strategic materials (steel and scrap iron up to 16 October 1940, petrol and petroleum products up to 26 June 1941),Д. Г. Наджафов. Нейтралитет США. 1935–1941. М., "Наука", 1990. стр.157 and various other much-needed supplies.
In a hearing before the United States Congress House of Representatives Committee on Foreign Affairs on Wednesday, 19 April 1939, the acting chairman Sol Bloom and other Congressmen interviewed Maxwell S. Stewart, a former Foreign Policy Association research staff member and economist, who charged that America's Neutrality Act and its "neutrality policy" were a massive farce that only benefited Japan, and that Japan did not have the capability and could never have invaded China without the massive amount of raw material America exported to it. America exported far more raw material to Japan than to China in the years 1937–1940. According to the United States Congress, Japan was the U.S.'s third-largest export destination until 1940, when France overtook it because France, too, was at war. Japan's military machine acquired the war materials, automotive equipment, steel, scrap iron, copper, and oil that it wanted from the United States in 1937–1940 and was allowed to purchase aerial bombs, aircraft equipment, and aircraft from America up to the summer of 1938. A 1934 U.S. State Department memo even noted how Japan's business dealings with the Standard Oil of New Jersey company, under the leadership of Walter Teagle, made United States oil the "major portion of the petroleum and petroleum products now imported into Japan." Exports of war essentials from the United States to Japan increased by 124%, alongside a general increase of 41% in all American exports, from 1936 to 1937, when Japan invaded China. Japan's war economy was fueled by exports from the United States at over twice the rate of the period immediately preceding the war. According to the U.S. Department of Commerce, Japan accounted for a substantial share of American exports in these years.
Japan invaded and occupied the northern part of French Indochina in September 1940 to prevent China from receiving the 10,000 tons of materials delivered monthly by the Allies via the Haiphong–Yunnan Fou Railway line.
On 22 June 1941, Germany attacked the Soviet Union. Despite existing non-aggression pacts and trade connections, Hitler's assault threw the world into a frenzy of realigning political outlooks and strategic prospects.
On 21 July, Japan occupied the southern part of French Indochina (southern Vietnam and Cambodia), contravening a 1940 gentlemen's agreement not to move into southern French Indochina. From bases in Cambodia and southern Vietnam, Japanese planes could attack Malaya, Singapore, and the Dutch East Indies. As the Japanese occupation of northern French Indochina in 1940 had already cut off supplies from the West to China, the move into southern French Indochina was viewed as a direct threat to British and Dutch colonies. Many principal figures in the Japanese government and military (particularly the navy) were against the move, as they foresaw that it would invite retaliation from the West.
On 24 July 1941, Roosevelt requested Japan withdraw all its forces from Indochina. Two days later the US and the UK began an oil embargo; two days after that the Netherlands joined them. This was a decisive moment in the Second Sino-Japanese War. The loss of oil imports made it impossible for Japan to continue operations in China on a long-term basis. It set the stage for Japan to launch a series of military attacks against the Allies, including the attack on Pearl Harbor on 7 December 1941.
In mid-1941, the United States government financed the creation of the American Volunteer Groups (AVG), of which one, the "Flying Tigers", reached China to replace the withdrawn Soviet volunteers and aircraft. The Flying Tigers did not enter actual combat until after the United States had declared war on Japan. Led by Chennault, they achieved early combat success of 300 kills against a loss of 12 of their newly introduced Curtiss P-40 Warhawk fighters, which were heavily armed with six .50-caliber machine guns and capable of very fast diving speeds. This earned them wide recognition at a time when the Chinese Air Force and the Allies in the Pacific and Southeast Asia were suffering heavy losses, and soon afterwards their "boom and zoom" high-speed hit-and-run air combat tactics were adopted by the United States Army Air Forces.
Disagreements existed both between the United States and the Nationalists, and within the United States military, about the form of aid. Chennault contended that aid should be in the form of building on the success of the Flying Tigers and go to the US Fourteenth Air Force in China. Lieutenant General Joseph Stilwell, who was in charge of training Nationalist divisions equipped by the United States, became increasingly frustrated by the Nationalists' refusal to use them to fight the Japanese in Burma or in southeastern China.
The Sino-American Cooperative Organization (SACO) was created by the SACO Treaty signed by the Republic of China and the United States of America in 1942, establishing a mutual intelligence-gathering entity in China between the two nations against Japan. It operated in China jointly with the Office of Strategic Services (OSS), America's first intelligence agency and forerunner of the CIA, while also serving as a joint training program between the two nations. Among all the wartime missions that Americans set up in China, SACO was the only one that adopted a policy of "total immersion" with the Chinese. The "Rice Paddy Navy" or "What-the-Hell Gang" operated in the China-Burma-India theater: advising and training, forecasting weather and scouting landing areas for the US Navy fleet and General Claire Chennault's 14th Air Force, rescuing downed American flyers, and intercepting Japanese radio traffic. An underlying mission objective during the last year of the war was the development and preparation of the China coast for Allied penetration and occupation. Fujian was scouted as a potential staging area and springboard for a future military landing of the Allies of World War II in Japan.
United Kingdom
After the Tanggu Truce of 1933, Chiang Kai-shek and the British government enjoyed friendlier relations, although these remained uneasy because of the British foreign concessions in China. During the Second Sino-Japanese War the British government initially took an impartial view of the conflict, urging both sides to reach an agreement and avoid war. British public opinion swung in favor of the Chinese after Japanese aircraft attacked the car of Ambassador Hughe Knatchbull-Hugessen, which was flying Union Jacks, leaving him temporarily paralyzed; the attack provoked outrage from both the public and the government. The British public was largely supportive of the Chinese, and many relief efforts were undertaken to help China. Britain at this time was beginning the process of rearmament, and while the sale of military surplus was banned, there was never an embargo on private companies shipping arms. A number of unassembled Gloster Gladiator fighters were imported to China via Hong Kong for the Chinese Air Force. Between July 1937 and November 1938, on average 60,000 tons of munitions were shipped from Britain to China via Hong Kong. Attempts by the United Kingdom and the United States to mount a joint intervention were unsuccessful, as the two countries had rocky relations in the interwar era.
In February 1941 a Sino-British agreement was forged whereby British troops would assist the Chinese "Surprise Troops" units of guerrillas already operating in China, and China would assist Britain in Burma.
When Hong Kong was overrun in December 1941, the British Army Aid Group (B.A.A.G.) was set up and headquartered in Guilin, Guangxi. Its aim was to assist prisoners of war and internees to escape from Japanese camps. This led to the formation of the Hong Kong Volunteer Company, which later fought in Burma. B.A.A.G. also sent agents to gather military, political and economic intelligence in southern China, as well as giving medical and humanitarian assistance to Chinese civilians and military personnel.
A British-Australian commando operation, Mission 204 (Tulip Force), was established to provide training to Chinese guerrilla troops. The mission conducted two operations, mostly in the provinces of Yunnan and Jiangxi.
The first operation commenced in February 1942 with a long journey from Burma to the Chinese front. Hampered by difficulties in supporting the Chinese as well as by disease and supply shortages, the first phase achieved very little, and the unit was withdrawn in September.
A second phase was set up incorporating lessons learned from the first. Commencing in February 1943, this phase provided effective assistance to the Chinese "Surprise Troops" in various actions against the Japanese, including ambushes and attacks on airfields, blockhouses, and supply depots. The unit operated successfully before its withdrawal in November 1944.
Commandos and members of the SOE who had formed Force 136 worked with the Free Thai Movement, which also operated in China, mostly while on their way into Thailand.
After the Japanese blocked the Burma Road in April 1942, and before the Ledo Road was finished in early 1945, the majority of US and British supplies to the Chinese had to be delivered via airlift over the eastern end of the Himalayas known as "The Hump". Flying over the Himalayas was extremely dangerous, but the airlift continued daily to August 1945, at great cost in men and aircraft.
French Indochina
The Chinese Kuomintang also supported the Vietnamese Việt Nam Quốc Dân Đảng (VNQDD) in its battle against French and Japanese imperialism. In Guangxi, Chinese military leaders were organizing Vietnamese nationalists against the Japanese. The VNQDD had been active in Guangxi, and some of its members had joined the KMT army. Under the umbrella of KMT activities, a broad alliance of nationalists emerged. With Ho Chi Minh at the forefront, the Viet Nam Doc Lap Dong Minh Hoi (Vietnamese Independence League, usually known as the Viet Minh) was formed and based in the town of Jingxi. The pro-VNQDD nationalist Ho Ngoc Lam, a KMT army officer and former disciple of Phan Bội Châu, was named as the deputy of Phạm Văn Đồng, later to be Ho's Prime Minister. The front was later broadened and renamed the Viet Nam Giai Phong Dong Minh (Vietnam Liberation League).
The Viet Nam Revolutionary League was a union of various Vietnamese nationalist groups, run by the pro-Chinese VNQDD. Chinese KMT General Zhang Fakui created the league to further Chinese influence in Indochina, against the French and Japanese. Its stated goals were unity with China under the Three Principles of the People, created by KMT founder Sun Yat-sen, and opposition to Japanese and French imperialists. The Revolutionary League was controlled by Nguyen Hai Than, who was born in China and could not speak Vietnamese. General Zhang shrewdly blocked the Communists of Vietnam, and Ho Chi Minh, from entering the league, as Zhang's main goal was Chinese influence in Indochina. The KMT utilized these Vietnamese nationalists during World War II against Japanese forces. Franklin D. Roosevelt, through General Stilwell, privately made it clear that he preferred that the French not reacquire French Indochina (modern-day Vietnam, Cambodia, and Laos) after the war was over. Roosevelt offered Chiang Kai-shek control of all of Indochina. It was said that Chiang Kai-shek replied: "Under no circumstances!"
After the war, 200,000 Chinese troops under General Lu Han were sent by Chiang Kai-shek to northern Indochina (north of the 16th parallel) to accept the surrender of Japanese occupying forces there, and remained in Indochina until 1946, when the French returned. The Chinese used the VNQDD, the Vietnamese branch of the Chinese Kuomintang, to increase their influence in French Indochina and to put pressure on their opponents. Chiang Kai-shek threatened the French with war in response to maneuvering by the French and Ho Chi Minh's forces against each other, forcing them to come to a peace agreement. In February 1946, he also forced the French to surrender all of their concessions in China and to renounce their extraterritorial privileges in exchange for the Chinese withdrawing from northern Indochina and allowing French troops to reoccupy the region. Following France's agreement to these demands, the withdrawal of Chinese troops began in March 1946.
Central Asian rebellions
In 1937, then pro-Soviet General Sheng Shicai invaded Dunganistan accompanied by Soviet troops to defeat General Ma Hushan of the KMT 36th Division. General Ma expected help from Nanjing, but did not receive it. The Nationalist government was forced to deny these maneuvers as "Japanese propaganda", as it needed continued military supplies from the Soviets.
As the war went on, Nationalist General Ma Buqing was in virtual control of the Gansu corridor. Ma had earlier fought against the Japanese, but because the Soviet threat was great, Chiang in July 1942 directed him to move 30,000 of his troops to the Tsaidam marsh in the Qaidam Basin of Qinghai. Chiang further named Ma as Reclamation Commissioner, to threaten Sheng's southern flank in Xinjiang, which bordered Tsaidam.
The Ili Rebellion broke out in Xinjiang when the Kuomintang Hui Officer Liu Bin-Di was killed while fighting Turkic Uyghur rebels in November 1944. The Soviet Union supported the Turkic rebels against the Kuomintang, and Kuomintang forces fought back.
Ethnic minorities
Japan attempted to reach out to Chinese ethnic minorities in order to rally them to their side against the Han Chinese, but only succeeded with certain Manchu, Mongol, Uyghur, and Tibetan elements.
The Japanese attempt to get the Muslim Hui people on their side failed, as many Chinese generals such as Bai Chongxi, Ma Hongbin, Ma Hongkui, and Ma Bufang were Hui. The Japanese attempted to approach Ma Bufang but were unsuccessful in making any agreement with him. Ma Bufang ended up supporting the anti-Japanese Imam Hu Songshan, who prayed for the destruction of the Japanese. Ma became chairman (governor) of Qinghai in 1938 and commanded a group army. He was appointed because of his anti-Japanese inclinations, and was such an obstruction to Japanese agents trying to contact the Tibetans that he was called an "adversary" by a Japanese agent.
Hui Muslims
Hui cemeteries were destroyed by the Japanese for military reasons. Many Hui fought in the war against the Japanese, such as Bai Chongxi, Ma Hongbin, Ma Hongkui, Ma Bufang, Ma Zhanshan, Ma Biao, Ma Zhongying, Ma Buqing and Ma Hushan. Qinghai Tibetans served in the Qinghai army against the Japanese. The Qinghai Tibetans view the Tibetans of Central Tibet (Tibet proper, ruled by the Dalai Lamas from Lhasa) as distinct and different from themselves, and even take pride in the fact that they have not been ruled by Lhasa since the collapse of the Tibetan Empire.
Xining was subjected to aerial bombardment by Japanese warplanes in 1941, causing all ethnicities in Qinghai to unite against the Japanese. General Han Youwen directed the defense of the city during the air raids, receiving instructions by telephone from Ma Bufang, who sheltered in an air-raid shelter in a military barracks. One bombing left Han buried in rubble, though he was later rescued.
John Scott reported in 1934 that there was both strong anti-Japanese and anti-Bolshevik feeling among the Muslims of Gansu, and he mentioned the Muslim generals Ma Fuxiang, Ma Qi, Ma Anliang and Ma Bufang, the last of whom was chairman of Qinghai province when Scott stayed in Xining.
Conclusion in 1945 and aftermath
End of the Pacific War and the surrender of Japanese troops in China
During the Second Sino-Japanese War, the Japanese had consistent tactical successes but failed to achieve strategic results. Although Japan seized the majority of China's industrial capacity, occupied most major cities, and rarely lost a battle, its occupation of China was costly: Japan suffered approximately 50,000 military fatalities and 200,000 wounded each year.
[Image: WWII victory parade at Chongqing on 3 September 1945]
After the Soviet Union declared war on Japan and invaded Manchuria on 9 August 1945, the Kwantung Army, which was the primary Japanese fighting force,Robert A. Pape. Why Japan Surrendered. International Security, Vol. 18, No. 2 (Autumn, 1993), pp. 154–201 consisting of over a million men but lacking adequate armour, artillery, or air support, was destroyed by the Soviets in less than two weeks. Japanese Emperor Hirohito officially capitulated to the Allies on 15 August 1945. The official surrender was signed aboard the battleship USS Missouri on 2 September 1945, in a ceremony attended by several Allied commanders, including Chinese general Hsu Yung-chang.
After the Allied victory in the Pacific, General Douglas MacArthur ordered all Japanese forces within China (excluding Manchuria), Taiwan and French Indochina north of 16° north latitude to surrender to Chiang Kai-shek, and the Japanese troops in China formally surrendered on 9 September 1945, at 9:00.Act of Surrender, 9 September 1945 (page visited on 3 September 2015). The ninth hour of the ninth day of the ninth month was chosen in echo of the Armistice of 11 November 1918 (on the eleventh hour of the eleventh day of the eleventh month) and because "nine" (九 jiǔ) is a homophone of the word for "long lasting" (久) in Chinese (to suggest that the peace won would last forever).Hans Van De Ven, "A call to not lead humanity into another war", China Daily, 31 August 2015.
Chiang relied on American help in transporting Nationalist troops to regain control of formerly Japanese-occupied areas. Non-Chinese generally viewed the behavior of these troops as undercutting Nationalist legitimacy, and these troops engaged in corruption and looting, leading to widespread views of a "botched liberation".
The Nationalist government seized Japanese-held businesses at the time of the Japanese surrender but made little effort to return these businesses to their original Chinese owners. A mechanism existed through which Chinese and foreign owners could petition for the return of their former property, but in practice the Nationalist government and its officials retained a great deal of the seized property, and embezzlement, particularly from warehouses, was common. Nationalist officials sometimes extorted money from individuals in liberated territories under threat of labeling them as Japanese collaborators.
Chiang's focus on his Communist opponents prompted him to allow Japanese troops, or troops of the Japanese puppet regimes, to remain on duty in occupied areas so as to avoid their surrender to Communist forces.
Post-war struggle and resumption of the civil war
In 1945, China emerged from the war a victor, but economically weak and on the verge of all-out civil war. The economy was sapped by the military demands of a long costly war and internal strife, by spiraling inflation, and by corruption in the Nationalist government that included profiteering, speculation and hoarding.
The poor performance of Nationalist forces opposing the Ichi-go campaign was largely viewed as reflecting poorly on Chiang's competence. Chiang blamed the failure on the United States, particularly Stilwell, who had used Chinese forces in the Burma Campaign and in Chiang's view, left China insufficiently defended.
As part of the Yalta Conference, which allowed a Soviet sphere of influence in Manchuria, the Soviets dismantled and removed more than half of the industrial equipment left there by the Japanese before handing over Manchuria to China. Large swathes of the prime farming areas had been ravaged by the fighting and there was starvation and famine in the wake of the war. Many towns and cities were destroyed, and millions were rendered homeless by floods.
The problems of rehabilitation and reconstruction after the ravages of a protracted war were staggering, and the war left the Nationalists severely weakened, and their policies left them unpopular. Meanwhile, the war strengthened the Communists both in popularity and as a viable fighting force. At Yan'an and elsewhere in the communist controlled areas, Mao Zedong was able to adapt Marxism–Leninism to Chinese conditions. He taught party cadres to lead the masses by living and working with them, eating their food, and thinking their thoughts.
In Japanese-occupied areas, the Communists had established military and political bases from which they carried out guerrilla warfare. The Communists built popular support in these areas by returning land to poor peasants, reducing peasants' rents, and arming the people. By spring 1945, there were 19 Communist-governed areas in China in which 95 million people lived. In the fall of 1945, the Communist armies had 1.27 million men and were supported by 2.68 million militia members.
Mao also began to execute his plan to establish a new China by rapidly moving his forces from Yan'an and elsewhere to Manchuria. This opportunity was available to the Communists because although Nationalist representatives were not invited to Yalta, they had been consulted and had agreed to the Soviet invasion of Manchuria in the belief that the Soviet Union would cooperate only with the Nationalist government after the war.
However, the Soviet occupation of Manchuria was long enough to allow the Communist forces to move in en masse and arm themselves with the military hardware surrendered by the Imperial Japanese Army, quickly establish control in the countryside and move into position to encircle the Nationalist government army in major cities of northeast China. Following that, the Chinese Civil War broke out between the Nationalists and Communists, which concluded with the Communist victory in mainland China and the retreat of the Nationalists to Taiwan in 1949.
Aftermath
The Nationalists suffered higher casualties because they were the main combatants opposing the Japanese in each of the 22 major battles (involving more than 100,000 troops on both sides) between China and Japan. The Communist forces, by contrast, usually avoided pitched battles with the Japanese, in which their guerrilla tactics were less effective, and generally limited their combat to guerrilla actions (the Hundred Regiments Offensive and the Battle of Pingxingguan are notable exceptions). The Nationalists committed their strongest divisions to the early battles against the Japanese (including the 36th, 87th and 88th divisions, the crack divisions of Chiang's Central Army) to defend Shanghai, and continued to deploy most of their forces to fight the Japanese even as the Communists changed their strategy, by the end of 1941, to engage mainly in a political offensive against the Japanese while declaring that the CCP should "save and preserve our strength and wait for favourable timing".Yang Kuisong, "The Formation and Implementation of the Chinese Communists' Guerrilla Warfare Strategy in the Enemy's Rear during the Sino-Japanese War", paper presented at Harvard University Conference on Wartime China, Maui, January 2004, pp. 32–36
Legacy
China-Japan relations
Today, the war is a major point of contention and resentment between China and Japan, and it remains a major roadblock for Sino-Japanese relations. Disputes persist over how the war is presented and remembered. For example, the Japanese government has been accused of historical revisionism for approving a few school textbooks that omit or gloss over Japan's militant past, although the most recent controversial book, the New History Textbook, was used by only 0.039% of junior high schools in Japan.Sven Saaler: Politics, Memory and Public Opinion: The History Textbook Controversy and Japanese Society. Munich: 2005 Despite the efforts of Japanese nationalist textbook reformers, by the late 1990s the most common Japanese schoolbooks contained references to, for instance, the Nanjing Massacre, Unit 731, and the comfort women of World War II, all historical issues which have faced challenges from ultranationalists in the past.
In 2005, a history textbook prepared by the Japanese Society for History Textbook Reform, which had been approved by the government in 2001, sparked huge outcry and protests in China and Korea. It referred to the Nanjing Massacre and other atrocities such as the Manila massacre as an "incident", glossed over the issue of comfort women, and made only brief references to the deaths of Chinese soldiers and civilians in Nanjing. A copy of the 2005 version of the junior high school textbook New History Textbook contained no mention of the "Nanjing Massacre" or the "Nanjing Incident"; the only sentence that referred to the event was: "they [the Japanese troops] occupied that city in December".
Taiwan
During the Second Sino-Japanese War, Taiwan was a Japanese colony that was used as a strategic base for military operations against China and Southeast Asia. Native Han Chinese inhabitants on the island were given the option of moving back to the mainland, although few did, and some put up an armed resistance against the Japanese. This formed the backbone of the nascent Taiwanese independence movement. In the period before the war in the Pacific widened, Japan came to regard Taiwan as an "unsinkable aircraft carrier" and an important stepping stone in its military expansion.
Some indigenous Taiwanese worked in Japan's defense and war-related industries in Taiwan, abetting Japan's war effort. Many Taiwanese served in the Japanese military, including units that fought in China, resulting in nearly 30,000 combat deaths. The future president Lee Teng-hui (a Kuomintang member) was one of those conscripted.
After the surrender, Taiwan and the Penghu islands were put under the administrative control of the Republic of China (ROC) government in 1945 by the United Nations Relief and Rehabilitation Administration.World Directory of Minorities and Indigenous Peoples – Taiwan : Overview United Nations High Commission for Refugees The ROC proclaimed Taiwan Retrocession Day on 25 October 1945. However, due to the unresolved Chinese Civil War, neither the newly established People's Republic of China in mainland China nor the Nationalist ROC that retreated to Taiwan was invited to sign the Treaty of San Francisco, as neither had shown full and complete legal capacity to enter into an international legally binding agreement. Since China was not present, the Japanese only formally renounced the territorial sovereignty of Taiwan and Penghu islands without specifying to which country Japan relinquished the sovereignty, and the treaty was signed in 1951 and came into force in 1952.
In 1952, the Treaty of Taipei was signed separately between the ROC and Japan; it basically followed the same guidelines as the Treaty of San Francisco and likewise did not specify which country holds sovereignty over Taiwan. However, Article 10 of the treaty states that the people and juridical persons of Taiwan should be regarded as the people and juridical persons of the ROC. Both the PRC and ROC governments base their claims to Taiwan on the Japanese Instrument of Surrender, which specifically accepted the Potsdam Declaration, which in turn refers to the Cairo Declaration. Disputes over the precise de jure sovereignty of Taiwan persist to the present. On a de facto basis, sovereignty over Taiwan has been and continues to be exercised by the ROC. Japan's position has been to avoid commenting on Taiwan's status, maintaining that Japan renounced all claims to sovereignty over its former colonial possessions, including Taiwan, after World War II.FOCUS: Taiwan–Japan ties back on shaky ground as Taipei snubs Tokyo envoy
Traditionally, the Republic of China government has held celebrations marking Victory Day on 9 September (now known as Armed Forces Day) and Taiwan's Retrocession Day on 25 October. However, after the Democratic Progressive Party (DPP) won the presidential election in 2000, these national holidays commemorating the war were cancelled, as the pro-independence DPP does not see the relevance of celebrating events that happened in mainland China.
Meanwhile, many KMT supporters, particularly veterans who retreated with the government in 1949, still have an emotional interest in the war. For example, in celebrating the 60th anniversary of the end of the war in 2005, the cultural bureau of the KMT stronghold Taipei held a series of talks in the Sun Yat-sen Memorial Hall regarding the war and post-war developments, while the KMT held its own exhibit in the KMT headquarters. After the KMT won the presidential election in 2008, the ROC government resumed commemorating the war.
Japanese women left in China
Several thousand Japanese who were sent as colonizers to Manchukuo and Inner Mongolia were left behind in China. The majority of these were women, and they married mostly Chinese men and became known as "stranded war wives" (zanryu fujin).Mackerras 2003 , p. 59.
The Japanese government claims that these women willingly chose to stay in China, on the premise that women thirteen years of age and older were capable of deciding whether to stay or leave. As a result, many of them faced legal and cultural obstacles to returning to Japan, such as fewer employment opportunities, less governmental aid, and discrimination.Ward, Rowena (2006). Japanese government policy and the reality of the lives of the zanryu fujin. University of Wollongong. Journal contribution. https://hdl.handle.net/10779/uow.27713820.v1 Many of these women had married and started families with Chinese men, producing children who were ineligible to enter Japan because they lacked Japanese citizenship. Additionally, Japan's repatriation legislation made eligibility dependent both on age (whether the person had been a minor) and on whether the individual had willfully stayed in China or been forcibly separated from Japan. Other factors, such as poor Sino-Japanese relations and poor communication with the rural areas where many of these women lived, also prevented many Japanese women from returning to Japan.
Korean women left in China
In China some Korean comfort women stayed behind instead of going back to their native land.Tanaka 2002 , p. 59.Tanaka 2003 , p. 59. Most Korean comfort women who were left behind in China married Chinese men.Teunis 2007 , p. 90.
Korean women and young girls were brought to China by the Japanese during the Second Sino-Japanese War as comfort women, to be used as a sexual outlet by Japanese soldiers. Since the early 1930s, the Japanese brought more than two hundred thousand women, mostly Korean, to China; some estimates reach as high as five hundred thousand. Many of the women became pregnant and gave birth to children. Some women recall being raped by Japanese soldiers more than fifty times a day.In the Name of the Emperor. Directed by Christine Choy. Filmakers Library, 1997. https://video.alexanderstreet.com/watch/in-the-name-of-the-emperor. While some of these Korean women stayed in China, married Chinese men and started families, many were killed by the Japanese towards the end of the war. Those who returned to Korea faced social ostracism and stigma, making it difficult for them to move on from their horrific pasts, and some, knowing of the shame they would face, stayed in China.
Commemorations
Three major museums in China commemorate China's War of Resistance, including the Museum of the War of Chinese People's Resistance Against Japanese Aggression. China also holds parades, memorials and other annual events, often held on September 3, to commemorate the end of the Second Sino-Japanese War and World War II. These events reflect on events of the war, such as the Nanjing Massacre, while creating a collective sense of national unity and remembrance.
Japan holds a national memorial on August 15, which features statements from government officials and many Japanese visit the Yasukuni Shrine, which continues to be a controversial point in Japanese relations with China and South Korea. Museums such as the Yushukan Museum in Tokyo, which has an exhibit dedicated to the Second Sino-Japanese War containing artifacts of military elites, and memorial museums for Hiroshima and Nagasaki all allow the Japanese to remember the events of the Second Sino-Japanese War and World War II.
The days during which each country holds its respective commemorations relate to events surrounding the end of World War II. China observes September 3 as that day, marking when Japan officially surrendered in Tokyo. Japan observes August 15 as it is the day when Emperor Hirohito declared Japan's surrender.
Casualties
The conflict lasted eight years, two months, and two days (from 7 July 1937 to 9 September 1945). The total number of casualties that resulted from this war (and the subsequent theater of the wider conflict) equaled more than half the total number of casualties of the entire Pacific War.
Chinese
Duncan Anderson, Head of the Department of War Studies at the Royal Military Academy, UK, writing for the BBC, puts the total number of Chinese who died at 20 million.
Based on data released by the Nationalists and Communists from 1945 to 1947, the total losses for Chinese military personnel and civilians in the Second Sino-Japanese War amounted to 22,782,959 casualties (9,530,317 dead, 9,905,880 wounded or crippled, 540,562 missing, and 2,806,200 captured).
Dr. Bian Xiuyue, a researcher from the Chinese Academy of Social Sciences, put total losses of the Chinese population from 1931 to 1945 at 20,620,939 dead (excluding the Henan Famine) and estimated the number of wounded at 20,692,246 for a total of 41,313,185 dead or wounded. Of the aforementioned figure, Nationalist and Communist military personnel and conscripts accounted for 27.61%, forced laborers and civilians from China (including Manchukuo) accounted for 69.21%, collaborationist Chinese military personnel accounted for 2.44%, overseas Chinese accounted for 0.61%, and Taiwanese military personnel in the Japanese Army accounted for 0.13%. If the 5.35 million Chinese who went missing or were captured by the Japanese Army and the 3 million civilians who died from famine in Henan Province are included, the total number of Chinese losses amounted to between 45 and 48 million dead, wounded, missing, and captured.
The official PRC statistics for China's civilian and military casualties in the Second Sino-Japanese War from 1937 to 1945 are over 35 million casualties including over 20 million dead. Military casualties amounted to over 3.8 million out of the over 35 million figure.
The official account of the war published in Taiwan reported that the Nationalist Chinese Army lost 3,238,000 men (1,797,000 wounded, 1,320,000 killed, and 120,000 missing) and recorded 5,787,352 civilian casualties, putting the total number of casualties at 9,025,352. The Nationalists fought in 22 major engagements, most of which involved more than 100,000 troops on both sides; 1,171 minor engagements, most of which involved more than 50,000 troops on both sides; and 38,931 skirmishes.Hsu Long-hsuen "History of the Sino-Japanese war (1937–1945)" Taipei 1972 The Chinese reported their yearly total battle casualties as 367,362 for 1937, 735,017 for 1938, 346,543 for 1939, and 299,483 for 1941. Additionally, the Ministry of Military Affairs recorded a total of 10,322,934 losses from illnesses, reorganizations, and desertions.國史館檔案史料文物查詢系統, 抗戰期間陸軍動員人數統計表, 典藏號: 008-010701-00015-046
The postwar investigation of Chinese losses by the Nationalist Government recorded a total of 3,407,931 military combat casualties (1,371,374 killed, 1,738,324 wounded, and 298,233 missing) and 422,479 military deaths from illnesses. Additionally, there were 2,313 casualties (1,042 killed and 1,271 wounded) from the Air Defense Service and 9,134,569 civilian casualties (4,397,504 dead and 4,737,065 wounded). Yearly casualties for the army are 881,349 in 1937, 517,121 in 1938, 413,853 in 1939, 153,983 in 1940, 258,530 in 1941, 126,557 in 1942, 67,903 in 1943, 322,625 in 1944, and 649,503 in 1945.國史館檔案史料文物查詢系統, 民國二十六年七月至三十四年八月止抗戰軍事損失統計表(陸軍部門), 典藏號: 008-010701-00015-052國史館檔案史料文物查詢系統, 中日戰爭損失統計(三), 典藏號: 020-010116-0004
The Ministry of Military Affairs recorded the losses of wounded and sick soldiers in hospital directly administrated by the Nationalist Government at 443,398 losses for wounded soldiers (45,710 dead, 123,017 crippled, and 274,671 deserted) and 937,559 losses for sick soldiers (422,479 dead, 191,644 crippled, and 323,436 deserted), for a total of 1,380,957 losses (468,189 dead, 314,661 crippled, and 598,107 deserted).
An academic study published in the United States in 1959 estimates military casualties: 1.5 million killed in battle, 750,000 missing in action, 1.5 million deaths due to disease and 3 million wounded; civilian casualties: due to military activity, killed 1,073,496 and 237,319 wounded; 335,934 killed and 426,249 wounded in Japanese air attacks.Ho Ping-ti. Studies on the Population of China, 1368–1953. Cambridge: Harvard University Press, 1959. This estimate is based on the National Central Research Institute's study of China's losses in six years from 7 July 1937 until 6 July 1943.國史館檔案史料文物查詢系統, 二十六至三十二年中國對日戰事損失之估計(國立中央研究所社會科學研究所韓啟桐編), 典藏號: 020-010116-0001
According to historian Mitsuyoshi Himeta, at least 2.7 million civilians died during the "kill all, loot all, burn all" operation (Three Alls Policy, or sanko sakusen) implemented in May 1942 in north China by General Yasuji Okamura and authorized on 3 December 1941 by Imperial Headquarters Order number 575.
The property loss suffered by the Chinese was valued at 383 billion US dollars according to the currency exchange rate in July 1937, roughly 50 times the gross domestic product of Japan at that time (US$7.7 billion).Ho Ying-chin, Who Actually Fought the Sino-Japanese War 1937–1945? 1978
In addition, the war created 95 million refugees.
Rudolph Rummel gave a figure of 3,949,000 people in China murdered directly by the Japanese army and a figure of 10,216,000 total dead in the war, with the additional millions of deaths due to indirect causes such as starvation, disease and disruption rather than direct killing by Japan. China also suffered wartime famines caused by drought that affected both China and India: the Chinese famine of 1942–43 in Henan led to the starvation deaths of 2 to 3 million people, a famine in Guangdong caused more than 3 million people to flee or die, and the 1943–1945 Indian famine in Bengal killed about 3 million Indians in Bengal and parts of southern India.
Japanese
The Japanese recorded around 1.1 to 1.9 million military casualties during all of World War II (including killed, wounded, and missing). The official death toll of Japanese men killed in China, according to the Japan Defense Ministry, was 480,000. Based on the investigation of the Japanese newspaper Yomiuri Shimbun, Japan's military death toll in China from 1937 onward was about 700,000 (excluding deaths in Manchuria and the Burma campaign).
Another source, from Hilary Conroy, claims that a total of 447,000 Japanese soldiers died or went missing in China during the Second Sino-Japanese War. Of the 1,130,000 Imperial Japanese Army soldiers who died during World War II, 39 percent died in China.
In War Without Mercy, John W. Dower claims that a total of 396,000 Japanese soldiers died in China during the Second Sino-Japanese War. Of this number, the Imperial Japanese Army lost 388,605 soldiers and the Imperial Japanese Navy lost 8,000. Another 54,000 soldiers died after the war had ended, mostly from illness and starvation. Of the 1,740,955 Japanese soldiers who died during World War II, 22 percent died in China.Dower, John "War Without Mercy", pp. 297.
Japanese statistics, however, lack complete estimates for the wounded. From 1937 to 1941, 185,647 Japanese soldiers were killed in China and 520,000 were wounded. Disease also inflicted critical losses on Japanese forces: from 1937 to 1941, 430,000 Japanese soldiers were recorded as being sick. In North China alone, 18,000 soldiers were evacuated back to Japan for illnesses in 1938, 23,000 in 1939, and 15,000 in 1940. From 1941 to 1945 there were 202,958 dead, with another 54,000 dead after the war's end. Chinese forces also report that by May 1945, 22,293 Japanese soldiers had been captured as prisoners. Many more Japanese soldiers surrendered when the war ended.
Contemporary studies from the Beijing Central Compilation and Translation Press state that the Japanese suffered a total of 2,227,200 casualties, including 1,055,000 dead and 1,172,341 injured; these figures are largely based on statistics drawn from Japanese publications.Liu Feng, (2007). "血祭太阳旗: 百万侵华日军亡命实录". Central Compilation and Translation Press. Note: This Chinese publication analyses statistics provided by Japanese publications.
Both Nationalist and Communist Chinese sources report that their respective forces were responsible for the deaths of over 1.7 million Japanese soldiers. Nationalist War Minister He Yingqin himself contested the Communists' claims, finding it impossible for a force of "untrained, undisciplined, poorly equipped" Communist guerrillas to have killed so many enemy soldiers.
The Nationalist Chinese authorities ridiculed Japanese estimates of Chinese casualties. In 1940, the National Herald stated that the Japanese exaggerated Chinese casualties, while deliberately concealing the true number of Japanese casualties, releasing false figures that made them appear much lower. The article reports on the casualty situation of the war up to 1940.
Use of chemical and biological weapons
Despite Article 23 of the Hague Conventions of 1899 and 1907, article V of the Treaty in Relation to the Use of Submarines and Noxious Gases in Warfare, article 171 of the Treaty of Versailles and a resolution adopted by the League of Nations on 14 May 1938, condemning the use of poison gas by the Empire of Japan, the Imperial Japanese Army frequently used chemical weapons during the war.
According to Walter E. Grunden, history professor at Bowling Green State University, Japan permitted the use of chemical weapons in China because the Japanese concluded that Chinese forces did not possess the capacity to retaliate in kind. The Japanese incorporated gas warfare into many aspects of their army, which includes special gas troops, infantry, artillery, engineers and air force; the Japanese were aware of basic gas tactics of other armies, and deployed multifarious gas warfare tactics in China. The Japanese were very dependent on gas weapons when they were engaged in chemical warfare.
Japan used poison gas at Hankow during the Battle of Wuhan to break fierce Chinese resistance after conventional Japanese assaults were repelled by Chinese defenders, as historian Rana Mitter describes.
According to Freda Utley, during the battle at Hankow, in areas where Japanese artillery or gunboats on the river could not reach Chinese defenders on hilltops, Japanese infantrymen had to fight Chinese troops on the hills. She noted that the Japanese were inferior at hand-to-hand combat against the Chinese, and resorted to deploying poison gas to defeat the Chinese troops. She was told by General Li Zongren that the Japanese consistently used tear gas and mustard gas against Chinese troops. Li also added that his forces could not withstand large scale deployments of Japanese poison gas. Since Chinese troops did not have gas-masks, the poison gases provided enough time for Japanese troops to bayonet debilitated Chinese soldiers.
During the battle in Yichang of October 1941, Japanese troops used chemical munitions in their artillery and mortar fire, and warplanes dropped gas bombs all over the area; since the Chinese troops were poorly equipped and without gas-masks, they were severely gassed, burned and killed.
According to historians Yoshiaki Yoshimi and Seiya Matsuno, the chemical weapons were authorized by specific orders given by Hirohito himself, transmitted by the Imperial General Headquarters. For example, the Emperor authorized the use of toxic gas on 375 separate occasions during the Battle of Wuhan from August to October 1938.Y. Yoshimi and S. Matsuno, Dokugasusen Kankei Shiryō II (Materials on poison gas warfare), Kaisetsu, Hōkan 2, Jugonen Sensō Gokuhi Shiryōshu, 1997, pp. 27–29 Chemical weapons were also used during the invasion of Changde. Those orders were transmitted either by Prince Kan'in Kotohito or General Hajime Sugiyama.Yoshimi and Matsuno, idem; Herbert Bix, Hirohito and the Making of Modern Japan, 2001, pp. 360–364 Gases manufactured on Okunoshima were used more than 2,000 times against Chinese soldiers and civilians in the war in China in the 1930s and 1940s.
Bacteriological weapons provided by Shirō Ishii's units were also used profusely. For example, in 1940, the Imperial Japanese Army Air Force bombed Ningbo with fleas carrying the bubonic plague.Japan triggered bubonic plague outbreak, doctor claims, , Prince Tsuneyoshi Takeda and Prince Mikasa received a special screening by Shirō Ishii of a film showing imperial planes loading germ bombs for bubonic dissemination over Ningbo in 1940. (Daniel Barenblatt, A Plague upon Humanity, 2004, p. 32.) All these weapons were experimented with on humans before being used in the field. During the Khabarovsk War Crime Trials, the accused, such as Major General Kiyoshi Kawashima, testified that, in 1941, some 40 members of Unit 731 air-dropped plague-contaminated fleas on Changde. These attacks caused epidemic plague outbreaks.Daniel Barenblatt, A Plague upon Humanity, 2004, pages 220–221. In the Zhejiang-Jiangxi Campaign, about 10,000 Japanese soldiers fell ill with the disease and about 1,700 died when the biological weapons rebounded on their own forces. According to statistics from the Nationalist government, the Japanese army used poison gas 1,973 times from July 1937 until September 1945. Based on available data, a total of 103,069 Chinese soldiers and civilians died from biological and chemical weapons.國史館檔案史料文物查詢系統, 八年血債: 七七事變前日寇對我之逼迫、日軍侵華戰爭中暴行(毒虐、屠害、炸擄、縱火)、我軍官兵傷亡及財產損失概況、領袖對日以德報怨、日背信忘義, 典藏號: 002-110500-00009-008
Japan gave its own soldiers methamphetamines in the form of Philopon.
Use of suicide attacks
Chinese armies deployed "dare to die corps" or "suicide squads" against the Japanese.
Suicide bombing was also used against the Japanese. A Chinese soldier detonated a grenade vest and killed 20 Japanese at Sihang Warehouse. Chinese troops strapped explosives, such as grenade packs or dynamite to their bodies and threw themselves under Japanese tanks to blow them up. This tactic was used during the Battle of Shanghai, where a Chinese suicide bomber stopped a Japanese tank column by exploding himself beneath the lead tank, and at the Battle of Taierzhuang, where dynamite and grenades were strapped on by Chinese troops who rushed at Japanese tanks and blew themselves up. During one incident at Taierzhuang, Chinese suicide bombers destroyed four Japanese tanks with grenade bundles.
Combatants
See also
Aviation Martyrs' Cemetery
Japan during World War II
Japanese war crimes
List of military engagements of the Second Sino-Japanese War
Mao Zedong thanking Japan controversy
Timeline of events leading to World War II in Asia
Timeline of events preceding World War II
War crimes in World War II#Crimes perpetrated by Japan
Women in China during the Second Sino-Japanese War
Notes
References
Citations
Bibliography
Bayly, C. A., and T. N. Harper. Forgotten Armies: The Fall of British Asia, 1941–1945. Cambridge, MA: Belknap Press of Harvard University Press, 2005. xxxiii, 555p. .
Bayly, C. A., T. N. Harper. Forgotten Wars: Freedom and Revolution in Southeast Asia. Cambridge, MA: Belknap Press of Harvard University Press, 2007. xxx, 674p. .
Benesch, Oleg. "Castles and the Militarisation of Urban Society in Imperial Japan", Transactions of the Royal Historical Society, Vol. 28 (Dec. 2018), pp. 107–134.
Buss, Claude A. War And Diplomacy in Eastern Asia (1941) 570pp online free
Gordon, David M. "The China–Japan War, 1931–1945" Journal of Military History (January 2006). v. 70#1, pp. 137–82. Historiographical overview of major books from the 1970s through 2006
Guo Rugui, editor-in-chief Huang Yuzhang,中国抗日战争正面战场作战记 China's Anti-Japanese War Combat Operations (Jiangsu People's Publishing House, 2005) . On line in Chinese: 中国抗战正向战场作战记
. Reprinted : Abingdon, Oxon; New York: Routledge, 2015. Chapters on military, economic, diplomatic aspects of the war.
Annalee Jacoby and Theodore H. White, Thunder out of China, New York: William Sloane Associates, 1946. Critical account of Chiang's government by Time magazine reporters.
– Book about the Chinese and Mongolians who fought for the Japanese during the war.
Lary, Diana and Stephen R. Mackinnon, eds. The Scars of War: The Impact of Warfare on Modern China. Vancouver: UBC Press, 2001. 210p. .
MacKinnon, Stephen R., Diana Lary and Ezra F. Vogel, eds. China at War: Regions of China, 1937–1945. Stanford University Press, 2007. xviii, 380p. .
- Book about the Chinese from Canada as well as Americans who fought against Japan in the Second World War.
Macri, Franco David. Clash of Empires in South China: The Allied Nations' Proxy War with Japan, 1935–1941 (2015) online
Peattie, Mark. Edward Drea, and Hans van de Ven, eds. The Battle for China: Essays on the Military History of the Sino-Japanese War of 1937–1945 (Stanford University Press, 2011); 614 pages
Quigley, Harold S. Far Eastern War 1937 1941 (1942) online free
Steiner, Zara. "Thunder from the East: The Sino-Japanese Conflict and the European Powers, 1933–1938", in Steiner, The Triumph of the Dark: European International History 1933–1939 (2011) pp. 474–551.
Van de Ven, Hans, Diana Lary, Stephen MacKinnon, eds. Negotiating China's Destiny in World War II (Stanford University Press, 2014) 336 pp. online review
Issue 40 of China, a collection of pamphlets. Original from Pennsylvania State University. Digitized 15 September 2009
External links
Biographical Dictionary of Occupied China
Full text of the Chinese declaration of war against Japan on Wikisource
"CBI Theater of Operations" – IBIBLIO World War II: China Burma India Links to selected documents, photos, maps, and books.
Annals of the Flying Tigers
Perry–Castañeda Library Map Collection, China 1:250,000, Series L500, U.S. Army Map Service, 1954– . Topographic Maps of China during the Second World War.
Perry–Castañeda Library Map Collection Manchuria 1:250,000, Series L542, U.S. Army Map Service, 1950– . Topographic Maps of Manchuria during the Second World War.
Multi-year project seeks to expand research by promoting cooperation among scholars and institutions in China, Japan, the United States, and other nations. Includes extensive bibliographies.
Photographs of the war from a Presbyterian mission near Canton
"The Route South"
Category:Wars involving Japan
Category:Wars involving the Republic of China
Category:Anti-Chinese violence in Asia
Category:Anti-Japanese sentiment in China
Category:China–Japan military relations
Category:Invasions by Japan
*
Category:Military history of the Republic of China (1912–1949)
Category:Pacific War
Category:Tunnel warfare
Category:1930s in China
Category:1940s in China
Category:Invasions of China
Category:Articles containing video clips
Category:1930s in Japan
Category:1940s in Japan
Category:1940s in Vietnam
Category:1930s conflicts
Category:1940s conflicts
Category:Campaigns of World War II
Category:Interwar period
Category:China–Japan relations
Category:Sino-Japanese Wars
Chamber music
https://en.wikipedia.org/wiki/Chamber_music
Chamber music is a form of classical music that is composed for a small group of instruments—traditionally a group that could fit in a palace chamber or a large room. Most broadly, it includes any art music that is performed by a small number of performers, with one performer to a part (in contrast to orchestral music, in which each string part is played by a number of performers). However, by convention, it usually does not include solo instrument performances.
Because of its intimate nature, chamber music has been described as "the music of friends".Christina Bashford, "The String Quartet and Society", in . The expression "music of friends" was first used by Richard Walthew in a lecture published in South Place Institute, London, in 1909. For more than 100 years, chamber music was played primarily by amateur musicians in their homes, and even today, when chamber music performance has migrated from the home to the concert hall, many musicians, amateur and professional, still play chamber music for their own pleasure. Playing chamber music requires special skills, both musical and social, that differ from the skills required for playing solo or symphonic works.Estelle Ruth Jorgensen, The Art of Teaching Music (Bloomington: Indiana University Press, 2008): 153–54. (cloth); (pbk).
Johann Wolfgang von Goethe described chamber music (specifically, string quartet music) as "four rational people conversing".Christina Bashford, "The String Quartet and Society" in . The quote was from a letter to C. F. Zelter, November 9, 1829. This conversational paradigm – which refers to the way one instrument introduces a melody or motif and then other instruments subsequently "respond" with a similar motif – has been a thread woven through the history of chamber music composition from the end of the 18th century to the present. The analogy to conversation recurs in descriptions and analyses of chamber music compositions.
History
From its earliest beginnings in the Medieval period to the present, chamber music has been a reflection of the changes in the technology and the society that produced it.
Early beginnings
During the Middle Ages and the early Renaissance, instruments were used primarily as accompaniment for singers.For a detailed discussion of the origins of chamber music see . String players would play along with the melody line sung by the singer. There were also purely instrumental ensembles, often of stringed precursors of the violin family, called consorts.
Some analysts consider the origin of classical instrumental ensembles to be the sonata da camera (chamber sonata) and the sonata da chiesa (church sonata). These were compositions for one to five or more instruments. The sonata da camera was a suite of slow and fast movements, interspersed with dance tunes; the sonata da chiesa was the same, but the dances were omitted. These forms gradually developed into the trio sonata of the Baroque – two treble instruments and a bass instrument, often with a keyboard or other chording instrument (harpsichord, organ, harp or lute, for example) filling in the harmony. Both the bass instrument and the chordal instrument would play the basso continuo part.
During the Baroque period, chamber music as a genre was not clearly defined. Often, works could be played on any variety of instruments, in orchestral or chamber ensembles. The Art of Fugue by Johann Sebastian Bach, for example, can be played on a keyboard instrument (harpsichord or organ) or by a string quartet or a string orchestra. The instrumentation of trio sonatas was also often flexibly specified; some of Handel's sonatas are scored for "German flute, Hoboy [oboe] or Violin"Solos for a German Flute, a Hoboy or a Violin published by John Walsh, c. 1730. Bass lines could be played by violone, cello, theorbo, or bassoon, and sometimes three or four instruments would join in the bass line in unison. Sometimes composers mixed movements for chamber ensembles with orchestral movements. Telemann's 'Tafelmusik' (1733), for example, has five sets of movements for various combinations of instruments, ending with a full orchestral section.
Baroque chamber music was often contrapuntal; that is, each instrument played the same melodic materials at different times, creating a complex, interwoven fabric of sound. Because each instrument was playing essentially the same melodies, all the instruments were equal. In the trio sonata, there is often no ascendant or solo instrument; all three instruments share equal importance.
The harmonic role played by the keyboard or other chording instrument was subsidiary, and usually the keyboard part was not even written out; rather, the chordal structure of the piece was specified by numeric codes over the bass line, called figured bass.
In the second half of the 18th century, tastes began to change: many composers preferred a new, lighter Galant style, with "thinner texture, ... and clearly defined melody and bass" to the complexities of counterpoint. Now a new custom arose that gave birth to a new form of chamber music: the serenade. Patrons invited street musicians to play evening concerts below the balconies of their homes, their friends and their lovers. Patrons and musicians commissioned composers to write suitable suites of dances and tunes, for groups of two to five or six players. These works were called serenades, nocturnes, divertimenti, or cassations (from Gasse, "street"). The young Joseph Haydn was commissioned to write several of these.
Haydn, Mozart, and the classical style
Joseph Haydn is generally credited with creating the modern form of chamber music as we know it,See Donald Tovey, "Haydn", in , or . although scholars today such as Roger Hickman argue "the idea that Haydn invented the string quartet and single-handedly advanced the genre is based on only a vague notion of the true history of the eighteenth-century genre." A typical string quartet of the period would consist of
An opening movement in sonata form, usually with two contrasting themes, followed by a development section where the thematic material is transformed and transposed, and ending with a recapitulation of the initial two themes.
A lyrical movement in a slow or moderate tempo, sometimes built out of three sections that repeat themselves in the order A–B–C–A–B–C, and sometimes a set of variations.
A minuet or scherzo, a light movement in three quarter time, with a main section, a contrasting trio section, and a repeat of the main section.
A fast finale section in rondo form, a series of contrasting sections with a main refrain section opening and closing the movement, and repeating between each section.
Haydn was by no means the only composer developing new modes of chamber music. Even before Haydn, many composers were already experimenting with new forms. Giovanni Battista Sammartini, Ignaz Holzbauer, and Franz Xaver Richter wrote precursors of the string quartet. Franz Ignaz von Beecke (1733-1803), with his Piano Quintet in A minor (1770) and 17 string quartets was also one of the pioneers of chamber music of the Classical period.
Another renowned composer of chamber music of the period was Wolfgang Amadeus Mozart. Mozart's seven piano trios and two piano quartets were the first to apply the conversational principle to chamber music with piano. Haydn's piano trios are essentially piano sonatas with the violin and cello playing mostly supporting roles, doubling the treble and bass lines of the piano score. But Mozart gives the strings an independent role, using them as a counter to the piano, and adding their individual voices to the chamber music conversation.J.A. Fuller Maitland, "Pianoforte and Strings", in .
Mozart introduced the newly invented clarinet into the chamber music arsenal, with the Kegelstatt Trio for viola, clarinet and piano, K. 498, and the Quintet for Clarinet and String Quartet, K. 581. He also tried other innovative ensembles, including the quintet for violin, two violas, cello, and horn, K. 407, quartets for flute and strings, and various wind instrument combinations. He wrote six string quintets for two violins, two violas and cello, which explore the rich tenor tones of the violas, adding a new dimension to the string quartet conversation.
Mozart's string quartets are considered the pinnacle of the classical art. The six string quartets that he dedicated to Haydn, his friend and mentor, inspired the elder composer to say to Mozart's father, "I tell you before God as an honest man that your son is the greatest composer known to me either in person or by reputation. He has taste, and, what is more, the most profound knowledge of composition."
Many other composers wrote chamber compositions during this period that were popular at the time and are still played today. Luigi Boccherini, Italian composer and cellist, wrote nearly a hundred string quartets, and more than one hundred quintets for two violins, viola and two cellos. In this innovative ensemble, later used by Schubert, Boccherini gives flashy, virtuosic solos to the principal cello, as a showcase for his own playing. Violinist Carl Ditters von Dittersdorf and cellist Johann Baptist Wanhal, who both played pickup quartets with Haydn on second violin and Mozart on viola, were popular chamber music composers of the period.
From home to hall
The turn of the 19th century saw dramatic changes in society and in music technology which had far-reaching effects on the way chamber music was composed and played.
Collapse of the aristocratic system
Throughout the 18th century, the composer was normally an employee of an aristocrat, and the chamber music he or she composed was for the pleasure of aristocratic players and listeners.for a discussion of the effects of social change on music of the 18th and 19th centuries, see . Haydn, for example, was an employee of Nikolaus I, Prince Esterházy, a music lover and amateur baryton player, for whom Haydn wrote many of his string trios. Mozart wrote three string quartets for the King of Prussia, Frederick William II, a cellist. Many of Beethoven's quartets were first performed with patron Count Andrey Razumovsky on second violin. Boccherini composed for the king of Spain.
With the decline of the aristocracy and the rise of new social orders throughout Europe, composers increasingly had to make money by selling their compositions and performing concerts. They often gave subscription concerts, which involved renting a hall and collecting the receipts from the performance. Increasingly, they wrote chamber music not only for rich patrons, but for professional musicians playing for a paying audience.
Changes in the structure of stringed instruments
At the beginning of the 19th century, luthiers developed new methods of constructing the violin, viola and cello that gave these instruments a richer tone, more volume, and more carrying power.David Boyden, "The Violin", pp. 31–35, in Sadie (1989). Also at this time, bowmakers made the violin bow longer, with a thicker ribbon of hair under higher tension. This improved projection, and also made possible new bowing techniques. In 1820, Louis Spohr invented the chinrest, which gave violinists more freedom of movement in their left hands, for a more nimble technique. These changes contributed to the effectiveness of public performances in large halls, and expanded the repertoire of techniques available to chamber music composers.
Invention of the pianoforte
Throughout the Baroque era, the harpsichord was one of the main instruments used in chamber music. The harpsichord uses quills to pluck its strings, producing a delicate sound, and because of its design the attack or weight with which the performer strikes the keys does not change the volume or tone. In the second half of the 18th century the harpsichord gradually fell out of use, and by the late 1700s the pianoforte had become the preferred keyboard instrument for performance. Although the pianoforte was invented by Bartolomeo Cristofori at the beginning of the 1700s, it did not become widely used until the end of that century, when technical improvements in its construction made it a more effective instrument. Unlike the harpsichord, the pianoforte can play soft or loud dynamics and sharp sforzando attacks depending on how hard or soft the performer plays the keys.Cecil Glutton, "The Pianoforte", in Baines (1969). The improved pianoforte was adopted by Mozart and other composers, who began composing chamber works in which the piano played a leading role. The piano was to become more and more dominant through the 19th century, so much so that many composers, such as Franz Liszt and Frédéric Chopin, wrote almost exclusively for solo piano (or solo piano with orchestra).
Beethoven
Ludwig van Beethoven straddled this period of change as a giant of Western music. Beethoven transformed chamber music, raising it to a new plane, both in terms of content and in terms of the technical demands on performers and audiences. His works, in the words of Maynard Solomon, were "...the models against which nineteenth-century romanticism measured its achievements and failures."Maynard Solomon, "Beethoven: Beyond Classicism", p. 59, in . His late quartets, in particular, were considered so daunting an accomplishment that many composers after him were afraid to try composing quartets; Johannes Brahms composed and tore up 20 string quartets before he dared publish a work that he felt was worthy of the "giant marching behind".Stephen Hefling, "The Austro-Germanic quartet tradition in the nineteenth century", in .
Beethoven made his formal debut as a composer with three Piano Trios, Op. 1. Even these early works, written when Beethoven was only 22, while adhering to a strictly classical mold, showed signs of the new paths that Beethoven was to forge in the coming years. When he showed the manuscript of the trios to Haydn, his teacher, prior to publication, Haydn approved of the first two, but advised against publishing the third trio, in C minor, as too radical, warning that it would not "...be understood and favorably received by the public." The quote is from Ferdinand Ries's recollections of conversations with Beethoven.
Haydn was wrong—the third trio was the most popular of the set, and Haydn's criticisms caused a falling-out between him and the sensitive Beethoven. The trio is, indeed, a departure from the mold that Haydn and Mozart had formed. Beethoven makes dramatic deviations of tempo within phrases and within movements. He greatly increases the independence of the strings, especially the cello, allowing it to range above the piano and occasionally even the violin.
If his Op. 1 trios introduced Beethoven's works to the public, his Septet, Op. 20, established him as one of Europe's most popular composers. The septet, scored for violin, viola, cello, contrabass, clarinet, horn, and bassoon, was a huge hit. It was played in concerts again and again, and it appeared in transcriptions for many combinations – one of which, for clarinet, cello and piano, was written by Beethoven himself. It was so popular that Beethoven feared it would eclipse his other works; by 1815, Carl Czerny wrote that Beethoven "could not endure his septet and grew angry because of the universal applause which it has received." The septet is written as a classical divertimento in six movements, including two minuets, and a set of variations. It is full of catchy tunes, with solos for everyone, including the contrabass.
In his 17 string quartets, composed over the course of 37 of his 56 years, Beethoven goes from classical composer par excellence to creator of musical Romanticism, and finally, with his late string quartets, he transcends classicism and romanticism to create a genre that defies categorization. Stravinsky referred to the Große Fuge, one of the late quartets, as "...this absolutely contemporary piece of music that will be contemporary forever."Joseph Kerman, "Beethoven Quartet Audiences: Actual Potential, Ideal", p. 21, in .
The string quartets 1–6, Op. 18, were written in the classical style, in the same year that Haydn wrote his Op. 76 string quartets. Even here, Beethoven stretched the formal structures pioneered by Haydn and Mozart. In the quartet Op. 18, No. 1, in F major, for example, there is a long, lyrical solo for cello in the second movement, giving the cello a new type of voice in the quartet conversation. And the last movement of Op. 18, No. 6, "La Malincolia", creates a new type of formal structure, interleaving a slow, melancholic section with a manic dance. Beethoven was to use this form in later quartets, and Brahms and others adopted it as well.
In the years 1805 to 1806, Beethoven composed the three Op. 59 quartets on a commission from Count Razumovsky, who played second violin in their first performance. These quartets, from Beethoven's middle period, were pioneers in the romantic style. Besides introducing many structural and stylistic innovations, these quartets were much more difficult technically to perform – so much so that they were, and remain, beyond the reach of many amateur string players. When first violinist Ignaz Schuppanzigh complained of their difficulty, Beethoven retorted, "Do you think I care about your wretched violin when the spirit moves me?" Among the difficulties are complex syncopations and cross-rhythms; synchronized runs of sixteenth, thirty-second, and sixty-fourth notes; and sudden modulations requiring special attention to intonation. In addition to the Op. 59 quartets, Beethoven wrote two more quartets during his middle period – Op. 74, the "Harp" quartet, named for the unusual harp-like effect Beethoven creates with pizzicato passages in the first movement, and Op. 95, the "Serioso".
The Serioso is a transitional work that ushers in Beethoven's late period – a period of compositions of great introspection. "The particular kind of inwardness of Beethoven's last style period", writes Joseph Kerman, gives one the feeling that "the music is sounding only for the composer and for one other auditor, an awestruck eavesdropper: you."Kerman, in . In the late quartets, the quartet conversation is often disjointed, proceeding like a stream of consciousness. Melodies are broken off, or passed in the middle of the melodic line from instrument to instrument. Beethoven uses new effects, never before essayed in the string quartet literature: the ethereal, dreamlike effect of open intervals between the high E string and the open A string in the second movement of quartet Op. 132; the use of sul ponticello (playing on the bridge of the violin) for a brittle, scratchy sound in the Presto movement of Op. 131; the use of the Lydian mode, rarely heard in Western music for 200 years, in Op. 132; a cello melody played high above all the other strings in the finale of Op. 132.For a complete analysis of the late quartets, see . Yet for all this disjointedness, each quartet is tightly designed, with an overarching structure that ties the work together.
Beethoven wrote eight piano trios, five string trios, two string quintets, and numerous pieces for wind ensemble. He also wrote ten sonatas for violin and piano and five sonatas for cello and piano.
Franz Schubert
As Beethoven, in his last quartets, went off in his own direction, Franz Schubert carried on and established the emerging romantic style. In his 31 years, Schubert devoted much of his life to chamber music, composing 15 string quartets, two piano trios, string trios, a piano quintet commonly known as the Trout Quintet, an octet for strings and winds, and his famous quintet for two violins, viola, and two cellos.
Schubert's music, like his life, exemplified the contrasts and contradictions of his time. On the one hand, he was the darling of Viennese society: he starred in soirées that became known as Schubertiaden, where he played his light, mannered compositions that expressed the Gemütlichkeit of Vienna of the 1820s. On the other hand, his own short life was shrouded in tragedy, wracked by poverty and ill health. Chamber music was the ideal medium to express this conflict, "to reconcile his essentially lyric themes with his feeling for dramatic utterance within a form that provided the possibility of extreme color contrasts." The String Quintet in C, D.956, is an example of how this conflict is expressed in music. After a slow introduction, the first theme of the first movement, fiery and dramatic, leads to a bridge of rising tension, peaking suddenly and breaking into the second theme, a lilting duet in the lower voices. The alternating Sturm und Drang and relaxation continue throughout the movement.
These contending forces are expressed in some of Schubert's other works: in the quartet Death and the Maiden, the Rosamunde quartet and in the stormy, one-movement Quartettsatz, D. 703.For an analysis of these works, as well as the quintet, see Willi Kahl, "Schubert", in .
Felix Mendelssohn
Unlike Schubert, Felix Mendelssohn had a life of peace and prosperity. Born into a wealthy Jewish family in Hamburg, Mendelssohn proved himself a child prodigy. By the age of 16, he had written his first major chamber work, the String Octet, Op. 20. Already in this work, Mendelssohn showed some of the unique style that was to characterize his later works; notably, the gossamer light texture of his scherzo movements, exemplified also by the Canzonetta movement of the String Quartet, Op. 12, and the scherzo of the Piano Trio No. 1 in D minor, Op. 49.
Another characteristic that Mendelssohn pioneered is the cyclic form in overall structure. This means the reuse of thematic material from one movement to the next, to give the total piece coherence. In his second string quartet, he opens the piece with a peaceful adagio section in A major, that contrasts with the stormy first movement in A minor. After the final, vigorous Presto movement, he returns to the opening adagio to conclude the piece. This string quartet is also Mendelssohn's homage to Beethoven; the work is studded with quotes from Beethoven's middle and late quartets.
During his adult life, Mendelssohn wrote two piano trios, seven works for string quartet, two string quintets, the octet, a sextet for piano and strings, and numerous sonatas for piano with violin, cello, and clarinet.
Robert Schumann
Robert Schumann continued the development of cyclic structure. In his Piano Quintet in E-flat, Op. 44, Schumann wrote a double fugue in the finale, using the theme of the first movement and the theme of the last movement. Both Schumann and Mendelssohn, following the example set by Beethoven, revived the fugue, which had fallen out of favor since the Baroque period. However, rather than writing strict, full-length fugues, they used counterpoint as another mode of conversation between the chamber music instruments. Many of Schumann's chamber works, including all three of his string quartets and his piano quartet, have contrapuntal sections interwoven seamlessly into the overall compositional texture.Fannie Davies, "Schumann" in .
The composers of the first half of the 19th century were acutely aware of the conversational paradigm established by Haydn and Mozart. Schumann wrote that in a true quartet "everyone has something to say ... a conversation, often truly beautiful, often oddly and turbidly woven, among four people."Stephen Hefling, "The Austro-Germanic quartet tradition of the nineteenth century", in . Their awareness is exemplified by composer and virtuoso violinist Louis Spohr. Spohr divided his 36 string quartets into two types: the quatuor brillant, essentially a violin concerto with string trio accompaniment; and quatuor dialogue, in the conversational tradition.Hefling, in .
Chamber music and society in the 19th century
During the 19th century, with the rise of new technology driven by the Industrial Revolution, printed music became cheaper and more accessible, and domestic music making gained widespread popularity. Composers began to incorporate new elements and techniques into their works to appeal to this open market, since consumer demand for chamber music had grown. While improvements in instruments led to more public performances of chamber music, it remained very much a type of music to be played as much as performed. Amateur quartet societies sprang up throughout Europe, and no middling-sized city in Germany or France was without one. These societies sponsored house concerts, compiled music libraries, and encouraged the playing of quartets and other ensembles.Bashford, in . For a detailed discussion of quartet societies in France, see Fauquet (1986). In Germany and France in particular, these societies brought like-minded musicians together and fostered strong ties between music making and the community. Even so, orchestral works and virtuoso solo pieces remained in highest favor and made up the largest part of the public concert repertoire.Lott, Marie S. (2008) Audience and style in nineteenth-century chamber music, c. 1830 to 1880. University of Rochester, Eastman School of Music, ProQuest Dissertations Publishing. Early French composers of chamber music included Camille Saint-Saëns and César Franck.
Apart from the "central" Austro-Germanic countries, a chamber music subculture also emerged in other regions such as Britain, where chamber music was often performed by upper- and middle-class men of modest musical skill in informal settings, such as private residences with only a handful of listeners. The most common forms of chamber music in Britain were the string quartet, sentimental songs, and piano chamber works such as the piano trio, a repertoire that typifies the conventional picture of Victorian music making. In the middle of the 19th century, with the rise of the feminist movement, women also began to gain acceptance as participants in chamber music.
Thousands of quartets were published by hundreds of composers; between 1770 and 1800, more than 2000 quartets were published,Bashford, in . and the pace did not decline in the next century. Throughout the 19th century, composers published string quartets now long neglected: George Onslow wrote 36 quartets and 35 quintets; Gaetano Donizetti wrote dozens of quartets, Antonio Bazzini, Anton Reicha, Carl Reissiger, Joseph Suk and others wrote to fill an insatiable demand for quartets. In addition, there was a lively market for string quartet arrangements of popular and folk tunes, piano works, symphonies, and opera arias.Bashford, in .
But opposing forces were at work. The middle of the 19th century saw the rise of superstar virtuosi, who drew attention away from chamber music toward solo performance. The piano, which could be mass-produced, became an instrument of preference, and many composers, like Chopin and Liszt, composed primarily if not exclusively for piano.For a discussion of the impact of the piano on string quartet composition, see .
The ascendance of the piano, and of symphonic composition, was not merely a matter of preference; it was also a matter of ideology. In the 1860s, a schism grew among romantic musicians over the direction of music. Many composers expressed their romantic persona through their works, and by this time chamber works were no longer necessarily written for a specific dedicatee. Famous chamber works such as Fanny Mendelssohn's D minor Piano Trio, Ludwig van Beethoven's Trio in E-flat major, and Franz Schubert's Piano Quintet in A major are all highly personal. Liszt and Richard Wagner led a movement that contended that "pure music" had run its course with Beethoven, and that new, programmatic forms of music–in which music created "images" with its melodies–were the future of the art. The composers of this school had no use for chamber music. Opposing this view were Johannes Brahms and his associates, especially the powerful music critic Eduard Hanslick. This War of the Romantics shook the artistic world of the period, with vituperative exchanges between the two camps, concert boycotts, and petitions.
Although amateur playing thrived throughout the 19th century, this was also a period of increasing professionalization of chamber music performance. Professional quartets began to dominate the chamber music concert stage. The Hellmesberger Quartet, led by Joseph Hellmesberger, and the Joachim Quartet, led by Joseph Joachim, debuted many of the new string quartets by Brahms and other composers. Another famous quartet player was Vilemina Norman Neruda, also known as Lady Hallé. Indeed, during the last third of the century, women performers began taking their place on the concert stage: an all-women string quartet led by Emily Shinner, and the Lucas quartet, also all women, were two notable examples.Tully Potter, "From chamber to concert hall", in .
Meanwhile, in the New World, parlour music was at its high point in the late 19th century.
Toward the 20th century
It was Johannes Brahms who carried the torch of Romantic music toward the 20th century. Heralded by Robert Schumann as the forger of "new paths" in music,Robert Schumann, "Neue Bahnen" in the journal Neue Zeitschrift für Musik, October 1853, W3.rz-berlin.mpg.de (accessed 2007-10-30). Brahms's music is a bridge from the classical to the modern. On the one hand, Brahms was a traditionalist, conserving the musical traditions of Bach and Mozart. Throughout his chamber music, he uses traditional techniques of counterpoint, incorporating fugues and canons into rich conversational and harmonic textures. On the other hand, Brahms expanded the structure and the harmonic vocabulary of chamber music, challenging traditional notions of tonality. An example of this is in the Brahms second string sextet, Op. 36.
Traditionally, composers wrote the first theme of a piece in the key of the piece, firmly establishing that key as the tonic, or home, key of the piece. The opening theme of Op. 36 starts in the tonic (G major), but already by the third measure has modulated to the unrelated key of E-flat major. As the theme develops, it ranges through various keys before coming back to the tonic G major. This "harmonic audacity", as Swafford describes it, opened the way for bolder experiments to come.
Not only in harmony, but also in overall musical structure, Brahms was an innovator. He developed a technique that Arnold Schoenberg described as "developing variation"., cited in . Rather than discretely defined phrases, Brahms often runs phrase into phrase, and mixes melodic motives to create a fabric of continuous melody. Schoenberg, the creator of the 12-tone system of composition, traced the roots of his modernism to Brahms, in his essay "Brahms the Progressive"., cited in .
All told, Brahms published 24 works of chamber music, including three string quartets, five piano trios, the quintet for piano and strings, Op. 34, and other works. Among his last works were the clarinet quintet, Op. 115, and a trio for clarinet, cello and piano. He wrote a trio for the unusual combination of piano, violin and horn, Op. 40. He also wrote two songs for alto singer, viola and piano, Op. 91, reviving the form of voice with string obbligato that had been virtually abandoned since the Baroque.
The exploration of tonality and of structure begun by Brahms was continued by composers of the French school. César Franck's piano quintet in F minor, composed in 1879, further established the cyclic form first explored by Schumann and Mendelssohn, reusing the same thematic material in each of the three movements. Claude Debussy's string quartet, Op. 10, is considered a watershed in the history of chamber music. The quartet uses the cyclic structure, and constitutes a final divorce from the rules of classical harmony. "Any sounds in any combination and in any succession are henceforth free to be used in a musical continuity", Debussy wrote. Pierre Boulez said that Debussy freed chamber music from "rigid structure, frozen rhetoric and rigid aesthetics".
Debussy's quartet, like the string quartets of Maurice Ravel and of Gabriel Fauré, created a new tone color for chamber music, a color and texture associated with the Impressionist movement.Debussy himself denied that he was an impressionist. See Thomson (1940), p. 161. Violist James Dunham, of the Cleveland and Sequoia Quartets, writes of the Ravel quartet, "I was simply overwhelmed by the sweep of sonority, the sensation of colors constantly changing ..." For these composers, chamber ensembles were the ideal vehicle for transmitting this atmospheric sense, and chamber works constituted much of their oeuvre.
Nationalism in chamber music
Parallel with the trend to seek new modes of tonality and texture was another new development in chamber music: the rise of nationalism. Composers turned more and more to the rhythms and tonalities of their native lands for inspiration and material. "Europe was impelled by the Romantic tendency to establish in musical matters the national boundaries more and more sharply", wrote Alfred Einstein. "The collecting and sifting of old traditional melodic treasures ... formed the basis for a creative art-music." For many of these composers, chamber music was the natural vehicle for expressing their national characters.
Czech composer Antonín Dvořák created in his chamber music a new voice for the music of his native Bohemia. In 14 string quartets, three string quintets, two piano quartets, a string sextet, four piano trios, and numerous other chamber compositions, Dvořák incorporates folk music and modes as an integral part of his compositions. For example, in the piano quintet in A major, Op. 81, the slow movement is a Dumka, a Slavic folk ballad that alternates between a slow expressive song and a fast dance. Dvořák's fame in establishing a national art music was so great that the New York philanthropist and music connoisseur Jeannette Thurber invited him to America, to head a conservatory that would establish an American style of music. There, Dvořák wrote his string quartet in F major, Op. 96, nicknamed "The American". While composing the work, Dvořák was entertained by a group of Kickapoo Indians who performed native dances and songs, and these songs may have been incorporated in the quartet.
Bedřich Smetana, another Czech, wrote a piano trio and string quartet, both of which incorporate native Czech rhythms and melodies. In Russia, Russian folk music permeated the works of the late 19th-century composers. Pyotr Ilyich Tchaikovsky uses a typical Russian folk dance in the final movement of his string sextet, Souvenir de Florence, Op. 70. Alexander Borodin's second string quartet contains references to folk music, and the slow Nocturne movement of that quartet recalls Middle Eastern modes that were current in the Muslim sections of southern Russia. Edvard Grieg used the musical style of his native Norway in his string quartet in G minor, Op. 27 and his violin sonatas.
In Hungary, composers Zoltán Kodály and Béla Bartók pioneered the science of ethnomusicology by performing one of the first comprehensive studies of folk music. Ranging across the Magyar provinces, they transcribed, recorded, and classified tens of thousands of folk melodies. They used these tunes in their compositions, which are characterized by the asymmetrical rhythms and modal harmonies of that music. Their chamber music compositions, and those of the Czech composer Leoš Janáček, combined the nationalist trend with the 20th century search for new tonalities. Janáček's string quartets not only incorporate the tonalities of Czech folk music, they also reflect the rhythms of speech in Czech.
New sounds for a new world
The end of western tonality, begun subtly by Brahms and made explicit by Debussy, posed a crisis for composers of the 20th century. It was not merely an issue of finding new types of harmonies and melodic systems to replace the diatonic scale that was the basis of western harmony; the whole structure of western music – the relationships between movements and between structural elements within movements – was based on the relationships between different keys. So composers were challenged with building a whole new structure for music.
This was coupled with the feeling that the era that saw the invention of automobiles, the telephone, electric lighting, and world war needed new modes of expression. "The century of the aeroplane deserves its music", wrote Debussy.
Inspiration from folk music
The search for a new music took several directions. The first, led by Bartók, was toward the tonal and rhythmic constructs of folk music. Bartók's research into Hungarian and other eastern European and Middle Eastern folk music revealed to him a musical world built of musical scales that were neither major nor minor, and complex rhythms that were alien to the concert hall. In his fifth quartet, for example, Bartók uses an asymmetrical "Bulgarian" time signature, "startling to the classically-trained musician, but second-nature to the folk musician." Structurally, also, Bartók often invents or borrows from folk modes. In the sixth string quartet, for example, Bartók begins each movement with a slow, elegiac melody, followed by the main melodic material of the movement, and concludes the quartet with a slow movement that is built entirely on this elegy. This is a form common in many folk music cultures.
Bartók's six string quartets are often compared with Beethoven's late quartets. In them, Bartók builds new musical structures, explores sonorities never previously produced in classical music (for example, the snap pizzicato, where the player lifts the string and lets it snap back on the fingerboard with an audible buzz), and creates modes of expression that set these works apart from all others. "Bartók's last two quartets proclaim the sanctity of life, progress and the victory of humanity despite the anti-humanistic dangers of the time", writes analyst John Herschel Baron. The last quartet, written when Bartók was preparing to flee the Nazi invasion of Hungary for a new and uncertain life in the U.S., is often seen as an autobiographical statement of the tragedy of his times.
Bartók was not alone in his explorations of folk music. Igor Stravinsky's Three Pieces for String Quartet is structured as three Russian folksongs, rather than as a classical string quartet. Stravinsky, like Bartók, used asymmetrical rhythms throughout his chamber music; the Histoire du soldat, in Stravinsky's own arrangement for clarinet, violin and piano, constantly shifts time signatures between two, three, four and five beats to the bar. In Britain, composers Ralph Vaughan Williams, William Walton and Benjamin Britten drew on English folk music for much of their chamber music: Vaughan Williams incorporates folksongs and country fiddling in his first string quartet. American composer Charles Ives wrote music that was distinctly American. Ives gave programmatic titles to much of his chamber music; his first string quartet, for example, is called "From the Salvation Army", and quotes American Protestant hymns in several places.
Serialism, polytonality and polyrhythms
A second direction in the search for a new tonality was twelve-tone serialism. Arnold Schoenberg developed the twelve-tone method of composition as an alternative to the structure provided by the diatonic system. His method entails building a piece using a series of the twelve notes of the chromatic scale, permuting it and superimposing it on itself to create the composition.
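As a rough sketch of the mechanics described above (the row used here is arbitrary, not a row from any Schoenberg work, and the function names are illustrative), the following Python fragment derives the standard transformations of a twelve-tone row: transposition, retrograde, and inversion.

```python
# Sketch of the basic twelve-tone row operations. Pitches are numbers 0-11,
# where 0 = C, 1 = C sharp, ... 11 = B. The example row is arbitrary.

def transpose(row, n):
    """Shift every pitch of the row up by n semitones (mod 12)."""
    return [(p + n) % 12 for p in row]

def retrograde(row):
    """Play the row backwards."""
    return list(reversed(row))

def invert(row):
    """Mirror each interval around the first pitch of the row."""
    first = row[0]
    return [(2 * first - p) % 12 for p in row]

if __name__ == "__main__":
    prime = [0, 11, 7, 8, 3, 1, 2, 10, 6, 5, 4, 9]   # an arbitrary example row
    print("P0: ", prime)
    print("R0: ", retrograde(prime))
    print("I0: ", invert(prime))
    print("RI0:", retrograde(invert(prime)))
    print("P5: ", transpose(prime, 5))
```

Combining the three operations with the twelve possible transpositions yields the familiar matrix of 48 row forms from which serial composers draw their material.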
Schoenberg did not arrive immediately at the serial method. His first chamber work, the string sextet Verklärte Nacht, was mostly a late German romantic work, though it was bold in its use of modulations. The first work that was frankly atonal was the second string quartet; the last movement of this quartet, which includes a soprano, has no key signature. Schoenberg further explored atonality with Pierrot Lunaire, for singer, flute or piccolo, clarinet, violin, cello and piano. The singer uses a technique called Sprechstimme, halfway between speech and song.
After developing the twelve-tone technique, Schoenberg wrote a number of chamber works, including two more string quartets, a string trio, and a wind quintet. He was followed by a number of other twelve-tone composers, the most prominent of whom were his students Alban Berg, who wrote the Lyric Suite for string quartet, and Anton Webern, who wrote Five Movements for String Quartet, op. 5.
Twelve-tone technique was not the only new experiment in tonality. Darius Milhaud developed the use of polytonality, that is, music where different instruments play in different keys at the same time. Milhaud wrote 18 string quartets; quartets number 14 and 15 are written so that each can be played by itself, or the two can be played at the same time as an octet. Milhaud also used jazz idioms, as in his Suite for clarinet, violin and piano.
The American composer Charles Ives used not only polytonality in his chamber works, but also polymeter. In his first string quartet he writes a section where the first violin and viola play in one meter while the second violin and cello play in another.
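A small illustration of why two simultaneous meters drift apart and then realign (the meters 3/4 and 4/4 below are chosen only as an example, not quoted from Ives's score): the shared downbeat recurs at the least common multiple of the two bar lengths.

```python
# Illustrative sketch: where do the bar lines of two simultaneous meters coincide?
from math import lcm
from fractions import Fraction

def shared_downbeats(meter_a, meter_b):
    """Return the span (in whole notes) after which both bar lines coincide."""
    bar_a = Fraction(meter_a[0], meter_a[1])   # e.g. a 3/4 bar = 3 quarter notes
    bar_b = Fraction(meter_b[0], meter_b[1])
    # Least common multiple of the two bar lengths, computed on fractions.
    return Fraction(lcm(bar_a.numerator * bar_b.denominator,
                        bar_b.numerator * bar_a.denominator),
                    bar_a.denominator * bar_b.denominator)

if __name__ == "__main__":
    span = shared_downbeats((3, 4), (4, 4))
    print(span)                      # 3 whole notes
    print(span / Fraction(3, 4))     # = 4 bars of 3/4
    print(span / Fraction(4, 4))     # = 3 bars of 4/4
```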
Neoclassicism
The plethora of directions that music took in the first quarter of the 20th century led to a reaction by many composers. Led by Stravinsky, these composers looked to the music of preclassical Europe for inspiration and stability. While Stravinsky's neoclassical works – such as the 'Concertino for String Quartet' – sound contemporary, they are modeled on Baroque and early classical forms – the canon, the fugue, and the Baroque sonata form.
Paul Hindemith was another neoclassicist. His many chamber works are essentially tonal, though they use many dissonant harmonies. Hindemith wrote seven string quartets and two string trios, among other chamber works. At a time when composers were writing works of increasing complexity, beyond the reach of amateur musicians, Hindemith explicitly recognized the importance of amateur music-making, and intentionally wrote pieces that were within the abilities of nonprofessional players.
The works that Hindemith grouped under the title Kammermusik, a collection of eight extended compositions, consist mostly of concertante works, comparable to Bach's Brandenburg Concertos.
Dmitri Shostakovich was one of the most prolific chamber music composers of the 20th century, writing 15 string quartets, two piano trios, the piano quintet, and numerous other chamber works. Shostakovich's music was repeatedly denounced, and at times suppressed, in the Soviet Union, and for long stretches he feared for his personal safety. His eighth quartet is an autobiographical work that expresses the deep depression of his ostracism, bordering on the suicidal: it quotes from previous compositions and uses the four-note motif DSCH, the composer's initials.
Stretching the limits
As the century progressed, many composers created works for small ensembles that, while they formally might be considered chamber music, challenged many of the fundamental characteristics that had defined the genre over the last 150 years.
Music of friends
The idea of composing music that could be played at home has been largely abandoned. Bartók was among the first to part with this idea. "Bartók never conceived these quartets for private performance but rather for large, public concerts." Aside from the many almost insurmountable technical difficulties of many modern pieces, some of them are hardly suitable for performance in a small room. For example, Different Trains by Steve Reich is scored for live string quartet and recorded tape, which layers together a carefully orchestrated sound collage of speech, recorded train sounds, and three string quartets.Steve Reich, Composer's Notes, at .
Relation of composer and performer
Traditionally, the composer wrote the notes, and the performer interpreted them. But this is no longer the case in much modern music. In Für kommende Zeiten (For Times to Come), Stockhausen writes verbal instructions describing what the performers are to play. "Star constellations/with common points/and falling stars ... Abrupt end" is a sample.Karlheinz Stockhausen, Awake, no. 16 (July 7, 1970) from Aus den sieben Tagen/Für kommende Zeiten/For Times to Come/Pour les temps a venir: 17 Texte für Intuitive Musik, Werk Nr. 33 (Kürten: Stockhausen-Verlag, 1976), 66.
Composer Terry Riley describes how he works with the Kronos Quartet, an ensemble devoted to contemporary music: "When I write a score for them, it's an unedited score. I put in just a minimal amount of dynamics and phrasing marks ...we spend a lot of time trying out different ideas in order to shape the music, to form it. At the end of the process, it makes the performers actually own the music. That to me is the best way for composers and musicians to interact."K. Robert Schwarz, "A New Look at a Major Minimalist", in The New York Times (May 6, 1990), Section H, p. 24. Retrieved 20 April 2010.
New sounds
Composers sought new timbres, remote from the traditional blend of strings, piano and woodwinds that had characterized chamber music in the 19th century. This search led to the incorporation of new instruments, such as the theremin and the synthesizer, into chamber music compositions.
Many composers sought new timbres within the framework of traditional instruments. "Composers begin to hear new timbres and new timbral combinations, which are as important to the new music of the twentieth century as the so-called breakdown of functional tonality," writes music historian James McCalla. Examples are numerous: Bartók's Sonata for Two Pianos and Percussion (1937), Schoenberg's Pierrot lunaire, Charles Ives's Three Quarter-Tone Pieces for two pianos tuned a quartertone apart. Other composers used electronics and extended techniques to create new sonorities. An example is George Crumb's Black Angels, for electric string quartet (1970). The players not only bow their amplified instruments, they also beat on them with thimbles, pluck them with paper clips and play on the wrong side of the bridge or between the fingers and the nut. Still other composers have sought to explore the timbres created by including instruments which are not often associated with a typical orchestral ensemble. For example, Robert Davine explores the orchestral timbres of the accordion when it is included in a traditional wind trio in his Divertimento for accordion, flute, clarinet and bassoon, and Karlheinz Stockhausen wrote a Helicopter String Quartet, in which the four players perform in separate helicopters.Irvine Arditti, "Flight of Fantasy", The Strad (March 2008):52–53, 55.
What do these changes mean for the future of chamber music? "With the technological advances have come questions of aesthetics and sociological changes in music", writes analyst Baron. "These changes have often resulted in accusations that technology has destroyed chamber music and that technological advance is in inverse proportion to musical worth. The ferocity of these attacks only underscores how fundamental these changes are, and only time will tell if humankind will benefit from them."
In contemporary society
Analysts agree that the role of chamber music in society has changed profoundly in the last 50 years; yet there is little agreement as to what that change is. On the one hand, Baron contends that "chamber music in the home ... remained very important in Europe and America until the Second World War, after which the increasing invasion of radio and recording reduced its scope considerably." This view is supported by subjective impressions. "Today there are so many more millions of people listening to music, but far fewer playing chamber music just for the pleasure of it", says conductor and pianist Daniel Barenboim.
However, recent surveys suggest there is, on the contrary, a resurgence of home music making. In the radio program "Amateurs Help Keep Chamber Music Alive" from 2005, reporter Theresa Schiavone cites a Gallup poll showing an increase in the sale of stringed instruments in America. Joe Lamond, president of the National Association of Music Manufacturers (NAMM) attributes the increase to a growth of home music-making by adults approaching retirement. "I would really look to the demographics of the [baby] boomers", he said in an interview. These people "are starting to look for something that matters to them ... nothing makes them feel good more than playing music."Theresa Schiavone, "Amateurs Help Keep Chamber Music Alive", All Things Considered, August 27, 2005, NPR
A study by the European Music Office in 1996 suggests that not only older people are playing music. "The number of adolescents today to have done music has almost doubled by comparison with those born before 1960", the study shows.Antoine Hennion, "Music industry and music lovers, beyond Benjamin: The return of the amateur", in Soundscapes (volume 2, July 1999) available online at Soundscapes.info. While most of this growth is in popular music, some is in chamber music and art music, according to the study.
While there is no agreement about the number of chamber music players, the opportunities for amateurs to play have certainly grown. The number of chamber music camps and retreats, where amateurs can meet for a weekend or a month to play together, has burgeoned. Music for the Love of It, an organization to promote amateur playing, publishes a directory of music workshops that listed more than 500 workshops in 24 countries for amateurs in 2008. The Associated Chamber Music Players (ACMP) offers a directory of over 5,000 amateur players worldwide who welcome partners for chamber music sessions.
Regardless of whether the number of amateur players has grown or shrunk, the number of chamber music concerts in the west has increased greatly in the last 20 years. Concert halls have largely replaced the home as the venue for concerts. Baron suggests that one of the reasons for this surge is "the spiraling costs of orchestral concerts and the astronomical fees demanded by famous soloists, which have priced both out of the range of most audiences." The repertoire at these concerts is almost universally the classics of the 19th century. However, modern works are increasingly included in programs, and some groups, like the Kronos Quartet, devote themselves almost exclusively to contemporary music and new compositions, while ensembles like the Turtle Island String Quartet combine classical, jazz, rock and other styles to create crossover music. Cello Fury and Project Trio offer a new spin on the standard chamber ensemble: Cello Fury consists of three cellists and a drummer, and Project Trio includes a flutist, a bassist, and a cellist.
Several groups such as Classical Revolution and Simple Measures have taken classical chamber music out of the concert hall and into the streets. Simple Measures, a group of chamber musicians in Seattle (Washington, US), gives concerts in shopping centers, coffee shops, and streetcars. The Providence (Rhode Island, US) String Quartet has started the "Storefront Strings" program, offering impromptu concerts and lessons out of a storefront in one of Providence's poorer neighborhoods. "What really makes this for me", said Rajan Krishnaswami, cellist and founder of Simple Measures, "is the audience reaction ... you really get that audience feedback.""Classical Music Sans Stuffiness", radio interview with Dave Beck, KUOW-FM, Seattle, December 28, 2008, Simplepleasures.org
Performance
Chamber music performance is a specialized field, and requires a number of skills not normally required for the performance of symphonic or solo music. Many performers and authors have written about the specialized techniques required for a successful chamber musician. Chamber music playing, writes M. D. Herter Norton, requires that "individuals ... make a unified whole yet remain individuals. The soloist is a whole unto himself, and in the orchestra individuality is lost in numbers ...".
"Music of friends"
Many performers contend that the intimate nature of chamber music playing requires certain personality traits.
David Waterman, cellist of the Endellion Quartet, writes that the chamber musician "needs to balance assertiveness and flexibility."Waterman, in . Good rapport is essential. Arnold Steinhardt, first violinist of the Guarneri Quartet, notes that many professional quartets suffer from frequent turnover of players. "Many musicians cannot take the strain of going mano a mano with the same three people year after year."
Mary Norton, a violinist who studied quartet playing with the Kneisel Quartet at the beginning of the last century, goes so far as to suggest that players of different parts in a quartet have different personality traits. "By tradition the first violin is the leader" but "this does not mean a relentless predominance." The second violinist "is a little everybody's servant." "The artistic contribution of each member will be measured by his skill in asserting or subduing that individuality which he must possess to be at all interesting."
Interpretation
"For an individual, the problems of interpretation are challenging enough", writes Waterman, "but for a quartet grappling with some of the most profound, intimate and heartfelt compositions in the music literature, the communal nature of decision-making is often more testing than the decisions themselves."David Waterman, "Playing quartets: the view from inside", in .
The problem of finding agreement on musical issues is complicated by the fact that each player is playing a different part, that may appear to demand dynamics or gestures contrary to those of other parts in the same passage. Sometimes these differences are even specified in the score – for example, where cross-dynamics are indicated, with one instrument crescendoing while another is getting softer.
One of the issues that must be settled in rehearsal is who leads the ensemble at each point of the piece. Normally, the first violinist leads: she indicates the start of each movement and its tempo with a gesture of her head or bowing hand. However, some passages require other instruments to lead. For example, John Dalley, second violinist of the Guarneri Quartet, says, "We'll often ask [the cellist] to lead in pizzicato passages. A cellist's preparatory motion for pizzicato is larger and slower than that of a violinist."
Players discuss issues of interpretation in rehearsal; but often, in mid-performance, players do things spontaneously, requiring the other players to respond in real time. "After twenty years in the [Guarneri] Quartet, I'm happily surprised on occasion to find myself totally wrong about what I think a player will do, or how he'll react in a particular passage", says violist Michael Tree.
Ensemble, blend, and balance
Playing together constitutes a major challenge to chamber music players. Many compositions pose difficulties in coordination, with figures such as hemiolas, syncopation, fast unison passages and simultaneously sounded notes that form chords that are challenging to play in tune. But beyond the challenge of merely playing together from a rhythmic or intonation perspective is the greater challenge of sounding good together.
To create a unified chamber music sound – to blend – the players must coordinate the details of their technique. They must decide when to use vibrato and how much. They often need to coordinate their bowing and "breathing" between phrases, to ensure a unified sound. They need to agree on special techniques, such as spiccato, sul tasto, sul ponticello, and so on.For a detailed discussion of problems of blending in a string quartet, see
Balance refers to the relative volume of each of the instruments. Because chamber music is a conversation, sometimes one instrument must stand out, sometimes another. It is not always a simple matter for members of an ensemble to determine the proper balance while playing; frequently, they require an outside listener, or a recording of their rehearsal, to tell them that the relations between the instruments are correct.
Intonation
Chamber music playing presents special problems of intonation. The piano is tuned using equal temperament, that is, the 12 notes of the scale are spaced exactly equally. This method makes it possible for the piano to play in any key; however, all the intervals except the octave sound very slightly out of tune. String players can play with just intonation, that is, they can play specific intervals (such as fifths) exactly in tune. Moreover, string and wind players can use expressive intonation, changing the pitch of a note to create a musical or dramatic effect. "String intonation is more expressive and sensitive than equal-tempered piano intonation."Waterman, in .
However, using true and expressive intonation requires careful coordination with the other players, especially when a piece is going through harmonic modulations. "The difficulty in string quartet intonation is to determine the degree of freedom you have at any given moment", says Steinhardt.
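A brief numerical sketch of the discrepancy described above (the intervals compared are chosen for illustration): equal temperament spaces each semitone at a ratio of 2^(1/12), while just intonation uses small whole-number ratios, so common intervals differ by a few cents between the two systems.

```python
# Compare equal-tempered and just intervals (illustrative values only).
import math

def equal_tempered(semitones):
    """Frequency ratio of an interval spanning the given number of semitones."""
    return 2 ** (semitones / 12)

def cents(ratio_a, ratio_b):
    """Difference between two frequency ratios in cents (1200 cents = one octave)."""
    return 1200 * math.log2(ratio_a / ratio_b)

just_intervals = {
    "perfect fifth": (7, 3 / 2),
    "major third":   (4, 5 / 4),
    "minor third":   (3, 6 / 5),
}

for name, (semitones, just_ratio) in just_intervals.items():
    et_ratio = equal_tempered(semitones)
    print(f"{name:14s} ET {et_ratio:.4f}  just {just_ratio:.4f}  "
          f"difference {cents(et_ratio, just_ratio):+.1f} cents")
```

Run, this shows the equal-tempered fifth is only about 2 cents narrow of the pure fifth, while the thirds differ by roughly 14 to 16 cents, which is why string players adjusting by ear can diverge noticeably from a piano.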
The chamber music experience
Players of chamber music, both amateur and professional, attest to a unique enchantment with playing in ensemble. "It is not an exaggeration to say that there opened out before me an enchanted world", writes Walter Willson Cobbett, instigator of the Cobbett Competition, Cobbett Medal and editor of Cobbett's Cyclopedic Survey of Chamber Music.Cobbett, "Chamber Music Life", in .
Ensembles develop a close intimacy of shared musical experience. "It is on the concert stage where the moments of true intimacy occur", writes Steinhardt. "When a performance is in progress, all four of us together enter a zone of magic somewhere between our music stands and become a conduit, messenger, and missionary ... It is an experience too personal to talk about and yet it colors every aspect of our relationship, every good-natured musical confrontation, all the professional gossip, the latest viola joke."
The playing of chamber music has been the inspiration for numerous books, both fiction and nonfiction. An Equal Music by Vikram Seth explores the life and love of the second violinist of a fictional quartet, the Maggiore. Central to the story are the tensions and the intimacy that develop between the four members of the quartet. "A strange composite being we are [in performance], not ourselves any more, but the Maggiore, composed of so many disjunct parts: chairs, stands, music, bows, instruments, musicians ..." The Rosendorf Quartet, by Nathan Shaham, describes the trials of a string quartet in Palestine, before the establishment of the state of Israel. For the Love of It by Wayne Booth is a nonfiction account of the author's romance with cello playing and chamber music.
Festivals
Ensembles
This is a partial list of the types of ensembles found in chamber music. The standard repertoire for chamber ensembles is rich, and the totality of chamber music in print in sheet music form is nearly boundless. See the articles on each instrument combination for examples of repertoire.
Two players: duos and duets
Piano duo: 2 pianos.
Instrumental duo: any instrument and piano. Found especially as instrumental sonatas, i.e. violin, cello, viola, horn, oboe, bassoon, clarinet and flute sonatas.
Instrumental duo: any instrument and basso continuo. Common in Baroque music predating the piano. The basso continuo part is always present to provide rhythm and accompaniment, and is often played by a harpsichord, but other instruments can also be used. Contemporaneously, however, such a work was not called a "duo" but a "solo".
Piano duet: 1 piano, 4 hands. Mozart, Beethoven, Schubert and Brahms (original pieces and many transcriptions of his own works); a favorite domestic musical form, with many transcriptions of other genres (operas, symphonies, concertos and so on).
Vocal duet: voice and piano. Commonly used in the art song, or Lied.
Instrumental duet: 2 of any instrument, either equal or not. Mozart's Duets KV 423 and 424 for vn and va and Sonata KV 292 for bsn and vc; Beethoven's Duet for va and vc; Bartók's Duets for 2 vn.

Three players: trios
String trio: vln, vla, vc. Mozart's Divertimento K. 563 is an important example; Beethoven composed 5 trios near the beginning of his career. Trios for 2 vln and vla have been written by Dvořák, Bridge and Kodály.
Piano trio: vln, vc, pno. Haydn, Mozart, Beethoven, Schubert, Chopin, Mendelssohn, Schumann, Brahms, Tchaikovsky, Dvořák and many others.
Clarinet trio: cl, vln, pno. Famous compositions by Bartók, Ives, Berg, Donald Martino, Milhaud and Khachaturian (all 20th-century).
Clarinet trio: cl, vla, pno. Mozart's trio K. 498, other works by Schumann and Bruch.
Clarinet trio: cl, vc, pno. Beethoven's Trio Op. 11, as well as his own transcription, Op. 38, of the Septet, Op. 20; trios by Louise Farrenc and Ferdinand Ries, Brahms's trio Op. 114, Alexander von Zemlinsky's Op. 3, Robert Muczynski's Fantasy-Trio.
Voice and piano trio: voice, vla, pno. William Bolcom's trio "Let Evening Come" for soprano, viola and piano, and Brahms's Zwei Gesänge, Op. 91, for contralto, viola and piano.
Voice and piano trio: voice, cl, pno. Schubert's "The Shepherd on the Rock", D965; Spohr's Lieder.
Voice and piano trio: voice, hrn, pno. Schubert's "Auf dem Strom".
Flute, viola and harp: fl, vla, hrp. Famous works by Debussy and Bax. A 20th-century invention now with a surprisingly large repertoire. A variant is flute, cello and harp.
Flute trio: fl, ob, English horn. Nicholas Laucella's Divertimento for flute, oboe and English horn.
Horn trio: hrn, vln, pno. Two masterpieces by Brahms and Ligeti.
Reed trio: ob, cl, bsn. 20th-century composers such as Villa-Lobos have established this typical combination, also well suited to transcriptions of Mozart's basset horn trios (if not to Beethoven's trio for 2 ob and English horn).

Four players: quartets
String quartet: 2 vln, vla, vc. Very popular form. Numerous major examples by Haydn (its creator), Mozart, Beethoven, Schubert, and many other leading composers (see article).
Piano quartet: vln, vla, vc, pno. Mozart's KV 478 and 493; Beethoven's youthful compositions; Schumann, Brahms, Fauré.
Piano quartet: vln, cl, vc, pno. Rare; famous example: Messiaen's Quatuor pour la fin du temps; less famous: Hindemith (1938), Walter Rabl (Op. 1; 1896).
Clarinet quartet: 3 B♭ clarinets and bass clarinet. Twentieth-century composers.
Saxophone quartet: s/a/t/b sax or a/a/t/b sax. Examples: Eugène Bozza, Paul Creston, Alfred Desenclos, Pierre Max Dubois, Philip Glass, Alexander Glazunov, David Maslanka, Florent Schmitt, Jean-Baptiste Singelée, Iannis Xenakis.
Flute quartet: 4 fl, or fl, vln, vla and vc. Examples include those by Friedrich Kuhlau, Anton Reicha, Eugène Bozza, Florent Schmitt and Joseph Jongen; in the 20th century, Shigeru Kan-no.
Percussion quartet: 4 percussionists. Twentieth-century; composers include John Cage, David Lang, and Paul Lansky. See So Percussion.
Wind instrument and string trio: fl, ob, cl or bsn with vn, va, vc. Mozart's four Flute Quartets and one Oboe Quartet; Krommer's Flute Quartets (e.g. Op. 75), Clarinet Quartets, and Bassoon Quartets (e.g. his Op. 46 set); Devienne's Bassoon Quartet, Jörg Duda's Finnish Quartets.
Accordion and wind trio: acc, fl, cl, bsn. Robert Davine's Divertimento for flute, clarinet, bassoon, and accordion.
Piano and wind trio: pno, cl, hrn, bsn. Franz Berwald's Op. 1 (1819).
Voice and piano trio: voice, pno, vn, vc. Used by Beethoven and Joseph Haydn for settings of Lieder based on folk melodies.

Five players: quintets
String quintet: 2 vln, vla, vc with an additional vla, vc, or cb. With 2nd vla: Michael Haydn, Mozart, Beethoven, Brahms, Bruckner; with 2nd vc: Boccherini, Schubert; with cb: Vagn Holmboe, Dvořák.
Piano quintet: 2 vln, vla, vc, pno. Schumann's Op. 44, Brahms, Bartók, Dvořák, Shostakovich and others.
Piano quintet: vln, vla, vc, cb, pno. An uncommon instrumentation used by Franz Schubert in his Trout Quintet as well as by Johann Nepomuk Hummel and Louise Farrenc.
Piano and wind quintet: pno, ob, cl, bsn, hrn. Mozart's KV 452, Beethoven's Op. 16, and many others, including two by Nikolai Rimsky-Korsakov and Anton Rubinstein (the four wind instruments may vary).
Wind quintet: fl, cl, ob, bsn, hrn. 19th-century (Reicha, Danzi and others) and 20th-century composers (Carl Nielsen's Op. 43).
Wind and strings quintet: ob, cl, vln, vla, cb. Prokofiev's Quintet in G minor, Op. 39, in six movements (1925).
Brass quintet: 2 tr, hrn, trm, tuba. Mostly after 1950.
Clarinet quintet: cl, 2 vn, va, vc. Mozart's KV 581, Brahms's Op. 115, Weber's Op. 34, Samuel Coleridge-Taylor's Op. 10, Hindemith's Quintet (in which the clarinet player must alternate between a B♭ and an E♭ instrument), Milton Babbitt's Clarinet Quintet, and many others.
Clarinet quintet: cl, pno left hand, vn, va, vc. Schmidt's chamber pieces dedicated to the pianist Paul Wittgenstein (who played with the left hand only), although they are almost always performed nowadays in a two-hands version arranged by Friedrich Wührer.
Pierrot ensemble: fl, cl, vln, vc, pno. Named after Arnold Schoenberg's Pierrot Lunaire, which was the first piece to demand this instrumentation. Other works include Joan Tower's Petroushkates, Sebastian Currier's Static, and Elliott Carter's Triple Duo. Some works, such as Pierrot Lunaire itself, augment the ensemble with voice or percussion.
Reed quintet: ob, cl, a. sax, b. cl, bsn. 20th and 21st centuries.
Wind and string quartet: wind instrument with 2 vn, va, vc. Mozart's Quintet for Clarinet and Strings, Brahms's Quintet for Clarinet and Strings, Franz Krommer's Quintet for Flute and Strings, Op. 66, Bax's Quintet for Oboe and Strings.

Six players: sextets
String sextet: 2 vln, 2 vla, 2 vc. Important among these are Brahms's Op. 18 and Op. 36 sextets, and Schoenberg's Verklärte Nacht, Op. 4 (original version).
Wind sextet: 2 ob, 2 bsn, 2 hrn, or 2 cl, 2 hrn, 2 bsn. Mozart wrote both types; Beethoven used the one with clarinets.
Piano and wind quintet: fl, ob, cl, bsn, hrn, pno. Such as the Poulenc Sextet, and another by Ludwig Thuille.
Piano sextet: vln, 2 vla, vc, cb, pno. E.g. Mendelssohn's Op. 110, also one by Leslie Bassett.
Piano sextet: cl, 2 vln, vla, vc, pno. Prokofiev's Overture on Hebrew Themes, Op. 34; Copland's Sextet.

Seven players: septets
Wind and string septet: cl, hrn, bsn, vln, vla, vc, cb. Popularized by Beethoven's Septet Op. 20, Berwald's, and many others.

Eight players: octets
Wind and string octet: cl, hrn, bsn, 2 vln, vla, vc, cb, or cl, 2 hrn, vln, 2 vla, vc, cb. Schubert's Octet D. 803 (inspired by Beethoven's Septet) and Spohr's Octet, Op. 32.
String octet: 4 vln, 2 vla, 2 vc (less commonly 4 vln, 2 vla, vc, cb). Popularized by Mendelssohn's String Octet Op. 20. Others (among them works by Bruch, Woldemar Bargiel, George Enescu's String Octet, Op. 7, and a pair of pieces by Shostakovich) have followed.
Double quartet: 4 vln, 2 vla, 2 vc. Two string quartets arranged antiphonally, a genre preferred by Spohr. Milhaud's Op. 291 Octet is, rather, a pair of string quartets (his 14th and 15th) performed simultaneously.
Wind octet: 2 ob, 2 cl, 2 hrn, 2 bsn. Mozart's KV 375 and 388, Beethoven's Op. 103, Franz Lachner's Op. 156, Reinecke's Op. 216, and many written by Franz Krommer; also one written by Stravinsky and the delightful Petite Symphonie by Gounod.
Vocal octet: 2 sop, 2 alto, 2 ten, 2 bass. Robert Lucas de Pearsall's Lay a garland and Purcell's Hear My Prayer.

Nine players: nonets
Wind and string nonet: fl, ob, cl, hrn, bsn, vln, vla, vc, cb. Grand Nonetto (1813) by Spohr; Nonet (1849) by Louise Farrenc; Nonet (1875) by Franz Lachner; Petite Symphonie (1885) by Charles Gounod; Stanford's Serenade (1905); Parry's Wind Nonet (1877); Nonet (1923) by Heitor Villa-Lobos; Planos (1934) by Silvestre Revueltas; three by Bohuslav Martinů; four by Alois Hába.

Ten players: decets
Double wind quintet: 2 ob, 2 eh, 2 cl, 2 hrn, 2 bsn, or 2 fl, 2 ob, 2 cl, 2 hrn, 2 bsn. Few double wind quintets were written in the 18th century (notable exceptions being partitas by Josef Reicha and Antonio Rosetti), but in the 19th and 20th centuries they are plentiful. The most common instrumentation is 2 flutes (piccolo), 2 oboes (or English horn), 2 clarinets, 2 horns and 2 bassoons. Some of the best 19th-century compositions are the Émile Bernard Divertissement, Arthur Bird's Suite and the Salomon Jadassohn Serenade, to name a few. In the 20th century the Decet/dixtuor in D, Op. 14 by Enescu, written in 1906, is a well-known example. Frequently an additional bass instrument is added to the standard double wind quintet. Over 500 works have been written for these instruments and related ones.

Key: vln – violin; vla – viola; vc – cello; cb – double bass; pno – piano; fl – flute; ob – oboe; eh – English horn; cl – clarinet; s. sax – soprano saxophone; a. sax – alto saxophone; t. sax – tenor saxophone; b. sax – baritone saxophone; bsn – bassoon; hrn – horn; tr – trumpet; trm – trombone; acc – accordion; hrp – harp
Further reading
The New Grove Dictionary of Music and Musicians (ed. Stanley Sadie, 1980)
External links
Chamber Music America
earsense chamberbase, an online database of over 50,000 chamber works
Fischoff National Chamber Music Association, sponsor of the chamber music competitions and a supporter of chamber music education.
Associated Chamber Music Players (ACMP), New York City
Annotated bibliography of double wind quintet music
Sonata form
https://en.wikipedia.org/wiki/Sonata_form
The sonata form (also sonata-allegro form or first movement form) is a musical structure generally consisting of three main sections: an exposition, a development, and a recapitulation. It has been used widely since the middle of the 18th century (the early Classical period).
While it is typically used in the first movement of multi-movement pieces, it is sometimes used in subsequent movements as well—particularly the final movement. The teaching of sonata form in music theory rests on a standard definition and a series of hypotheses about the underlying reasons for the durability and variety of the form—a definition that arose in the second quarter of the 19th century. There is little disagreement that on the largest level, the form consists of three main sections: an exposition, a development, and a recapitulation; however, beneath this general structure, sonata form is difficult to pin down to a single model.
The standard definition focuses on the thematic and harmonic organization of tonal materials that are presented in an exposition, elaborated and contrasted in a development and then resolved harmonically and thematically in a recapitulation. In addition, the standard definition recognizes that an introduction and a coda may be present. Each of the sections is often further divided or characterized by the particular means by which it accomplishes its function in the form.
After its establishment, the sonata form became the most common form in the first movement of works entitled "sonata", as well as other long works of classical music, including the symphony, concerto, string quartet, and so on. Accordingly, there is a large body of theory on what unifies and distinguishes practice in the sonata form, both within and between eras. Even works that do not adhere to the standard description of a sonata form often present analogous structures or can be analyzed as elaborations or expansions of the standard description of sonata form.
Defining 'sonata form'
According to the Grove Dictionary of Music and Musicians, sonata form is "the most important principle of musical form, or formal type, from the Classical period well into the 20th century". As a formal model it is usually best exemplified in the first movements of multi-movement works from this period, whether orchestral or chamber, and has, thus, been referred to frequently as "first-movement form" or "sonata-allegro form" (since the typical first movement in a three- or four-movement cycle will be in allegro tempo). However, as what Grove, following Charles Rosen, calls a "principle"—a typical approach to shaping a large piece of instrumental music—it can be seen to be active in a much greater variety of pieces and genres, from minuet to concerto to sonata-rondo. It also carries with it expressive and stylistic connotations: "sonata style"—for Donald Tovey and other theorists of his time—was characterized by drama, dynamism, and a "psychological" approach to theme and expression.
Although the Italian term sonata often refers to a piece in sonata form, it is important to separate the two. As the title for a single-movement piece of instrumental music, sonata—the past participle of suonare, "to play [an instrument]", as opposed to cantata, the past participle of cantare, "to sing"—covers many pieces from the Baroque and mid-18th century that are not "in sonata form". Conversely, in the late 18th century or "Classical" period, the title "sonata" is typically given to a work composed of three or four movements. Nonetheless, this multi-movement sequence is not what is meant by sonata form, which refers to the structure of an individual movement.
The definition of sonata form in terms of musical elements sits uneasily between two historical eras. Although the late 18th century witnessed the most exemplary achievements in the form, above all from Joseph Haydn and Wolfgang Amadeus Mozart, the compositional theory of the time did not use the term "sonata form". Perhaps the most extensive contemporary description of the sonata-form type of movement was given by the theorist Heinrich Christoph Koch in 1793: like earlier German theorists and unlike many of the descriptions of the form we are used to today, he defined it in terms of the movement's plan of modulation and principal cadences, without saying a great deal about the treatment of themes. Seen in this way, sonata form was closest to binary form, out of which it probably developed.
The model of the form that is often taught currently tends to be more thematically differentiated. It was originally promulgated by Anton Reicha in Traité de haute composition musicale in 1826, by Adolf Bernhard Marx in Die Lehre von der musikalischen Komposition in 1845, and by Carl Czerny in 1848. Marx may be the originator of the term "sonata form". This model was derived from the study and criticism of Beethoven's piano sonatas.
Definition as a formal model
A sonata-allegro movement is divided into sections. Each section is felt to perform specific functions in the musical argument.
It may begin with an introduction, which is, in general, slower than the main movement.
The first required section is the exposition. The exposition presents the primary thematic material for the movement: one or two themes or theme groups, often in contrasting styles and in opposing keys, connected by a modulating transition. The exposition typically concludes with a closing theme, a codetta, or both.
The exposition is followed by the development where the harmonic and textural possibilities of the thematic material are explored.
The development then re-transitions back to the recapitulation where the thematic material returns in the tonic key, and for the recapitulation to complete the musical argument, material that has not been stated in the tonic key is "resolved" by being played, in whole or in part, in the tonic.
The movement may conclude with a coda, beyond the final cadence of the recapitulation.
The term 'sonata form' is controversial and has been called misleading by scholars and composers almost from its inception. Its originators implied that there was a set template to which Classical and Romantic composers aspired, or should aspire. However, sonata form is currently viewed as a model for musical analysis, rather than compositional practice. Although the descriptions on this page could be considered an adequate analysis of many first-movement structures, there are enough variations that theorists such as Charles Rosen have felt them to warrant the plural in 'sonata forms'.
These variations include, but are not limited to:
a monothematic exposition, where the same material is presented in different keys, often used by Haydn;
a 'third subject group' in a different key than the other two, used by Schubert (e.g. in the String Quintet, D. 956), and Bruckner's Symphony No. 4;
the first subject recapitulated in the 'wrong' key, often the subdominant, as in Mozart's Piano Sonata No. 16 in C, K. 545 and Schubert's Symphony No. 5;
the second subject group recapitulated in a key other than the tonic, as in Richard Strauss's Symphony No. 2;
and an extended coda section that pursues developmental, rather than concluding, processes, often found in Beethoven's middle-period works, such as his Symphony No. 3.
Through the Romantic period, formal distortions and variations become so widespread (Mahler, Elgar and Sibelius among others are cited and studied by James Hepokoski) that 'sonata form' as it is outlined here is not adequate to describe the complex musical structures that it is often applied to.
In the context of the many late-Baroque extended binary forms that bear similarities to sonata form, sonata form can be distinguished by the following three characteristics:
a separate development section including a retransition
the simultaneous return of the first subject group and the tonic
a full (or close to full) recapitulation of the second subject group
Outline of sonata form
The standard description of the sonata form is:
Introduction
The introduction section is optional, or may be reduced to a minimum. If it is extended, it is, in general, slower than the main section and frequently focuses on the dominant key. It may or may not contain material that is later stated in the exposition. The introduction increases the weight of the movement (such as the famous dissonant introduction to Mozart's "Dissonance" Quartet, K. 465), and also permits the composer to begin the exposition with a theme that would be too light to start on its own, as in Haydn's Symphony No. 103 ("The Drumroll") and Beethoven's Quintet for Piano and Winds, Op. 16. The introduction usually is not included in the exposition repeat: the Pathétique is a possible counterexample. Much later, Chopin's Piano Sonata No. 2 (Op. 35) is a clear example where the introduction is also included.
On occasion, the material of introduction reappears in its original tempo later in the movement. Often, this occurs as late as the coda, as in Mozart's String Quintet in D major, K. 593, Haydn's "Drumroll" Symphony, Beethoven's Piano Sonata No. 8 ("Pathétique"), or Schubert's Symphony No. 9 ("Great"). Sometimes it can appear earlier: it occurs at the beginning of the development in the Pathétique Sonata, and at the beginning of the recapitulation of Schubert's Symphony No. 1.
Exposition
The primary thematic material for the movement is presented in the exposition. This section can be further divided into several sections. The same section in most sonata form movements has prominent harmonic and thematic parallelisms (although in some works from the 19th century and onward, some of these parallelisms are subject to considerable exceptions), which include:
First subject group, P (Primary) – this consists of one or more themes, all in the tonic key. Although there are exceptions, most pieces follow this form.
Transition, T – in this section the composer modulates from the key of the first subject to the key of the second. If the first group is in a major key, the second group will usually be in the dominant key. However, if the first group is in a minor key, the second group will usually be the relative major.
Second subject group, S – one or more themes in a different key (typically the dominant) from the first group. The material of the second group is often different in rhythm or mood from that of the first group (frequently, it is more lyrical) and is often stated at a piano dynamic.
Closing zone (or closing area), C – a suffix after the end of the second subject group that reinforces the new key area. C involves musical material that differs from what was heard in S, and often includes distinctly new thematic material.
The exposition is commonly repeated, particularly in classical and early romantic works, and more likely in solo or chamber works and symphonies than in concerti. Often, though not always, first and second endings are employed during the last measure(s) of the exposition: the first ending points back to the tonic, where the exposition began, and the second ending points towards the development.
Development
In general, the development starts in the same key as the exposition ended, and may move through many different keys during its course. It will usually consist of one or more themes from the exposition altered and on occasion juxtaposed and may include new material or themes—though exactly what is acceptable practice is a point of contention. Alterations include taking material through distant keys, breaking down of themes and sequencing of motifs, and so forth.
The development varies greatly in length from piece to piece and from time period to time period, sometimes being relatively short compared to the exposition (e.g., the first movement of Eine kleine Nachtmusik) and in other cases quite long and detailed (e.g., the first movement of the "Eroica" Symphony). Developments in the Classical era are typically shorter due to how much composers of that era valued symmetry, unlike the more expressive Romantic era in which development sections gain a much greater importance. However, it almost always shows a greater degree of tonal, harmonic, and rhythmic instability than the other sections. In a few cases, usually in late Classical and early Romantic concertos, the development section consists of or ends with another exposition, often in the relative minor of the tonic key.
In general, the development ends by returning to the tonic key in preparation for the recapitulation. (On occasion, it will actually return to the sub-dominant key and then proceed with the same transition as in the exposition.) The transition from the development to the recapitulation is a crucial moment in the work. The last part of the development section is called the retransition: it prepares for the return of the first subject group in the tonic.
Exceptions include the first movement of Brahms's Piano Sonata No. 1. The general key of the movement is C major, and it would then follow that the retransition should stress the dominant seventh chord on G. Instead, it builds in strength over the dominant seventh chord on C, as if the music were proceeding to F major, only to take up immediately the first theme in C major. Another exception is the fourth movement of Schubert's Symphony No. 9. The home key of the movement is C major. The retransition prolongs the dominant chord on G, but suddenly takes up the first theme in the flattened mediant E♭ major.
A particularly common exception is for the dominant to be substituted with the dominant of the relative minor key: one example is the first movement of Haydn's String Quartet in E major, Op. 54 No. 3.
Occasionally, the retransition can begin with a false recapitulation, in which the opening material of the first theme group is presented before the development has completed. The surprise that ensues when the music continues to modulate toward the tonic can be used for either comic or dramatic effect. An example occurs in the first movement of Haydn's String Quartet in G major, Op. 76 No. 1.
Recapitulation
The recapitulation is an altered repeat of the exposition, and consists of:
First subject group – normally given prominence as the highlight of a recapitulation, it is usually in exactly the same key and form as in the exposition.
Transition – often the transition is carried out by introducing novel material: a kind of additional brief development, called a "secondary development".
Second subject group – usually in roughly the same form as in the exposition, but now in the home key, which sometimes involves change of mode from major to minor, or vice versa, as occurs in the first movement of Mozart's Symphony No. 40 (K. 550). More often, however, it may be recast in the parallel major of the home key (for example, C major when the movement is in C minor like Beethoven's Symphony No. 5 in C Minor, op. 67/I). Key here is more important than mode (major or minor); the recapitulation provides the needed balance even if the material's mode is changed, so long as there is no longer any key conflict.
Exceptions to the recapitulation form include Mozart and Haydn works that often begin with the second subject group when the first subject group has been elaborated at length in the development. If a theme from the second subject group has been elaborated at length in the development in a resolving key such as the tonic major or minor or the subdominant, it may also be omitted from the recapitulation. Examples include the opening movements of Mozart's piano sonata in C minor, K. 457, and Haydn's String Quartet in G major, Op. 77 No. 1.
After the closing cadence, the musical argument proper is said to be completed harmonically. If the movement continues, it is said to have a coda.
Coda
The coda is optional in Classical-era works, but became essential in many Romantic works. After the final cadence of the recapitulation, the movement may continue with a coda that will contain material from the movement proper. Codas, when present, vary considerably in length, but like introductions are not generally part of the "argument" of the work in the Classical era. Codas became increasingly important and essential parts of the sonata form in the nineteenth century. The coda often ends with a perfect authentic cadence in the original key. Codas may be quite brief tailpieces, typically in the Classical era, or they may be very long and elaborate. An example of the more extended type is the coda to the first movement of Beethoven's Eroica Symphony, and an exceptionally long coda appears at the end of the finale of Beethoven's Symphony No. 8.
Explanations for why an extended coda is present vary. One reason may be to omit the repeat of the development and recapitulation sections found in earlier sonata forms of the 18th century. Indeed, Beethoven's extended codas often serve the purpose of further development of thematic material and resolution of ideas left unresolved earlier in the movement. Another role that these codas sometimes serve is to return to the minor mode in minor-key movements where the recapitulation proper concludes in the parallel major, as in the first movements of Beethoven's Symphony No. 5 or Schumann's Piano Concerto, or rarely, to restore the home key after an off-tonic recapitulation, such as in the first movements of Brahms's Clarinet Quintet and Dvořák's Symphony No. 9.
Variations on the standard schema
Monothematic expositions
It is not necessarily the case that the move to the dominant key in the exposition is marked by a new theme. Haydn in particular was fond of using the opening theme, often in a truncated or otherwise altered form, to announce the move to the dominant, as in the first movement of his Sonata Hob. XVI/49 in E♭ major. Mozart also occasionally wrote such expositions: for instance in the Piano Sonata K. 570 or the String Quintet K. 593. Such expositions are often called monothematic, meaning that one theme serves to establish the opposition between tonic and dominant keys. This term is misleading, since most "monothematic" works have multiple themes: most works so labeled have additional themes in the second subject group. Rarely, as in the fourth movement of Haydn's String Quartet in B♭ major, Op. 50, No. 1, did composers perform the tour de force of writing a complete sonata exposition with just one theme. A more recent example is Edmund Rubbra's Symphony No. 2.
The fact that so-called monothematic expositions usually have additional themes is used by Charles Rosen to illustrate his theory that the Classical sonata form's crucial element is some sort of dramatization of the arrival of the dominant.See his book The Classical Style (New York: Norton) Using a new theme was a very common way to achieve this, but other resources such as changes in texture, salient cadences and so on were also accepted practice.
No transitions between the first and second subject groups
In some sonata-form works, especially in the Classical period, there is no transitional material linking the subject groups. Instead, the piece moves straight from the first subject group to the second subject group via common-tone modulation. This happens in the first movement of Mozart's Symphony No. 31 and again in the third movement of his Symphony No. 34. It also occurs in the first movement of Beethoven's Symphony No. 1. In the exposition, the first subject group ends on a half-cadence in tonic, and the second subject group immediately follows in the dominant key (without a transition).
Expositions that modulate to other keys
The key of the second subject may be something other than the dominant (for a major-mode sonata movement) or relative major (for a minor-key movement). A second option for minor-mode sonata form movements was to modulate to the minor dominant; this option, however, robs the sonata structure of the space of relief and comfort that a major-mode second theme would bring, and was therefore used primarily for a bleak, grim effect, as Beethoven did with some frequency. Mendelssohn also did this in the first movement of his Symphony No. 3 and the last movement of his Symphony No. 4.
About halfway through his career, Beethoven also began to experiment with other tonal relationships between the tonic and the second subject group. The most common practice, for Beethoven and many other composers from the Romantic era, was to use the mediant or submediant, rather than the dominant, for the second group. For instance, the first movement of the "Waldstein" sonata, in C major, modulates to the mediant E major, while the opening movement of the "Hammerklavier" sonata, in B♭ major, modulates to the submediant G major, and the String Quartet No. 13 in the same key modulates to the flattened submediant key of G♭ major. Tchaikovsky also implemented this practice in the last movement of his Symphony No. 2; the movement is in C major and modulates to the flattened submediant A♭ major. The young Chopin even experimented with expositions that do not modulate at all, in the opening movements of his Piano Sonata No. 1 (remaining in C minor throughout) and his Piano Concerto No. 1 (moving from E minor to E major).
Beethoven began also to use the submediant major with more frequency in minor-key sonata-form movements, as in the first movements of Symphony No. 9, Piano Sonata No. 32, and String Quartets No. 11 and No. 15. The latter case transposes the second repeat of its exposition by a fifth, starting on the minor dominant (instead of the tonic) and finishing on the major mediant (instead of the submediant). The first movement of Richard Strauss's Symphony No. 2, in F minor, modulates to the submediant D♭ major, as do the F minor first movements of Brahms' first clarinet sonata and piano quintet; all three works balance this downward third by moving up to the major mediant (A♭ major) for the key of the second movement.
Rarely, a major-mode sonata form movement will modulate to a minor key for the second subject area, such as the mediant minor (Beethoven Sonata Op. 31/1, i), the relative minor (first movements of Beethoven Triple Concerto and Brahms Piano Trio No. 1) or even the minor dominant (Brahms Piano Concerto No. 2, i). In such cases, the second theme will often return initially in the tonic minor in the recapitulation, with the major mode restored later on.
During the late Romantic period, it was also possible to modulate to remote tonal areas to represent divisions of the octave. In the first movement of Tchaikovsky's Symphony No. 4, the first subject group is in the tonic F minor but modulates to A♭ minor and then to B major for the second subject group. The recapitulation begins in D minor and modulates to F major, and goes back to the parallel F minor for the coda.
Also in the late Romantic period, it was possible for a minor-key sonata form movement to modulate to the major dominant, as in the first movements of Tchaikovsky's Symphony No. 1 and Brahms' Symphony No. 4.
Expositions with more than two key areas
The exposition need not only have two key areas. Some composers, most notably Schubert, composed sonata forms with three or more key areas (see three-key exposition). The first movement of Schubert's Quartet in D minor, D. 810 ("Death and the Maiden"), for example, has three separate key and thematic areas, in D minor, F major, and A minor. Similarly, Chopin's Piano Concerto in F minor uses F minor, A♭ major, and C minor in its first movement's exposition. In both cases, the transition is i–III–v, an elaboration of the minor schema of either using i–III or i–v. This is by no means the only scheme, however: the opening movement of Schubert's Violin Sonata in G minor, D. 408, uses the scheme i–III–VI, and the opening movement of Schubert's Symphony No. 2 in B♭ major, D. 125, uses the scheme I–IV–V. The first movement of Tchaikovsky's Symphony No. 5 uses the scheme i–v–VII. An extreme example is the finale to Schubert's Symphony No. 6, D. 589, which has a six-key exposition (C major, A♭ major, F major, A major, E♭, and G major), with a new theme for each key.
The second subject group can start in a particular key and then modulate to that key's parallel major or minor. In the first movement of Brahms' Symphony No. 1 (in C minor), the second subject group begins in the relative E♭ major and goes to the parallel E♭ minor. Similarly, the opening movement of Dvorak's Symphony No. 9 in E minor has its second subject group start in the minor mediant, G minor, and then move to its parallel, G major. And in the opening movement of his Symphony No. 6 in D major, the first theme of the second subject group is in the relative B minor while the second theme is in the parallel submediant B major.
Modulations within the first subject group
The first subject group need not be entirely in the tonic key. In the more complex sonata expositions there can be brief modulations to fairly remote keys, followed by reassertion of the tonic. For example, Mozart's String Quintet in C, K. 515, visits C minor and D♭ major as chromaticism within the C major first subject group, before finally moving to D major, the dominant of the dominant, preparing the second subject group in the dominant, G major. Many works by Schubert and later composers utilized even further harmonic convolutions. In the first subject group of Schubert's Piano Sonata in B♭, D. 960, for example, the theme is presented three times, in B♭ major, in G♭ major, and then again in B♭ major. The second subject group is even more wide-ranging. It begins in F♯ minor, moves into A major, then through B♭ major to F major.
Recapitulations in the "wrong key"
In the recapitulation section, the first subject group may be in a key other than the tonic, most often the subdominant, known as a "subdominant recapitulation". In some pieces by Mozart, such as Mozart's Piano Sonata No. 16 in C, K. 545, or the finale of his String Quartet No. 14 in G, K. 387, the first subject group will be in the subdominant and then modulate back to tonic for the second subject group and coda. This case is also found in the first movement of Beethoven's "Kreutzer" sonata. Schubert was also a prominent user of the subdominant recapitulation; it appears for example in the opening movements of his Symphonies No. 2 and No. 5, as well as those of his piano sonatas D 279, D 459, D 537, D 575, and the finale of D 664. Sometimes this effect is also used for false reprises in the "wrong key" that are soon followed by the actual recapitulation in the tonic, such as in the first movement of Haydn's quartet Op. 76 No. 1 in G (false reprise in the subdominant), or the finale of Schubert's piano sonata in A, D 959 (false reprise in the major submediant). A special case is the recapitulation that begins in the tonic minor, for example in the slow movement of Haydn's quartet Op. 76 No. 4 in E♭, or the opening movement of Haydn's Symphony No. 47 in G major. In the Classical period, the subdominant is the only possible substitute for the tonic at this position (because any other key would need resolution and would have to be introduced as a false reprise in the development), but with the erosion of the distinction between the sharp and flat directions and the blurring of tonal areas true recapitulations beginning in other keys became possible after around 1825.
It is possible for the first subject group to begin in tonic (or a key other than tonic), modulate to another key and then back to tonic for the second subject group. In the finale of the original 1872 version of Tchaikovsky's Symphony No. 2, the first subject group begins in the tonic C major, modulates to E♭ major, then through E major, and then modulates back to tonic for the second subject group and coda. And in the last movement of Schubert's Symphony No. 9 in C major, the first subject group is in the flattened mediant E♭ major, modulates to the subdominant F major and then back to tonic for the second subject group and coda. It is also possible to have the second subject group in a key other than tonic while the first subject group is in the home key. For instance in the first movement of Richard Strauss's Symphony No. 2 in F minor, the recapitulation begins with the first subject group in tonic but modulates to the mediant A♭ major for the second subject group before modulating back to F minor for the coda. Another example is the first movement of Dvorak's Symphony No. 9. The recapitulation begins in the tonic E minor for the first subject group, but the second subject group modulates to G-sharp minor, then through A-flat major before modulating back to the tonic key for the coda. Similarly, in Beethoven's "Waldstein" Sonata, the first subject group is in the tonic C major, then modulates to A major for the first part of the second subject group but quickly goes through A minor to modulate back to tonic for the rest of the second subject group and coda.
Another possibility is for both subject groups in the recapitulation to go through multiple keys. In the first movement of Schubert's Symphony No. 8, the first subject group begins in the tonic B minor but modulates to E minor and then to F♯ minor. The second subject group starts in the mediant D major before modulating to B major, the parallel major of the tonic.
Romantic works even exhibit progressive tonality in sonata form: for example, the second movement 'Quasi-Faust' from Charles-Valentin Alkan's Grande sonate 'Les quatre âges' is in D♯ minor, and while the exposition travels from D♯ to the major subdominant G♯ major, the recapitulation begins again in D♯ minor and ends in the relative major F♯ major, and stays there till the end of the movement. Such a scheme may have been constructed to conform with the programmatic nature of the movement, but also fits well with the Romantic penchant for beginning a work at maximum tension and decreasing the tension afterwards, so that the point of ultimate stability is not reached until the last possible moment. (Furthermore, the identification of a minor key with its relative major is common in the Romantic period, supplanting the earlier Classical identification of a minor key with its parallel major.)
Partial or varied recapitulations
In some pieces in sonata form, in the recapitulation, the first subject group is omitted, leaving only the second subject group, as in the second movement of Haydn's Sonata Hob. XVI/35, as well as the opening movements of Chopin's Piano Sonata No. 2 and No. 3. It is also possible for the first subject group to be slightly different from its form in the exposition, as in the fourth movement of Dvorak's Symphony No. 9. Another example occurs in the finale of Mozart's string quartet K. 387, where the opening of the first subject group is cut, and in the quintet K. 515, where a later portion of the first subject group is cut. On the other hand, it is also possible for the subject groups to be reversed in order, as in the fourth movement of Bruckner's Symphony No. 7, or the first movement of Mozart's piano sonata in D major, K. 311. The second subject group's melody can also differ from that of the exposition, as in Haydn's Symphony No. 44. Such melodic adjustment is common in minor-key sonata forms, when the mode of the second subject needs to be changed, for example in the opening movement of Mozart's wind serenade K. 388. In rare cases, the second subject theme can be omitted, as in the finale of Tchaikovsky's Violin Concerto in D major.
Truncated sonata form
Occasionally, especially in some Romantic works, the sonata form extends only as far as the end of the exposition, at which point the piece transitions directly into the next movement instead of a development section. One example is Henryk Wieniawski's Violin Concerto No. 2 in D minor. Another example is Fritz Seitz's Violin Concertos for students, where such a truncated sonata form is used ostensibly to cut down on the first movements' length. Sometimes, the third movement of such works is the recapitulation of the first movement (one example being Franz Strauss' Horn Concerto in C Minor), making the entire work effectively a single-movement sonata.
Some Classical slow movements involve a different sort of truncation, in which the development section is replaced altogether by a short retransition. This occurs in the slow movements of Mozart's quartets K. 387, K. 458, K. 465, K. 575, and K. 589. It is also common in overtures, occurring for example in Mozart's overture to Le nozze di Figaro, or Rossini's overture to Il barbiere di Siviglia. This is distinct from a short development, such as in the opening movement of Mozart's Violin Sonata in G major, K. 379.
Another instance of a truncated sonata form omits the development section altogether, with the recapitulation immediately following the exposition (even without a retransitional passage). This occurs in the first movement of Tchaikovsky's Serenade for Strings, and is known as sonatina form.
In concerti
An important variant on traditional sonata-allegro form is found in the first movement of the Classical concerto. Here, the sonata-allegro's customary 'repeated exposition' is replaced by two different but related sections: the 'tutti exposition' and the 'solo exposition'. Prototypically the 'tutti exposition' does not feature the soloist (except, in early classical works, in a 'continuo' role), and does not contain the decisive sonata-exposition modulation to the secondary key. Only when the 'solo exposition' is under way does the solo instrument assert itself and participate in the move to (classically) the dominant or relative major. The situation is only seemingly different in the case of Mozart's concerto No. 9, where the soloist is heard at the outset: as the later unfolding of such movements makes clear, the opening piano solo or early piano flourishes actually precede the start of the exposition proper. This presentation is also found in works of the Classical-to-Romantic transition, such as Beethoven's piano concertos No. 4 and No. 5, and in Romantic concertos, like Grieg's A minor concerto or Brahms' B♭ major concerto.
A structural feature that the special textural situation of the concerto makes possible is the 'ownership' of certain themes or materials by the solo instrument; such materials will thus not be exposed until the 'solo' exposition. Mozart was fond of deploying his themes in this way.
Towards the end of the recapitulation of a concerto movement in sonata form, there is usually a cadenza for the soloist alone. This has an improvisatory character (it may or may not actually be improvised), and, in general, serves to prolong the harmonic tension on a dominant-quality chord before the orchestra ends the piece in the tonic.
Some dispute the existence of a "double exposition": they would say that the first subject actually extends from the start of the "tutti exposition" through to the first subject of the "solo exposition", meaning there is only one exposition.
History
The term sonata is first found in the 17th century, when instrumental music had just begun to become increasingly separated from vocal music. The original meaning of the term (derived from the Italian word suonare, to sound on an instrument) referred to a piece for playing, distinguished from cantata, a piece for singing. At this time, the term implied a binary form, usually AABB with some aspects of three-part forms. Early examples of simple pre-Classical sonata forms include Pergolesi's Trio Sonata No. 3 in G major.
The Classical era established the norms of structuring first movements and the standard layouts of multi-movement works. At first there was a wide variety of layouts and formal structures within first movements, which gradually settled into expected norms of composition. The practice of Haydn and Mozart, as well as other notable composers, became increasingly influential on a generation that sought to exploit the possibilities offered by the forms that Haydn and Mozart had established in their works. In time, theory on the layout of the first movement became more and more focused on understanding the practice of Haydn, Mozart and, later, Beethoven. Their works were studied, patterns and exceptions to those patterns identified, and the boundaries of acceptable or usual practice set by the understanding of their works. The sonata form as it is described is strongly identified with the norms of the Classical period in music. Even before it had been described, the form had become central to music making, absorbing or altering other formal schemas for works. Examples include Beethoven's Appassionata sonata.
The Romantic era in music was to accept the centrality of this practice, codify the form explicitly and make instrumental music in this form central to concert and chamber composition and practice, in particular for works that were meant to be regarded as "serious" works of music. Various controversies in the 19th century would center on exactly what the implications of "development" and sonata practice actually meant, and what the role of the Classical masters was in music. It is ironic that, at the same time that the form was being codified (by the likes of Czerny and so forth), composers of the day were writing works that flagrantly violated some of the principles of the codified form.
It has continued to be influential throughout the subsequent history of classical music into the modern period. The 20th century brought a wealth of scholarship that sought to found the theory of the sonata form on basic tonal laws, and it saw a continued expansion of acceptable practice, leading to the formulation of ideas by which there existed a "sonata principle" or "sonata idea" that unified works of the type, even if they did not explicitly meet the demands of the normative description.
Sonata form and other musical forms
Sonata form shares characteristics with both binary form and ternary form. In terms of key relationships, it is very like binary form, with a first half moving from the home key to the dominant and the second half moving back again (this is why sonata form is sometimes known as compound binary form); in other ways it is very like ternary form, being divided into three sections, the first (exposition) of a particular character, the second (development) in contrast to it, the third section (recapitulation) the same as the first.
The early binary sonatas by Domenico Scarlatti provide excellent examples of the transition from binary to sonata-allegro form. Among the many sonatas are numerous examples of the true sonata form being crafted into place.
Sonata theory
The sonata form is a guide to composers as to the schematic for their works, for interpreters to understand the grammar and meaning of a work, and for listeners to understand the significance of musical events. A host of musical details are determined by the harmonic meaning of a particular note, chord or phrase. The sonata form, because it describes the shape and hierarchy of a movement, tells performers what to emphasize, and how to shape phrases of music. Its theory begins with the description, in the 18th century, of schematics for works, and was codified in the early 19th century. This codified form is still used in the pedagogy of the sonata form.
In the 20th century, emphasis moved from the study of themes and keys to how harmony changed through the course of a work and the importance of cadences and transitions in establishing a sense of "closeness" and "distance" in a sonata. The work of Heinrich Schenker and his ideas about "foreground", "middleground", and "background" became enormously influential in the teaching of composition and interpretation. Schenker believed that inevitability was the key hallmark of a successful composer, and that, therefore, works in sonata form should demonstrate an inevitable logic.
In the simplest example, playing of a cadence should be in relationship to the importance of that cadence in the overall form of the work. More important cadences are emphasized by pauses, dynamics, sustaining and so on. False or deceptive cadences are given some of the characteristics of a real cadence, and then this impression is undercut by going forward more quickly. For this reason, changes in performance practice bring changes to the understanding of the relative importance of various aspects of the sonata form. In the Classical era, the importance of sections and cadences and underlying harmonic progressions gives way to an emphasis on themes. The clarity of strongly differentiated major and minor sections gives way to a more equivocal sense of key and mode. These changes produce changes in performance practice: when sections are clear, then there is less need to emphasize the points of articulation. When they are less clear, greater importance is placed on varying the tempo during the course of the music to give "shape" to the music.
Over the last half-century, a critical tradition of examining scores, autographs, annotations, and the historical record has changed, sometimes subtly, on occasion dramatically, the way the sonata form is viewed. It has led to changes in how works are edited; for example, the phrasing of Beethoven's piano works has undergone a shift to longer and longer phrases that are not always in step with the cadences and other formal markers of the sections of the underlying form. Comparing the recordings of Schnabel, from the beginning of modern recording, with those of Barenboim and then Pratt shows a distinct shift in how the structure of the sonata form is presented to the listener over time.
For composers, the sonata form is like the plot of a play or movie script, describing when the crucial plot points are, and the kinds of material that should be used to connect them into a coherent and orderly whole. At different times the sonata form has been taken to be quite rigid, and at other times a freer interpretation has been considered permissible.
In the theory of sonata form it is often asserted that other movements stand in relation to the sonata-allegro form: either, per Charles Rosen, that they are really "sonata forms", plural, or, as Edward T. Cone asserts, that the sonata-allegro is the ideal to which other movement structures "aspire". This is particularly seen to be the case with other movement forms that commonly occur in works thought of as sonatas. As a sign of this the word "sonata" is sometimes prepended to the name of the form, in particular in the case of the sonata rondo form. Slow movements, in particular, are seen as being similar to sonata-allegro form, with differences in phrasing and less emphasis on the development.
However, Schoenberg and other theorists who used his ideas as a point of departure see the theme and variations as having an underlying role in the construction of formal music, calling the process continuing variation, and argue from this idea that the sonata-allegro form is a means of structuring the continuing variation process. Theorists of this school include Erwin Ratz and William E. Caplin.
Subsections of works are sometimes analyzed as being in sonata form, in particular single movement works, such as the Konzertstück in F minor of Carl Maria von Weber.
From the 1950s onward, Hans Keller developed a 'two-dimensional' method of analysis that explicitly considered form and structure from the point of view of listener expectations. In his work, the sonata-allegro was a well-implied 'background form' against whose various detailed features composers could compose their individual 'foregrounds'; the 'meaningful contradiction' of expected background by unexpected foreground was seen as generating the expressive content. In Keller's writings, this model is applied in detail to Schoenberg's 12-note works as well as the classical tonal repertoire. In recent times, two other musicologists, James Hepokoski and Warren Darcy, have presented, without reference to Keller, their analysis, which they term Sonata Theory, of the sonata-allegro form and the sonata cycle in terms of genre expectations, and categorized both the sonata-allegro movement and the sonata cycle by the compositional choices made to respect or depart from conventions. Their study focuses on the normative period of sonata practice, notably the works of Haydn, Mozart, Beethoven, Schubert, and their close contemporaries, and projects this practice forward into the development of the sonata-allegro form in the 19th and 20th centuries.
Theatre of the absurd
https://en.wikipedia.org/wiki/Theatre_of_the_absurd
The theatre of the absurd is a post–World War II designation for particular plays of absurdist fiction written by a number of primarily European playwrights in the late 1950s. It is also a term for the style of theatre the plays represent. The plays focus largely on ideas of existentialism and express what happens when human existence lacks meaning or purpose and communication breaks down. The structure of the plays is typically circular, with the finishing point the same as the starting point. Logical construction and argument give way to irrational and illogical speech and to the ultimate conclusion—silence.The Hutchinson Encyclopedia, Millennium Edition, Helicon 1999.
Origin
Critic Martin Esslin coined the term in his 1960 essay "The Theatre of the Absurd", which begins by focusing on the playwrights Samuel Beckett, Arthur Adamov, and Eugène Ionesco. Esslin says that their plays have a common denominator—the "absurd", a word that Esslin defines with a quotation from Ionesco: "absurd is that which has no purpose, or goal, or objective." The French philosopher Albert Camus, in his 1942 work The Myth of Sisyphus, describes the human situation as meaningless and absurd.
The absurd in these plays takes the form of man's reaction to a world apparently without meaning, or man as a puppet controlled or menaced by invisible outside forces. This style of writing was first popularized by the Eugène Ionesco play The Bald Soprano (1950). Although the term is applied to a wide range of plays, some characteristics coincide in many of the plays: broad comedy, often similar to vaudeville, mixed with horrific or tragic images; characters caught in hopeless situations forced to do repetitive or meaningless actions; dialogue full of clichés, wordplay, and nonsense; plots that are cyclical or absurdly expansive; either a parody or dismissal of realism and the concept of the "well-made play".
In his introduction to the book Absurd Drama (1965), Esslin wrote: The Theatre of the Absurd attacks the comfortable certainties of religious or political orthodoxy. It aims to shock its audience out of complacency, to bring it face to face with the harsh facts of the human situation as these writers see it. But the challenge behind this message is anything but one of despair. It is a challenge to accept the human condition as it is, in all its mystery and absurdity, and to bear it with dignity, nobly, responsibly; precisely because there are no easy solutions to the mysteries of existence, because ultimately man is alone in a meaningless world. The shedding of easy solutions, of comforting illusions, may be painful, but it leaves behind it a sense of freedom and relief. And that is why, in the last resort, the Theatre of the Absurd does not provoke tears of despair but the laughter of liberation.
Etymology
In the first edition of "The Theatre of the Absurd", Esslin quotes the French philosopher Albert Camus's essay "The Myth of Sisyphus", as it uses the word "absurdity" to describe the human situation: "In a universe that is suddenly deprived of illusions and of light, man feels a stranger. … This divorce between man and his life, the actor and his setting, truly constitutes the feeling of Absurdity."Camus, Albert. Le Mythe de Sisyphe (paris: Gallimard, 1942), p.18Camus, Albert. The Myth of Sisyphus and Other Essays. Vintage (May 7, 1991) p. 2
Esslin presents the four defining playwrights of the movement as Samuel Beckett, Arthur Adamov, Eugène Ionesco, and Jean Genet, and in subsequent editions he added a fifth playwright, Harold Pinter.Martin Esslin, The Theatre of the Absurd (Garden City, NY: Doubleday, 1961). (Subsequent references to this ed. appear within parentheses in the text.)Martin Esslin, The Theatre of the Absurd, 3rd ed. (New York: Vintage [Knopf], 2004). (Subsequent references to this ed. appear within parentheses in the text.) Other writers associated with this group by Esslin and other critics include Tom Stoppard,Terry Hodgson. The plays of Tom Stoppard: for stage, radio, TV and film.Palgrave Macmillan, 2001. , . p.181. Friedrich Dürrenmatt,Joel Agee. Dürrenmatt, Friedrich: Friedrich Dürrenmatt.University of Chicago Press, 2006. , . p. xi Fernando Arrabal,Felicia Hardison Londré, Margot Berthold. The history of world theater: from the English restoration to the present. Continuum International Publishing Group, 1999. , . p. 438 Edward Albee,Barbara Lee Horn. Edward Albee: a research and production sourcebook. Greenwood Publishing Group, 2003. , . pp. 13, 17 29, 40, 55, 232. Boris Vian,Neil Cornwell. The Absurd in Literature. Manchester University Press ND, 2006. . p. 280. and Jean Tardieu.
Precursors
Tragicomedy
The mode of most "absurdist" plays is tragicomedy.Esslin, pp. 323–324J. L. Styan. Modern Drama in Theory and Practice. Cambridge University Press, 1983 , p. 125 As Nell says in Endgame, "Nothing is funnier than unhappiness … it's the most comical thing in the world".Samuel Beckett. Endgame: a play in one act, followed by Act without words, a mime for one player. Grove Press, 1958. . pp. 18–19. Esslin cites William Shakespeare as an influence on this aspect of the "absurd drama".Esslin, pp. 321–323 Shakespeare's influence is acknowledged directly in the titles of Ionesco's Macbett and Stoppard's Rosencrantz and Guildenstern Are Dead. Friedrich Dürrenmatt says in his essay "Problems of the Theatre", "Comedy alone is suitable for us … But the tragic is still possible even if pure tragedy is not. We can achieve the tragic out of comedy. We can bring it forth as a frightening moment, as an abyss that opens suddenly; indeed, many of Shakespeare's tragedies are already really comedies out of which the tragic arises."Friedrich Dürrenmatt. "Problems of the Theatre". The Marriage of Mr. Mississippi. Grove Press, 1964. . pp. 30–31.
Though layered with a significant amount of tragedy, theatre of the absurd echoes other great forms of comedic performance, according to Esslin, from Commedia dell'arte to vaudeville.Styan, p. 126 Similarly, Esslin cites early film comedians and music hall artists such as Charlie Chaplin, the Keystone Cops and Buster Keaton as direct influences. (Keaton even starred in Beckett's Film in 1965.)Esslin, p. 325
Formal experimentation
As an experimental form of theatre, many theatre of the absurd playwrights employ techniques borrowed from earlier innovators. Writers and techniques frequently mentioned in relation to the theatre of the absurd include the 19th-century nonsense poets, such as Lewis Carroll or Edward Lear;Esslin, pp. 330–331 Polish playwright Stanisław Ignacy Witkiewicz;Esslin, pp. 382–385 the Russians Daniil Kharms,Neil Cornwell. The absurd in literature. Manchester University Press ND, 2006. . p. 143. Nikolai Erdman,John Freedman. The major plays of Nikolai Erdman: The warrant and The suicide. Routledge, 1995.. xvii. and others; Bertolt Brecht's distancing techniques in his "epic theatre";Esslin, pp. 365–368 and the "dream plays" of August Strindberg.J. L. Styan. The dark comedy: the development of modern comic tragedy. Cambridge University Press, 1968. . p. 217.
One commonly cited precursor is Luigi Pirandello, especially Six Characters in Search of an Author.Annette J. Saddik. Ed. "Experimental Innovations After the Second World War". Contemporary American Drama.Edinburgh University Press, 2007. . p. 28 Pirandello was a highly regarded theatrical experimentalist who wanted to bring down the fourth wall presupposed by the realism of playwrights such as Henrik Ibsen. According to W. B. Worthen, Six Characters and other Pirandello plays use "metatheatre—roleplaying, plays-within-plays, and a flexible sense of the limits of stage and illusion—to examine a highly-theatricalized vision of identity".Worthen, p. 702
Another influential playwright was Guillaume Apollinaire whose The Breasts of Tiresias was the first work to be called "surreal".Allan Lewis. "The Theatre of the 'Absurd' – Beckett, Ionesco, Genet". The Contemporary Theatre: The Significant Playwrights of Our Time. Crown Publishers, 1966. p. 260Rupert D. V. Glasgow. Madness, Masks, and Laughter: An Essay on Comedy. Fairleigh Dickinson Univ Press, 1995. . p. 332.Deborah B. Gaensbauer. The French theater of the absurd. Twayne Publishers, 1991. . p. 17
Pataphysics, surrealism, and Dadaism
A precursor is Alfred Jarry whose Ubu plays scandalized Paris in the 1890s. Likewise, the concept of 'pataphysics—"the science of imaginary solutions"—first presented in Jarry's Gestes et opinions du docteur Faustroll, pataphysicien (Exploits and Opinions of Dr. Faustroll, pataphysician)Jill Fell. Alfred Jarry, an imagination in revolt. Fairleigh Dickinson Univ Press, 2005. p. 53 was inspirational to many later absurdists, some of whom joined the Collège de 'pataphysique, founded in honor of Jarry in 1948Esslin, pp. 346–348 (Ionesco,Raymond Queneau, Marc Lowenthal. Stories & remarks. U of Nebraska Press, 2000. pp. ix–x Arrabal, and VianDavid Bellos. Georges Perec: a life in words : a biography. David R. Godine Publisher, 1993. p. 596 were given the title "transcendent satrape of the Collège de 'pataphysique"). The Theatre Alfred Jarry, founded by Antonin Artaud and Roger Vitrac, housed several absurdist plays, including ones by Ionesco and Adamov.Esslin, p. 373.Cornwell, p.170
In the 1860s, the gaúcho writer Qorpo-Santo, pseudonym of José Joaquim de Campos Leão, established himself as a Brazilian precursor of the theatre of the absurd, releasing during the last years of his life several theatrical works that anticipate the form. He remains little known, even in his homeland, but works such as "Mateus e Mateusa" are gradually being rediscovered by scholars in Brazil and around the world.
Artaud's "Theatre of Cruelty" (presented in Theatre and its Double) was a particularly important philosophical treatise. Artaud claimed theatre's reliance on literature was inadequate and that the true power of theatre was in its visceral impact.Antonin Artaud The Theatre and Its Double. Tr. Mary Caroline Richards. New York: Grove Weidenfeld, 1958., pp. 15–133.Styan, Modern p. 128Saddik, pp. 24–27. Artaud was a surrealist, and many other members of the surrealist group were significant influences on the absurdists.Esslin, pp. 372–375.Mel Gussow. Theatre on the edge: new visions, new voices. Hal Leonard Corporation, 1998. . p. 303.Eli Rozik. The roots of theatre: rethinking ritual and other theories of origin. University of Iowa Press, 2002. . p. 264.
Absurdism is also frequently compared to surrealism's predecessor, Dadaism (for example, the Dadaist plays by Tristan Tzara performed at the Cabaret Voltaire in Zürich).Richard Drain. Twentieth-century theatre: a sourcebook. Routledge, 1995. . pp. 5–7, 26. Many of the absurdists had direct connections with the Dadaists and surrealists. Ionesco,Eugène Ionesco. Present past, past present: a personal memoir. Da Capo Press, 1998. . p. 148.Lamont, pp. 41–42 Adamov,Esslin, p. 89Justin Wintle. Makers of modern culture. Routledge, 2002. . p. 3 and ArrabalC. D. Innes. Avant garde theatre, 1892–1992.Routledge, 1993. . p. 118. for example, were friends with surrealists still living in Paris at the time including Paul Eluard and André Breton, the founder of surrealism, and Beckett translated many surrealist poems by Breton and others from French into English.James Knowlson. Damned to Fame: The Life of Samuel Beckett. London. Bloomsbury Publishing, 1997. , p. 65Daniel Albright. Beckett and aesthetics.Cambridge University Press, 2003. . p. 10
Relationship with existentialism
Many of the absurdists were contemporaries of Jean-Paul Sartre, the philosophical spokesman for existentialism in Paris, but few absurdists actually committed to Sartre's own existentialist philosophy, as expressed in Being and Nothingness, and many of the absurdists had a complicated relationship with him. Sartre praised Genet's plays, stating that for Genet, "Good is only an illusion. Evil is a Nothingness which arises upon the ruins of Good".Jean-Paul Sartre. "Introduction to The Maids; and Deathwatch" The Maids; and Deathwatch. Grove Press, 1962. . p. 11.
Ionesco, however, hated Sartre bitterly.Eugène Ionesco. Present Past, Past Present. Da Capo Press, 1998. . p. 63. Ionesco accused Sartre of supporting communism but ignoring the atrocities committed by communists; he wrote Rhinoceros as a criticism of blind conformity, whether it be to Nazism or communism; at the end of the play, one man remains on Earth resisting transformation into a rhinoceros.Eugène Ionesco. Fragments of a Journal. Tr. Jean Stewart. London: Faber and Faber, 1968. p. 78.Rosette C. Lamont. Ionesco's imperatives: the politics of culture. University of Michigan Press, 1993. . p. 145. Sartre criticized Rhinoceros by questioning: "Why is there one man who resists? At least we could learn why, but no, we learn not even that. He resists because he is there.""Beyond Bourgeois Theatre" 6Lewis, p. 275. Sartre's criticism highlights a primary difference between the theatre of the absurd and existentialism: the theatre of the absurd shows the failure of man without recommending a solution.Lamont, p. 67. In a 1966 interview, Claude Bonnefoy, comparing the absurdists to Sartre and Camus, said to Ionesco, "It seems to me that Beckett, Adamov and yourself started out less from philosophical reflections or a return to classical sources, than from first-hand experience and a desire to find a new theatrical expression that would enable you to render this experience in all its acuteness and also its immediacy. If Sartre and Camus thought out these themes, you expressed them in a far more vital contemporary fashion." Ionesco replied, "I have the feeling that these writers – who are serious and important – were talking about absurdity and death, but that they never really lived these themes, that they did not feel them within themselves in an almost irrational, visceral way, that all this was not deeply inscribed in their language. With them it was still rhetoric, eloquence. With Adamov and Beckett it really is a very naked reality that is conveyed through the apparent dislocation of language."Claude Bonnefoy. Conversations with Eugène Ionesco. Trans. Jan Dawson. Holt, Rinehart and Winston, 1971. pp. 122–123.
In comparison to Sartre's concepts of the function of literature, Beckett's primary focus was on the failure of man to overcome "absurdity": the repetition of life even though the end result will be the same no matter what, and everything is essentially pointless. As James Knowlson says in Damned to Fame, Beckett's work focuses "on poverty, failure, exile and loss — as he put it, on man as a 'non-knower' and as a 'non-can-er'."Knowlson, p. 319 Beckett's own relationship with Sartre was complicated by a mistake made in the publication of one of his stories in Sartre's journal Les Temps Modernes.Knowlson, p. 325. Beckett said that, though he liked Nausea, he generally found the writing style of Sartre and Heidegger to be "too philosophical" and he considered himself "not a philosopher".Anthony Cronin, Isaac Cronin. Samuel Beckett: the last modernist. Da Capo Press, 1999. . p. 231.
History
The "absurd" or "new theater" movement was originally a Paris-based (and a Rive Gauche) avant-garde phenomenon tied to extremely small theatres in the Quartier Latin. Some of the absurdists, such as Jean Genet,Peter Norrish. New tragedy and comedy in France, 1945–1970.Rowman & Littlefield, 1988. . p. 107 Jean Tardieu,Felicia Hardison Londré, Margot Berthold. The history of world theater: from the English restoration to the present. Continuum International Publishing Group, 1999. . p. 428. and Boris Vian.,Bill Marshall, Cristina Johnston. France and the Americas: culture, politics, and history : a multidisciplinary encycopledia. ABC-CLIO, 2005. . p. 1187. were born in France. Many other absurdists were born elsewhere but lived in France, writing often in French: Beckett from Ireland; Ionesco from Romania; Arthur Adamov from Russia; Alejandro Jodorowsky from Chile and Fernando Arrabal from Spain.David Thatcher Gies. The Cambridge companion to modern Spanish culture. Cambridge University Press, 1999. . p. 229 As the influence of the absurdists grew, the style spread to other countries—with playwrights either directly influenced by absurdists in Paris or playwrights labelled absurdist by critics. In England, some of those whom Esslin considered practitioners of the theatre of the absurd include Harold Pinter, Tom Stoppard,Gabrielle H. Cody, Evert Sprinchorn. The Columbia encyclopedia of modern drama. Columbia University Press, 2007. . p. 1285. N. F. Simpson, James Saunders,Randall Stevenson, Jonathan Bate. The Oxford English Literary History: 1960–2000: The Last of England?. Oxford University Press, 2004. . p. 356. and David Campton;Stevenson, p. 358. in the United States, Edward Albee, Sam Shepard,Don Shewey. Sam Shepard. Da Capo Press, 1997. . pp. 123, 132. Jack Gelber,C. W. E. Bigsby. Modern American drama, 1945–2000. Cambridge University Press, 2000. . p. 124 and John Guare;Bigsby, p. 385. in Poland, Tadeusz Różewicz; Sławomir Mrożek, and Tadeusz Kantor;Cody, p. 1343 in Italy, Dino Buzzati;Gaetana Marrone, Paolo Puppa, Luca Somigli. Encyclopedia of Italian literary studies. CRC Press, 2006. . p. 335 and in Germany, Peter Weiss,Robert Cohen. Understanding Peter Weiss. Univ of South Carolina Press, 1993. . pp. 35–36. Wolfgang Hildesheimer, and Günter Grass. In India, both Mohit ChattopadhyayMarshall Cavendish. World and Its Peoples: Eastern and Southern Asia. Marshall Cavendish, 2007. . p. 408. and Mahesh Elkunchwar have also been labeled absurdists. Other international absurdist playwrights include Tawfiq el-Hakim from Egypt;William M. Hutchins. Tawfiq al-Hakim: a reader's guide. Lynne Rienner Publishers, 2003. . p. 1, 27. Hanoch Levin from Israel;Linda Ben-Zvi. Theater in Israel. University of Michigan Press, 1996. . p. 151. Miguel Mihura from Spain;Gies, p. 258 José de Almada Negreiros from Portugal;Anna Klobucka. The Portuguese nun: formation of a national myth. Bucknell University Press, 2000. . p. 88. Mikhail Volokhov Mikhail Volokhov from Russia; Yordan Radichkov from Bulgaria;Kalina Stefanova, Ann Waugh. Eastern European Theater After the Iron Curtain.Routledge, 2000. . p. 34 and playwright and former Czech president Václav Havel.
Major productions
Genet's The Maids (Les Bonnes) premiered in 1947.Gene A. Plunka. The Rites of Passage of Jean Genet: The Art and Aesthetics of Risk Taking. Fairleigh Dickinson Univ Press, 1992. . pp. 29, 304.
Ionesco's The Bald Soprano (La Cantatrice Chauve) was first performed on May 11, 1950, at the Théâtre des Noctambules. Ionesco followed this with The Lesson (La Leçon) in 1951 and The Chairs (Les Chaises) in 1952.Allan Lewis. Ionesco. Twayne Publishers, 1972. p. 33Lamont, p. 3
Beckett's Waiting for Godot was first performed on 5 January 1953 at the Théâtre de Babylone in Paris.Lawrence Graver, Raymond Federman. Samuel Beckett: The Critical Heritage. Routledge, 1997. . p. 88
In 1957, Genet's The Balcony (Le Balcon) was produced in London at the Arts Theatre.Plunka, pp. 29, 309
That May, Harold Pinter's The Room was presented at the Drama Studio at the University of Bristol.Ian Smith, Harold Pinter. Pinter in the theatre. Nick Hern Books, 2005. . p. 169. Pinter's The Birthday Party premiered in the West End in 1958.Smith, pp. 28–29
Albee's The Zoo Story premiered in West Berlin at the Schiller Theater Werkstatt in 1959.Barbara Lee Horn. Edward Albee: a research and production sourcebook. Greenwood Publishing Group, 2003. . p. 2
On October 28, 1958, Krapp's Last Tape by Beckett was first performed at the Royal Court Theatre in London.Graver, xvii
Arrabal's Picnic on the Battlefield (Pique-nique en campagne) came out in 1958.David Bradby, Maria M. Delgado. The Paris jigsaw: internationalism and the city's stages. Manchester University Press, 2002. . p. 204Styan, Modern p. 144
Genet's The Blacks (Les Nègres) was published that year but was first performed at the Théâtre de Lutèce in Paris on 28 October 1959.Plunka, pp. 29, 30, 309
1959 also saw the completion of Ionesco's Rhinoceros which premiered in Paris in January 1960 at the Odeon.Lamont, p. 275
Beckett's Happy Days was first performed at the Cherry Lane Theatre in New York on 17 September 1961.Graver, p. xviii
Albee's Who's Afraid of Virginia Woolf? also premiered in New York the following year, on October 13.
Pinter's The Homecoming premiered in London in June 1965 at the Aldwych Theatre.Peter Raby. The Cambridge companion to Harold Pinter. Cambridge University Press, 2001. . p. xv.
Weiss's Marat/Sade (The Persecution and Assassination of Jean-Paul Marat as Performed by the Inmates of the Asylum of Charenton Under the Direction of the Marquis de Sade) was first performed in West Berlin in 1964 and in New York City a year later.Peter Weiss, Robert Cohen. Marat/Sade; The investigation; and The shadow of the coachman's body. Continuum International Publishing Group, 1998. . p. xxvi.
Stoppard's Rosencrantz & Guildenstern Are Dead premiered at the Edinburgh Festival Fringe in 1966.Anthony Jenkins. The theatre of Tom Stoppard. Cambridge University Press, 1989. . p. 37.
Arrabal's Automobile Graveyard (Le Cimetière des voitures) was first performed in 1966.
Lebanese author Issam Mahfouz's play The Dictator premiered in Beirut in 1969.
Beckett's Catastrophe—dedicated to then-imprisoned Czech dissident playwright Václav Havel, who became president of Czechoslovakia after the 1989 Velvet Revolution—was first performed at the Avignon Festival on July 21, 1982.Knowlson, p. 741.Enoch Brater. Beyond Minimalism: Beckett's Late Style in the Theater. Oxford University Press US, 1990. . p. 139. The film version (Beckett on Film, 2001) was directed by David Mamet and performed by Pinter, Sir John Gielgud, and Rebecca Pidgeon.Chris Ackerley, S. E. Gontarski. The Grove companion to Samuel Beckett: a reader's guide to his works, life, and thought. Grove Press, 2004. . p. 44
Theatrical features
Plays within this group are absurd in that they focus not on logical acts, realistic occurrences, or traditional character development; they, instead, focus on human beings trapped in an incomprehensible world subject to any occurrence, no matter how illogical.Styan, Dark 218Saddik, p. 29Norrish, pp. 2–8. The theme of incomprehensibility is coupled with the inadequacy of language to form meaningful human connections. According to Martin Esslin, absurdism is "the inevitable devaluation of ideals, purity, and purpose".Esslin, p. 24 Absurdist drama asks its viewer to "draw his own conclusions, make his own errors".Esslin, p. 20 Though plays of the Theatre of the Absurd may be seen as nonsense, "they have something to say and can be understood".Esslin, p. 21 Esslin makes a distinction between the dictionary definition of absurd ("out of harmony" in the musical sense) and drama's understanding of the absurd: "Absurd is that which is devoid of purpose... Cut off from his religious, metaphysical, and transcendental roots, man is lost; all his actions become senseless, absurd, useless."Ionesco in Esslin, p. 23
Characters
The characters in absurdist drama are lost and floating in an incomprehensible universe and they abandon rational devices and discursive thought because these approaches are inadequate.Watt and Richardson 1154 Many characters appear as automatons stuck in routines speaking only in cliché (Ionesco called the Old Man and Old Woman in The Chairs "übermarionettes").Lamont, p. 72 Characters are frequently stereotypical, archetypal, or flat character types as in Commedia dell'arte.Anthony Cronin, Isaac Cronin. Samuel Beckett: the last modernist. Da Capo Press, 1999. . p. 424.Dave Bradby. Modern French Drama: 1940–1990. Cambridge University Press, 1991. . 58.Esslin, p. 402
The more complex characters are in crisis because the world around them is incomprehensible. Many of Pinter's plays, for example, feature characters trapped in an enclosed space menaced by some force the character cannot understand. Pinter's first play was The Room – in which the main character, Rose, is menaced by Riley who invades her safe space though the actual source of menace remains a mystery.Katherine H. Burkman. The dramatic world of Harold Pinter: its basis in ritual. Ohio State University Press, 1971 , . pp. 70–73. In Friedrich Dürrenmatt's The Visit, the main character, Alfred, is menaced by Claire Zachanassian; Claire, richest woman in the world, with a decaying body and multiple husbands throughout the play, has guaranteed a payout for anyone in the town willing to kill Alfred.Roger Alan Crockett. Understanding Friedrich Dürrenmatt.Univ of South Carolina Press, 1998. , . p.81 Characters in absurdist drama may also face the chaos of a world that science and logic have abandoned. Ionesco's recurring character Berenger, for example, faces a killer without motivation in The Killer, and Berenger's logical arguments fail to convince the killer that killing is wrong.Leonard Cabell Pronko. Avant-garde: the experimental theater in France. University of California Press, 1966. pp. 96–102. In Rhinocéros, Berenger remains the only human on Earth who has not turned into a rhinoceros and must decide whether or not to conform.Harold Bloom. Bloom's Major Dramatists: Eugène Ionesco. 2003. Infobase Publishing. p106-110.Robert B. Heilman. The Ghost on the Ramparts. University of Georgia Press, 2008 , . pp. 170–171. Characters may find themselves trapped in a routine, or in a metafictional conceit, trapped in a story; the title characters in Stoppard's Rosencrantz & Guildenstern Are Dead, for example, find themselves in a story (Hamlet) in which the outcome has already been written.Bradby, Modern p. 59Victor L. Cahn. Beyond Absurdity: The Plays of Tom Stoppard. London: Associated University Presses, 1979. pp. 36–39. Cahn asserts that though Stoppard began writing in the absurdist mode, in his increasing focus on order, optimism, and the redemptive power of art, Stoppard has moved "beyond" absurdism, as the title implies.
The plots of many absurdist plays feature characters in interdependent pairs, commonly either two males or a male and a female. Some Beckett scholars call this the "pseudocouple".Ackerley, pp. 334, 465, 508Alan Astro. Understanding Samuel Beckett. Univ of South Carolina Press, 1990. p. 116. The two characters may be roughly equal or have a begrudging interdependence (like Vladimir and Estragon in Waiting for Godot or the two main characters in Rosencrantz & Guildenstern Are Dead); one character may be clearly dominant and may torture the passive character (like Pozzo and Lucky in Waiting for Godot or Hamm and Clov in Endgame); the relationship of the characters may shift dramatically throughout the play (as in Ionesco's The LessonHinden, p. 401. or in many of Albee's plays, The Zoo StoryLeslie Kane. The language of silence: on the unspoken and the unspeakable in modern drama. Fairleigh Dickinson Univ Press, 1984. . pp. 159–160Lisa M. Siefker Bailey, Bruce J. Mann. Edward Albee: A Casebook. 2003. Routledge. pp. 33–44. for example).
Language
Despite its reputation for nonsense language, much of the dialogue in absurdist plays is naturalistic. The moments when characters resort to nonsense language or clichés—when words appear to have lost their denotative function, thus creating misunderstanding among the characters—make the theatre of the absurd distinctive.Esslin, p. 26 Language frequently gains a certain phonetic, rhythmical, almost musical quality, opening up a wide range of often comedic playfulness.Edward Albee, Philip C. Kolin. Conversations with Edward Albee. Univ. Press of Mississippi, 1988. . p. 189. Tardieu, for example, in the series of short pieces Theatre de Chambre arranged the language as one arranges music.Leonard Cabell Pronko. Avant-Garde. University of California Press, 2003. pp.155–156 Distinctively absurdist language ranges from meaningless clichés to vaudeville-style word play to meaningless nonsense.Jeanette R. Malkin. Verbal Violence in Contemporary Drama: From Handke to Shepard. Cambridge University Press, 1992. . p. 40. The Bald Soprano, for example, was inspired by a language book in which characters would exchange empty clichés that never ultimately amounted to true communication or true connection.Styan, Dark p. 221Erich Segal. The Death of Comedy. Harvard University Press, 2001. p. 422. Likewise, the characters in The Bald Soprano—like many other absurdist characters—go through routine dialogue full of clichés without actually communicating anything substantive or making a human connection.Saddik, p. 30Guido Almansi, Simon Henderson. Harold Pinter. Routledge, 1983. . p. 37. In other cases, the dialogue is purposefully elliptical; the language of absurdist theater becomes secondary to the poetry of the concrete and objectified images of the stage.Kane, pp. 17, 19 Many of Beckett's plays devalue language for the sake of the striking tableau.Saddik, p. 32 Harold Pinter—famous for his "Pinter pause"—presents more subtly elliptical dialogue; often the primary things characters should address are replaced by ellipsis or dashes. The following exchange between Aston and Davies in The Caretaker is typical of Pinter:
Aston: More or less exactly what you...
Davies: That's it … that's what I'm getting at is … I mean, what sort of jobs … (Pause.)
Aston: Well, there's things like the stairs … and the … the bells …
Davies: But it'd be a matter … wouldn't it … it'd be a matter of a broom … isn't it?Harold Pinter. The Caretaker. DPS, 1991., p. 32
Much of the dialogue in absurdist drama (especially in Beckett's and Albee's plays) reflects this kind of evasiveness and inability to make a connection. When language that is apparently nonsensical appears, it also demonstrates this disconnection. It can be used for comic effect, as in Lucky's long speech in Godot when Pozzo says Lucky is demonstrating a talent for "thinking" as other characters comically attempt to stop him:
Lucky: Given the existence as uttered forth in the public works of Puncher and Wattmann of a personal God quaquaquaqua with white beard quaquaquaqua outside time without extension who from the heights of divine apathia divine athambia divine aphasia loves us dearly with some exceptions for reasons unknown but time will tell and suffers like the divine Miranda with those who for reasons unknown but time will tell are plunged in torment...David Bradby. Beckett, Waiting for Godot. Cambridge University Press, 2001, p. 81.
Nonsense may also be used abusively, as in Pinter's The Birthday Party when Goldberg and McCann torture Stanley with apparently nonsensical questions and non-sequiturs:
Goldberg: What do you use for pajamas?
Stanley: Nothing.
Goldberg: You verminate the sheet of your birth.
McCann: What about the Albigensenist heresy?
Goldberg: Who watered the wicket in Melbourne?
McCann: What about the blessed Oliver Plunkett?
Goldberg: Speak up Webber. Why did the chicken cross the road?Harold Pinter. The Birthday Party and The Room: Two Plays. Grove Press, 1994. . p. 51.
As in the above examples, nonsense in absurdist theatre may be also used to demonstrate the limits of language while questioning or parodying the determinism of science and the knowability of truth.Raymond Williams. "The Birthday Party: Harold Pinter". Modern Critical Views: Harold Pinter. New York: Chelsea House Publishers, 1987. . p. 22–23.Marc Silverstein. Harold Pinter and the language of cultural power. Bucknell University Press, 1993 , . pg. 33–34.Richard Hornby. Drama, Metadrama and perception. Associated University Presse, 1986 , . pp. 61–63. In Ionesco's The Lesson, a professor tries to force a pupil to understand his nonsensical philology lesson:
Professor: … In Spanish: the roses of my grandmother are as yellow as my grandfather who is Asiatic; in Latin: the roses of my grandmother are as yellow as my grandfather who is Asiatic. Do you detect the difference? Translate this into … Romanian
Pupil: The … how do you say "roses" in Romanian?
Professor: But "roses", what else? … "roses" is a translation in Oriental of the French word "roses", in Spanish "roses", do you get it? In Sardanapali, "roses"...Eugène Ionesco. The Bald Soprano and Other Plays. Grove Press, 1982. . p. 67.
Plot
Traditional plot structures are rarely a consideration in the theatre of the absurd.Claude Schumacher. Encyclopedia of Literature & Criticism. 1990. Routledge. p. 10. Plots can consist of the absurd repetition of cliché and routine, as in Godot or The Bald Soprano.Sydney Homan. Beckett's theaters: interpretations for performance. Bucknell University Press, 1984. . p. 198. Often there is a menacing outside force that remains a mystery; in The Birthday Party, for example, Goldberg and McCann confront Stanley, torture him with absurd questions, and drag him off at the end, but it is never revealed why.Kane, pp. 132, 134 In later Pinter plays, such as The CaretakerKatherine H. Burkman. The dramatic world of Harold Pinter: its basis in ritual. Ohio State University Press, 1971. , . pp. 76–89 and The Homecoming,Marc Silverstein. Harold Pinter and the language of cultural power. Bucknell University Press, 1993., . pp. 76–94. the menace is no longer entering from the outside but exists within the confined space. Other absurdists use this kind of plot, as in Albee's A Delicate Balance: Harry and Edna take refuge at the home of their friends, Agnes and Tobias, because they suddenly become frightened.Stephen James Bottoms. The Cambridge Companion to Edward Albee. Cambridge University Press, 2005. . p. 221. They have difficulty explaining what has frightened them:
Harry: There was nothing … but we were very scared.
Edna: We … were … terrified.
Harry: We were scared. It was like being lost: very young again, with the dark, and lost. There was no … thing … to be … frightened of, but …
Edna: We were frightened … and there was nothing.Edward Albee. A delicate balance: a play in three acts. Samuel French, Inc., 1994. . p. 31.
Absence, emptiness, nothingness, and unresolved mysteries are central features in many absurdist plots:Les Essif. Empty figure on an empty stage: the theatre of Samuel Beckett and his generation. Indiana University Press, 2001. . pp. 1–9 for example, in The Chairs, an old couple welcomes a large number of guests to their home, but these guests are invisible, so all we see are empty chairs, a representation of their absence.Alice Rayner. Ghosts: death's double and the phenomena of theatre. U of Minnesota Press, 2006. . p. 120. Likewise, the action of Godot is centered around the absence of a man named Godot, for whom the characters perpetually wait. In many of Beckett's later plays, most features are stripped away and what's left is a minimalistic tableau: a woman walking slowly back and forth in Footfalls,Morris Beja, S. E. Gontarski, Pierre A. G. Astier. Samuel Beckett—humanistic perspectives.Ohio State University Press, 1983. . p. 8 for example, or in Breath only a junk heap on stage and the sounds of breathing.Alan Astro. Understanding Samuel Beckett. Univ of South Carolina Press, 1990. . p. 177.Ruby Cohn. A Beckett Canon. University of Michigan Press, 2001. pp. 298, 337.
The plot may also revolve around an unexplained metamorphosis, a supernatural change, or a shift in the laws of physics. For example, in Ionesco's Amédée, or How to Get Rid of It, a couple must deal with a corpse that is steadily growing larger and larger; Ionesco never fully reveals the identity of the corpse, how this person died, or why it is continually growing, but the corpse ultimately – and, again, without explanation – floats away.Lamont, p. 101Justin Wintle. The Makers of Modern Culture. Routledge, 2002. . p. 243. In Tardieu's "The Keyhole" a lover watches a woman through a keyhole as she removes her clothes and then her flesh.Pronko, p. 157.
Like Pirandello, many absurdists use meta-theatrical techniques to explore role fulfillment, fate, and the theatricality of theatre. This is true for many of Genet's plays: for example, in The Maids, two maids pretend to be their mistress; in The Balcony brothel patrons take on elevated positions in role-playing games, but the line between theatre and reality starts to blur. Another complex example of this is Rosencrantz and Guildenstern are Dead: it is a play about two minor characters in Hamlet; these characters, in turn, have various encounters with the players who perform The Mousetrap, the play-within-the-play in Hamlet.June Schlueter. Metafictional Characters in Modern Drama. Columbia University Press, 1979. . p. 53. In Stoppard's Travesties, James Joyce and Tristan Tzara slip in and out of the plot of The Importance of Being Earnest.Peter K. W. Tan, Tom Stoppard. A stylistics of drama: with special focus on Stoppard's Travesties. NUS Press, 1993. , .
Plots are frequently cyclical: for example, Endgame begins where the play endedKatherine H. Burkman. Myth and ritual in the plays of Samuel Beckett. Fairleigh Dickinson Univ Press, 1987. . p. 24. – at the beginning of the play, Clov says, "Finished, it's finished, nearly finished, it must be nearly finished"Samuel Beckett. Endgame: a play in one act, followed by Act without words, a mime for one player.Grove Press, 1958. . p. 1. – and themes of cycle, routine, and repetition are explored throughout.Andrew K. Kennedy. Samuel Beckett. Cambridge University Press, 1989. . p. 48.
References
Further reading
Ackerley, C. J. and S. E. Gontarski, ed. The Grove Companion to Samuel Beckett. New York: Grove P, 2004.
Adamov, Jacqueline, "Censure et représentation dans le théâtre d’Arthur Adamov", in P. Vernois (Textes recueillis et présentés par), L’Onirisme et l’insolite dans le théâtre français contemporain. Actes du colloque de Strasbourg, Paris, Editions Klincksieck, 1974.
Baker, William, and John C. Ross, comp. Harold Pinter: A Bibliographical History. London: The British Library and New Castle, DE: Oak Knoll P, 2005.
Bennett, Michael Y. Reassessing the Theatre of the Absurd: Camus, Beckett, Ionesco, Genet, and Pinter. New York: Palgrave Macmillan, 2011.
Bennett, Michael Y. The Cambridge Introduction to Theatre and Literature of the Absurd. Cambridge: Cambridge University Press, 2015.
Brook, Peter. The Empty Space: A Book About the Theatre: Deadly, Holy, Rough, Immediate. Touchstone, 1995.
Caselli, Daniela. Beckett's Dantes: Intertextuality in the Fiction and Criticism.
Cronin, Anthony. Samuel Beckett: The Last Modernist. New York: Da Capo P, 1997.
Driver, Tom Faw. Jean Genet. New York: Columbia UP, 1966.
Esslin, Martin. The Theatre of the Absurd. London: Pelican, 1980.
Gaensbauer, Deborah B. Eugène Ionesco Revisited. New York: Twayne, 1996.
Haney, W. S., II. "Beckett Out of His Mind: The Theatre of the Absurd". Studies in the Literary Imagination. Vol. 34 (2).
La Nouvelle Critique, numéro spécial "Arthur Adamov", août-septembre 1973.
Lewis, Allan. Ionesco. New York: Twayne, 1972.
McMahon, Joseph H. The Imagination of Jean Genet. New Haven: Yale UP, 1963.
Mercier, Vivian. Beckett/Beckett. Oxford UP, 1977.
Youngberg, Q. "Mommy's American Dream in Edward Albee's The American Dream". The Explicator, (2), 108.
Zhu, Jiang. "Analysis on the Artistic Features and Themes of the Theater of the Absurd". Theory & Practice in Language Studies, 3(8).
Category:Absurdist fiction
Category:Concepts in aesthetics
Category:Concepts in epistemology
Category:Concepts in metaphysics
Category:Existentialist concepts
Category:Metaphors
Category:Modernist theatre
Category:Postmodern literature
Category:Surrealism
Category:Theatrical genres
Category:Types of existentialism
Free jazz
https://en.wikipedia.org/wiki/Free_jazz
Free jazz (also known as free form jazz) is a style of avant-garde jazz or an experimental approach to jazz improvisation that developed in the late 1950s and early 1960s, when musicians attempted to change or break down jazz conventions, such as regular tempos, tones, and chord changes. Musicians during this period believed that the bebop and modal jazz that had been played before them was too limiting, and became preoccupied with creating something new. The term "free jazz" was drawn from the 1960 Ornette Coleman recording Free Jazz: A Collective Improvisation. Europeans tend to favor the term "free improvisation". Others have used "modern jazz", "creative music", and "art music".
The ambiguity of free jazz presents problems of definition. Although it is usually played by small groups or individuals, free jazz big bands have existed. Although musicians and critics claim it is innovative and forward-looking, it draws on early styles of jazz and has been described as an attempt to return to primitive, often religious, roots. Although jazz is an American invention, free jazz musicians drew heavily from world music and ethnic music traditions from around the world. Sometimes they played African or Asian instruments, unusual instruments, or invented their own. They emphasized emotional intensity and sound for its own sake, exploring timbre.
Characteristics
Free jazz was a reaction to the convolution of bop. Conductor and jazz writer Loren Schoenberg wrote that free jazz "gave up on functional harmony altogether, relying instead on a far ranging, stream-of-consciousness approach to melodic variation". The style was largely inspired by the work of jazz saxophonist Ornette Coleman.
Some jazz musicians resist any attempt at classification. One difficulty is that most jazz has an element of improvisation. Many musicians draw on free jazz concepts and idioms, and free jazz was never entirely distinct from other genres, but free jazz does have some unique characteristics. Pharoah Sanders and John Coltrane used harsh overblowing or other extended techniques to elicit unconventional sounds from their instruments. Like other forms of jazz it places an aesthetic premium on expressing the "voice" or "sound" of the musician, as opposed to the classical tradition in which the performer is seen more as expressing the thoughts of the composer.
Earlier jazz styles typically were built on a framework of song forms, such as twelve-bar blues or the 32-bar AABA popular song form with chord changes. In free jazz, however, the dependence on a fixed and pre-established form is often eliminated, and the role of improvisation is correspondingly increased.
Other forms of jazz use regular meters and pulsed rhythms, usually in 4/4 or (less often) 3/4. Free jazz retains pulsation and sometimes swings but without regular meter. Frequent accelerando and ritardando give an impression of rhythm that moves like a wave.
Previous jazz forms used harmonic structures, usually cycles of diatonic chords. When improvisation occurred, it was founded on the notes in the chords. Free jazz almost by definition is free of such structures, but also by definition (it is, after all, "jazz" as much as it is "free") it retains much of the language of earlier jazz playing. It is therefore very common to hear diatonic, altered dominant and blues phrases in this music.
Guitarist Marc Ribot commented that Ornette Coleman and Albert Ayler "although they were freeing up certain strictures of bebop, were in fact each developing new structures of composition." Some forms use composed melodies as the basis for group performance and improvisation. Free jazz practitioners sometimes use such material. Other compositional structures are employed, some detailed and complex.
The breakdown of form and rhythmic structure has been seen by some critics to coincide with jazz musicians' exposure to and use of elements from non-Western music, especially African, Arabic, and Indian. The atonality of free jazz is often credited by historians and jazz performers to a return to non-tonal music of the nineteenth century, including field hollers, street cries, and jubilees (part of the "return to the roots" element of free jazz). This suggests that perhaps the movement away from tonality was not a conscious effort to devise a formal atonal system, but rather a reflection of the concepts surrounding free jazz. Jazz became "free" by removing dependence on chord progressions and instead using polytempic and polyrhythmic structures.
Rejection of the bop aesthetic was combined with a fascination with earlier styles of jazz, such as dixieland with its collective improvisation, as well as African music. Interest in ethnic music resulted in the use of instruments from around the world, such as Ed Blackwell's West African talking drum, and Leon Thomas's interpretation of pygmy yodeling. Ideas and inspiration were found in the music of John Cage, Musica Elettronica Viva, and the Fluxus movement.
Many critics, particularly at the music's inception, suspected that abandonment of familiar elements of jazz pointed to a lack of technique on the part of the musicians. By 1974, such views were more marginal, and the music had built a body of critical writing.
Many critics have drawn connections between the term "free jazz" and the American social setting during the late 1950s and 1960s, especially the emerging social tensions of racial integration and the civil rights movement. Many argue that phenomena such as the landmark Brown v. Board of Education decision in 1954, the emergence of the Freedom Riders in 1961, the activist-supported black voter registration drives of the 1964 Freedom Summer, and the free alternative black Freedom Schools demonstrate the political implications of the word "free" in the context of free jazz. Thus many consider free jazz to be not only a rejection of certain musical credos and ideas, but a musical reaction to the oppression and experience of black Americans.
History
Although free jazz is widely considered to begin in the late 1950s, there are compositions that precede this era that have notable connections to the free jazz aesthetic. Some of the works of Lennie Tristano in the late 1940s, particularly "Intuition", "Digression", and "Descent into the Maelstrom" exhibit the use of techniques associated with free jazz, such as atonal collective improvisation and lack of discrete chord changes. Other notable examples of proto-free jazz include City of Glass written in 1948 by Bob Graettinger for the Stan Kenton band and Jimmy Giuffre's 1953 "Fugue". It can be argued, however, that these works are more representative of third stream jazz with its references to contemporary classical music techniques such as serialism.
Keith Johnson of AllMusic describes a "Modern Creative" genre, in which "musicians may incorporate free playing into structured modes—or play just about anything." He includes John Zorn, Henry Kaiser, Eugene Chadbourne, Tim Berne, Bill Frisell, Steve Lacy, Cecil Taylor, Ornette Coleman, and Ray Anderson in this genre, which continues "the tradition of the '50s to '60s free-jazz mode".
Ornette Coleman rejected pre-written chord changes, believing that freely improvised melodic lines should serve as the basis for harmonic progression. His first notable recordings for Contemporary included Tomorrow Is the Question! and Something Else!!!! in 1958. These albums do not follow typical 32-bar form and often employ abrupt changes in tempo and mood.
The free jazz movement received its biggest impetus when Coleman moved from the West Coast to New York City and was signed to Atlantic. Albums such as The Shape of Jazz to Come and Change of the Century marked a radical step beyond his more conventional early work. On these albums, he strayed from the tonal basis that formed the lines of his earlier albums and began truly examining the possibilities of atonal improvisation. Coleman's most important recording for the free jazz movement during this era, however, was Free Jazz, recorded at A&R Studios in New York in 1960. It marked an abrupt departure from the highly structured compositions of his past. Recorded with a double quartet separated into left and right channels, Free Jazz brought a more aggressive, cacophonous texture to Coleman's work, and the record's title would provide the name for the nascent free jazz movement.
Pianist Cecil Taylor was also exploring the possibilities of avant-garde free jazz. Classically trained, Taylor drew his main influences from Thelonious Monk and Horace Silver, who proved key to his later unconventional uses of the piano. Jazz Advance, his album released in 1956 for Transition, showed ties to traditional jazz, albeit with an expanded harmonic vocabulary. But the harmonic freedom of these early releases would lead to his transition into free jazz during the early 1960s. Key to this transformation was the arrival of saxophonist Jimmy Lyons and drummer Sunny Murray in 1962, who encouraged a more progressive musical language, such as tone clusters and abstracted rhythmic figures.
On Unit Structures (Blue Note, 1966), Taylor marked his transition to free jazz: the pieces were performed almost without notated scores and were devoid of conventional jazz meter and harmonic progression. This direction was influenced by drummer Andrew Cyrille, who provided rhythmic dynamism outside the conventions of bebop and swing. Taylor also began exploring the classical avant-garde, as in his use of the prepared piano developed by composer John Cage.
Albert Ayler was one of the essential composers and performers during the early period of free jazz. He began his career as a bebop tenor saxophonist in Scandinavia, and had already begun pushing the boundaries of tonal jazz and blues to their harmonic limits. He soon began collaborating with notable free jazz musicians, including Cecil Taylor in 1962. He pushed the jazz idiom to its absolute limits, and many of his compositions bear little resemblance to jazz of the past. Ayler's musical language focused on the possibilities of microtonal improvisation and extended saxophone technique, creating squawks and honks with his instrument to achieve multiphonic effects. Yet amid these progressive techniques, Ayler shows an attachment to simple, rounded melodies reminiscent of folk music, which he explores through his more avant-garde style.
One of Ayler's key free jazz recordings is Spiritual Unity, which includes his most famous and often-recorded composition, "Ghosts", in which a simple spiritual-like melody is gradually shifted and distorted through Ayler's unique improvisatory interpretation. Ultimately, Ayler serves as an important example of the many ways in which free jazz could be interpreted, as he often strays into more tonal areas and melodies while exploring their timbral and textural possibilities. In this way, his free jazz is built upon both a progressive attitude towards melody and timbre and a desire to examine and recontextualize the music of the past.
In a 1963 interview with Jazz Magazine, Coltrane said he felt indebted to Coleman. While Coltrane's desire to explore the limits of solo improvisation and the possibilities of innovative form and structure was evident in records like A Love Supreme, his work owed more to the tradition of modal jazz and post-bop. But with the recording of Ascension in 1965, Coltrane demonstrated his appreciation for the new wave of free jazz innovators. On Ascension Coltrane augmented his quartet with six horn players, including Archie Shepp and Pharoah Sanders. The composition includes free-form solo improvisation interspersed with sections of collective improvisation reminiscent of Coleman's Free Jazz. The piece sees Coltrane exploring the timbral possibilities of his instrument, using over-blowing to achieve multiphonic tones. Coltrane continued to explore the avant-garde in his following compositions, including such albums as Om, Kulu Se Mama, and Meditations, as well as collaborating with John Tchicai.
Much of Sun Ra's music could be classified as free jazz, especially his work from the 1960s, although Sun Ra said repeatedly that his music was written and boasted that what he wrote sounded more free than what "the freedom boys" played. The Heliocentric Worlds of Sun Ra (1965) was steeped in what could be referred to as a new black mysticism. But Sun Ra's penchant for nonconformity aside, he was, along with Coleman and Taylor, an integral voice in the formation of new jazz styles during the 1960s. As evidenced by his compositions on the 1956 record Sounds of Joy, Sun Ra's early work employed a typical bop style. But he soon foreshadowed the free jazz movement with compositions like "A Call for All Demons" from the 1955–57 record Angels and Demons at Play, which combines atonal improvisation with Latin-inspired mambo percussion. His period of fully realized free jazz experimentation began in 1965, with the release of The Heliocentric Worlds of Sun Ra and The Magic City. These records placed a musical emphasis on timbre and texture over meter and harmony, employing a wide variety of electronic instruments and innovative percussion instruments, including the electric celeste, Hammond B-3, bass marimba, harp, and timpani. As a result, Sun Ra proved to be one of the first jazz musicians to explore electronic instrumentation, as well as displaying an interest in timbral possibilities through his use of progressive and unconventional instrumentation in his compositions.
The title track of Charles Mingus' Pithecanthropus Erectus contained one improvised section in a style unrelated to the piece's melody or chord structure. His contributions were primarily in his efforts to bring back collective improvisation in a music scene that had become dominated by solo improvisation as a result of big bands.
Outside of New York, a number of significant free jazz scenes appeared in the 1960s. They often gave birth to collectives. In Chicago, numerous artists were affiliated with the Association for the Advancement of Creative Musicians, founded in 1965. In St. Louis, the multidisciplinary Black Artists Group was active between 1968 and 1972. Pianist Horace Tapscott founded the Pan Afrikan Peoples Arkestra and Union of God's Musicians and Artists Ascension in Los Angeles. Although they did not organize as formally, a notable number of free jazz musicians were also active in Albert Ayler's hometown of Cleveland. They included Charles Tyler, Norman Howard, and the Black Unity Trio.
By the 1970s, the setting for avant-garde jazz was shifting to New York City. Arrivals included Arthur Blythe, James Newton, and Mark Dresser, beginning the period of New York loft jazz. As the name may imply, musicians during this time would perform in private homes and other unconventional spaces. The status of free jazz became more complex, as many musicians sought to bring different genres into their work. Free jazz no longer necessarily indicated the rejection of tonal melody, overarching harmonic structure, or metrical divide, as laid out by Coleman, Coltrane, and Taylor. Instead, the free jazz that developed in the 1960s became one of many influences, including pop music and world music.
Paul Tanner, Maurice Gerow, and David Megill have suggested,
the freer aspects of jazz, at least, have reduced the freedom acquired in the sixties. Most successful recording artists today construct their works in this way: beginning with a strain with which listeners can relate, following with an entirely free portion, and then returning to the recognizable strain. The pattern may occur several times in a long selection, giving listeners pivotal points to cling to. At this time, listeners accept this – they can recognize the selection while also appreciating the freedom of the player in other portions. Players, meanwhile, are tending toward retaining a key center for the seemingly free parts. It is as if the musician has learned that entire freedom is not an answer to expression, that the player needs boundaries, bases, from which to explore.
Tanner, Gerow and Megill name Miles Davis, Cecil Taylor, John Klemmer, Keith Jarrett, Chick Corea, Pharoah Sanders, McCoy Tyner, Alice Coltrane, Wayne Shorter, Anthony Braxton, Don Cherry, and Sun Ra as musicians who have employed this approach.
Other media
Canadian artist Stan Douglas uses free jazz as a direct response to complex attitudes towards African-American music. Exhibited at documenta 9 in 1992, his video installation Hors-champs (meaning "off-screen") addresses the political context of free jazz in the 1960s, as an extension of black consciousnessKrajewsk, "Stan Douglas, 15 September 2007 — 6 January 2008, Staatsgalerie & Wurttembergischer" and is one of his few works to directly address race.Milroy, "These artists know how to rock", p. R7 Four American musicians, George E. Lewis (trombone), Douglas Ewart (saxophone), Kent Carter (bass) and Oliver Johnson (drums) who lived in France during the free jazz period in the 1960s, improvise Albert Ayler's 1965 composition "Spirits Rejoice."Gale, "Stan Douglas: Evening and others", p. 363
New York Eye and Ear Control is Canadian artist Michael Snow's 1964 film with a soundtrack of group improvisations recorded by an augmented version of Albert Ayler's group and released as the album New York Eye and Ear Control.Review by Scott Yanow, Allmusic. Critics have compared the album with the key free jazz recordings: Ornette Coleman's Free Jazz: A Collective Improvisation and John Coltrane's Ascension. John Litweiler regards it favourably in comparison because of its "free motion of tempo (often slow, usually fast); of ensemble density (players enter and depart at will); of linear movement". Ekkehard Jost places it in the same company and comments on "extraordinarily intensive give-and-take by the musicians" and "a breadth of variation and differentiation on all musical levels".
French artist Jean-Max Albert, as trumpet playerDictionnaire du jazz, Sous la direction de Philippe Carles, Jean-Louis Comolli et André Clergeat. Éditions Robert Laffont, coll. "Bouquins", 1994Sklower, Jedediah (2006). Free Jazz, la catastrophe féconde. Une histoire du monde éclaté du jazz en France (1960-1982). Collection logiques sociales. Paris: Harmattan. Page 147. of Henri Texier's first quintet, participated in the 1960s in one of the first expressions of free jazz in France. As a painter, he then experimented with plastic transpositions of Ornette Coleman's approach. Free jazz, painted in 1973, set architectural structures corresponding to the classical chords of standard harmonies against an unrestrained, all-over painted improvisation.Jean-Max Albert, Peinture, ACAPA, Angoulême, 1982
Jean-Max Albert continues to explore the lessons of free jazz, collaborating with pianist François Tusques on experimental films such as Birth of Free Jazz and Don Cherry, which approach these subjects in a playful and poetic way.Clifford Allen, The New York City Jazz Record p. 10, June 2011
In the world
Founded in 1967, the Quatuor de Jazz Libre du Québec was Canada's most notable early free jazz outfit.Crépon, Pierre. "Free Jazz/Québec Libre: Le Quatuor de Jazz Libre du Québec, 1967-1975", Point of Departure, September 2020. Outside of North America, free jazz scenes have become established in Europe and Japan. Alongside Joe Harriott, saxophonists Peter Brötzmann and Evan Parker, trombonist Conny Bauer, guitarist Derek Bailey, pianists François Tusques, Fred Van Hove, and Misha Mengelberg, drummer Han Bennink, and saxophonist and bass clarinetist Willem Breuker were among the best-known early European free jazz performers. European free jazz can generally be seen as approaching free improvisation, with an ever more distant relationship to the jazz tradition. Brötzmann in particular has had a significant impact on free jazz players in the United States.
Japan's first free jazz musicians included drummer Masahiko Togashi, guitarist Masayuki Takayanagi, pianists Yosuke Yamashita and Masahiko Satoh, saxophonist Kaoru Abe, bassist Motoharu Yoshizawa, and trumpeter Itaru Oki.Crépon, Pierre. "Omnidirectional Projection: Teruto Soejima and Japanese Free Jazz", Point of Departure, June 2019. A relatively active free jazz scene behind the iron curtain produced musicians like Janusz Muniak, Tomasz Stańko, Zbigniew Seifert, Vyacheslav Ganelin and Vladimir Tarasov. Some international jazz musicians have come to North America and become immersed in free jazz, most notably Ivo Perelman from Brazil and Gato Barbieri of Argentina (this influence is more evident in Barbieri's early work).
South African artists, including early Dollar Brand, Zim Ngqawana, Chris McGregor, Louis Moholo, and Dudu Pukwana experimented with a form of free jazz (and often big-band free jazz) that fused experimental improvisation with African rhythms and melodies.Orlov, Piotr. "How South Africa's Blue Notes Helped Invent European Free Jazz", Bandcamp, September 2020. American musicians like Don Cherry, John Coltrane, Milford Graves, and Pharoah Sanders integrated elements of the music of Africa, India, and the Middle East for world-influenced free jazz.
Further reading
Articles from Jazz & Pop Magazine. Reprint of the 1970 edition, New York: World Publishing Co.
Such, David Glen (1993). Avant-Garde Jazz Musicians: Performing "Out There". Iowa City: University Of Iowa Press.
Szwed, John F. (2000). Jazz 101: A Complete Guide to Learning and Loving Jazz. New York: Hyperion.
Levin, Robert, "Free Jazz: The Jazz Revolution of the '60s"
References
External links
The Real Godfathers of Punk by Billy Bob Hargus (July 1996).
Category:1960s fads and trends
Category:Experimental music
Category:Jazz genres
Category:Jazz terminology
Category:Outsider music
First Crusade
https://en.wikipedia.org/wiki/First_Crusade
The First Crusade (1096–1099) was the first and most successful of a series of religious wars, or Crusades, which were initiated, supported and at times directed by the Latin Church in the Middle Ages. Their aim was to return the Holy Land, which had been conquered by the Rashidun Caliphate in the 7th century, to Christian rule. By the 11th century, although Jerusalem had then been ruled by Muslims for hundreds of years, the practices of the Seljuk rulers in the region began to threaten local Christian populations, pilgrimages from the West and the Byzantine Empire itself. The earliest impetus for the First Crusade came in 1095 when Byzantine emperor Alexios I Komnenos sent ambassadors to the Council of Piacenza to request military support in the empire's conflict with the Seljuk-led Turks. This was followed later in the year by the Council of Clermont, at which Pope Urban II gave a speech supporting the Byzantine request and urging faithful Christians to undertake an armed pilgrimage to Jerusalem.
This call was met with an enthusiastic popular response across all social classes in western Europe. Thousands of predominantly poor Christians, led by the French priest Peter the Hermit, were the first to respond. What has become known as the People's Crusade passed through Germany and indulged in wide-ranging anti-Jewish activities, including the Rhineland massacres. On leaving Byzantine-controlled territory in Anatolia, they were annihilated in a Turkish ambush led by the Seljuk Kilij Arslan I at the Battle of Civetot in October 1096.
In what has become known as the Princes' Crusade, members of the high nobility and their followers embarked in late-summer 1096 and arrived at Constantinople between November and April the following year. This was a large feudal host led by notable Western European princes: southern French forces under Raymond IV of Toulouse and Adhemar of Le Puy; men from Upper and Lower Lorraine led by Godfrey of Bouillon and his brother Baldwin of Boulogne; Italo-Norman forces led by Bohemond of Taranto and his nephew Tancred; as well as various contingents consisting of northern French and Flemish forces under Robert Curthose of Normandy, Stephen of Blois, Hugh of Vermandois, and Robert II of Flanders. In total and including non-combatants, the forces are estimated to have numbered as many as 100,000.
The crusader forces gradually arrived in Anatolia. With Kilij Arslan absent, a Frankish attack and Byzantine naval assault during the Siege of Nicaea in June 1097 resulted in an initial crusader victory. In July, the crusaders won the Battle of Dorylaeum, fighting Turkish lightly armoured mounted archers. After a difficult march through Anatolia, the crusaders began the Siege of Antioch, capturing the city in June 1098. Jerusalem, then ruled by the Fatimids, was reached in June 1099, and the ensuing Siege of Jerusalem culminated in the Crusader armies storming and capturing the city on 15 July 1099, during which assault a large fraction of the residents were massacred. A Fatimid counterattack was repulsed later that year at the Battle of Ascalon, which marked the end of the First Crusade. Afterwards, the majority of the crusaders returned home.
Four Crusader states were established in the Holy Land: the Kingdom of Jerusalem, the County of Edessa, the Principality of Antioch, and the County of Tripoli. The Crusaders maintained some form of presence in the region until the loss of the last major Crusader stronghold in the 1291 Siege of Acre, after which there were no further substantive Christian campaigns in the Levant.
Historical context
Christian and Muslim states had been in conflict since the establishment of Islam in the 7th century. In the span of approximately 120 years after the death of the Islamic prophet Muhammad in 632, Muslim forces conquered the Levant (including Jerusalem), as well as North Africa and most of the Iberian Peninsula, all of which had previously been under Christian rule. By the 11th century, Christians were, through the Reconquista, gradually reversing the 8th-century Muslim conquest of Iberia, but their ties to the Holy Land had deteriorated. Muslim authorities in the Levant often enforced harsh rules against any overt expressions of the Christian faith. Approximately two-thirds of the land formerly held by Christians had been conquered by Muslim forces prior to the First Crusade.
The First Crusade was the response of the Christian world to the expansion of Islam, due to the Fatimids and Seljuks, into the Holy Land and Byzantium. In Western Europe, Jerusalem was an increasingly important destination for Christian pilgrimages. While the Seljuk hold on Jerusalem was weak (the group later lost the city to the Fatimids), returning pilgrims reported difficulties and the oppression of Christians. The Byzantine need for military support coincided with an increase in the willingness of the western European warrior class to accept papal military command.
Situation in Europe
By the 11th century, the population of Europe had increased greatly as technological and agricultural innovations allowed trade to flourish. The Catholic Church had become a dominant influence on Western civilization. Society was organized by manorialism and feudalism, political structures whereby knights and other nobles owed military service to their overlords in return for the right to rent from lands and manors.Painter, Sidney (1969). "Western Europe on the Eve of the Crusades ". In Setton, K., A History of the Crusades: Volume I. pp. 3–30.
In the period from 1050 until 1080, the Gregorian Reform movement developed increasingly assertive policies, eager to increase its power and influence. This prompted conflict with eastern Christians rooted in the doctrine of papal supremacy. The Eastern church viewed the pope as only one of the five patriarchs of the Church, alongside the patriarchates of Alexandria, Antioch, Constantinople and Jerusalem. In 1054 differences in custom, creed and practice spurred Pope Leo IX to send a legation to Patriarch Michael I Cerularius of Constantinople, which ended in mutual excommunication and an East–West Schism.Adrian Fortescue (1912). "The Eastern Schism". In Catholic Encyclopedia. 13. New York: Robert Appleton Company.
Early Christians were accustomed to the use of violence for communal purposes. A Christian theology of war inevitably evolved from the point when Roman citizenship and Christianity became linked. Citizens were required to fight against the empire's enemies. Dating from the works of the 4th-century theologian Augustine of Hippo, a doctrine of holy war developed. Augustine wrote that aggressive war was sinful, but war could be justified if proclaimed by a legitimate authority such as a king or bishop, it was defensive or for the recovery of lands, and it did not involve excessive violence. The breakdown of the Carolingian Empire in Western Europe created a warrior caste who now had little to do but fight amongst themselves. Violent acts were commonly used for dispute resolution, and the papacy attempted to mitigate this violence.
Pope Alexander II developed recruitment systems via oaths for military resourcing that Pope Gregory VII further extended across Europe. These were deployed by the Church in the Christian conflicts with Muslims in the Iberian Peninsula and for the Norman conquest of Sicily. Gregory went further in 1074, planning a display of military power to reinforce the principle of papal sovereignty in a holy war supporting Byzantium against the Seljuks, but was unable to build support for this. Theologian Anselm of Lucca took the decisive step towards an authentic crusader ideology, stating that fighting for legitimate purposes could result in the remission of sins.
On the Iberian Peninsula, there was no significant Christian polity. The Christian realms of León, Navarre and Catalonia lacked a common identity and shared history based on tribe or ethnicity so they frequently united and divided during the 11th and 12th centuries. Although small, all developed an aristocratic military technique and, in 1031, the disintegration of the Caliphate of Córdoba in southern Spain created the opportunity for the territorial gains that later became known as the Reconquista. In 1063, William VIII of Aquitaine led a combined force of French, Aragonese and Catalan knights in the Siege of Barbastro, taking the city that had been in Muslim hands since the year 711. This had the full support of Alexander II, and a truce was declared in Catalonia with indulgences granted to the participants. It was a holy war but differed from the First Crusade in that there was no pilgrimage, no vow, and no formal authorisation by the church. Shortly before the First Crusade, Urban II had encouraged the Iberian Christians to take Tarragona, using much of the same symbolism and rhetoric that was later used to preach the crusade to the people of Europe.
The Italo-Normans were successful in seizing much of Southern Italy and Sicily from the Byzantines and North African Arabs in the decades before the First Crusade. This brought them into conflict with the papacy, leading to a campaign against them by Pope Leo IX, whom they defeated at the Battle of Civitate. Nevertheless, when they invaded Muslim Sicily in 1059, they did so under the papal banner, the vexillum sancti Petri, or banner of St. Peter. Robert Guiscard captured the Byzantine city of Bari in 1071 and campaigned along the Eastern Adriatic coast around Dyrrachium in 1081 and 1085.
Situation in the East
Since its founding, the Byzantine Empire had been a historic centre of wealth, culture and military power.Papayianni, Aphrodite (2006). "Byzantine Empire". In The Crusades – An Encyclopedia. pp. 188–196. Under Basil II, the territorial recovery of the empire reached its furthest extent in 1025. The Empire's frontiers stretched east to Iran; Bulgaria and much of southern Italy were under its control; and piracy in the Mediterranean Sea had been suppressed. Relations with the Empire's Islamic neighbours were no more quarrelsome than relations with the Slavs or Western Christians. Normans in Italy; Pechenegs, Serbs and Cumans to the north; and Seljuk Turks in the east all competed with the Empire, and to meet these challenges the emperors recruited mercenaries, even on occasion from their enemies.
The Islamic world had also experienced great success since its foundation in the 7th century, but major changes were to come.Gibb, Hamilton A. R. (1969). "The Caliphate and the Arab States ". In Setton, K., A History of the Crusades: Volume I. pp. 81–98. The first waves of Turkic migration into the Middle East enmeshed Arab and Turkic history from the 9th century. The status quo in Western Asia was challenged by later waves of Turkish migration, particularly the arrival of the Seljuk Turks in the 10th century. These were a minor ruling clan from Transoxania. They converted to Islam and migrated to Iran to seek their fortune. In the following two decades they conquered Iran, Iraq and the Near East. The Seljuks and their followers were Sunni Muslims, which led to conflict in Palestine and Syria with the Shi'ite Fatimid Caliphate.
From 1092, the status quo in the Middle East disintegrated following the death of the effective ruler of the Seljuk Empire, Nizam al-Mulk. This was closely followed by the deaths of the Seljuk sultan Malik-Shah and the Fatimid caliph al-Mustansir Billah. Wracked by confusion and division, the Islamic world disregarded the world beyond, so that, when the First Crusade arrived, it came as a surprise. Malik-Shah was succeeded in the Anatolian Sultanate of Rûm by Kilij Arslan, and in Syria by his brother Tutush I who started a civil war against Berkyaruq to become sultan himself. When Tutush was killed in 1095, his sons Ridwan and Duqaq inherited Aleppo and Damascus, respectively, further dividing Syria amongst emirs antagonistic towards each other, as well as Kerbogha, the atabeg of Mosul. Egypt and much of Palestine were controlled by the Fatimids. The Fatimids, under the nominal rule of caliph al-Musta'li but actually controlled by vizier al-Afdal Shahanshah, lost Jerusalem to the Seljuks in 1073 but succeeded in recapturing the city in 1098 from the Artuqids, a smaller Turkish tribe associated with the Seljuks, just before the arrival of the crusaders.
Persecution of Christians
According to historian Jonathan Riley-Smith and social historian Rodney Stark, Muslim authorities in the Holy Land often enforced harsh rules "against any open expressions of the Christian faith":
The persecution of Christians became even worse after the invasion of the Seljuk Turks. Villages occupied by Turks along the route to Jerusalem began exacting tolls on Christian pilgrims. In principle, the Seljuks allowed pilgrims access to Jerusalem, but they often imposed huge tariffs and condoned local attacks. Many pilgrims were kidnapped and sold into slavery while others were tortured. Soon only large, well-armed groups would dare to attempt a pilgrimage, and even so, many died and many more turned back. The pilgrims who survived these extremely dangerous journeys “returned to the West weary and impoverished, with a dreadful tale to tell.” News of these deadly attacks on pilgrims as well as the persecution of the native Eastern Christians caused anger in Europe.
News of these persecutions reached European Christians in the West in the few years after the Battle of Manzikert. A Frankish eyewitness says: "Far and wide they [Muslim Turks] ravaged cities and castles together with their settlements. Churches were razed down to the ground. Of the clergymen and monks whom they captured, some were slaughtered while others were with unspeakable wickedness given up, priests and all, to their dire dominion and nuns—alas for the sorrow of it!—were subjected to their lusts." It was in this climate that the Byzantine emperor Alexios I Komnenos wrote a letter to Robert II of Flanders saying:
The emperor warned that if Constantinople fell to the Turks, not only would thousands more Christians be tortured, raped and murdered, but “the most holy relics of the Saviour,” gathered over the centuries, would be lost. “Therefore in the name of God... we implore you to bring this city all the faithful soldiers of Christ... in your coming you will find your reward in heaven, and if you do not come, God will condemn you.”
Destruction of the Church of the Holy Sepulchre
In 996, the "mad caliph" al-Hakim bi-Amr Allah rose to power in the heterodox Ismaili Shi'a Fatimid dynasty, which controlled Jerusalem at the time. Reports differ as to whether he was mad or merely eccentric. What is certain is that he was determined to completely annihilate his Christian and Jewish subjects. His administration was marked by confiscation of property, pillage, humiliation, imprisonment, and executions. Al-Hakim enforced distinctive dress on Christians, whom he commanded to wear a five-pound cross, and on Jews, who were required to hang a heavy bell around their neck. Christians were barred from administrative positions and churches were demolished.
In 1009 al-Hakim ordered Yaruk, governor of Ramla, "to demolish the church of the Resurrection and to remove its symbols, and to get rid of all trace and remembrance of it." This referred to the Church of the Holy Sepulchre, the site where Christians believed Jesus was entombed. The church was "knocked to its foundations," and even much of the cave was scraped away. Constantine's church of the Martyrion was demolished and never rebuilt. Later, al-Hakim's successor permitted reconstruction of the church, although the destruction done to the grotto was permanent. News of this outrage was spread through Europe by multiple eyewitnesses, including Ulric, bishop of Orléans and Adémar of Chabannes, and contributed to the zealous response to Pope Urban II's call for the First Crusade.
Council of Clermont
Urban responded favourably, perhaps hoping to heal the East-West Schism of forty years earlier, and to reunite the Church under papal primacy by helping the Eastern churches in their time of need. Alexios and Urban had previously been in close contact in 1089 and after, and had discussed openly the prospect of the reunion of the Christian churches. There were signs of considerable cooperation between Rome and Constantinople in the years immediately before the crusade.Blumenthal, Uta-Renate (2006). "Urban II (d. 1099)". In The Crusades – An Encyclopedia. pp. 1214–1217.
In July 1095, Urban turned to his homeland of France to recruit men for the expedition. His travels there culminated in the ten-day Council of Clermont, where on 27 November he gave an impassioned sermon to a large audience of French nobles and clergy.Blumenthal, Uta-Renate (2006). "Clermont, Council of (1095)". In The Crusades – An Encyclopedia. pp. 263–265. There are five versions of the speech recorded by people who may have been at the council (Baldric of Dol, Guibert of Nogent, Robert the Monk, and Fulcher of Chartres) or who went on crusade (Fulcher and the anonymous author of the Gesta Francorum), as well as other versions found in the works of later historians (such as William of Malmesbury and William of Tyre)."Urban II (1088–1099): Speech at Council of Clermont, 1095. Five versions of the Speech". Internet Medieval Sourcebook. Fordham University. All of these versions were written after Jerusalem had been captured, and it is difficult to know what was actually said versus what was recreated in the aftermath of the successful crusade. The only contemporary records are a few letters written by Urban in 1095. Urban may also have preached the crusade at Piacenza, but the only record of this is by Bernold of St. Blasien in his Chronicon.Munro, Dana C. (1922). Did the Emperor Alexios I ask for aid at the Council of Piacenza, 1095? In, American Historical Review, XXVII (1922). pp. 731–733.
The five versions of the speech differ widely from one another regarding particulars, but all versions except that in the Gesta Francorum agree that Urban talked about the violence of European society and the necessity of maintaining the Peace of God; about helping the Greeks, who had asked for assistance; about the crimes being committed against Christians in the east; and about a new kind of war, an armed pilgrimage, and of rewards in heaven, where remission of sins was offered to any who might die in the undertaking.Munro, Dana Carleton. (1906). The speech of Pope Urban II. at Clermont, 1095. Reprinted from the American Historical Review. New York. They do not all specifically mention Jerusalem as the ultimate goal. However, it has been argued that Urban's subsequent preaching reveals that he expected the expedition to reach Jerusalem all along.Urban and the Crusaders. In Translations and reprints from the original sources of European history. Dept. of History, University of Pennsylvania. Volume 1, No. 2. pp. 2–12. According to one version of the speech, the enthusiastic crowd responded with cries of Deus lo volt!—God wills it.
Peter the Hermit and the People's Crusade
The great French nobles and their trained armies of knights were not the first to undertake the journey towards Jerusalem.Murray, Alan V. (2006)."People's Crusades (1096)". In The Crusades – An Encyclopedia. pp. 939–941. Urban had planned the departure of the first crusade for 15 August 1096, the Feast of the Assumption, but months before this, a number of unexpected armies of peasants and petty nobles set off for Jerusalem on their own, led by a charismatic priest called Peter the Hermit.Louis René Bréhier (1911). "Peter the Hermit". In Catholic Encyclopedia. 11. New York: Robert Appleton Company. Peter was the most successful of the preachers of Urban's message, and developed an almost hysterical enthusiasm among his followers, although he was probably not an "official" preacher sanctioned by Urban at Clermont. It is commonly believed that Peter's followers consisted entirely of a massive group of untrained and illiterate peasants who did not even know where Jerusalem was, but there were also many knights among the peasants, including Walter Sans Avoir, who was lieutenant to Peter and led a separate army.
Lacking military discipline, Peter's fledgling army quickly found itself in trouble despite the fact that it was still in Christian territory. The army led by Walter plundered the Belgrade and Zemun areas and arrived at Constantinople having met little resistance. Meanwhile, the army led by Peter, which marched separately from Walter's army, also fought with the Hungarians and may have captured Belgrade. At Niš, the Byzantine governor tried to supply them, but Peter had little control over his followers and Byzantine troops were needed to quell their attacks. Peter arrived at Constantinople in August, where his army joined with the one led by Walter, which had already arrived, as well as separate bands of crusaders from France, Germany, and Italy. Another army of Bohemians and Saxons did not make it past Hungary before splitting up.
Peter's and Walter's unruly mob began to pillage outside the city in search of supplies and food, prompting Alexios to hurriedly ferry the gathering across the Bosporus one week later. After crossing into Asia Minor, the crusaders split up and began to pillage the countryside, wandering into Seljuk territory around Nicaea. The far more experienced Turks massacred most of this group. Some Italian and German crusaders were defeated at the Siege of Xerigordon at the end of September. Meanwhile, Walter and Peter's followers, for the most part untrained in battle but led by about 50 knights, fought the Turks at the Battle of Civetot in October 1096. The Turkish archers destroyed the crusader army, and Walter was among the dead. Peter, who was absent in Constantinople at the time, later joined the second wave of crusaders, along with the few survivors of Civetot.
At a local level, the preaching of the First Crusade ignited the Rhineland massacres perpetrated against Jews. At the end of 1095 and the beginning of 1096, months before the departure of the official crusade in August, there were attacks on Jewish communities in France and Germany. In May 1096, Emicho of Flonheim (sometimes incorrectly known as Emicho of Leiningen) attacked the Jews at Speyer and Worms. Other unofficial crusaders from Swabia, led by Hartmann of Dillingen, along with French, English, Lotharingian and Flemish volunteers, led by Drogo of Nesle and William the Carpenter, as well as many locals, joined Emicho in the destruction of the Jewish community of Mainz at the end of May. In Mainz, one Jewish woman killed her children rather than let the crusaders kill them. Chief rabbi Kalonymus Ben Meshullam committed suicide in anticipation of being killed. Emicho's company then went on to Cologne, and others continued on to Trier, Metz, and other cities. Peter the Hermit also may have been involved in violence against the Jews, and an army led by a priest named Folkmar attacked Jews further east in Bohemia.
Coloman of Hungary had to deal with the problems that the armies of the First Crusade caused during their march across his country towards the Holy Land in 1096. He crushed two crusader hordes that had been pillaging the kingdom. Emicho's army eventually continued into Hungary but was also defeated by Coloman, at which point, Emicho's followers dispersed. Some eventually joined the main armies, although Emicho himself went home. Many of the attackers seem to have wanted to force the Jews to convert, although they were also interested in acquiring money from them. Physical violence against Jews was never part of the church hierarchy's official policy for crusading, and the Christian bishops, especially the Archbishop of Cologne, did their best to protect the Jews. A decade before, the Bishop of Speyer had taken the step of providing the Jews of that city with a walled ghetto to protect them from Christian violence and given their chief rabbis the control of judicial matters in the quarter. Nevertheless, some also took money in return for their protection. The attacks may have originated in the belief that Jews and Muslims were equally enemies of Christ, and enemies were to be fought or converted to Christianity.
From Clermont to Constantinople
Recruitment
[Map: Origin of the known participants on the First Crusade]
Recruitment for such a large enterprise was continent-wide. Estimates suggest that 70,000 to 80,000 crusaders left Western Europe in the year after Clermont, and more joined over the three-year duration. Estimates for the number of knights range from 7,000 to 10,000, with 35,000 to 50,000 foot soldiers and, including non-combatants, a total of 60,000 to 100,000.Appendix II: The Numerical Strength of the Crusaders. In Runciman, Steven (1951), A History of the Crusades, Volume One. pp. 336–341. But Urban's speech had been well-planned. He had discussed the crusade with Adhemar of Le PuyBrundage, James A. "Adhemar of Puy: The Bishop and His Critics." Speculum, vol. 34, no. 2 [Medieval Academy of America, Cambridge University Press, University of Chicago Press] (1959). pp. 201–212. and Raymond IV, Count of Toulouse,Louis René Bréhier (1911). "Raymond IV, of Saint-Gilles". In Catholic Encyclopedia. 12. New York: Robert Appleton Company. and instantly the expedition had the support of two of southern France's most important leaders. Adhemar himself was present at the council and was the first to "take the cross". During the rest of 1095 and into 1096, Urban spread the message throughout France, and urged his bishops and legates to preach in their own dioceses elsewhere in France, Germany, and Italy as well. However, it is clear that the response to the speech was much greater than even the Pope, let alone Alexios, expected. On his tour of France, Urban tried to forbid certain people (including women, monks, and the sick) from joining the crusade, but found this nearly impossible. In the end, most who took up the call were not knights, but peasants who were not wealthy and had little in the way of fighting skills, in an outpouring of a new emotional and personal piety that was not easily harnessed by the ecclesiastical and lay aristocracy. Typically, preaching would conclude with every volunteer taking a vow to complete a pilgrimage to the Church of the Holy Sepulchre; they were also given a cross, usually sewn onto their clothes.
It is difficult to assess the motives of the thousands of participants for whom there is no historical record, or even those of important knights, whose stories were usually retold by monks or clerics. It is quite likely that personal piety was a major factor for many crusaders. Even with this popular enthusiasm, Urban ensured that there would be an army of knights, drawn from the French aristocracy. Aside from Adhemar and Raymond, other leaders he recruited throughout 1096 included Bohemond of Taranto,Ernest Barker (1911). "Bohemund". In Chisholm, Hugh (ed.) Encyclopædia Britannica. 4. (11th ed.). Cambridge University Press. pp. 135–136. a southern Italian ally of the reform popes; Bohemond's nephew Tancred;Chisholm, Hugh, ed. (1911). "Tancred (crusader)". Encyclopædia Britannica. 26. (11th ed.). Cambridge University Press. pp. 394–395. Godfrey of Bouillon,Louis René Bréhier (1909). "Godfrey of Bouillon". In Catholic Encyclopedia. 6. New York: Robert Appleton Company. who had previously been an anti-reform ally of the Holy Roman Emperor; his brother Baldwin of Boulogne;Ernest Barker (1911). "Baldwin I (king of Jerusalem)". In Chisholm, Hugh (ed.) Encyclopædia Britannica. 3. (11th ed.). Cambridge University Press. pp. 245–246. Hugh I, Count of Vermandois,Bull, Marcus, "The Capetian Monarchy and the Early Crusade Movement: Hugh of Vermandois and Louis VII," Nottingham Medieval Studies 40 (1996), 25–46. brother of the excommunicated Philip I of France; Robert Curthose,David, C. Wendell (1920). Robert Curthose. Cambridge: Harvard university press. brother of William II of England; and his relatives Stephen II, Count of Blois,Brundage, James A. "An Errant Crusader: Stephen of Blois." Traditio, Volume 16. Fordham University (1960). pp. 380–395. and Robert II, Count of Flanders.Knappen, Marshall M., "Robert II of Flanders in the First Crusade," in The Crusades and Other Historical Essays Presented to Dana C. Munro by His Former Students, ed. Louis J. Paetow (New York: Crofts, 1928), pp. 79–100. The crusaders represented northern and southern France, Flanders, Germany, and southern Italy, and so were divided into four separate armies that were not always cooperative, though they were held together by their common ultimate goal.
The crusade was led by some of the most powerful nobles of France, many of whom left everything behind, and it was often the case that entire families went on crusade at their own great expense. For example, Robert of Normandy loaned the Duchy of Normandy to his brother William II of England, and Godfrey sold or mortgaged his property to the church. Tancred was worried about the sinful nature of knightly warfare, and was excited to find a holy outlet for violence. Tancred and Bohemond, as well as Godfrey, Baldwin, and their older brother Eustace III, Count of Boulogne,Chisholm, Hugh, ed. (1911). "Eustace". Encyclopædia Britannica. 9. (11th ed.). Cambridge University Press. pp. 956–957. are examples of families who crusaded together. Much of the enthusiasm for the crusade was based on family relations, as most of the French crusaders were distant relatives. Nevertheless, in at least some cases, personal advancement played a role in the Crusaders' motives. For instance, Bohemond was motivated by the desire to carve himself out a territory in the east, and had previously campaigned against the Byzantines to try to achieve this. The crusade gave him a further opportunity, which he took after the Siege of Antioch, taking possession of the city and establishing the Principality of Antioch.
The road to Constantinople
[Map: Major routes taken during the First Crusade, showing the routes of Hugh I of Vermandois, Godfrey of Bouillon, Bohemond of Taranto, Raymond IV of Toulouse, Robert Curthose, and Baldwin of Boulogne, the major Christian and Muslim powers of the period, and the major battles in Asia Minor]
[Map: Route of the First Crusade through Asia]
The four main crusader armies left Europe around the appointed time in August 1096. They took different routes to Constantinople, some through Eastern Europe and the Balkans, some crossing the Adriatic Sea. They gathered outside the Roman-era Walls of Constantinople between November 1096 and April 1097. Hugh of Vermandois arrived first, followed by Godfrey, Raymond, and Bohemond.Duncalf, Frederic (1969). "The First Crusade: From Clermont to Constantinople ". In Setton, K. A History of the Crusades: Volume I. pp. 253–279.
Godfrey took the land route through the Balkans.Runciman, S. (1949). The First Crusaders' Journey across the Balkan Peninsula. Byzantion, 19, 207–221. Coloman of Hungary allowed Godfrey and his troops to cross Hungary only after his brother Baldwin was offered as a hostage to guarantee his troops' good conduct. Raymond of Toulouse led the Provençals inland and along the coast of Sclavonia, or Dalmatia, then part of the Kingdom of Croatia. There they encountered a hostile population (in anarchy after the death of the Croatian king Demetrius Zvonimir), passing through Constantine Bodin's kingdom of Duklja to Durrës, and then due east to Constantinople.Barker, Ernest (1911). "Raymund of Toulouse". In Chisholm, Hugh (ed.). Encyclopædia Britannica. 22. (11th ed.), Cambridge University Press. pp. 934–935. Bohemond and Tancred led their Normans by sea to Durrës, and thence by land to Constantinople.Barker, Ernest (1911). "Bohemund". In Chisholm, Hugh (ed.). Encyclopædia Britannica. 4. (11th ed.), Cambridge University Press. pp. 135–136.
The armies arrived in Constantinople with little food and expected provisions and help from Alexios. Alexios was understandably suspicious after his experiences with the People's Crusade, and also because the knights included his old Norman enemy, Bohemond, who had invaded Byzantine territory on numerous occasions with his father and may have even attempted to organize an attack on Constantinople while encamped outside the city. This time, Alexios was more prepared for the crusaders and there were fewer incidents of violence along the way.
The crusaders may have expected Alexios to become their leader, but he had no interest in joining them, and was mainly concerned with transporting them into Asia Minor as quickly as possible. In return for food and supplies, Alexios requested the leaders to swear fealty to him and promise to return to the Byzantine Empire any land recovered from the Turks. Godfrey was the first to take the oath, and almost all the other leaders followed him, although they did so only after warfare had almost broken out in the city between the citizens and the crusaders, who were eager to pillage for supplies. Raymond alone avoided swearing the oath, instead pledging that he would simply cause no harm to the empire. Before ensuring that the various armies were shuttled across the Bosporus, Alexios advised the leaders on how best to deal with the Seljuk armies that they would soon encounter.
Siege of Nicaea
The Crusader armies crossed over into Asia Minor during the first half of 1097, where they were joined by Peter the Hermit and the remainder of his relatively small army. In addition, Alexios also sent two of his generals, Manuel Boutoumites and Tatikios, to assist the crusaders. The first objective of their campaign was Nicaea, a city once under Byzantine rule, but which had become the capital of the Seljuk Sultanate of Rûm under Kilij Arslan.Savvides, Alexios G. C. (2006). "Qilij Arslān of Rûm (d. 1107)". In The Crusades – An Encyclopedia. p. 998. Arslan was away campaigning against the Danishmends in central Anatolia at the time, and had left behind his treasury and his family, underestimating the strength of these new crusaders.
Upon the Crusaders' arrival on 14 May 1097, the city was subjected to siege, and when Arslan had word of it he rushed back to Nicaea and attacked the crusader army on 16 May. He was driven back by the unexpectedly large crusader force, with heavy losses being suffered on both sides in the ensuing battle. The siege continued, but the crusaders had little success as they found they could not blockade Lake İznik, which the city was situated on, and from which it could be provisioned. To break the city, Alexios had the Crusaders' ships rolled over land on logs, and at the sight of them, the Turkish garrison finally surrendered on 18 June.
There was some discontent amongst the Franks, who were forbidden from looting the city. This was ameliorated by Alexios financially rewarding the crusaders. Later chronicles exaggerate tension between the Greeks and Franks, but Stephen of Blois, in a letter to his wife Adela of Blois, confirms that goodwill and cooperation continued at this point.The First Crusade. Letters of the Crusaders. By Dana Carleton Munro (1902). Philadelphia, Pa. pp. 2–11. The fall of Nicaea is viewed as a rare product of close cooperation between the Crusaders and the Byzantines.
Battle of Dorylaeum
At the end of June, the crusaders marched on through Anatolia. They were accompanied by some Byzantine troops under Tatikios, and still harboured the hope that Alexios would send a full Byzantine army after them. They also divided the army into two more-easily managed groups—one contingent led by the Normans, the other by the French. The two groups intended to meet again at Dorylaeum, but on 1 July the Normans, who had marched ahead of the French, were attacked by Kilij Arslan.France, John (2006). "Dorylaion, Battle of (1097)". In The Crusades – An Encyclopedia. pp. 363–364. Arslan had gathered a much larger army than he previously had after his defeat at Nicaea, and now surrounded the Normans with his fast-moving mounted archers. The Normans "deployed in a tight-knit defensive formation", surrounding all their equipment and the non-combatants who had followed them along the journey, and sent for help from the other group. When the French arrived, Godfrey broke through the Turkish lines and the legate Adhemar outflanked the Turks from the rear. The Turks, who had expected to destroy the Normans and did not anticipate the quick arrival of the French, fled rather than face the combined crusader army.
The crusaders' march through Anatolia was thereafter unopposed, but the journey was unpleasant, as Arslan had burned and destroyed everything he left behind in his army's flight. It was the middle of summer, and the crusaders had very little food and water; many men and horses died. Fellow Christians sometimes gave them gifts of food and money, but more often than not, the crusaders simply looted and pillaged whenever the opportunity presented itself. Individual leaders continued to dispute the overall leadership, although none of them were powerful enough to take command on their own, as Adhemar was always recognized as the spiritual leader.
The Armenian interlude
After passing through the Cilician Gates, Baldwin and Tancred broke away from the main body of the army and set off towards the Armenian lands. Baldwin desired to create a fiefdom for himself in the Holy Land,Asbridge, Thomas (2004). Baldwin's Cold-Blooded Ambition. In The First Crusade: A New History. pp. 149–152. and, in Armenia, he could count on the support of the locals, especially an adventurer named Bagrat. Baldwin and Tancred led two separate contingents, departing Heraclea on 15 September. Tancred arrived first at Tarsus where he persuaded the Seljuk garrison to raise his flag on the citadel. Baldwin reached Tarsus the next day and, in a reversal, the Turks allowed Baldwin to take possession of two towers. Heavily outnumbered, Tancred decided not to fight for the town. Shortly thereafter, a group of Norman knights arrived, but Baldwin denied entry to them. The Turks slaughtered the Normans during the night, and Baldwin's men blamed him for their fate and massacred the remaining Seljuk garrison. Baldwin took shelter in a tower and convinced his soldiers of his innocence. A pirate captain, Guynemer of Boulogne, sailed up the Berdan River to Tarsus and swore fealty to Baldwin, who hired Guynemer's men to garrison the city while he continued his campaign.
Tancred had meanwhile seized the town of Mamistra. Baldwin reached the town around 30 September. The Norman Richard of Salerno wanted to take revenge for Tarsus, causing a skirmish between the soldiers of Baldwin and Tancred. Baldwin left Mamistra and joined the main army at Marash, but Bagrat persuaded him to launch a campaign across a region densely populated by Armenians, and he left the main army on 17 October. The Armenians welcomed Baldwin, and the local population massacred the Seljuks, seizing the fortresses of Ravendel and Turbessel before the end of 1097. Baldwin made Bagrat the governor of Ravendel.
The Armenian lord Thoros of Edessa sent envoys to Baldwin in early 1098, seeking his assistance against the nearby Seljuks.Morris, Rosemary (2006). " T'oros of Edessa (d. 1098)". The Crusades – An Encyclopedia. pp. 1185–1186. Before departing for Edessa, Baldwin ordered the arrest of Bagrat, accused of collaboration with the Seljuks. Bagrat was tortured and forced to surrender Ravendel. Baldwin left for Edessa in early February, being harassed en route by the forces of Balduk, emir of Samosata. Reaching the city, he was well-received by both Thoros and the local Christian population. Remarkably, Thoros adopted Baldwin as a son, making him co-regent of Edessa. Strengthened by troops from Edessa, Baldwin raided Balduk's territory and placed a garrison in a small fortress near Samosata.Laurent, J. (1924). Des Grecs aux Croisés: Étude sur l'histoire d'Edesse entre 1071 et 1098. Byzantion, 1, 367–449.
Shortly after Baldwin's return from the campaign, a group of local nobles began plotting against Thoros, likely with Baldwin's consent. A riot broke out in the town, forcing Thoros to take refuge in the citadel. Baldwin pledged to save his adoptive father, but when the rioters broke into the citadel on 9 March and murdered both Thoros and his wife, he did nothing to stop them. On the following day, after the townspeople acknowledged Baldwin as their ruler, he assumed the title of Count of Edessa, and so established the first of the Crusader states.MacEvitt, Christopher (2006). "Edessa, County of". The Crusades – An Encyclopedia. pp. 379–385.
While the Byzantines had lost Edessa to the Seljuks in 1087, the emperor did not demand that Baldwin hand over the town. Moreover, the acquisition of Ravendel, Turbessel and Edessa strengthened the position of the main crusader army later at Antioch. The lands along the Euphrates secured a supply of food for the crusaders, and the fortresses hindered the movement of the Seljuk troops.
As his force was small, Baldwin had used diplomacy to secure his rule in Edessa. He married Arda of Armenia, who later became queen consort of the Kingdom of Jerusalem, and encouraged his retainers to marry local women. The city's rich treasury enabled him to employ mercenaries and to buy Samosata from Balduk. The resultant treaty for the transfer of Samosata was the first friendly arrangement between a crusader leader and a Muslim ruler; Balduk remained governor of the city.
An important figure in the kingdom in the 12th century was Belek Ghazi, grandson of the former Seljuk governor of Jerusalem, Artuk. Belek played only a small role in this story: as an Artuqid emir, he had hired Baldwin to suppress a revolt in Saruj.Taef El-Azhari (2006). "Balak (d. 1124)". The Crusades – An Encyclopedia. pp. 129–130. When the Muslim leaders of the town approached Balduk to come to their rescue, Balduk hurried to Saruj, but it soon became apparent that his forces could not withstand a siege, and the defenders yielded to Baldwin. Baldwin demanded Balduk's wife and children as hostages, and upon his refusal, Baldwin had him captured and executed. With Saruj, Baldwin had now consolidated the county and ensured his communications with the main body of Crusaders. Kerbogha, ever on guard to defeat the Crusaders, gathered a large army to eliminate Baldwin. During his march towards Antioch, Kerbogha besieged the walls of Edessa for three weeks in May, but could not capture the city. This delay played a crucial part in the Crusader victory at Antioch.
Siege of Antioch
The crusader army, without Baldwin and Tancred, had marched on to Antioch, situated midway between Constantinople and Jerusalem. Described in a letter by Stephen of Blois as "a city very extensive, fortified with incredible strength and almost impregnable", the idea of taking the city by assault was a discouraging one to the crusaders. Hoping rather to force a capitulation, or find a traitor inside the city—a tactic that had previously seen Antioch change to the control of the Byzantines and then the Seljuk Turks—the crusader army began a siege on 20 October 1097. Antioch was so large that the crusaders did not have enough troops to fully surround it, and as a result it was able to stay partially supplied.France, John (2006)."Sieges of Antioch (1097–1098)". In The Crusades – An Encyclopedia. pp. 79–81. The subsequent Siege of Antioch has been called the "most interesting siege in history."
By January, the attritional siege, which would last eight months in all, had led to hundreds, or possibly thousands, of crusaders dying of starvation. Adhemar believed this to have been caused by their sinful nature, and rituals of fasting, prayer, alms-giving and procession were undertaken. Women were expelled from the camp. Many deserted, including Stephen of Blois. Foraging systems eased the situation, as did supplies from Cilicia and Edessa, through the recently captured ports of Latakia and St Symeon. In March a small English fleet arrived with supplies. The Franks benefited from disunity in the Muslim world, and possibly from the Muslims' mistaken belief that the crusaders were merely Byzantine mercenaries. The Seljuk brothers, Duqaq of Syria and Ridwan of Aleppo, dispatched separate relief armies in December and February that, had they been combined, would probably have been victorious.
After these failures, KerboghaTaefl El-Azhari (2006). "Karbughā (d. 1102)". In The Crusades – An Encyclopedia. pp. 704–705. raised a coalition from southern Syria, northern Iraq and Anatolia with the ambition of extending his power from Syria to the Mediterranean. His coalition first spent three weeks attempting to recapture Saruj, a decisive delay.
Bohemond persuaded the other leaders that, if Antioch fell, he would keep it for himself and that an Armenian commander of a section of the city's walls had agreed to allow the crusaders to enter.
Stephen of Blois had deserted, and his message to Alexios that the cause was lost persuaded the Emperor to halt his advance through Anatolia at Philomelium before returning to Constantinople. (Alexios' failure to reach the siege would be used by Bohemond to rationalise his refusal to return the city to the Empire as promised.)
The Armenian Firouz helped Bohemond and a small party enter the city on 2 June and open a gate, at which point horns were sounded, the city's Christian majority opened the other gates and the crusaders entered. In the sack, they killed most of the Muslim inhabitants and many Christian Greeks, Syrians and Armenians in the confusion.
On 4 June the vanguard of Kerbogha's 40,000-strong army arrived, surrounding the Franks. From 10 June, for four days, waves of Kerbogha's men assailed the city walls from dawn until dusk. Bohemond and Adhemar barred the city gates to prevent mass desertions and managed to hold out. Kerbogha then changed tactics to try to starve the crusaders out. Morale inside the city was low and defeat looked imminent, but a peasant visionary called Peter Bartholomew claimed the apostle St. Andrew had come to him to show the location of the Holy Lance that had pierced Christ on the cross. This supposedly encouraged the crusaders, although the accounts can mislead, since the discovery came two weeks before the final battle for the city. On 24 June the Franks sought terms for surrender, which were refused. On 28 June 1098 at dawn, the Franks marched out of the city in four battle groups to engage the enemy. Kerbogha allowed them to deploy, with the aim of destroying them in the open. However, the discipline of the Muslim army did not hold and a disorderly attack was launched. Unable to overrun a bedraggled force they outnumbered two-to-one, the Muslims attacking the Bridge Gate fled through the advancing main body of the Muslim army. With very few casualties, the Muslim army broke and fled the battle.
Stephen of Blois was in Alexandretta when he learned of the situation in Antioch. It seemed like their situation was hopeless, so he left the Middle East, warning Alexios and his army on his way back to France. Because of what looked like a massive betrayal, the leaders at Antioch, most notably Bohemond, argued that Alexios had deserted the Crusade and thus invalidated all of their oaths to him. While Bohemond asserted his claim to Antioch, not everyone agreed (most notably Raymond of Toulouse), so the crusade was delayed for the rest of the year while the nobles argued amongst themselves. A common historiographical viewpoint advanced by some scholars is that the Franks of northern France, the Provençals of southern France,At that time the terms "Provençal" or "Provence" were not limited to the present-day region of Provence, but encompassed the regions of southern France that spoke the Occitan language. They were then equivalent to the terms "Occitan" or "Occitania" which appeared later. and the Normans of southern Italy considered themselves separate nations, creating turmoil as each tried to increase its individual status. Others argue that while this may have had something to do with the disputes, personal ambition among the Crusader leaders might be just as easily blamed.
Meanwhile, a plague broke out, killing many in the army, including the legate Adhemar, who died on 1 August. There were now even fewer horses than before, and worse, the Muslim peasants in the area refused to supply the crusaders with food. Thus, in December, following the Siege of Ma'arrat al-Numan, some historians describe the first occurrence of cannibalism among the crusaders, even though this account does not appear in any contemporary Muslim chronicle. At the same time, the minor knights and soldiers had become increasingly restless and threatened to continue to Jerusalem without their squabbling leaders. Finally, at the beginning of 1099, the march restarted, leaving Bohemond behind as the first Prince of Antioch.Fink, Harold S. (1969). "Chapter XII. The Foundations of the Latin States, 1099–1118 ." In Setton, Kenneth M.; Baldwin, Marshall W. (eds.). A History of the Crusades: I. The First Hundred Years. Madison: The University of Wisconsin Press. p. 372.
From Antioch to Jerusalem
Proceeding down the Mediterranean coast, the crusaders encountered little resistance, as local rulers preferred to make peace with them and furnish them with supplies rather than fight. Their forces were evolving, with Robert Curthose and Tancred agreeing to become vassals of Raymond IV of Toulouse, who was wealthy enough to compensate them for their service. Godfrey of Bouillon, now supported by his brother's territories in Edessa, refused to do the same. In January, Raymond dismantled the walls of Ma'arrat al-Numan, and he began the march south to Jerusalem, barefoot and dressed as a pilgrim, followed by Robert and Tancred and their respective armies.Runciman, Steven (1969). "Chapter X. The First Crusade: Antioch to Ascalon. " In Setton, Kenneth M.; Baldwin, Marshall W. (eds.). A History of the Crusades: I. The First Hundred Years. Madison: The University of Wisconsin Press. pp. 328–333.
Raymond planned to take Tripoli to set up a state equivalent to Antioch, but first initiated a siege of Arqa, a city in northern Lebanon, on 14 February 1099. Meanwhile, Godfrey, along with Robert II of Flanders, who had also refused vassalage to Raymond, joined with the remaining Crusaders at Latakia and marched south in February. Bohemond had originally marched out with them but quickly returned to Antioch in order to consolidate his rule against the advancing Byzantines. Tancred left Raymond's service and joined with Godfrey. A separate force linked to Godfrey's was led by Gaston IV of Béarn.
Godfrey, Robert, Tancred, and Gaston arrived at Arqa in March, but the siege continued. Pons of Balazun died, struck by a stone missile. The situation was tense not only among the military leaders, but also among the clergy. Since Adhemar's death there had been no real leader of the crusade, and ever since the discovery of the Holy Lance, there had been accusations of fraud among the clerical factions. On 8 April, Arnulf of Chocques challenged Peter Bartholomew to an ordeal by fire. Peter underwent the ordeal and died after days of agony from his wounds, which discredited the Holy Lance as a fake. This also undermined Raymond's authority over the Crusade, as he was the main proponent of its authenticity.Whalen, Brett Edward (2006). "Holy Lance". In The Crusades – An Encyclopedia. pp. 588–589.
The siege of Arqa lasted until 13 May, when the Crusaders left, having captured nothing. The Fatimids had recaptured Jerusalem from the Seljuks the year before and attempted to make a deal with the Crusaders, promising freedom of passage to any pilgrims to the Holy Land on the condition that the Crusaders not advance into their domains, but this was rejected. The Fatimid Iftikhar al-Dawla was governor of Jerusalem and well aware of the Crusaders' intentions. Therefore, he expelled all of Jerusalem's Christian inhabitants. He also poisoned most of the wells in the area. On 13 May, the Crusaders came to Tripoli, where the emir Jalal al-Mulk Abu'l Hasan provided the Crusader army with horses and vowed to convert to Christianity if the Crusaders defeated the Fatimids. Continuing south along the coast, the Crusaders passed Beirut on 19 May and Tyre on 23 May. Turning inland at Jaffa, on 3 June they reached Ramla, which had been abandoned by its inhabitants. The bishopric of Ramla-Lydda was established there at the Church of St. George before they continued to Jerusalem. On 6 June, Godfrey sent Tancred and Gaston to capture Bethlehem, where Tancred flew his banner over the Church of the Nativity. On 7 June, the Crusaders reached Jerusalem. Many Crusaders wept upon seeing the city they had journeyed so long to reach.
Siege of Jerusalem
The Crusaders' arrival at Jerusalem revealed an arid countryside, lacking in water or food supplies. Here there was no prospect of relief, even as they feared an imminent attack by the local Fatimid rulers. There was no hope of trying to blockade the city as they had at Antioch; the crusaders had insufficient troops, supplies, and time. Rather, they resolved to take the city by assault.France, John (2006). "Jerusalem, Siege of (1099)". The Crusades – An Encyclopedia. pp. 677–679. They might have been left with little choice, as it is estimated that, by the time the Crusader army reached Jerusalem, only about 12,000 men, including 1,500 cavalry, remained. Thus began the decisive Siege of Jerusalem. These contingents, composed of men with differing origins and varying allegiances, were also approaching another low ebb in their camaraderie. While Godfrey and Tancred made camp to the north of the city, Raymond made his to the south. In addition, the Provençal contingent did not take part in the initial assault on 13 June 1099. This first assault was perhaps more speculative than determined, and after scaling the outer wall the Crusaders were repulsed from the inner one.
After the failure of the initial assault, a meeting between the various leaders was organized in which it was agreed that a more concerted attack would be required in the future. On 17 June, a party of Genoese mariners under Guglielmo Embriaco arrived at Jaffa, and provided the Crusaders with skilled engineers and, perhaps more critically, supplies of timber (stripped from the ships) to build siege engines. The Crusaders' morale was raised when the priest Peter Desiderius claimed to have had a divine vision of Adhemar of Le Puy, instructing them to fast and then march in a barefoot procession around the city walls, after which the city would fall, following the Biblical story of the battle of Jericho. After a three-day fast, on 8 July the Crusaders performed the procession as they had been instructed by Desiderius, ending on the Mount of Olives where Peter the Hermit preached to them, and shortly afterwards the various bickering factions arrived at a public rapprochement. News arrived shortly after that a Fatimid relief army had set off from Egypt, giving the Crusaders a very strong incentive to make another assault on the city.
The final assault on Jerusalem began on 13 July. Raymond's troops attacked the south gate while the other contingents attacked the northern wall. Initially, the Provençals at the southern gate made little headway, but the contingents at the northern wall fared better, with a slow but steady attrition of the defence. On 15 July, a final push was launched at both ends of the city, and eventually, the inner rampart of the northern wall was captured. In the ensuing panic, the defenders abandoned the walls of the city at both ends, allowing the Crusaders to finally enter.
Massacre
The massacre that followed the capture of Jerusalem has attained particular notoriety, as a "juxtaposition of extreme violence and anguished faith". The eyewitness accounts from the crusaders themselves leave little doubt that there was great slaughter in the aftermath of the siege. Nevertheless, some historians propose that the scale of the massacre has been exaggerated in later medieval sources.Kedar, Benjamin Z. (2004). The Jerusalem Massacre of July 1099. In Crusades: Volume 3. pp. 15–76.
After the successful assault on the northern wall, the defenders fled to the Temple Mount, pursued by Tancred and his men. Arriving before the defenders could secure the area, Tancred's men assaulted the precinct, butchering many of the defenders, with the remainder taking refuge in the Al-Aqsa Mosque. Tancred then called a halt to the slaughter, offering those in the mosque his protection. When the defenders on the southern wall heard of the fall of the northern wall, they fled to the citadel, allowing Raymond and the Provençals to enter the city. Iftikhar al-Dawla, the commander of the garrison, struck a deal with Raymond, surrendering the citadel in return for being granted safe passage to Ascalon.
The slaughter continued for the rest of the day; Muslims were indiscriminately killed, and Jews who had taken refuge in their synagogue died when it was burnt down by the Crusaders. The following day, Tancred's prisoners in the mosque were slaughtered. Nevertheless, it is clear that some Muslims and Jews of the city survived the massacre, either escaping or being taken prisoner to be ransomed. The Letter of the Karaite elders of Ascalon provides details of Ascalon Jews making great efforts to ransom such Jewish captives and send them to safety in Alexandria. The Eastern Christian population of the city had been expelled before the siege by the governor, and thus escaped the massacre.
Establishment of the Kingdom of Jerusalem
On 22 July, a council was held in the Church of the Holy Sepulchre to establish governance for Jerusalem. A body of opinion maintained that the city should become a religious lordship, but the death of the Greek Patriarch meant there was no obvious ecclesiastical candidate. Although Raymond of Toulouse could claim to be the pre-eminent crusade leader from 1098, his support had waned since his failed attempts to besiege Arqa and create his own realm. This may have been why he piously refused the crown on the grounds that it could only be worn by Christ. It may also have been an attempt to persuade others to reject the title, but Godfrey was already familiar with such a position. Probably more persuasive was the presence of the large army from Lorraine, led by him and his brothers, Eustace and Baldwin, vassals of the Ardennes–Bouillon dynasty. Godfrey was then elected leader, accepting the title Advocatus Sancti Sepulchri or Defender of the Holy Sepulchre. Raymond, incensed at this development, attempted to seize the Tower of David before leaving the city.
Urban II died on 29 July 1099, fourteen days after the capture of Jerusalem by the Crusaders, but before news of the event had reached Rome. He was succeeded by Pope Paschal II, who would serve almost 20 years.
While the Kingdom of Jerusalem would endure until 1291, the city of Jerusalem itself was lost to the Muslims under Saladin in 1187 as a result of the decisive Battle of Hattin. The city then remained under Muslim rule for 40 years before returning to Christian control following a series of later Crusades.
Battle of Ascalon
In August 1099, Fatimid vizier al-Afdal Shahanshah landed a force of 20,000 North Africans at Ascalon.Mulinder, Alec (2006). "Ascalon, Battle of (1099)". In The Crusades – An Encyclopedia. p. 113. Godfrey and Raymond marched out to meet this force on 9 August at the Battle of Ascalon with a force of only 1,200 knights and 9,000 foot soldiers. Outnumbered two to one, the Franks launched a surprise dawn attack and routed the overconfident and unprepared Muslim force. The opportunity was wasted, though, as squabbling between Raymond and Godfrey thwarted an attempt by the city's garrison to surrender to Raymond, whom they trusted more. The crusaders had won a decisive victory, but the city remained in Muslim hands and a military threat to the nascent kingdom.
Aftermath and legacy
The majority of crusaders now considered their pilgrimage complete and returned home. Only 300 knights and 2,000 infantry remained to defend Palestine. It was the support of the knights from Lorraine that enabled Godfrey to take leadership of Jerusalem, over the claims of Raymond. When he died a year later these same Lorrainers thwarted the papal legate Dagobert of Pisa and his plans to make Jerusalem a theocracy and instead made Baldwin the first Latin king of Jerusalem. Bohemond returned to Europe to fight the Byzantines from Italy but he was defeated in 1108 at Dyrrhachium. After Raymond's death, his heirs captured Tripoli in 1109 with Genoese support. Relations between the newly created Crusader states of the County of Edessa and Principality of Antioch were variable. They fought together in the crusader defeat at the Battle of Harran in 1104, but the Antiocheans claimed suzerainty and blocked the return of Baldwin II of Jerusalem after his capture at the battle. The Franks became fully engaged in Near East politics with the result that Muslims and Christians often fought each other. Antioch's territorial expansion ended in 1119 with a major defeat to the Turks at the Battle of Ager Sanguinis, the Field of Blood.
[Map: the routes taken by Christian armies during the Crusade of 1101]
There were many who had gone home before reaching Jerusalem, and many who had never left Europe at all. When the success of the Crusade became known, these people were mocked and scorned by their families and threatened with excommunication by the pope. Back at home in Western Europe, those who had survived to reach Jerusalem were treated as heroes. Robert II of Flanders was nicknamed Hierosolymitanus thanks to his exploits. Among the participants in the later Crusade of 1101 were Stephen of Blois and Hugh of Vermandois, both of whom had returned home before reaching Jerusalem. This crusader force was almost annihilated in Asia Minor by the Seljuks, but the survivors helped to reinforce the kingdom upon their arrival in Jerusalem.
There is limited written evidence of the Islamic reaction dating from before 1160, but what there is indicates the crusade was barely noticed. This may be the result of a cultural misunderstanding in that the Turks and Arabs did not recognise the crusaders as religiously motivated warriors seeking conquest and settlement, assuming that the crusaders were just the latest in a long line of Byzantine mercenaries. Also, the Islamic world remained divided among rival rulers in Cairo, Damascus, Aleppo, and Baghdad. There was no pan-Islamic counter-attack, giving the crusaders the opportunity to consolidate. In the 1110s the Seljuk sultan Muhammad I Tapar did order a series of counterattacks, led by commanders such as Mawdud, Aqsunqur al-Bursuqi, Bursuq II, and Ilghazi, but all of them failed. Only under Imad al-Din Zengi did the Muslims mount sustained and successful campaigns against the Crusader states, gradually stripping them of towns and territory until the states were extinguished.
Historiography
Latin Christendom was amazed by the success of the First Crusade, for which the only credible explanation was divine providence. If the crusade had failed, it is likely that the paradigm of crusading would have been abandoned. Instead, this form of religious warfare remained popular for centuries, and the crusade itself became one of the most written-about historical events of the medieval period.Bréhier, Louis René (1908). "Crusades (Sources and Bibliography)". In Herbermann, Charles (ed.). Catholic Encyclopedia. 4. New York: Robert Appleton Company. The historiography (the history of the histories) of the First Crusade and of the Crusades in general shows works that reflect the views of the authors and the times in which they lived. Critical analyses of these works can be found in studies by Jonathan Riley-Smith and Christopher Tyerman.
Original sources
The 19th-century French work Recueil des historiens des croisades (RHC) documents the original narrative sources of the First Crusade from Latin, Arabic, Greek, Armenian and Syriac authors. The documents are presented in their original language with French translations. The work is built on the 17th-century work Gesta Dei per Francos, compiled by Jacques Bongars.Chisholm, Hugh, ed. (1911). Jacques Bongars. Encyclopædia Britannica. 4 (11th ed.). Cambridge University Press. pg. 204. Several Hebrew sources on the First Crusade also exist. A complete bibliography can be found in The Routledge Companion to the Crusades. See also Crusade Texts in Translation and Selected Sources: The Crusades, in Fordham University's Internet Medieval Sourcebook.
The Latin narrative sources for the First Crusade are: (1) the anonymous Gesta Francorum; (2) Peter Tudebode's Historia de Hierosolymitano itinere; (3) the Monte Cassino chronicle Historia belli sacri; (4) Historia Francorum qui ceperunt Iherusalem by Raymond of Aguilers; (5) Gesta Francorum Iherusalem Peregrinantium by Fulcher of Chartres; (6) Albert of Aachen's Historia Hierosolymitanae expeditionis; (7) Ekkehard of Aura's Hierosolymita; (8) Robert the Monk's Historia Hierosolymitana; (9) Baldric of Dol's Historiae Hierosolymitanae libri IV; (10) Radulph of Caen's Gesta Tancredi in expeditione Hierosolymitana; and (11) Dei gesta per Francos by Guibert of Nogent. These include multiple first-hand accounts of the Council of Clermont and the crusade itself.Edgington, Susan, and Murray, Alan V. (2006). "Western Sources". In The Crusades – An Encyclopedia. pp. 1269–1276. American historian August Krey created the narrative The First Crusade: The Accounts of Eyewitnesses and Participants,Krey, August Charles. (1921). The First Crusade. Princeton: Princeton university press. compiled verbatim from the various chronicles and letters, which offers considerable insight into the endeavour.
Important related works include the Greek perspective offered in the Alexiad by Byzantine princess Anna Komnene, daughter of the emperor. The view of the Crusades from the Islamic perspective is found in two major sources. The first, The Chronicle of Damascus, is by Arab historian Ibn al-Qalanisi. The second is The Complete History by the Arab (or Kurdish) historian Ali ibn al-Athir. Minor but important works from the Armenian and Syriac are Matthew of Edessa's Chronicle and the Chronicle of Michael the Syrian. The three Hebrew chronicles include the Solomon bar Simson Chronicle discussing the Rhineland massacres.Angeliki E. Laiou and Roy Parviz Mottahedeh (2001) The Crusades from the Perspective of Byzantium and the Muslim World Dumbarton Oaks. A complete description of sources of the First Crusade is found in Claude Cahen's La Syrie du nord à l'époque des croisades et la principauté franque d'Antioche.
The anonymous authors of the Gesta, Fulcher of Chartres and Raymond of Aguilers were all participants in the Crusade, accompanied different contingents, and their works are regarded as foundational. Fulcher and Raymond both utilized Gesta to some extent, as did Peter Tudebode and the Historia Belli Sacri, with some variations. The Gesta was reworked (some with other eyewitness accounts) by Guibert of Nogent, Baldric of Dol, and Robert the Monk, whose work was the most widely read. Albert's account appears to be written independently of the Gesta, relying on other eyewitness reports. Derivative accounts of the Crusade include Bartolf of Nangis' Gesta Francorum Iherusalem expugnatium, Henry of Huntingdon's De Captione Antiochiae,Luard, Henry (1891). "Henry of Huntingdon". In Lee, Sidney (ed.). Dictionary of National Biography. 26. London: Smith, Elder & Co. p. 118. Sigebert of Gembloux's Chronicon sive Chronographia, and Benedetto Accolti's De Bello a Christianis contra Barbaros.Chisholm, Hugh, ed. (1911). "Accolti, Benedetto". Encyclopædia Britannica. 1 (11th ed.). Cambridge University Press. p. 121.
A 19th-century perspective of these works can be found in Heinrich von Sybel's History and Literature of the Crusades.Sybel, H. von (1861). Literature of the Crusades. In The history and literature of the crusades. London. pp. 99–272. Von Sybel also discusses some of the more important letters and correspondence from the First Crusade that provide some historical insight.Barber, Malcolm, and Bate, Keith, Letters from the East: Crusaders, Pilgrims and Settlers in the 12th–13th Centuries, Routledge, New York, 2016 See also the works Die Kreuzzugsbriefe aus den Jahren, 1088–1100,Hagenmeyer, H. (1901). Epistvlæ et chartæ ad historiam primi belli sacri spectantes qvæ svpersvnt ævo æqvales ac genvinæ: Die kreuzzugsbriefe aus den jahren 1088–1100. Innsbruck. by Heinrich Hagenmeyer and Letters of the Crusaders,Munro, D. Carleton. (1902). Letters of the crusaders. rev. ed Philadelphia, Pa.: The Dept. of history of the University of Pennsylvania. by Dana Carleton Munro. Hagenmeyer also prepared the Chronologie de la première croisade 1094–1100, a day-by-day account of the First Crusade, cross-referenced to original sources, with commentary.
Later works through the 18th century
The popularity of these works shaped how crusading was viewed in the medieval mind. Numerous poems and songs sprang from the First Crusade, including Gilo of Toucy's Historia de via Hierosolymitana.Derecki, Pawel, "Gilo of Toucy", in: Encyclopedia of the Medieval Chronicle, Edited by: Graeme Dunphy, Cristian Bratu. The well-known chanson de geste, Chanson d'Antioche, describes the First Crusade from the original preaching through the taking of Antioch in 1098 and into 1099. Based on Robert's work, it has also been a valuable resource for cataloguing participants in the early Crusades. A later poem, Torquato Tasso's 16th-century Gerusalemme liberata, was based on Accolti's work and remained popular for nearly two centuries.Symonds, John Addington (1911). "Torquato Tasso" . In Chisholm, Hugh (ed.). Encyclopædia Britannica. 26 (11th ed.). Cambridge University Press. pp. 443–446. Tasso's work was rendered into English by Edward Fairfax as Godfrey of Bulloigne, or, The recoverie of Jerusalem.Tasso, T., Fairfax, E. (1600). Godfrey of Bulloigne, or, The recoverie of Jerusalem. London: A. Hatfield for J. Jaggard and M. Lownes.
Later histories include English chronicler Orderic Vitalis' Historia Ecclesiastica.Kingsford, Charles Lethbridge (1900). "Ordericus Vitalis". In Lee, Sidney (ed.). Dictionary of National Biography. 42. London: Smith, Elder & Co. pp. 241–242. The work was a general social history of medieval England that includes a section on the First Crusade based on Baldric's account, with added details from oral sources and biographical details. The Gesta and the more detailed account of Albert of Aachen were used as the basis of the work of William of Tyre, Historia rerum in partibus transmarinis gestarum and its extensions.Chisholm, Hugh, ed. (1911). "William, archbishop of Tyre". Encyclopædia Britannica. 28. (11th ed.). Cambridge University Press. p. 677. The archbishop of Tyre's work was a major primary source for the history of the First Crusade and is regarded as its first analytical history. Later histories, through the 17th century, relied heavily on his writings. These histories used primary source materials, but they used them selectively to talk of Holy War (bellum sacrum), and their emphasis was upon prominent individuals and upon battles and the intrigues of high politics.
Others included in Jacques Bongars' work are Historia Hierosolymitana written by theologian and historian Jacques de Vitry, a participant in a later crusade; Historia by Byzantine emperor John VI Kantakouzenos, an account of Godfrey of Bouillon's arrival in Constantinople in 1096; and Liber Secretorum Fidelium Crucis by Venetian statesman and geographer Marino Sanuto, whose work on geography was invaluable to later historians. A biography of Godfrey of Bouillon, Historia et Gesta Ducis Gotfridi seu historia de desidione Terræ sanctæ, was written by anonymous German authors in 1141, relying on the original narratives and later histories, and appears in the RHC.
The first use of the term crusades was by 17th century French Jesuit and historian Louis MaimbourgWeber, Nicholas Aloysious (1910). "Louis Maimbourg". In Catholic Encyclopedia. 9. New York. in his Histoire des Croisades pour la délivrance de la Terre Sainte,Maimbourg, L. (1677). Histoire des croisades pour la délivrance de la Terre Sainte. 2d ed. Paris. a populist and royalist history of the Crusades from 1195 to 1220. An earlier work by Thomas Fuller,Stephen, Leslie (1889). "Thomas Fuller". In Dictionary of National Biography. 20. London. pp. 315–320. The Historie of the Holy Warre refers to the entire enterprise as the Holy War, with individual campaigns called voyages. Fuller's account was more anecdotal than historical, and was very popular until the Restoration. The work used original sources from Gesta Dei per Francos.Fuller, T. (1840). The history of the holy war. London: W. Pickering.
Notable works of the 18th century include Histoire des Croisades,Voltaire (1751). Histoire des croisades. Berlin. a history of the Crusades from the rise of the Seljuks until 1195 by French philosopher Voltaire. Scottish philosopher and historian David Hume did not write directly of the First Crusade, but his The History of EnglandDavid Hume (1983). The History of England from the Invasion of Julius Caesar to the Revolution in 1688. Indianapolis. described the Crusades as the "nadir of Western civilization." This view was continued by Edward Gibbon in his History of the Decline and Fall of the Roman Empire, excerpted as The Crusades, A.D. 1095–1261. This edition also includes an essay on chivalry by Sir Walter Scott, whose works helped popularize the Crusades.Gibbon, E., Kaye, J., Scott, W., Caoursin, G. (1870). The crusades. London.
The 19th and 20th centuries
Early in the 19th century, the monumental Histoire des CroisadesMichaud, J. Fr. (Joseph Fr.). (1841). Histoire des croisades. 6. éd. Paris. was published by the French historian Joseph François MichaudChisholm, Hugh, ed. (1911). "Michaud, Joseph François". Encyclopædia Britannica. 18 (11th ed.). Cambridge University Press. p. 361. under the editorship of Jean Poujoulat. This provided a major new narrative based on original sources and was translated into English as The History of the Crusades.Michaud, J. Fr., Robson, W. (1881). The history of the crusades. New ed. London. The work covers the First Crusade and its causes, and the crusades through 1481. French historian Jean-François-Aimé Peyré expanded Michaud's work on the First Crusade with his Histoire de la Première Croisade, a 900-page, two-volume set with extensive sourcing.Peyré, J. F. A. (1859). Histoire de la première croisade. Paris.
The English school of Crusader historians included Charles MillsGoodwin, Gordon (1894). "Mills, Charles" . In Dictionary of National Biography. 37. London. p. 444. who wrote History of the Crusades for the Recovery and Possession of the Holy Land,Mills, C. (1822). The history of the crusades for the recovery and possession of the Holy Land. 3d ed. London. a complete history of nine Crusades, disparaging Gibbon's work as superficial. Henry StebbingCourtney, William Prideaux (1898). "Stebbing, Henry (1799–1883)" . In Dictionary of National Biography. 54. London. pp. 124–125. wrote his History of Chivalry and the Crusades,Stebbing, H. (1830). The history of chivalry and the crusades. Edinburgh. a discussion of chivalry and a history of the first seven Crusades. Thomas Archer and Charles Kingsford wrote The Crusades: The Story of the Latin Kingdom of Jerusalem, rejecting the idea that the Fourth Crusade and the Albigensian Crusade should be designated as crusades.Archer, T. Andrew; Kingsford, C. Lethbridge. (1904). The crusades: The story of the Latin kingdom of Jerusalem. New York.
The German school of Crusade historians was led by Friedrich Wilken,Stoll, A. (1898). "Friedrich Wilken". In Allgemeine Deutsche Biographie (ADB). 43. Berlin. whose Geschichte der KreuzzügeWilken, F. (1807–1832). Geschichte der Kreuzzüge nach morgenländischen und abendländischen Berichten. Leipzig. was a complete history of the Crusades, based on Western, Arabic, Greek and Armenian sources. Later, Heinrich von Sybel,Chisholm, Hugh, ed. (1911). "Sybel, Heinrich von" . Encyclopædia Britannica. 26 (11th ed.). Cambridge University Press. pp. 275–276. who studied under Leopold von Ranke (the father of modern source-based history), challenged the work of William of Tyre as being secondary. His Geschichte des ersten KreuzzugesSybel, H. von. (1841). Geschichte des ersten Kreuzzugs. Düsseldorf. was a history of the First Crusade containing a full study of its authorities, and was translated into English as History and Literature of the Crusades by English author Lucie, Lady Duff-Gordon.George Clement Boase (1890). "Gordon, Lucie". In Dictionary of National Biography. 22. London. p. 220.
The greatest German historian of the Crusades, however, was Reinhold Röhricht. His histories of the First Crusade, Geschichte des ersten Kreuzzuges,Röhricht, R. (1901). Geschichte des ersten Kreuzzuges. Innsbruck. and of the kings of Jerusalem, Geschichte des Königreichs Jerusalem,Röhricht, R. (1898). Geschichte des königreichs Jerusalem (1100–1291). Innsbruck. laid the foundation of all modern crusade research.La Monte, J. (1940). Some Problems in Crusading Historiography. Speculum, 15(1), 57–75. His Bibliotheca geographica PalaestinaeRöhricht, R. (1890). Bibliotheca geographica Palaestinae. Berlin: H. Reuther. summarizes over 3500 books on the geography of the Holy Land, providing a valuable resource for historians. Röhricht's colleague Heinrich Hagenmeyer wrote Peter der Eremite,Hagenmeyer, H. (1879). Peter der Eremite. Leipzig. a critical contribution to the history of the First Crusade and the role of Peter the Hermit.
Two encyclopedia articles appeared in the early 20th century that are frequently cited by Crusade historians. The first of these is Crusades,Bréhier, Louis René. (1908). "Crusades". In Herbermann, Charles (ed.). Catholic Encyclopedia. 4. New York: Robert Appleton Company. by French historian Louis R. Bréhier, appearing in the Catholic Encyclopedia, based on his L'Église et l'Orient au Moyen Âge: Les Croisades.Bréhier, L. (1907). L'église et l'Orient au moyen âge: les croisades. Paris: Lecoffre, J. Gabalda. The second is The Crusades,Barker, Ernest (1911). "Crusades". In Chisholm, Hugh (ed.). Encyclopædia Britannica. 7 (11th ed.), Cambridge University Press. pp. 524–552. by English historian Ernest Barker, in the Encyclopædia Britannica (11th edition). Collectively, Bréhier and Barker wrote more than 50 articles for these two publications.Louis René Bréhier (1868–1951) (1913). In Herbermann, Charles (ed.). Catholic Encyclopedia. 4. New York: Robert Appleton Company.Ernest Barker (1874–1960) (1911). In Chisholm, Hugh (ed.). Encyclopædia Britannica. Index (11th ed.), Cambridge University Press. Barker's work was later revised as The Crusades and Bréhier published Histoire anonyme de la première croisade.Bréhier, L. (1924). Histoire anonyme de la première croisade. Paris: H. Champion. According to the Routledge Companion, these articles are evidence that "not all old things are useless."
According to the Routledge Companion, the three works that rank as monumental by 20th-century standards are: René Grousset's Histoire des croisades et du royaume franc de Jérusalem; Steven Runciman's 3-volume set of A History of the Crusades; and the Wisconsin Collaborative History of the Crusades (Wisconsin History). Grousset's volume on the First Crusade was L'anarchie musulmane, 1095–1130,Grousset, R. (1934–1936). Histoire des croisades et du royaume franc de Jérusalem. Paris: Plon. a standard reference in the mid-twentieth century. The latter two still enjoy widespread use today. Runciman's first volume, The First Crusade and the Foundation of the Kingdom of Jerusalem, has been criticized for being out-of-date and biased, but remains one of the most widely read accounts of the crusade. The first volume of the Wisconsin History, Volume 1: The First One Hundred Years, first appeared in 1969 and was edited by Marshall W. Baldwin. The chapters on the First Crusade were written by Runciman and Frederic Duncalf and again are dated, but still well-used references. Additional background chapters on related events of the 11th century are: Western Europe, by Sidney Painter; the Byzantine Empire, by Peter Charanis; the Islamic world, by H. A. R. Gibb; the Seljuk invasion, by Claude Cahen; and the Assassins, by Bernard Lewis.
Bibliographies of works on the First Crusade through the 20th century include ones by French medievalist and Byzantinist Ferdinand Chalandon in his Histoire de la Première Croisade jusqu'à l'élection de Godefroi de Bouillon and the Select Bibliography on the Crusades, compiled by Hans E. Mayer and Joyce McLellan.
Modern histories of the First Crusade
Since the 1970s, the Crusades have attracted hundreds of scholars to their study, many of whom are identified in the online database Historians of the Crusades, part of the Resources for Studying the Crusades created at Queen Mary University of London in 2007–2008. Some of the more notable historians of the First Crusade include Jonathan Riley-Smith (1938–2016), the leading historian of the Crusades of his generation. His work includes The First Crusade and the Idea of Crusading (1993) and The First Crusaders, 1095–1131 (1998). His doctoral students are among the most renowned in the world and he led the team that created the Database of Crusaders to the Holy Land, 1096–1149. Carole Hillenbrand (born 1943) is an Islamic scholar whose work The Crusades: Islamic Perspectives (1999) discusses themes that highlight how Muslims reacted to the presence of the Crusaders in the heart of traditionally Islamic territory and is regarded as one of the most influential works on the First Crusade. Other current researchers include Christopher Tyerman (born 1953), whose God's War: A New History of the Crusades (2006) is regarded as the definitive account of all the crusades. In his An Eyewitness History of the Crusades (2004),Tyerman, Christopher (2004). An Eyewitness History of the Crusades. Folio Society. Tyerman provides the history of the crusades told from original eyewitness sources, both Christian and Muslim. Thomas Asbridge (born 1969) has written The First Crusade: A New History: The Roots of Conflict between Christianity and Islam (2004) and the more expansive The Crusades: The Authoritative History of the War for the Holy Land (2012). Thomas Madden (born 1960) has written The New Concise History of the Crusades (2005)The New Concise History of the Crusades (Lanham: Rowman and Littlefield, 2005; repr New York: Barnes and Noble, 2007). and The Real History of the Crusades (2011)."The Real History of the Crusades", ARMA, March 19, 2011 (updated 2005 piece) The Crusades—An Encyclopedia (2006),Murray, Alan V. (2006). The Crusades – An Encyclopedia, ABC-CLIO, Santa Barbara. edited by historian Alan V. Murray, provides a comprehensive treatment of the Crusades with over 1000 entries written by 120 authors from 25 countries. The list of other historians is extensive, and excellent bibliographies include that by Asbridge and the one in The Routledge Companion to the Crusades.
See also
Pilgrim's Road, followed by the Crusaders
Notes
References
Bibliography
External links
Category:1090s conflicts
Category:1090s in Asia
Category:1090s in Europe
Category:1090s in the Byzantine Empire
Category:1090s in the Kingdom of Jerusalem
Category:11th century in the Fatimid Caliphate
Category:11th-century crusades
Category:History of Antioch
Category:Wars involving Armenia
Category:Wars involving the Kingdom of France (987–1792)
Category:Wars involving the Byzantine Empire
Category:Crusader–Fatimid wars
Category:Wars involving the Holy Roman Empire
Category:Wars involving the Republic of Genoa
Category:Wars involving the Republic of Pisa
Surface tension
https://en.wikipedia.org/wiki/Surface_tension
Surface tension is the tendency of liquid surfaces at rest to shrink into the minimum surface area possible. Surface tension is what allows objects with a higher density than water such as razor blades and insects (e.g. water striders) to float on a water surface without becoming even partly submerged.
At liquid–air interfaces, surface tension results from the greater attraction of liquid molecules to each other (due to cohesion) than to the molecules in the air (due to adhesion).
There are two primary mechanisms in play. One is an inward force on the surface molecules causing the liquid to contract. Second is a tangential force parallel to the surface of the liquid. This tangential force is generally referred to as the surface tension. The net effect is the liquid behaves as if its surface were covered with a stretched elastic membrane. Surface tension is an inherent property of the liquid–air or liquid–vapour interface.
Because of the relatively high attraction of water molecules to each other through a web of hydrogen bonds, water has a higher surface tension (72.8 millinewtons (mN) per meter at 20 °C) than most other liquids. Surface tension is an important factor in the phenomenon of capillarity.
Surface tension has the dimension of force per unit length, or of energy per unit area. The two are equivalent, but when referring to energy per unit of area, it is common to use the term surface energy, which is a more general term in the sense that it applies also to solids. Surface tension is used for liquids, while surface stress and surface energy are relevant for solids.
Causes
Due to the cohesive forces, a molecule located away from the surface is pulled equally in every direction by neighboring liquid molecules, resulting in a net force of zero. The molecules at the surface do not have the same molecules on all sides of them and therefore are pulled inward. This creates some internal pressure and forces liquid surfaces to contract to the minimum area.
There is also a tension parallel to the surface at the liquid-air interface which will resist an external force, due to the cohesive forces between the molecules.
The forces of attraction acting between molecules of the same type are called cohesive forces, while those acting between molecules of different types are called adhesive forces. The balance between the cohesion of the liquid and its adhesion to the material of the container determines the degree of wetting, the contact angle, and the shape of meniscus. When cohesion dominates (specifically, adhesion energy is less than half of cohesion energy) the wetting is low and the meniscus is convex at a vertical wall (as for mercury in a glass container). On the other hand, when adhesion dominates (when adhesion energy is more than half of cohesion energy) the wetting is high and the similar meniscus is concave (as in water in a glass).
Surface tension is responsible for the shape of liquid droplets. Although easily deformed, droplets of water tend to be pulled into a spherical shape by the imbalance in cohesive forces of the surface layer. In the absence of other forces, drops of virtually all liquids would be approximately spherical. The spherical shape minimizes the necessary "wall tension" of the surface layer according to Laplace's law.
Another way to view surface tension is in terms of energy. A molecule in contact with a neighbor is in a lower state of energy than if it were alone. The interior molecules have as many neighbors as they can possibly have, but the boundary molecules are missing neighbors (compared to interior molecules) and therefore have higher energy. For the liquid to minimize its energy state, the number of higher energy boundary molecules must be minimized. The minimized number of boundary molecules results in a minimal surface area.
As a result of surface area minimization, a surface will assume a smooth shape.
Physics
Physical units
Surface tension, represented by the symbol γ (alternatively σ or T), is measured in force per unit length. Its SI unit is newton per metre but the cgs unit of dyne per centimetre is also used, particularly in the older literature. For example, 1 dyn/cm = 0.001 N/m = 1 mN/m.
Definition
Surface tension can be defined in terms of force or energy.
In terms of force
Surface tension of a liquid is the force per unit length. In the illustration on the right, the rectangular frame is composed of three unmovable sides (black) that form a "U" shape, and a fourth movable side (blue) that can slide to the right. Surface tension will pull the blue bar to the left; the force F required to hold the movable side is proportional to the length L of the immobile side. Thus the ratio F/L depends only on the intrinsic properties of the liquid (composition, temperature, etc.), not on its geometry. For example, if the frame had a more complicated shape, the ratio F/L, with L the length of the movable side and F the force required to stop it from sliding, is found to be the same for all shapes. We therefore define the surface tension as γ = F/(2L).
The reason for the factor of 1/2 is that the film has two sides (two surfaces), each of which contributes equally to the force; so the force contributed by a single side is γL = F/2.
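As a small numerical illustration of this definition (the film tension and slider length below are assumed example values, not data from this article), a soap-solution film with γ of roughly 25 mN/m spanning a 5 cm movable side requires a holding force of F = 2γL:

```python
# Sketch: surface tension from the force on the movable side of a U-shaped frame.
# Assumed illustrative values: gamma ~ 25 mN/m for a soap solution, L = 5 cm.
gamma = 0.025   # surface tension, N/m (assumed soap-solution value)
L = 0.05        # length of the movable side, m

F = 2 * gamma * L          # the film has two surfaces, hence the factor of 2
print(f"Force needed to hold the slider: {F*1000:.2f} mN")   # ~2.5 mN

# Inverting the relation recovers the definition gamma = F / (2 L):
print(f"Recovered surface tension: {F / (2 * L) * 1000:.1f} mN/m")
```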
In terms of energy
Surface tension of a liquid is the ratio of the change in the energy of the liquid to the change in the surface area of the liquid (that led to the change in energy). This can be easily related to the previous definition in terms of force: if F is the force required to stop the side from starting to slide, then this is also the force that would keep the side in the state of sliding at a constant speed (by Newton's second law). But if the side is moving to the right (in the direction the force is applied), then the surface area of the stretched liquid is increasing while the applied force is doing work on the liquid. This means that increasing the surface area increases the energy of the film. The work done by the force F in moving the side by distance Δx is W = F Δx; at the same time the total area of the film increases by ΔA = 2L Δx (the factor of 2 is here because the liquid has two sides, two surfaces). Thus, multiplying both the numerator and the denominator of γ = F/(2L) by Δx, we get γ = F Δx / (2L Δx) = W/ΔA.
This work is, by the usual arguments, interpreted as being stored as potential energy. Consequently, surface tension can be also measured in SI system as joules per square meter and in the cgs system as ergs per cm2. Since mechanical systems try to find a state of minimum potential energy, a free droplet of liquid naturally assumes a spherical shape, which has the minimum surface area for a given volume.
The equivalence of measurement of energy per unit area to force per unit length can be proven by dimensional analysis.
Effects
Water
Several effects of surface tension can be seen with ordinary water:
Surfactants
Surface tension is visible in other common phenomena, especially when surfactants are used to decrease it:
Soap bubbles have very large surface areas with very little mass. Bubbles in pure water are unstable. The addition of surfactants, however, can have a stabilizing effect on the bubbles (see Marangoni effect). Surfactants actually reduce the surface tension of water by a factor of three or more.
Emulsions are a type of colloidal dispersion in which surface tension plays a role. Tiny droplets of oil dispersed in pure water will spontaneously coalesce and phase separate. The addition of surfactants reduces the interfacial tension and allow for the formation of oil droplets in the water medium (or vice versa). The stability of such formed oil droplets depends on many different chemical and environmental factors.
Surface curvature and pressure
If no force acts normal to a tensioned surface, the surface must remain flat. But if the pressure on one side of the surface differs from pressure on the other side, the pressure difference times surface area results in a normal force. In order for the surface tension forces to cancel the force due to pressure, the surface must be curved. The diagram shows how surface curvature of a tiny patch of surface leads to a net component of surface tension forces acting normal to the center of the patch. When all the forces are balanced, the resulting equation is known as the Young–Laplace equation: Δp = γ (1/R_x + 1/R_y),
where:
Δp is the pressure difference, known as the Laplace pressure.
γ is the surface tension.
R_x and R_y are the radii of curvature in each of the axes that are parallel to the surface.
The quantity in parentheses on the right hand side is in fact (twice) the mean curvature of the surface (depending on normalisation).
Solutions to this equation determine the shape of water drops, puddles, menisci, soap bubbles, and all other shapes determined by surface tension (such as the shape of the impressions that a water strider's feet make on the surface of a pond).
The table below shows how the internal pressure of a water droplet increases with decreasing radius. For not very small drops the effect is subtle, but the pressure difference becomes enormous when the drop sizes approach the molecular size. (In the limit of a single molecule the concept becomes meaningless.)
Laplace pressure Δp for water drops of different radii at STP:
Droplet radius | 1 mm | 0.1 mm | 1 μm | 10 nm
Δp (atm) | 0.0014 | 0.0144 | 1.436 | 143.6
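The table above can be reproduced with a short sketch of the Young–Laplace equation for a sphere, Δp = 2γ/R, assuming γ ≈ 72.8 mN/m for water near 20 °C and 1 atm = 101325 Pa:

```python
# Laplace pressure of spherical water droplets, Delta_p = 2*gamma/R.
gamma = 0.0728          # N/m, water near 20 degC
ATM = 101325.0          # Pa per standard atmosphere

radii = {"1 mm": 1e-3, "0.1 mm": 1e-4, "1 um": 1e-6, "10 nm": 1e-8}
for label, R in radii.items():
    dp = 2 * gamma / R                       # both principal radii equal R for a sphere
    print(f"R = {label:>6}: Delta_p = {dp/ATM:.4g} atm")
# Expected: ~0.0014, 0.014, 1.44, 144 atm, matching the table.
```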
Floating objects
When an object is placed on a liquid, its weight F_w depresses the surface; if the object is not wetted, the weight is balanced by the surface tension forces on either side, F_s, which are each parallel to the water's surface at the points where it contacts the object. Notice that a small movement of the body may cause the object to sink. As the angle of contact decreases, the vertical contribution of the surface tension forces decreases. The horizontal components of the two F_s arrows point in opposite directions, so they cancel each other, but the vertical components point in the same direction and therefore add up to balance F_w. For this to happen the object's surface must not be wettable, and its weight must be low enough for the surface tension to support it. If m denotes the mass of the needle and g the acceleration due to gravity, floating requires that the combined vertical components of the surface tension forces at least equal the weight mg.
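A rough numerical check of this balance is sketched below; the needle length and mass are assumed example values, and the bound 2γL corresponds to the limiting case in which the two surface-tension forces pull vertically along the whole length of the needle:

```python
# Upper bound on the weight a water surface can support along a thin needle.
gamma = 0.0728          # N/m, water near 20 degC
L = 0.035               # m, assumed needle length (3.5 cm)
g = 9.81                # m/s^2

F_max = 2 * gamma * L   # limiting vertical pull from the two surfaces
m_max = F_max / g
print(f"Maximum supportable mass: {m_max*1000:.2f} g")   # ~0.52 g

# A typical sewing needle of ~0.25 g (assumed) therefore floats if it is not wetted.
m_needle = 0.25e-3      # kg
print("Floats" if m_needle * g < F_max else "Sinks")
```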
Liquid surface
To find the shape of the minimal surface bounded by some arbitrary shaped frame using strictly mathematical means can be a daunting task. Yet by fashioning the frame out of wire and dipping it in soap-solution, a locally minimal surface will appear in the resulting soap-film within seconds.Aaronson, Scott (March 2005) NP-complete Problems and Physical Reality . ACM SIGACT News
The reason for this is that the pressure difference across a fluid interface is proportional to the mean curvature, as seen in the Young–Laplace equation. For an open soap film, the pressure difference is zero, hence the mean curvature is zero, and minimal surfaces have the property of zero mean curvature.
Contact angles
The surface of any liquid is an interface between that liquid and some other medium.In a mercury barometer, the upper liquid surface is an interface between the liquid and a vacuum containing some molecules of evaporated liquid. The top surface of a pond, for example, is an interface between the pond water and the air. Surface tension, then, is not a property of the liquid alone, but a property of the liquid's interface with another medium. If a liquid is in a container, then besides the liquid/air interface at its top surface, there is also an interface between the liquid and the walls of the container. The surface tension between the liquid and air is usually different (greater) than its surface tension with the walls of a container. And where the two surfaces meet, their geometry must be such that all forces balance.
Where the two surfaces meet, they form a contact angle, θ, which is the angle the tangent to the surface makes with the solid surface. Note that the angle is measured through the liquid, as shown in the diagrams above. The diagram to the right shows two examples. Tension forces are shown for the liquid–air interface, the liquid–solid interface, and the solid–air interface. The example on the left is where the difference between the liquid–solid and solid–air surface tension, γ_ls − γ_sa, is less than the liquid–air surface tension, γ_la, but is nevertheless positive, that is γ_la > γ_ls − γ_sa > 0.
In the diagram, both the vertical and horizontal forces must cancel exactly at the contact point for it to be in equilibrium. The horizontal component of the liquid–air tension force is canceled by the adhesive force between the liquid and the solid.
The more telling balance of forces, though, is in the vertical direction. The vertical component of the liquid–air tension force must exactly cancel the difference of the forces along the solid surface, that is, the liquid–solid force minus the solid–air force.
Some liquid–solid contact angles (liquid | solid | contact angle):
water, ethanol, diethyl ether, carbon tetrachloride, glycerol, acetic acid | soda-lime glass, lead glass, fused quartz | 0°
water | paraffin wax | 107°
water | silver | 90°
methyl iodide | soda-lime glass | 29°
methyl iodide | lead glass | 30°
methyl iodide | fused quartz | 33°
mercury | soda-lime glass | 140°
Since the forces are in direct proportion to their respective surface tensions, we also have: γ_sa = γ_ls + γ_la cos θ,
where
γ_ls is the liquid–solid surface tension,
γ_la is the liquid–air surface tension,
γ_sa is the solid–air surface tension,
θ is the contact angle, where a concave meniscus has contact angle less than 90° and a convex meniscus has contact angle of greater than 90°.Sears, Francis Weston; Zemanski, Mark W. (1955) University Physics 2nd ed. Addison Wesley
This means that although the difference between the liquid–solid and solid–air surface tension, γ_ls − γ_sa, is difficult to measure directly, it can be inferred from the liquid–air surface tension, γ_la, and the equilibrium contact angle, θ, which is a function of the easily measurable advancing and receding contact angles (see main article contact angle).
This same relationship exists in the diagram on the right. But in this case we see that because the contact angle is less than 90°, the liquid–solid/solid–air surface tension difference must be negative: γ_la > 0 > γ_ls − γ_sa.
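A brief sketch of this inference, using the relation γ_sa − γ_ls = γ_la cos θ with contact angles from the table above; the γ_la values are taken from the data table later in this article:

```python
import math

def tension_difference(gamma_la_mN, theta_deg):
    """Return gamma_sa - gamma_ls in mN/m from the contact-angle relation."""
    return gamma_la_mN * math.cos(math.radians(theta_deg))

# Mercury on soda-lime glass: theta = 140 deg, gamma_la ~ 486.5 mN/m
print(f"Hg on glass:       {tension_difference(486.5, 140):8.1f} mN/m")  # negative: cohesion dominates
# Water on paraffin wax: theta = 107 deg, gamma_la ~ 72.8 mN/m
print(f"Water on paraffin: {tension_difference(72.8, 107):8.1f} mN/m")   # negative
# Water on silver: theta = 90 deg, so the difference is exactly zero
print(f"Water on silver:   {tension_difference(72.8, 90):8.1f} mN/m")
```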
Special contact angles
Observe that in the special case of a water–silver interface where the contact angle is equal to 90°, the liquid–solid/solid–air surface tension difference is exactly zero.
Another special case is where the contact angle is exactly 180°. Water with specially prepared Teflon approaches this. Contact angle of 180° occurs when the liquid–solid surface tension is exactly equal to the liquid–air surface tension.
Liquid in a vertical tube
An old style mercury barometer consists of a vertical glass tube about 1 cm in diameter partially filled with mercury, and with a vacuum (called Torricelli's vacuum) in the unfilled volume (see diagram to the right). Notice that the mercury level at the center of the tube is higher than at the edges, making the upper surface of the mercury dome-shaped. The center of mass of the entire column of mercury would be slightly lower if the top surface of the mercury were flat over the entire cross-section of the tube. But the dome-shaped top gives slightly less surface area to the entire mass of mercury. Again the two effects combine to minimize the total potential energy. Such a surface shape is known as a convex meniscus.
We consider the surface area of the entire mass of mercury, including the part of the surface that is in contact with the glass, because mercury does not adhere to glass at all. So the surface tension of the mercury acts over its entire surface area, including where it is in contact with the glass. If instead of glass, the tube was made out of copper, the situation would be very different. Mercury aggressively adheres to copper. So in a copper tube, the level of mercury at the center of the tube will be lower than at the edges (that is, it would be a concave meniscus). In a situation where the liquid adheres to the walls of its container, we consider the part of the fluid's surface area that is in contact with the container to have negative surface tension. The fluid then works to maximize the contact surface area. So in this case increasing the area in contact with the container decreases rather than increases the potential energy. That decrease is enough to compensate for the increased potential energy associated with lifting the fluid near the walls of the container.
If a tube is sufficiently narrow and the liquid adhesion to its walls is sufficiently strong, surface tension can draw liquid up the tube in a phenomenon known as capillary action. The height to which the column is lifted is given by Jurin's law: h = 2 γ_la cos θ / (ρ g r),
where
h is the height the liquid is lifted,
γ_la is the liquid–air surface tension,
ρ is the density of the liquid,
r is the radius of the capillary,
g is the acceleration due to gravity,
θ is the angle of contact described above. If θ is greater than 90°, as with mercury in a glass container, the liquid will be depressed rather than lifted.
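As a numerical check of Jurin's law, the sketch below uses an assumed capillary radius of 0.2 mm; water on clean glass is taken to have a near-zero contact angle, and the mercury case illustrates depression rather than rise:

```python
import math

def jurin_height(gamma, theta_deg, rho, r, g=9.81):
    """Capillary rise h = 2*gamma*cos(theta) / (rho*g*r), in metres."""
    return 2 * gamma * math.cos(math.radians(theta_deg)) / (rho * g * r)

# Water in a glass capillary of 0.2 mm radius (assumed), near-zero contact angle.
h_water = jurin_height(gamma=0.0728, theta_deg=0.0, rho=1000.0, r=0.2e-3)
print(f"Water rise:   {h_water*100:.1f} cm")     # ~7.4 cm

# Mercury in the same glass tube (theta ~ 140 deg) is depressed instead of lifted.
h_mercury = jurin_height(gamma=0.4865, theta_deg=140.0, rho=13534.0, r=0.2e-3)
print(f"Mercury rise: {h_mercury*100:.1f} cm")   # negative, i.e. a depression
```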
Puddles on a surface
Pouring mercury onto a horizontal flat sheet of glass results in a puddle that has a perceptible thickness. The puddle will spread out only to the point where it is a little under half a centimetre thick, and no thinner. Again this is due to the action of mercury's strong surface tension. The liquid mass flattens out because that brings as much of the mercury to as low a level as possible, but the surface tension, at the same time, is acting to reduce the total surface area. The result of the compromise is a puddle of a nearly fixed thickness.
The same surface tension demonstration can be done with water, lime water or even saline, but only on a surface made of a substance to which water does not adhere. Wax is such a substance. Water poured onto a smooth, flat, horizontal wax surface, say a waxed sheet of glass, will behave similarly to the mercury poured onto glass.
The thickness of a puddle of liquid on a surface whose contact angle is 180° is given by: h = 2 √(γ / (g ρ)),
where
h is the depth of the puddle in centimeters or meters,
γ is the surface tension of the liquid in dynes per centimeter or newtons per meter,
g is the acceleration due to gravity and is equal to 980 cm/s2 or 9.8 m/s2,
ρ is the density of the liquid in grams per cubic centimeter or kilograms per cubic meter.
In reality, the thicknesses of the puddles will be slightly less than what is predicted by the above formula because very few surfaces have a contact angle of 180° with any liquid. When the contact angle is less than 180°, the thickness is given by: h = √(2 γ (1 − cos θ) / (g ρ)).
For mercury on glass, γ = 487 dyn/cm, ρ = 13.5 g/cm3 and θ = 140°, which gives h = 0.36 cm. For water on paraffin at 25 °C, γ = 72 dyn/cm, ρ = 1.0 g/cm3, and θ = 107°, which gives h = 0.44 cm.
The formula also predicts that when the contact angle is 0°, the liquid will spread out into a micro-thin layer over the surface. Such a surface is said to be fully wettable by the liquid.
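The two worked puddle-thickness examples above can be checked with a short script; the densities and angles are those quoted in the text, with the cgs values converted to SI:

```python
import math

def puddle_depth(gamma, rho, theta_deg, g=9.8):
    """Puddle thickness h = sqrt(2*gamma*(1 - cos(theta)) / (g*rho)), in metres."""
    return math.sqrt(2 * gamma * (1 - math.cos(math.radians(theta_deg))) / (g * rho))

# Mercury on glass: gamma = 487 dyn/cm = 0.487 N/m, rho = 13.5 g/cm^3, theta = 140 deg
print(f"Mercury on glass:  {puddle_depth(0.487, 13500, 140)*100:.2f} cm")   # ~0.36 cm
# Water on paraffin:  gamma = 72 dyn/cm = 0.072 N/m, rho = 1.0 g/cm^3, theta = 107 deg
print(f"Water on paraffin: {puddle_depth(0.072, 1000, 107)*100:.2f} cm")    # ~0.44 cm
```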
Breakup of streams into drops
In day-to-day life all of us observe that a stream of water emerging from a faucet will break up into droplets, no matter how smoothly the stream is emitted from the faucet. This is due to a phenomenon called the Plateau–Rayleigh instability, which is entirely a consequence of the effects of surface tension.
The explanation of this instability begins with the existence of tiny perturbations in the stream. These are always present, no matter how smooth the stream is. If the perturbations are resolved into sinusoidal components, we find that some components grow with time while others decay with time. Among those that grow with time, some grow at faster rates than others. Whether a component decays or grows, and how fast it grows is entirely a function of its wave number (a measure of how many peaks and troughs per centimeter) and the radii of the original cylindrical stream.
Gallery
Thermodynamics
Thermodynamic theories of surface tension
J.W. Gibbs developed the thermodynamic theory of capillarity based
on the idea of surfaces of discontinuity. Gibbs considered the case of a sharp mathematical surface being placed somewhere within the microscopically fuzzy physical interface that exists between two homogeneous substances. Realizing that the exact choice of the surface's location was somewhat arbitrary, he left it flexible. Since the interface exists in thermal and chemical equilibrium with the substances around it (having temperature T and chemical potentials μ_i), Gibbs considered the case where the surface may have excess energy, excess entropy, and excess particles, finding the natural free energy function in this case to be U − TS − Σ_i μ_i N_i, a quantity later named the grand potential and given the symbol Ω.
Considering a given subvolume V containing a surface of discontinuity, the volume is divided by the mathematical surface into two parts A and B, with volumes V_A and V_B, with V_A + V_B = V exactly. Now, if the two parts A and B were homogeneous fluids (with pressures P_A, P_B) and remained perfectly homogeneous right up to the mathematical boundary, without any surface effects, the total grand potential of this volume would be simply −P_A V_A − P_B V_B. The surface effects of interest are a modification to this, and they can be all collected into a surface free energy term Ω_S, so the total grand potential of the volume becomes: Ω = −P_A V_A − P_B V_B + Ω_S.
For sufficiently macroscopic and gently curved surfaces, the surface free energy must simply be proportional to the surface area: Ω_S = γ A,
for surface tension γ and surface area A.
As stated above, this implies that the mechanical work needed to increase the surface area A is dW = γ dA, assuming the volumes on each side do not change. Thermodynamics requires that for systems held at constant chemical potential and temperature, all spontaneous changes of state are accompanied by a decrease in this free energy Ω, that is, an increase in total entropy taking into account the possible movement of energy and particles from the surface into the surrounding fluids. From this it is easy to understand why decreasing the surface area of a mass of liquid is always spontaneous, provided it is not coupled to any other energy changes. It follows that in order to increase surface area, a certain amount of energy must be added.
Gibbs and other scientists have wrestled with the arbitrariness in the exact microscopic placement of the surface. For microscopic surfaces with very tight curvatures, it is not correct to assume the surface tension is independent of size, and topics like the Tolman length come into play. For a macroscopic-sized surface (and planar surfaces), the surface placement does not have a significant effect on γ; however, it does have a very strong effect on the values of the surface entropy, surface excess mass densities, and surface internal energy, which are the partial derivatives of the surface tension function γ(T, μ_i).
Gibbs emphasized that for solids, the surface free energy may be completely different from surface stress (what he called surface tension): the surface free energy is the work required to form the surface, while surface stress is the work required to stretch the surface. In the case of a two-fluid interface, there is no distinction between forming and stretching because the fluids and the surface completely replenish their nature when the surface is stretched. For a solid, stretching the surface, even elastically, results in a fundamentally changed surface. Further, the surface stress on a solid is a directional quantity (a stress tensor) while surface energy is scalar.
Fifteen years after Gibbs, J.D. van der Waals developed the theory of capillarity effects based on the hypothesis of a continuous variation of density. He added to the energy density the term where c is the capillarity coefficient and ρ is the density. For the multiphase equilibria, the results of the van der Waals approach practically coincide with the Gibbs formulae, but for modelling of the dynamics of phase transitions the van der Waals approach is much more convenient. The van der Waals capillarity energy is now widely used in the phase field models of multiphase flows. Such terms are also discovered in the dynamics of non-equilibrium gases.
Thermodynamics of bubbles
The pressure inside an ideal spherical bubble can be derived from thermodynamic free energy considerations. The above free energy can be written, up to terms that do not depend on the bubble volume, as: Ω = −ΔP V_A + γ A,
where ΔP is the pressure difference between the inside (A) and outside (B) of the bubble, and V_A is the bubble volume. In equilibrium, dΩ = 0, and so, ΔP dV_A = γ dA.
For a spherical bubble, the volume and surface area are given simply by V_A = (4/3) π R³,
and A = 4 π R².
Substituting these relations into the previous expression, we find ΔP = 2γ / R,
which is equivalent to the Young–Laplace equation when R_x = R_y = R.
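A small numerical sketch of this argument: scanning Ω(R) = −ΔP·(4/3)πR³ + γ·4πR² over R and locating its stationary point recovers R = 2γ/ΔP, equivalently ΔP = 2γ/R. The pressure difference used below is an arbitrary illustrative choice:

```python
import math

gamma = 0.0728        # N/m, water-air interface
dP = 1000.0           # Pa, assumed pressure difference across the interface

def grand_potential(R):
    """Omega(R) = -dP * V(R) + gamma * A(R) for a sphere of radius R."""
    return -dP * (4.0 / 3.0) * math.pi * R**3 + gamma * 4.0 * math.pi * R**2

# Locate the stationary point of Omega by a fine scan of the radius.
radii = [i * 1e-7 for i in range(1, 10000)]          # 0.1 um .. ~1 mm
omegas = [grand_potential(R) for R in radii]
R_star = radii[omegas.index(max(omegas))]            # stationary (critical) radius

print(f"Scanned stationary radius: {R_star*1e6:.1f} um")
print(f"Predicted 2*gamma/dP:      {2*gamma/dP*1e6:.1f} um")   # ~145.6 um
```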
Influence of temperature
Surface tension is dependent on temperature. For that reason, when a value is given for the surface tension of an interface, temperature must be explicitly stated. The general trend is that surface tension decreases with the increase of temperature, reaching a value of 0 at the critical temperature. For further details see Eötvös rule. There are only empirical equations to relate surface tension and temperature:
Eötvös: γ V^(2/3) = k (T_c − T). Here V is the molar volume of a substance, T_c is the critical temperature and k is a constant valid for almost all substances. A typical value is k = 2.1 × 10^−7 J / (K · mol^(2/3)). For water one can further use V = 18 ml/mol and T_c = 647 K (374 °C). A variant on Eötvös is described by Ramsay and Shields: γ V^(2/3) = k (T_c − T − 6 K), where the temperature offset of 6 K provides the formula with a better fit to reality at lower temperatures.
Guggenheim–Katayama: γ = γ° (1 − T/T_c)^n, where γ° is a constant for each liquid and n is an empirical factor, whose value is 11/9 for organic liquids. This equation was also proposed by van der Waals, who further proposed that γ° could be given by the expression K₂ T_c^(1/3) P_c^(2/3), where K₂ is a universal constant for all liquids, and P_c is the critical pressure of the liquid (although later experiments found K₂ to vary to some degree from one liquid to another).
Both Guggenheim–Katayama and Eötvös take into account the fact that surface tension reaches 0 at the critical temperature, whereas the Ramsay and Shields variant fails to match reality at this endpoint.
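As a rough illustration of the Eötvös rule, the sketch below estimates the surface tension of benzene at 20 °C; the molar mass, density, and critical temperature of benzene are assumed literature values, and k is the typical constant quoted above. The result lands within a few percent of the 28.88 dyn/cm listed in the data table later in this article (the rule is least accurate for strongly hydrogen-bonded liquids such as water):

```python
# Eotvos rule: gamma * V**(2/3) = k * (T_c - T), with V the molar volume.
k = 2.1e-7            # J / (K * mol^(2/3)), typical Eotvos constant
M = 78.11e-3          # kg/mol, molar mass of benzene (assumed literature value)
rho = 876.5           # kg/m^3, density of benzene near 20 degC (assumed)
T_c = 562.0           # K, critical temperature of benzene (assumed)
T = 293.15            # K (20 degC)

V = M / rho                               # molar volume, m^3/mol
gamma = k * (T_c - T) / V**(2.0 / 3.0)    # N/m
print(f"Eotvos estimate for benzene at 20 degC: {gamma*1000:.1f} mN/m")  # ~28
# Measured value from the data table: 28.88 dyn/cm (= mN/m).
```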
Influence of solute concentration
Solutes can have different effects on surface tension depending on the nature of the surface and the solute:
Little or no effect, for example sugar at water/air, most organic compounds at oil/air
Increase surface tension, most inorganic salts at water/air
Non-monotonic change, most inorganic acids at water/air
Decrease surface tension progressively, as with most amphiphiles, e.g., alcohols at water/air
Decrease surface tension until certain critical concentration, and no effect afterwards: surfactants that form micelles
What complicates the effect is that a solute can exist in a different concentration at the surface of a solvent than in its bulk. This difference varies from one solute–solvent combination to another.
Gibbs isotherm states that: Γ = −(1/RT) (∂γ/∂ ln C)_T,
where
Γ is known as the surface concentration; it represents the excess of solute per unit area of the surface over what would be present if the bulk concentration prevailed all the way to the surface. It has units of mol/m2.
C is the concentration of the substance in the bulk solution.
R is the gas constant and T the temperature.
Certain assumptions are taken in its deduction, therefore Gibbs isotherm can only be applied to ideal (very dilute) solutions with two components.
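A hedged sketch of how the isotherm is used in practice: given a measured slope dγ/d ln C for a dilute solution of a surface-active solute (the slope value below is a hypothetical example, not data from this article), the surface excess and the area occupied per adsorbed molecule follow directly.

```python
R = 8.314             # J/(mol*K), gas constant
T = 298.15            # K
N_A = 6.022e23        # 1/mol, Avogadro constant

# Hypothetical measurement: surface tension falls by 12 mN/m per unit of ln(C).
dgamma_dlnC = -12e-3  # N/m per ln-unit of concentration (assumed example)

surface_excess = -dgamma_dlnC / (R * T)           # mol/m^2, from the Gibbs isotherm
area_per_molecule = 1.0 / (surface_excess * N_A)  # m^2 per adsorbed molecule

print(f"Surface excess:    {surface_excess*1e6:.2f} umol/m^2")   # ~4.8
print(f"Area per molecule: {area_per_molecule*1e18:.2f} nm^2")   # ~0.34
```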
Influence of particle size on vapor pressure
The Clausius–Clapeyron relation leads to another equation also attributed to Kelvin, the Kelvin equation. It explains why, because of surface tension, the vapor pressure for small droplets of liquid in suspension is greater than standard vapor pressure of that same liquid when the interface is flat. That is to say that when a liquid is forming small droplets, the equilibrium concentration of its vapor in its surroundings is greater. This arises because the pressure inside the droplet is greater than outside. In a common form the equation reads P_v = P_v° exp(2γ V_m / (R T r_k)),
where
P_v° is the standard vapor pressure for that liquid at that temperature and pressure,
V_m is the molar volume,
R is the gas constant,
r_k is the Kelvin radius, the radius of the droplets.
The effect explains supersaturation of vapors. In the absence of nucleation sites, tiny droplets must form before they can evolve into larger droplets. This requires a vapor pressure many times the vapor pressure at the phase transition point.
This equation is also used in catalyst chemistry to assess mesoporosity for solids.Ertl, G.; Knözinger, H. and Weitkamp, J. (1997). Handbook of heterogeneous catalysis, Vol. 2, p. 430. Wiley-VCH, Weinheim.
The effect can be viewed in terms of the average number of molecular neighbors of surface molecules (see diagram).
The table shows some calculated values of this effect for water at different drop sizes:
Vapor-pressure ratio P/P0 for water drops of different radii at STP:
Droplet radius (nm) | 1000 | 100 | 10 | 1
P/P0 | 1.001 | 1.011 | 1.114 | 2.95
The effect becomes clear for very small drop sizes, as a drop of 1 nm radius has about 100 molecules inside, which is a quantity small enough to require a quantum mechanics analysis.
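The table above can be approximately reproduced with a short sketch, assuming the form of the Kelvin equation given earlier and round-number properties for water near 20 °C:

```python
import math

gamma = 0.0728        # N/m, surface tension of water
V_m = 1.8e-5          # m^3/mol, molar volume of water
R_gas = 8.314         # J/(mol*K)
T = 293.15            # K

for r_nm in (1000, 100, 10, 1):
    r_k = r_nm * 1e-9
    ratio = math.exp(2 * gamma * V_m / (R_gas * T * r_k))
    print(f"r = {r_nm:>4} nm: P/P0 = {ratio:.3f}")
# Expected: roughly 1.001, 1.011, 1.11, 2.9 -- in line with the table values.
```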
Methods of measurement
Because surface tension manifests itself in various effects, it offers a number of paths to its measurement. Which method is optimal depends upon the nature of the liquid being measured, the conditions under which its tension is to be measured, and the stability of its surface when it is deformed. An instrument that measures surface tension is called a tensiometer.
Du Noüy ring method: The traditional method used to measure surface or interfacial tension. Wetting properties of the surface or interface have little influence on this measuring technique. Maximum pull exerted on the ring by the surface is measured.
Wilhelmy plate method: A universal method especially suited to check surface tension over long time intervals. A vertical plate of known perimeter is attached to a balance, and the force due to wetting is measured.
Spinning drop method: This technique is ideal for measuring low interfacial tensions. The diameter of a drop within a heavy phase is measured while both are rotated.
Pendant drop method: Surface and interfacial tension can be measured by this technique, even at elevated temperatures and pressures. Geometry of a drop is analyzed optically. For pendant drops the maximum diameter and the ratio between this parameter and the diameter at the distance of the maximum diameter from the drop apex has been used to evaluate the size and shape parameters in order to determine surface tension.
Bubble pressure method (Jaeger's method): A measurement technique for determining surface tension at short surface ages. Maximum pressure of each bubble is measured.
Drop volume method: A method for determining interfacial tension as a function of interface age. Liquid of one density is pumped into a second liquid of a different density and time between drops produced is measured.
Capillary rise method: The end of a capillary is immersed into the solution. The height at which the solution reaches inside the capillary is related to the surface tension by the equation discussed above.
Stalagmometric method: A method of weighting and reading a drop of liquid.
Sessile drop method: A method for determining surface tension and density by placing a drop on a substrate and measuring the contact angle (see Sessile drop technique).
Du Noüy–Padday method: A minimized version of Du Noüy method uses a small diameter metal needle instead of a ring, in combination with a high sensitivity microbalance to record maximum pull. The advantage of this method is that very small sample volumes (down to few tens of microliters) can be measured with very high precision, without the need to correct for buoyancy (for a needle or rather, rod, with proper geometry). Further, the measurement can be performed very quickly, minimally in about 20 seconds.
Vibrational frequency of levitated drops: The natural frequency of vibrational oscillations of magnetically levitated drops has been used to measure the surface tension of superfluid 4He. This value is estimated to be 0.375 dyn/cm at T = 0 K.
Resonant oscillations of spherical and hemispherical liquid drop: The technique is based on measuring the resonant frequency of spherical and hemispherical pendant droplets driven in oscillations by a modulated electric field. The surface tension and viscosity can be evaluated from the obtained resonant curves.
Drop-bounce method: This method is based on aerodynamic levitation with a split-able nozzle design. After dropping a stably levitated droplet onto a platform, the sample deforms and bounces back, oscillating in mid-air as it tries to minimize its surface area. Through this oscillation behavior, the liquid's surface tension and viscosity can be measured.
Values
Data table
Surface tension of various liquids in dyn/cm against air.Lange's Handbook of Chemistry (1967) 10th ed. pp 1661–1665 (11th ed.) Mixture compositions denoted "%" are by mass; dyn/cm is equivalent to the SI unit mN/m (millinewton per meter).
Liquid | Temperature (°C) | Surface tension (dyn/cm)
Acetic acid | 20 | 27.60
Acetic acid (45.1%) + Water | 30 | 40.68
Acetic acid (10.0%) + Water | 30 | 54.56
Acetone | 20 | 23.70
Benzene | 20 | 28.88
Blood | 22 | 55.89
Butyl acetate | 20 | 25.09
Butyric acid | 20 | 26.51
Carbon tetrachloride | 25 | 26.43
Chloroform | 25 | 26.67
Diethyl ether | 20 | 17.00
Diethylene glycol | 20 | 30.09
Dimethyl sulfoxide | 20 | 43.54
Ethanol | 20 | 22.27
Ethanol (40%) + Water | 25 | 29.63
Ethanol (11.1%) + Water | 25 | 46.03
Ethylene glycol | 25 | 47.3
Glycerol | 20 | 63.00
Heptane | 20 | 20.14
n-Hexane | 20 | 18.40
Hydrochloric acid, 17.7 M aqueous solution | 20 | 65.95
Isopropanol | 20 | 21.70
Liquid helium II | −273 | 0.37
Mercury | 20 | 486.5
Liquid nitrogen | −196 | 8.85
Nonane | 20 | 22.85
Liquid oxygen | −182 | 13.2
Mercury | 15 | 487.00
Methanol | 20 | 22.60
Methylene iodide | 20 | 67.00
Molten silver chloride | 650 | 163
Molten sodium chloride/calcium chloride (47/53 mole %) | 650 | 139
n-Octane | 20 | 21.62
Propionic acid | 20 | 26.69
Propylene carbonate | 20 | 41.1
Sodium chloride, 6.0 M aqueous solution | 20 | 82.55
Sodium chloride (molten) | 1073 | 115
Sucrose (55%) + water | 20 | 76.45
Toluene | 25 | 27.73
Water | 0 | 75.64
Water | 25 | 71.97
Water | 50 | 67.91
Water | 100 | 58.85
Surface tension of water
The surface tension of pure liquid water in contact with its vapor has been given by IAPWS as
σ = 235.8 mN/m × (1 − T/T_C)^1.256 × [1 − 0.625 (1 − T/T_C)]
where both T and the critical temperature T_C = 647.096 K are expressed in kelvins. The region of validity is the entire vapor–liquid saturation curve, from the triple point (0.01 °C) to the critical point. It also provides reasonable results when extrapolated to metastable (supercooled) conditions, down to at least −25 °C. This formulation was originally adopted by IAPWS in 1976 and was adjusted in 1994 to conform to the International Temperature Scale of 1990.
The uncertainty of this formulation is given over the full range of temperature by IAPWS. For temperatures below 100 °C, the uncertainty is ±0.5%.
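As a quick numerical check, the correlation can be evaluated directly. The following minimal Python sketch uses the constants 235.8 mN/m, 0.625 and 1.256 exactly as quoted above (they should be verified against the IAPWS release before reuse) and reproduces the room-temperature value listed in the data table.

T_C = 647.096  # critical temperature of water, in kelvins

def surface_tension_water(T):
    """Surface tension of pure water against its vapor, in mN/m, per the correlation quoted above."""
    tau = 1.0 - T / T_C
    return 235.8 * tau**1.256 * (1.0 - 0.625 * tau)

print(round(surface_tension_water(298.15), 2))  # 71.97 mN/m at 25 °C, matching the table entry for water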
Surface tension of seawater
Nayar et al. published reference data for the surface tension of seawater over a range of salinities and temperatures at atmospheric pressure. That range of temperature and salinity encompasses both the oceanographic range and the range of conditions encountered in thermal desalination technologies. The uncertainty of the measurements varied from 0.18 to 0.37 mN/m, with the average uncertainty being 0.22 mN/m.
Nayar et al. correlated the data with the following equation
γ_sw = γ_w (1 + 3.766 × 10⁻⁴ S + 2.347 × 10⁻⁶ S t)
where γ_sw is the surface tension of seawater in mN/m, γ_w is the surface tension of pure water in mN/m, S is the reference salinity in g/kg, and t is the temperature in degrees Celsius. The average absolute percentage deviation between measurements and the correlation was 0.19%, while the maximum deviation was 0.60%.
The International Association for the Properties of Water and Steam (IAPWS) has adopted this correlation as an international standard guideline.
See also
Agnes Pockels — early surface sciences researcher
Anti-fog
Capillary wave — short waves on a water surface, governed by surface tension and inertia
Cheerio effect — the tendency for small wettable floating objects to attract one another
Cohesion
Dimensionless numbers
Bond number or Eötvös number
Capillary number
Marangoni number
Weber number
Dortmund Data Bank — contains experimental temperature-dependent surface tensions
Electrodipping force
Electrowetting
Electrocapillarity
Eötvös rule — a rule for predicting surface tension dependent on temperature
Hydrostatic equilibrium — the effect of gravity pulling matter into a round shape
Interface (chemistry)
Meniscus — surface curvature formed by a liquid in a container
Mercury beating heart — a consequence of inhomogeneous surface tension
Sessile drop technique
Spinning drop method
Stalagmometric method
Surface pressure
Surface science
Surface tension biomimetics
Surface tension values
Surfactants — substances which reduce surface tension.
Szyszkowski equation — calculating surface tension of aqueous solutions
Tears of wine — the surface tension induced phenomenon seen on the sides of glasses containing alcoholic beverages.
Tolman length — leading term in correcting the surface tension for curved surfaces.
Wetting and dewetting
Explanatory notes
References
Further reading
External links
"Why is surface tension parallel to the interface?". Physics Stack Exchange. Retrieved 2021-03-19.3854
On surface tension and interesting real-world cases
Surface Tensions of Various Liquids
Calculation of temperature-dependent surface tensions for some common components
Surface tension calculator for aqueous solutions containing the ions H+, , Na+, K+, Mg2+, Ca2+, , , Cl−, , Br− and OH−.
T. Proctor Hall (1893) New methods of measuring surface tension in liquids, Philosophical Magazine (series 5, 36: 385–415), link from Biodiversity Heritage Library.
The Bubble Wall (Audio slideshow from the National High Magnetic Field Laboratory explaining cohesion, surface tension and hydrogen bonds)
C. Pfister: Interface Free Energy. Scholarpedia 2010 (from first principles of statistical mechanics)
Surface and Interfacial Tension
Category:Articles containing video clips
Category:Fluid dynamics
Category:Intermolecular forces
Category:Surface science
Category:Mechanical quantities
|
chemistry
| 6,189
|
125297
|
Dynamic programming
|
https://en.wikipedia.org/wiki/Dynamic_programming
|
Dynamic programming is both a mathematical optimization method and an algorithmic paradigm. The method was developed by Richard Bellman in the 1950s and has found applications in numerous fields, such as aerospace engineering and economics.
In both contexts it refers to simplifying a complicated problem by breaking it down into simpler sub-problems in a recursive manner. While some decision problems cannot be taken apart this way, decisions that span several points in time do often break apart recursively. Likewise, in computer science, if a problem can be solved optimally by breaking it into sub-problems and then recursively finding the optimal solutions to the sub-problems, then it is said to have optimal substructure.
If sub-problems can be nested recursively inside larger problems, so that dynamic programming methods are applicable, then there is a relation between the value of the larger problem and the values of the sub-problems.Cormen, T. H.; Leiserson, C. E.; Rivest, R. L.; Stein, C. (2001), Introduction to Algorithms (2nd ed.), MIT Press & McGraw–Hill, . pp. 344. In the optimization literature this relationship is called the Bellman equation.
Overview
Mathematical optimization
In terms of mathematical optimization, dynamic programming usually refers to simplifying a decision by breaking it down into a sequence of decision steps over time.
This is done by defining a sequence of value functions V1, V2, ..., Vn taking y as an argument representing the state of the system at times i from 1 to n.
The definition of Vn(y) is the value obtained in state y at the last time n.
The values Vi at earlier times i = n −1, n − 2, ..., 2, 1 can be found by working backwards, using a recursive relationship called the Bellman equation.
For i = 2, ..., n, Vi−1 at any state y is calculated from Vi by maximizing a simple function (usually the sum) of the gain from a decision at time i − 1 and the function Vi at the new state of the system if this decision is made.
Since Vi has already been calculated for the needed states, the above operation yields Vi−1 for those states.
Finally, V1 at the initial state of the system is the value of the optimal solution. The optimal values of the decision variables can be recovered, one by one, by tracking back the calculations already performed.
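The backward recursion just described is easy to write down directly. The following Python sketch is purely illustrative: the five-state toy problem, its gain function, and its transition rule are hypothetical choices made for the example, and serve only to mirror the structure "V_i(y) = best decision's gain plus V_{i+1} at the resulting state".

n = 5                                  # number of decision stages
states = range(5)                      # toy state space
decisions = (-1, 0, 1)                 # toy decision set

def gain(y, d):                        # hypothetical one-step gain
    return y - abs(d)

def step(y, d):                        # hypothetical transition, clamped to the state grid
    return min(max(y + d, 0), 4)

V = [{y: 0.0 for y in states} for _ in range(n + 1)]   # V[n]: terminal values
policy = [{} for _ in range(n)]

for i in range(n - 1, -1, -1):         # work backwards from time n-1 to time 0
    for y in states:
        best = max(decisions, key=lambda d: gain(y, d) + V[i + 1][step(y, d)])
        policy[i][y] = best
        V[i][y] = gain(y, best) + V[i + 1][step(y, best)]

y, plan = 2, []                        # recover the optimal decisions by tracking forward
for i in range(n):
    plan.append(policy[i][y])
    y = step(y, policy[i][y])
print(V[0][2], plan)                   # value of the optimal solution from state 2, and the decisions taken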
Control theory
In control theory, a typical problem is to find an admissible control which causes the system to follow an admissible trajectory on a continuous time interval that minimizes a cost function
The solution to this problem is an optimal control law or policy , which produces an optimal trajectory and a cost-to-go function . The latter obeys the fundamental equation of dynamic programming:
a partial differential equation known as the Hamilton–Jacobi–Bellman equation. One minimizes the right-hand side over the control, substitutes the result back into the Hamilton–Jacobi–Bellman equation, and obtains the partial differential equation to be solved with the boundary condition at the final time. In practice, this generally requires numerical techniques for some discrete approximation to the exact optimization relationship.
Alternatively, the continuous process can be approximated by a discrete system, which leads to the following recurrence relation, analogous to the Hamilton–Jacobi–Bellman equation:
at the -th stage of equally spaced discrete time intervals, and where and denote discrete approximations to and . This functional equation is known as the Bellman equation, which can be solved for an exact solution of the discrete approximation of the optimization equation.
Example from economics: Ramsey's problem of optimal saving
In economics, the objective is generally to maximize (rather than minimize) some dynamic social welfare function. In Ramsey's problem, this function relates amounts of consumption to levels of utility. Loosely speaking, the planner faces the trade-off between contemporaneous consumption and future consumption (via investment in capital stock that is used in production), known as intertemporal choice. Future consumption is discounted at a constant rate. A discrete approximation to the transition equation of capital is given by
k_{t+1} = f(k_t) − c_t
where c is consumption, k is capital, and f is a production function satisfying the Inada conditions. An initial capital stock k_0 > 0 is assumed.
Let c_t be consumption in period t, and assume consumption yields utility u(c_t) = ln(c_t) as long as the consumer lives. Assume the consumer is impatient, so that he discounts future utility by a factor b each period, where 0 < b < 1. Let k_t be capital in period t. Assume initial capital is a given amount k_0 > 0, and suppose that this period's capital and consumption determine next period's capital as k_{t+1} = A k_t^a − c_t, where A is a positive constant and 0 < a < 1. Assume capital cannot be negative. Then the consumer's decision problem can be written as follows:
maximize the sum over t = 0, 1, ..., T of b^t ln(c_t)
subject to k_{t+1} = A k_t^a − c_t ≥ 0 and c_t ≥ 0 for all t = 0, 1, ..., T
Written this way, the problem looks complicated, because it involves solving for all the choice variables c_0, c_1, ..., c_T. (The capital stock k_0 is not a choice variable; the consumer's initial capital is taken as given.)
The dynamic programming approach to solve this problem involves breaking it apart into a sequence of smaller decisions. To do so, we define a sequence of value functions V_t(k), for t = 0, 1, ..., T, T + 1, which represent the value of having any amount of capital k at each time t. There is (by assumption) no utility from having capital after death: V_{T+1}(k) = 0.
The value of any quantity of capital at any previous time can be calculated by backward induction using the Bellman equation. In this problem, for each t = 0, 1, ..., T, the Bellman equation is
V_t(k_t) = max over c_t and k_{t+1} of [ ln(c_t) + b V_{t+1}(k_{t+1}) ]
subject to
k_{t+1} = A k_t^a − c_t ≥ 0 and c_t ≥ 0.
This problem is much simpler than the one we wrote down before, because it involves only two decision variables, c_t and k_{t+1}. Intuitively, instead of choosing his whole lifetime plan at birth, the consumer can take things one step at a time. At time t, his current capital k_t is given, and he only needs to choose current consumption c_t and saving k_{t+1}.
To actually solve this problem, we work backwards. For simplicity, the current level of capital is denoted as k. V_{T+1}(k) is already known, so using the Bellman equation once we can calculate V_T(k), and so on until we get to V_0(k), which is the value of the initial decision problem for the whole lifetime. In other words, once we know V_{T−j+1}(k), we can calculate V_{T−j}(k), which is the maximum of ln(c_{T−j}) + b V_{T−j+1}(A k^a − c_{T−j}), where c_{T−j} is the choice variable and A k^a − c_{T−j} ≥ 0.
Working backwards, it can be shown that the value function at time is
where each is a constant, and the optimal amount to consume at time is
which can be simplified to
We see that it is optimal to consume a larger fraction of current wealth as one gets older, finally consuming all remaining wealth in period , the last period of life.
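When no closed form is at hand (or just to check one), the same backward induction can be carried out numerically on a grid of capital levels. The sketch below is only illustrative: the production function A k^a, the log utility, the parameter values, and the grid are assumptions chosen for the example rather than values taken from the text.

import numpy as np

A, a, b, T = 1.0, 0.5, 0.95, 10               # hypothetical technology, curvature, discount factor, horizon
grid = np.linspace(0.05, 2.0, 200)            # grid of capital levels k
V = np.zeros((T + 2, grid.size))              # V[T+1] = 0: no utility from capital after death

for t in range(T, -1, -1):                    # backward induction: compute V[t] from V[t+1]
    for i, k in enumerate(grid):
        c = A * k**a - grid                   # consumption implied by each choice of next-period capital
        feasible = c > 0
        values = np.full(grid.size, -np.inf)
        values[feasible] = np.log(c[feasible]) + b * V[t + 1][feasible]
        V[t, i] = values.max()

print(V[0, grid.size // 2])                   # value of the whole-lifetime problem at a mid-grid initial capital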
Computer science
There are two key attributes that a problem must have in order for dynamic programming to be applicable: optimal substructure and overlapping sub-problems. If a problem can be solved by combining optimal solutions to non-overlapping sub-problems, the strategy is called "divide and conquer" instead. This is why merge sort and quick sort are not classified as dynamic programming problems.
Optimal substructure means that the solution to a given optimization problem can be obtained by the combination of optimal solutions to its sub-problems. Such optimal substructures are usually described by means of recursion. For example, given a graph G=(V,E), the shortest path p from a vertex u to a vertex v exhibits optimal substructure: take any intermediate vertex w on this shortest path p. If p is truly the shortest path, then it can be split into sub-paths p1 from u to w and p2 from w to v such that these, in turn, are indeed the shortest paths between the corresponding vertices (by the simple cut-and-paste argument described in Introduction to Algorithms). Hence, one can easily formulate the solution for finding shortest paths in a recursive manner, which is what the Bellman–Ford algorithm or the Floyd–Warshall algorithm does.
Overlapping sub-problems means that the space of sub-problems must be small, that is, any recursive algorithm solving the problem should solve the same sub-problems over and over, rather than generating new sub-problems. For example, consider the recursive formulation for generating the Fibonacci sequence: Fi = Fi−1 + Fi−2, with base case F1 = F2 = 1. Then F43 = F42 + F41, and F42 = F41 + F40. Now F41 is being solved in the recursive sub-trees of both F43 as well as F42. Even though the total number of sub-problems is actually small (only 43 of them), we end up solving the same problems over and over if we adopt a naive recursive solution such as this. Dynamic programming takes account of this fact and solves each sub-problem only once.
This can be achieved in either of two ways:
Top-down approach: This is the direct fall-out of the recursive formulation of any problem. If the solution to any problem can be formulated recursively using the solution to its sub-problems, and if its sub-problems are overlapping, then one can easily memoize or store the solutions to the sub-problems in a table (often an array or hashtable in practice). Whenever we attempt to solve a new sub-problem, we first check the table to see if it is already solved. If a solution has been recorded, we can use it directly, otherwise we solve the sub-problem and add its solution to the table.
Bottom-up approach: Once we formulate the solution to a problem recursively as in terms of its sub-problems, we can try reformulating the problem in a bottom-up fashion: try solving the sub-problems first and use their solutions to build-on and arrive at solutions to bigger sub-problems. This is also usually done in a tabular form by iteratively generating solutions to bigger and bigger sub-problems by using the solutions to small sub-problems. For example, if we already know the values of F41 and F40, we can directly calculate the value of F42.
Some programming languages can automatically memoize the result of a function call with a particular set of arguments, in order to speed up call-by-name evaluation (this mechanism is referred to as call-by-need). Some languages make it possible portably (e.g. Scheme, Common Lisp, Perl or D). Some languages have automatic memoization built in, such as tabled Prolog and J, which supports memoization with the M. adverb. In any case, this is only possible for a referentially transparent function. Memoization is also encountered as an easily accessible design pattern within term-rewrite based languages such as Wolfram Language.
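As a concrete illustration (Python is not among the languages mentioned above, so this is merely one example of such a facility): the functools.lru_cache decorator in Python's standard library memoizes a referentially transparent function with almost no extra code.

from functools import lru_cache

@lru_cache(maxsize=None)          # cache the result for every distinct argument ever seen
def fib(n):
    return n if n <= 1 else fib(n - 1) + fib(n - 2)

print(fib(43))                    # each sub-problem is computed only once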
Bioinformatics
Dynamic programming is widely used in bioinformatics for tasks such as sequence alignment, protein folding, RNA structure prediction and protein-DNA binding. The first dynamic programming algorithms for protein-DNA binding were developed in the 1970s independently by Charles DeLisi in the US and by Georgii Gurskii and Alexander Zasedatelev in the Soviet Union. Recently these algorithms have become very popular in bioinformatics and computational biology, particularly in the studies of nucleosome positioning and transcription factor binding.
Examples: computer algorithms
Dijkstra's algorithm for the shortest path problem
From a dynamic programming point of view, Dijkstra's algorithm for the shortest path problem is a successive approximation scheme that solves the dynamic programming functional equation for the shortest path problem by the Reaching method. Online version of the paper with interactive computational modules.
In fact, Dijkstra's explanation of the logic behind the algorithm, namely
is a paraphrasing of Bellman's famous Principle of Optimality in the context of the shortest path problem.
Fibonacci sequence
Using dynamic programming in the calculation of the nth member of the Fibonacci sequence improves its performance greatly. Here is a naïve implementation, based directly on the mathematical definition:
function fib(n)
if n <= 1 return n
return fib(n − 1) + fib(n − 2)
Notice that if we call, say, fib(5), we produce a call tree that calls the function on the same value many different times:
fib(5)
fib(4) + fib(3)
(fib(3) + fib(2)) + (fib(2) + fib(1))
((fib(2) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))
(((fib(1) + fib(0)) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))
In particular, fib(2) was calculated three times from scratch. In larger examples, many more values of fib, or subproblems, are recalculated, leading to an exponential time algorithm.
Now, suppose we have a simple map object, m, which maps each value of fib that has already been calculated to its result, and we modify our function to use it and update it. The resulting function requires only O(n) time instead of exponential time (but requires O(n) space):
var m := map(0 → 0, 1 → 1)
function fib(n)
if key n is not in map m
m[n] := fib(n − 1) + fib(n − 2)
return m[n]
This technique of saving values that have already been calculated is called memoization; this is the top-down approach, since we first break the problem into subproblems and then calculate and store values.
In the bottom-up approach, we calculate the smaller values of fib first, then build larger values from them. This method also uses O(n) time since it contains a loop that repeats n − 1 times, but it only takes constant (O(1)) space, in contrast to the top-down approach which requires O(n) space to store the map.
function fib(n)
if n = 0
return 0
else
var previousFib := 0, currentFib := 1
repeat n − 1 times // loop is skipped if n = 1
var newFib := previousFib + currentFib
previousFib := currentFib
currentFib := newFib
return currentFib
In both examples, we only calculate fib(2) one time, and then use it to calculate both fib(4) and fib(3), instead of computing it every time either of them is evaluated.
A type of balanced 0–1 matrix
Consider the problem of assigning values, either zero or one, to the positions of an n × n matrix, with n even, so that each row and each column contains exactly n/2 zeros and n/2 ones. We ask how many different assignments there are for a given n. For example, when n = 4, five possible solutions are
There are at least three possible approaches: brute force, backtracking, and dynamic programming.
Brute force consists of checking all assignments of zeros and ones and counting those that have balanced rows and columns (n/2 zeros and n/2 ones). As there are 2^(n²) possible assignments and C(n, n/2)^n sensible assignments, this strategy is not practical for arbitrarily large values of n.
Backtracking for this problem consists of choosing some order of the matrix elements and recursively placing ones or zeros, while checking that in every row and column the number of elements that have not been assigned plus the number of ones or zeros are both at least n/2. While more sophisticated than brute force, this approach will visit every solution once, making it impractical for n larger than six, since the number of solutions is already 116,963,796,250 for n = 8, as we shall see.
Dynamic programming makes it possible to count the number of solutions without visiting them all. Imagine backtracking values for the first row – what information would we require about the remaining rows, in order to be able to accurately count the solutions obtained for each first row value? We consider k × n boards, where 1 ≤ k ≤ n, whose rows contain n/2 zeros and n/2 ones. The function f to which memoization is applied maps vectors of n pairs of integers to the number of admissible boards (solutions). There is one pair for each column, and its two components indicate respectively the number of zeros and ones that have yet to be placed in that column. We seek the value of f((n/2, n/2), (n/2, n/2), ..., (n/2, n/2)) (n arguments, or one vector of n elements). The process of subproblem creation involves iterating over every one of the C(n, n/2) possible assignments for the top row of the board, and going through every column, subtracting one from the appropriate element of the pair for that column, depending on whether the assignment for the top row contained a zero or a one at that position. If any one of the results is negative, then the assignment is invalid and does not contribute to the set of solutions (recursion stops). Otherwise, we have an assignment for the top row of the board and recursively compute the number of solutions to the remaining board, adding the numbers of solutions for every admissible assignment of the top row and returning the sum, which is being memoized. The base case is the trivial subproblem, which occurs for a 1 × n board. The number of solutions for this board is either zero or one, depending on whether the vector is a permutation of n/2 (0, 1) and n/2 (1, 0) pairs or not.
For example, in the first two boards shown above the sequences of vectors would be
((2, 2) (2, 2) (2, 2) (2, 2)) ((2, 2) (2, 2) (2, 2) (2, 2)) k = 4
0 1 0 1 0 0 1 1
((1, 2) (2, 1) (1, 2) (2, 1)) ((1, 2) (1, 2) (2, 1) (2, 1)) k = 3
1 0 1 0 0 0 1 1
((1, 1) (1, 1) (1, 1) (1, 1)) ((0, 2) (0, 2) (2, 0) (2, 0)) k = 2
0 1 0 1 1 1 0 0
((0, 1) (1, 0) (0, 1) (1, 0)) ((0, 1) (0, 1) (1, 0) (1, 0)) k = 1
1 0 1 0 1 1 0 0
((0, 0) (0, 0) (0, 0) (0, 0)) ((0, 0) (0, 0), (0, 0) (0, 0))
The number of solutions is 90 for n = 4 and 116,963,796,250 for n = 8.
Links to the MAPLE implementation of the dynamic programming approach may be found among the external links.
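The memoized counting function described above is short enough to write out in full. The following Python rendering is one possible sketch (the variable names are mine, not from the text); for n = 4 it reports 90 admissible boards.

from functools import lru_cache
from itertools import combinations

n = 4  # board size; each row and column must end up with n/2 zeros and n/2 ones

@lru_cache(maxsize=None)
def count(cols):
    # cols is a tuple of (zeros_left, ones_left) pairs, one pair per column
    if all(pair == (0, 0) for pair in cols):
        return 1                                  # every requirement met: one admissible board
    total = 0
    for ones in combinations(range(n), n // 2):   # columns receiving a one in the current row
        new_cols, ok = [], True
        for j, (z, o) in enumerate(cols):
            z, o = (z, o - 1) if j in ones else (z - 1, o)
            if z < 0 or o < 0:                    # requirement violated: prune this assignment
                ok = False
                break
            new_cols.append((z, o))
        if ok:
            total += count(tuple(new_cols))
    return total

print(count(tuple((n // 2, n // 2) for _ in range(n))))   # prints 90 for n = 4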
Checkerboard
Consider a checkerboard with n × n squares and a cost function c(i, j) which returns a cost associated with square (i,j) (i being the row, j being the column). For instance (on a 5 × 5 checkerboard),
5:  6  7  4  7  8
4:  7  6  1  1  4
3:  3  5  7  8  2
2:  –  6  7  0  –
1:  –  –  5  –  –
     1  2  3  4  5
Thus c(1, 3) = 5
Let us say there was a checker that could start at any square on the first rank (i.e., row) and you wanted to know the shortest path (the sum of the minimum costs at each visited rank) to get to the last rank; assuming the checker could move only diagonally left forward, diagonally right forward, or straight forward. That is, a checker on (1,3) can move to (2,2), (2,3) or (2,4).
5:  –  –  –  –  –
4:  –  –  –  –  –
3:  –  –  –  –  –
2:  –  x  x  x  –
1:  –  –  o  –  –
     1  2  3  4  5
This problem exhibits optimal substructure. That is, the solution to the entire problem relies on solutions to subproblems. Let us define a function q(i, j) as
q(i, j) = the minimum cost to reach square (i, j).
Starting at rank n and descending to rank 1, we compute the value of this function for all the squares at each successive rank. Picking the square that holds the minimum value at each rank gives us the shortest path between rank n and rank 1.
The function q(i, j) is equal to the minimum cost to get to any of the three squares below it (since those are the only squares that can reach it) plus c(i, j). For instance:
5:  –  –  –  –  –
4:  –  –  A  –  –
3:  –  B  C  D  –
2:  –  –  –  –  –
1:  –  –  –  –  –
     1  2  3  4  5
Now, let us define q(i, j) in somewhat more general terms:
q(i, j) = infinity                                                      if j < 1 or j > n
q(i, j) = c(i, j)                                                       if i = 1
q(i, j) = min(q(i − 1, j − 1), q(i − 1, j), q(i − 1, j + 1)) + c(i, j)  otherwise
The first line of this equation deals with a board modeled as squares indexed on 1 at the lowest bound and n at the highest bound. The second line specifies what happens at the first rank; providing a base case. The third line, the recursion, is the important part. It represents the A,B,C,D terms in the example. From this definition we can derive straightforward recursive code for q(i, j). In the following pseudocode, n is the size of the board, c(i, j) is the cost function, and min() returns the minimum of a number of values:
function minCost(i, j)
if j < 1 or j > n
return infinity
else if i = 1
return c(i, j)
else
return min( minCost(i-1, j-1), minCost(i-1, j), minCost(i-1, j+1) ) + c(i, j)
This function only computes the path cost, not the actual path. We discuss the actual path below. This, like the Fibonacci-numbers example, is horribly slow because it too exhibits the overlapping sub-problems attribute. That is, it recomputes the same path costs over and over. However, we can compute it much faster in a bottom-up fashion if we store path costs in a two-dimensional array q[i, j] rather than using a function. This avoids recomputation; all the values needed for array q[i, j] are computed ahead of time only once. Precomputed values for (i,j) are simply looked up whenever needed.
We also need to know what the actual shortest path is. To do this, we use another array p[i, j]; a predecessor array. This array records the path to any square s. The predecessor of s is modeled as an offset relative to the index (in q[i, j]) of the precomputed path cost of s. To reconstruct the complete path, we lookup the predecessor of s, then the predecessor of that square, then the predecessor of that square, and so on recursively, until we reach the starting square. Consider the following pseudocode:
function computeShortestPathArrays()
for x from 1 to n
q[1, x] := c(1, x)
for y from 1 to n
q[y, 0] := infinity
q[y, n + 1] := infinity
for y from 2 to n
for x from 1 to n
m := min(q[y-1, x-1], q[y-1, x], q[y-1, x+1])
q[y, x] := m + c(y, x)
if m = q[y-1, x-1]
p[y, x] := -1
else if m = q[y-1, x]
p[y, x] := 0
else
p[y, x] := 1
Now the rest is a simple matter of finding the minimum and printing it.
function computeShortestPath()
computeShortestPathArrays()
minIndex := 1
min := q[n, 1]
for i from 2 to n
if q[n, i] < min
minIndex := i
min := q[n, i]
printPath(n, minIndex)
function printPath(y, x)
print(x)
print("<-")
if y = 2
print(x + p[y, x])
else
printPath(y-1, x + p[y, x])
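For reference, the two pseudocode routines above translate almost line for line into a runnable program. The following Python sketch uses a hypothetical 5 × 5 cost grid (any grid of numbers would do) and recovers both the minimum path cost and the corresponding column sequence.

INF = float("inf")

cost = [                    # cost[y - 1][x - 1] = c(y, x); a hypothetical 5 x 5 cost grid
    [3, 4, 5, 6, 7],        # rank 1 (the starting rank)
    [2, 8, 1, 9, 3],        # rank 2
    [4, 4, 6, 2, 5],        # rank 3
    [7, 1, 3, 5, 8],        # rank 4
    [6, 2, 4, 3, 9],        # rank 5 (the target rank)
]
n = len(cost)

q = [[INF] * (n + 2) for _ in range(n + 1)]   # q[y][x]; columns 0 and n+1 act as infinite sentinels
p = [[0] * (n + 2) for _ in range(n + 1)]     # predecessor offsets (-1, 0, or +1)

for x in range(1, n + 1):
    q[1][x] = cost[0][x - 1]

for y in range(2, n + 1):
    for x in range(1, n + 1):
        below = [q[y - 1][x - 1], q[y - 1][x], q[y - 1][x + 1]]
        m = min(below)
        q[y][x] = m + cost[y - 1][x - 1]
        p[y][x] = below.index(m) - 1          # -1, 0, or +1, as in the pseudocode

x = min(range(1, n + 1), key=lambda i: q[n][i])   # cheapest square on the last rank
path = [x]
for y in range(n, 1, -1):                         # walk the predecessor offsets back down to rank 1
    x += p[y][x]
    path.append(x)
print("minimum cost:", q[n][path[0]], "columns from rank", n, "down to rank 1:", path)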
Sequence alignment
In genetics, sequence alignment is an important application where dynamic programming is essential. Typically, the problem consists of transforming one sequence into another using edit operations that replace, insert, or remove an element. Each operation has an associated cost, and the goal is to find the sequence of edits with the lowest total cost.
The problem can be stated naturally as a recursion, a sequence A is optimally edited into a sequence B by either:
inserting the first character of B, and performing an optimal alignment of A and the tail of B
deleting the first character of A, and performing the optimal alignment of the tail of A and B
replacing the first character of A with the first character of B, and performing optimal alignments of the tails of A and B.
The partial alignments can be tabulated in a matrix, where cell (i,j) contains the cost of the optimal alignment of A[1..i] to B[1..j]. The cost in cell (i,j) can be calculated by adding the cost of the relevant operations to the cost of its neighboring cells, and selecting the optimum.
Different variants exist, see Smith–Waterman algorithm and Needleman–Wunsch algorithm.
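A minimal tabulation of this recursion, with unit costs for insertion, deletion, and substitution (a common but here hypothetical choice), looks as follows in Python; cost[i][j] holds the optimal cost of editing A[1..i] into B[1..j].

def edit_distance(A, B):
    m, n = len(A), len(B)
    cost = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        cost[i][0] = i                                  # delete all remaining characters of A
    for j in range(n + 1):
        cost[0][j] = j                                  # insert all remaining characters of B
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            replace = 0 if A[i - 1] == B[j - 1] else 1  # free if the characters already match
            cost[i][j] = min(cost[i - 1][j] + 1,        # delete from A
                             cost[i][j - 1] + 1,        # insert from B
                             cost[i - 1][j - 1] + replace)
    return cost[m][n]

print(edit_distance("GCATGCG", "GATTACA"))              # hypothetical example sequences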
Tower of Hanoi puzzle
A model set of the Towers of Hanoi (with 8 disks)
An animated solution of the Tower of Hanoi puzzle for T(4,3)
The Tower of Hanoi or Towers of Hanoi is a mathematical game or puzzle. It consists of three rods, and a number of disks of different sizes which can slide onto any rod. The puzzle starts with the disks in a neat stack in ascending order of size on one rod, the smallest at the top, thus making a conical shape.
The objective of the puzzle is to move the entire stack to another rod, obeying the following rules:
Only one disk may be moved at a time.
Each move consists of taking the upper disk from one of the rods and sliding it onto another rod, on top of the other disks that may already be present on that rod.
No disk may be placed on top of a smaller disk.
The dynamic programming solution consists of solving the functional equation
S(n,h,t) = S(n-1,h, not(h,t)) ; S(1,h,t) ; S(n-1,not(h,t),t)
where n denotes the number of disks to be moved, h denotes the home rod, t denotes the target rod, not(h,t) denotes the third rod (neither h nor t), ";" denotes concatenation, and
S(n, h, t) := solution to a problem consisting of n disks that are to be moved from rod h to rod t.
For n=1 the problem is trivial, namely S(1,h,t) = "move a disk from rod h to rod t" (there is only one disk left).
The number of moves required by this solution is 2n − 1. If the objective is to maximize the number of moves (without cycling) then the dynamic programming functional equation is slightly more complicated and 3n − 1 moves are required.
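The functional equation above transcribes directly into a recursive program. The sketch below returns the concatenated list of moves; the rod labels "A", "B", "C" are an assumption made for the example.

def hanoi(n, h, t):
    """Moves (from_rod, to_rod) for n disks from home rod h to target rod t."""
    if n == 1:
        return [(h, t)]                               # S(1, h, t): move the single disk from h to t
    other = ({"A", "B", "C"} - {h, t}).pop()          # not(h, t): the third rod
    return hanoi(n - 1, h, other) + [(h, t)] + hanoi(n - 1, other, t)

moves = hanoi(3, "A", "C")
print(len(moves), moves)                              # 2**3 - 1 = 7 moves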
Egg dropping puzzle
A famous puzzle relates to dropping eggs from a building to determine at which height they start to break. The following is a description involving N=2 eggs and a building with H=36 floors:Konhauser J.D.E., Velleman, D., and Wagon, S. (1996). Which way did the Bicycle Go? Dolciani Mathematical Expositions – No 18. The Mathematical Association of America.
Suppose that we wish to know which stories in a 36-story building are safe to drop eggs from, and which will cause the eggs to break on landing (using U.S. English terminology, in which the first floor is at ground level). We make a few assumptions:
An egg that survives a fall can be used again.
A broken egg must be discarded.
The effect of a fall is the same for all eggs.
If an egg breaks when dropped, then it would break if dropped from a higher window.
If an egg survives a fall, then it would survive a shorter fall.
It is not ruled out that the first-floor windows break eggs, nor is it ruled out that eggs can survive the 36th-floor windows.
If only one egg is available and we wish to be sure of obtaining the right result, the experiment can be carried out in only one way. Drop the egg from the first-floor window; if it survives, drop it from the second-floor window. Continue upward until it breaks. In the worst case, this method may require 36 droppings. Suppose 2 eggs are available. What is the lowest number of egg-droppings that is guaranteed to work in all cases?
To derive a dynamic programming functional equation for this puzzle, let the state of the dynamic programming model be a pair s = (n,k), where
n = number of test eggs available, n = 0, 1, 2, 3, ..., N − 1.
k = number of (consecutive) floors yet to be tested, k = 0, 1, 2, ..., H − 1.
For instance, s = (2,6) indicates that two test eggs are available and 6 (consecutive) floors are yet to be tested. The initial state of the process is s = (N,H) where N denotes the number of test eggs available at the commencement of the experiment. The process terminates either when there are no more test eggs (n = 0) or when k = 0, whichever occurs first. If termination occurs at state s = (0,k) and k > 0, then the test failed.
Now, let
W(n,k) = minimum number of trials required to identify the value of the critical floor under the worst-case scenario given that the process is in state s = (n,k).
Then it can be shown that
W(n,k) = 1 + min{max(W(n − 1, x − 1), W(n,k − x)): x = 1, 2, ..., k }
with W(n,0) = 0 for all n > 0 and W(1,k) = k for all k. It is easy to solve this equation iteratively by systematically increasing the values of n and k.
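Solving the recurrence iteratively, as suggested, takes only a few lines. The sketch below tabulates W(n, k) bottom-up; for N = 2 eggs and H = 36 floors it returns 8 trials.

def min_trials(N, H):
    """W(N, H): worst-case number of trials with N eggs and H floors, from the recurrence above."""
    w = [[0] * (H + 1) for _ in range(N + 1)]
    for k in range(1, H + 1):
        w[1][k] = k                      # one egg: floors must be tested one by one
    for n in range(2, N + 1):
        for k in range(1, H + 1):
            w[n][k] = 1 + min(max(w[n - 1][x - 1], w[n][k - x]) for x in range(1, k + 1))
    return w[N][H]

print(min_trials(2, 36))   # 8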
Faster DP solution using a different parametrization
Notice that the above solution takes O(nk²) time with a DP solution. This can be improved to O(nk log k) time by binary searching on the optimal x in the above recurrence, since W(n − 1, x − 1) is increasing in x while W(n, k − x) is decreasing in x, thus a local minimum of max(W(n − 1, x − 1), W(n, k − x)) is a global minimum. Also, by storing the optimal x for each cell in the DP table and referring to its value for the previous cell, the optimal x for each cell can be found in constant time, improving it to O(nk) time. However, there is an even faster solution that involves a different parametrization of the problem:
Let be the total number of floors such that the eggs break when dropped from the th floor (The example above is equivalent to taking ).
Let be the minimum floor from which the egg must be dropped to be broken.
Let be the maximum number of values of that are distinguishable using tries and eggs.
Then for all .
Let be the floor from which the first egg is dropped in the optimal strategy.
If the first egg broke, is from to and distinguishable using at most tries and eggs.
If the first egg did not break, is from to and distinguishable using tries and eggs.
Therefore, .
Then the problem is equivalent to finding the minimum such that .
To do so, we could compute in order of increasing , which would take time.
Thus, if we separately handle the case of , the algorithm would take time.
But the recurrence relation can in fact be solved, giving , which can be computed in time using the identity for all .
Since for all , we can binary search on to find , giving an algorithm.
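One way to make this parametrization concrete (the function name and the linear scan over t below are choices made for this sketch, not taken from the text) is to compute the maximum number of floors that can be fully resolved with t tries and n eggs, then look for the smallest sufficient t. The recurrence also admits a closed form as a sum of binomial coefficients, which is what enables the binary search mentioned above.

from functools import lru_cache

@lru_cache(maxsize=None)
def floors_handled(t, n):
    """Maximum number of floors that can be fully resolved with t tries and n eggs."""
    if t == 0 or n == 0:
        return 0
    # one floor is resolved by the first drop, plus the "broke" and "survived" sub-cases
    return floors_handled(t - 1, n - 1) + floors_handled(t - 1, n) + 1

def min_trials(n_eggs, floors):
    t = 0
    while floors_handled(t, n_eggs) < floors:
        t += 1
    return t

print(min_trials(2, 36))   # 8, in agreement with the W(n, k) tabulation above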
Matrix chain multiplication
Matrix chain multiplication is a well-known example that demonstrates the utility of dynamic programming. For example, engineering applications often have to multiply a chain of matrices. It is not surprising to find matrices of large dimensions, for example 100×100. Therefore, our task is to multiply matrices A1, A2, ..., An. Matrix multiplication is not commutative, but is associative; and we can multiply only two matrices at a time. So, we can multiply this chain of matrices in many different ways, for example:
and so on. There are numerous ways to multiply this chain of matrices. They will all produce the same final result, however they will take more or less time to compute, based on which particular matrices are multiplied. If matrix A has dimensions m×n and matrix B has dimensions n×q, then matrix C=A×B will have dimensions m×q, and will require m*n*q scalar multiplications (using a simplistic matrix multiplication algorithm for purposes of illustration).
For example, let us multiply matrices A, B and C. Let us assume that their dimensions are m×n, n×p, and p×s, respectively. Matrix A×B×C will be of size m×s and can be calculated in two ways shown below:
A×(B×C) This order of matrix multiplication will require nps + mns scalar multiplications.
(A×B)×C This order of matrix multiplication will require mnp + mps scalar calculations.
Let us assume that m = 10, n = 100, p = 10 and s = 1000. So, the first way to multiply the chain will require 1,000,000 + 1,000,000 calculations. The second way will require only 10,000 + 100,000 calculations. Obviously, the second way is faster, and we should multiply the matrices using that arrangement of parenthesis.
Therefore, our conclusion is that the order of parenthesis matters, and that our task is to find the optimal order of parenthesis.
At this point, we have several choices, one of which is to design a dynamic programming algorithm that will split the problem into overlapping problems and calculate the optimal arrangement of parenthesis. The dynamic programming solution is presented below.
Let's call m[i,j] the minimum number of scalar multiplications needed to multiply a chain of matrices from matrix i to matrix j (i.e. Ai × .... × Aj, i.e. i<=j). We split the chain at some matrix k, such that i <= k < j, and try to find out which combination produces minimum m[i,j].
The formula is:
if i = j, m[i,j] = 0
if i < j, m[i,j] = min over all possible values of k of (m[i,k] + m[k+1,j] + p[i-1]*p[k]*p[j])
where k ranges from i to j − 1, and
p[i-1] is the row dimension of matrix i,
p[k] is the column dimension of matrix k,
p[j] is the column dimension of matrix j.
This formula can be coded as shown below, where input parameter "chain" is the chain of matrices A1, A2, ..., An:
function OptimalMatrixChainParenthesis(chain)
n = length(chain)
for i = 1, n
m[i,i] = 0 // Since it takes no calculations to multiply one matrix
for len = 2, n
for i = 1, n - len + 1
j = i + len -1
m[i,j] = infinity // So that the first calculation updates
for k = i, j-1
q = m[i, k] + m[k+1, j] + p[i-1]*p[k]*p[j] // cost of splitting the chain at matrix k
if q < m[i, j] // The new order of parentheses is better than what we had
m[i, j] = q // Update
s[i, j] = k // Record which k to split on, i.e. where to place the parenthesis
So far, we have calculated values for all possible m[i, j], the minimum number of calculations to multiply a chain from matrix i to matrix j, and we have recorded the corresponding "split point" s[i, j]. For example, if we are multiplying chain A1×A2×A3×A4, and it turns out that m[1, 3] = 100 and s[1, 3] = 2, that means that the optimal placement of parenthesis for matrices 1 to 3 is (A1×A2)×A3 and to multiply those matrices will require 100 scalar calculations.
This algorithm will produce "tables" m[, ] and s[, ] that will have entries for all possible values of i and j. The final solution for the entire chain is m[1, n], with corresponding split at s[1, n]. Unraveling the solution will be recursive, starting from the top and continuing until we reach the base case, i.e. multiplication of single matrices.
Therefore, the next step is to actually split the chain, i.e. to place the parenthesis where they (optimally) belong. For this purpose we could use the following algorithm:
function PrintOptimalParenthesis(s, i, j)
if i = j
print "A"i
else
print "("
PrintOptimalParenthesis(s, i, s[i, j])
PrintOptimalParenthesis(s, s[i, j] + 1, j)
print ")"
Of course, this algorithm is not useful for actual multiplication. This algorithm is just a user-friendly way to see what the result looks like.
To actually multiply the matrices using the proper splits, we need the following algorithm:
function MatrixChainMultiply(chain from 1 to n) // returns the final matrix, i.e. A1×A2×... ×An
OptimalMatrixChainParenthesis(chain from 1 to n) // this will produce s[ . ] and m[ . ] "tables"
OptimalMatrixMultiplication(s, chain from 1 to n) // actually multiply
function OptimalMatrixMultiplication(s, i, j) // returns the result of multiplying a chain of matrices from Ai to Aj in optimal way
if i < j
// keep on splitting the chain and multiplying the matrices in left and right sides
LeftSide = OptimalMatrixMultiplication(s, i, s[i, j])
RightSide = OptimalMatrixMultiplication(s, s[i, j] + 1, j)
return MatrixMultiply(LeftSide, RightSide)
else if i = j
return Ai // matrix at position i
else
print "error, i <= j must hold"
function MatrixMultiply(A, B) // function that multiplies two matrices
if columns(A) = rows(B)
for i = 1, rows(A)
for j = 1, columns(B)
C[i, j] = 0
for k = 1, columns(A)
C[i, j] = C[i, j] + A[i, k]*B[k, j]
return C
else
print "error, incompatible dimensions."
History of the name
The term dynamic programming was originally used in the 1940s by Richard Bellman to describe the process of solving problems where one needs to find the best decisions one after another. By 1953, he refined this to the modern meaning, referring specifically to nesting smaller decision problems inside larger decisions,Stuart Dreyfus. "Richard Bellman on the birth of Dynamical Programming". and the field was thereafter recognized by the IEEE as a systems analysis and engineering topic. Bellman's contribution is remembered in the name of the Bellman equation, a central result of dynamic programming which restates an optimization problem in recursive form.
Bellman explains the reasoning behind the term dynamic programming in his autobiography, Eye of the Hurricane: An Autobiography.
The word dynamic was chosen by Bellman to capture the time-varying aspect of the problems, and because it sounded impressive. The word programming referred to the use of the method to find an optimal program, in the sense of a military schedule for training or logistics. This usage is the same as that in the phrases linear programming and mathematical programming, a synonym for mathematical optimization.
The above explanation of the origin of the term may be inaccurate: According to Russell and Norvig, the above story "cannot be strictly true, because his first paper using the term (Bellman, 1952) appeared before Wilson became Secretary of Defense in 1953." Also, Harold J. Kushner stated in a speech that, "On the other hand, when I asked [Bellman] the same question, he replied that he was trying to upstage Dantzig's linear programming by adding dynamic. Perhaps both motivations were true."
See also
References
Further reading
An accessible introduction to dynamic programming in economics; MATLAB code for the book is available.
Includes an extensive bibliography of the literature in the area, up to the year 1954.
Dover paperback edition (2003).
Especially pp. 323–69.
External links
King, Ian, 2002 (1987), "A Simple Introduction to Dynamic Programming in Macroeconomic Models." An introduction to dynamic programming as an important tool in economic theory.
Category:Optimization algorithms and methods
Category:Equations
Category:Systems engineering
Category:Optimal control
|
computer_science
| 6,252
|
140416
|
United States Agency for International Development
|
https://en.wikipedia.org/wiki/United_States_Agency_for_International_Development
|
The United States Agency for International Development (USAID) is a de jure agency of the executive branch of the United States federal government. USAID was the world's largest foreign aid agency, but it received major cutbacks in 2025, with its remaining functions being transferred to the United States Department of State.
USAID was established in 1961 to compete with the Soviet Union during the Cold War through the use of soft power. In 2025, the Trump administration ended 83% of overall projects. However, USAID had been reorganized by the United States Congress as an independent agency in 1998 and can only be closed down by an act of Congress. As such, it legally still exists. In addition, budget requests, the Office of Inspector General, and court filings have continued to acknowledge USAID's legal existence.
From 2001 to 2024, USAID had an average budget of $23 billion a year and missions in over 100 countries in areas as diverse as education, global health, environmental protection, and democratic governance. In the twenty years from 2001 to 2021, USAID saved a yearly estimated range of between 4.1 and 4.7 million lives, with an estimated subset of between 1.2 and 1.7 million children under five saved.
History
Creation (1961) and reorganization (1998)
In August 1953, President Eisenhower reorganized country offices as "United States Operations Missions" (USOMs). Congress passed the Foreign Assistance Act on September 4, 1961, which reorganized U.S. foreign assistance programs and mandated the creation of an agency to administer economic aid. The goal of this agency was to counter Soviet influence during the Cold War and to advance U.S. soft power through socioeconomic development. USAID was subsequently established by the executive order of President John F. Kennedy, who sought to unite several existing foreign assistance organizations and programs under one agency. After this transition, USOMs continued to exist for a time as an element of USAID. USAID by law is placed under "the direct authority and policy guidance of the Secretary of State".
In 1998, Congress established USAID as a functionally independent executive agency with the Foreign Affairs Reform and Restructuring Act, which gave the President 60 days to abolish or reorganize USAID. President Bill Clinton chose the second option and reorganized USAID, which retained its independence from the U.S. Department of State.
History (1961–2025)
USAID's predecessor agency was already substantial, with 6,400 U.S. staff in developing-country field missions in 1961. Except for the peak years of the Vietnam War, 1965–70, that was more U.S. field staff than USAID would have in the future, and triple the number USAID has had in field missions in the years since 2000.
After his inauguration as president on January 20, 1961, John F. Kennedy created the Peace Corps by Executive Order on March 1, 1961. On March 22, he sent a special message to Congress on foreign aid, asserting that the 1960s should be a "Decade of Development" and proposing to unify U.S. development assistance administration into a single agency. He sent a proposed "Act for International Development" to Congress in May and the resulting "Foreign Assistance Act" was approved in September, repealing the Mutual Security Act. In November, Kennedy signed the act and issued an Executive Order tasking the Secretary of State to create, within the State Department, the "Agency for International Development" (or A.I.D.: subsequently re-branded as USAID), as the successor to both ICA and the Development Loan Fund. With these actions, the U.S. created a permanent agency working with administrative autonomy under the policy guidance of the State Department to implement, through resident field missions, a global program of both technical and financial development assistance for low-income countries.
In the 21 years from 2001 to 2021 inclusive, USAID funding saved an estimated 92 million lives, with a range between 86 and 98 million. This is an estimated range of between 4.1 and 4.7 million lives saved per year. Among these lives saved were an estimated 30 million children younger than five, with a range between 26 and 35 million. This is an estimated range of between 1.2 and 1.7 million per year in the subset of children under five saved. (Lancet, "Evaluating the impact . . .", July 2025.) These estimated ranges of lives saved are reported with 95% confidence intervals (CI). RR means relative risk; an RR less than 1.0 means that the treatment group has a lower risk than the control group.
Budget cuts and absorption into the State Department (2025)
Initial 90-day freeze
On January 24, 2025, during the second Trump administration, President Donald Trump ordered a near-total freeze on all foreign aid; the freeze was directed by Executive Order 14169. In February, the administration placed most employees on administrative leave. Multiple lawsuits were filed against the Trump administration alleging that these actions were not within its powers without congressional authorization. Also in February, the administration made several allegations of wasteful spending and fraud. An analysis by Al Jazeera reported the claims regarding fraud to be unsubstantiated. An analysis by The Washington Post found some of the claims made by the administration regarding USAID spending to be misleading.
Several days later, Secretary of State Marco Rubio issued a waiver for humanitarian aid. However, a key issue developed over whether the waivers for lifesaving aid were actually translating into aid flowing. Despite the waiver, there was still much confusion about what agencies should do. More than 1,000 USAID employees and contractors were fired or furloughed following the near-total freeze on U.S. global assistance that the second Trump administration implemented. Matt Hopson, the USAID chief of staff appointed by the Trump administration, resigned.
On January 27, 2025, the agency's official government website was shut down.
Role of Elon Musk
On January 30, 2025, Elon Musk demanded that Jason Gray, acting administrator of USAID at that time, shut off email and cellphone access for USAID workers around the world, including in conflict zones. Gray refused, saying that doing so would put their lives at risk. By the next day, Gray was removed from his post.
On February 3, 2025, Elon Musk, who has been carrying out parts of Trump's cost-cutting agenda, announced that he and Trump were in the process of shutting down USAID, claiming it to be a "criminal organization" and that it was "beyond repair". Because USAID's Inspector General had previously launched a probe into Starlink, Musk was criticized as having a conflict of interest.
Andrew Natsios, the administrator for USAID during the George W. Bush administration, told PBS that,
"With all due respect, none of these people know anything about AID. What does Musk know about international development? Absolutely nothing. He has a bunch of young kids in their 20s. They don't know. They're techies. They don't know anything about international development. They don't know anything about the Global South."
Role of U.S. secretary of state Marco Rubio
On February 3, 2025, Secretary of State Marco Rubio announced that President Trump had appointed him Acting Administrator of USAID and that USAID was being merged into the State Department. The legality of these actions is disputed given the mandate for the agency's creation in the Foreign Assistance Act.
Taping over a USAID sign at the Ronald Reagan Building in Washington, DC, on February 7, 2025
It was announced that on February 6, 2025, at 11:59 pm (EST), all USAID direct-hire personnel would be placed on administrative leave globally, with the exception of designated personnel responsible for mission-critical functions, core leadership and specially designated programs.
Initial effects on health assistance
On February 6, 2025, reports indicated that the total number of employees to be retained was 294, out of a total of more than 10,000. Trump declared that agency leaders were "radical left lunatics", while the State Department ordered them to halt virtually all their projects, even if that meant ceasing programs that helped to eradicate smallpox and prevented millions of HIV cases. The freeze in HIV relief programs, including PEPFAR, is estimated to jeopardize treatment access for 20 million people, including 500,000 children. This drastic action led to sudden pauses in over 30 clinical trials for ailments such as HIV, malaria, cholera, cervical cancer, and tuberculosis, leaving participants with medical devices in their bodies and cut off from researchers, likely going against the principles of the Declaration of Helsinki.
Initial effects on wartime assistance
It also led to a pause in other efforts such as wartime help in Ukraine, hospital assistance in Syria, education programs in Mali, and conservation efforts in the Amazon rainforest.
On February 6, CBS News reported that due to the civil war in Sudan, often called the "Forgotten War" because it receives comparatively little attention compared with the wars in Ukraine and Gaza, an estimated 3 million children under age 5 are suffering from acute malnutrition. In September 2024, the Biden administration planned for $424 million in new humanitarian assistance for Sudanese persons, including $276 million being sent through USAID. However, the Trump administration's 90-day freeze interrupted this.
The American Farm Bureau Federation stated, "AID plays a critical role in reducing hunger around the world while sourcing markets for the surplus foods America's farmers and ranchers grow".
Famine Early Warning
CNN reported on March 9 that the Famine Early Warning Systems Network (FEWS NET) had stopped operating and had its data pulled offline. FEWS NET had been considered the "gold standard" of famine warning, providing 8-month projections of food security issues.
The Integrated Food Security Phase Classification (IPC) is another early-warning system supported by various governments, including the United States. IPC uses volunteers for specific country analysis, whereas FEWS NET had used paid staff.
Reduction to 17% of programs
On March 10, 2025, U.S. Secretary of State Marco Rubio announced that the Trump administration had concluded its review, and 83% of USAID's programs would be cancelled, involving approximately 5,200 contracts.
Gavi Foundation and vaccines
On March 24, 2025, the Department of Government Efficiency (DOGE) announced the termination of a $2.63 billion grant from USAID to the Gavi Foundation because the Gavi Foundation "prioritizes 'zero-dose' children." DOGE stated the United States federal government saved $1.75 billion by cancelling the grant, which was 6.575% of the total USAID budget.
Partial add-backs for UN World Food Programme
On April 8, 2025, USAID announced it was making some exceptions to the recent announcement of cancelled participation in the UN's World Food Programme. Specifically, USAID was restoring food aid to Lebanon, Syria, Somalia, Jordan, Iraq and Ecuador, and other countries for a total of 14 nations (plus the International Organization for Migration in the Pacific region). However, food aid was not restored to Yemen or Afghanistan with a State Department spokesperson saying this was "based on concern that the funding was benefiting terrorist groups, including the Houthis and the Taliban."
Competition with China
Senator Roger Wicker (R-Mississippi) said, "I have felt for a long time that USAID is our way to combat the [$1 trillion] Belt and Road Initiative, which is China's effort to really gain influence around the world, including Africa and South America in the Western Hemisphere." In addition, China often completes such projects on the basis of loans, not grants. Since 2000, African countries have been the recipient of over $182 billion in Chinese loans, with interest rates averaging about 3%.
In February 2025, China pledged an additional $4.4 million to de-mining efforts in Cambodia.
Regarding the March 28 Myanmar earthquake, a U.S. State Department spokesperson stated that the United States is working through local partners in Myanmar, and said, "The success in the work and our impact will still be there." However, a former USAID mission head in Myanmar said, "This is the new normal. This is what it looks like when the United States sits on the international sidelines, when the United States is a weaker international player, when it cedes the space to other global players like China." Other potential issues are secondary crisis(es) from diseases such as cholera which can appear in the days and weeks following a disaster.
Michael Sobolik, a China analyst at the conservative Hudson Institute think tank and a former aide to Senator Ted Cruz (R-Texas), has stated, "Sure, USAID was doing some highly questionable stuff that's worthy of review. But don't throw the baby out with the bathwater. Beijing is hoping we do exactly that."
Lawsuits
Case | Court | Case no.(s) | First filing date
American Foreign Service Association, et al. v. Trump, et al. | U.S. District Court for the District of Columbia | 1:25-cv-00352 | February 6, 2025
AIDS Vaccine Advocacy Coalition, et al. v. United States Department of State, et al. | U.S. District Court for the District of Columbia | 1:25-cv-00400 | February 10, 2025
Global Health Council, et al. v. Trump, et al. | U.S. District Court for the District of Columbia | 1:25-cv-00402 | February 11, 2025
Personal Services Contractor Association v. Trump, et al. | U.S. District Court for the District of Columbia | 1:25-cv-00469 | February 18, 2025
American Federation of Government Employees v. Trump
This lawsuit claimed the Trump administration had violated separation of powers, the Take Care Clause of the Constitution, and the Administrative Procedure Act. Initially, U.S. district judge Carl Nichols, whom Trump had nominated in 2019, stated he would enter a temporary restraining order pausing the plan to put thousands of employees on leave and remove workers from abroad.
However, on February 21, 2025, Judge Nichols reversed himself and cleared the way for the Trump administration to move forward with thousands of layoffs of USAID staffers, as well as providing those abroad with a 30-day deadline to move back to the United States at government expense. Nichols had previously stated that these Trump administration policies threatened the safety of USAID workers abroad because many were deployed in unstable regions.
5-4 Supreme Court decision that completed projects must be paid for
On February 10, 2025, the AIDS Vaccine Advocacy Coalition and the Journalism Development Network filed suit in District Court seeking a preliminary injunction that would prevent the enforcement of the 90-day freeze.
On February 13, 2025, a district court ruled that the government must pay $2 billion for projects already completed. On March 5, 2025, the United States Supreme Court ruled 5–4 that the federal government must pay for completed projects. Voting in the majority were the three Democratic appointees, Justices Sotomayor, Kagan, and Jackson, and two Republican appointees, Chief Justice Roberts and Justice Barrett. Voting in the minority were the four other Republican appointees, Justices Thomas, Alito, Gorsuch, and Kavanaugh. However, the district judge was ordered to proceed with "due regard for the feasibility of any compliance timelines."
On March 11, Associated Press reported that "until recently" no payments had been made because DOGE had disabled the payment system. On March 20, Reuters reported that the Trump administration was close to paying the $671 million owed for projects completed by the organizations which had sued.
Lawsuit which claimed Musk needed Senate confirmation
On March 18, 2025, U.S. district judge Theodore Chuang ruled that Musk's and DOGE's actions in placing USAID employees on leave were likely unconstitutional. Judge Chuang issued a preliminary injunction against further employees being placed on leave, buildings being closed, or websites having their contents deleted.
On March 28, 2025, the U.S. Fourth Circuit Court of Appeals overruled Judge Chuang on the preliminary injunction, without deciding the merits. Judge Marvin Quattlebaum wrote, "And none of this is to say that plaintiffs will not be able to develop evidence of unconstitutional conduct as the case progresses. Time will tell."
Absorption by State Department
On March 28, 2025, U.S. secretary of state Marco Rubio notified Congress that USAID would be dissolved and absorbed into the U.S. State Department, stating that USAID had been fiscally irresponsible and had strayed from its original mission. He argued, "Unfortunately, USAID strayed from its original mission long ago. As a result, the gains were too few and the costs were too high."
Since July 1, 2025, USAID's operations have ceased, and U.S. foreign assistance has been administered by the U.S. State Department. In connection with this effort, 83% of USAID programs were canceled and 94% of staff were laid off.
Representative Jim Himes (D-Conn.), the top Democrat on the House Intelligence Committee, stated as an example of what he viewed as abrupt and irresponsible cost-cutting: "Thanks to DOGE, the men we paid to guard the most vicious ISIS terrorists in the world in Syria walked off the job."
USAID employees were not automatically transferred. Instead, the State Department is engaging in a "separate and independent hiring process."
Impact of the demise of USAID on global health
The impact of the demise of USAID on global health is wide-reaching. A study published in The Lancet on June 30, 2025, estimated that funding cuts and the abolition of the agency could result in at least 14 million preventable deaths by 2030, 4.5 million of which could be among children under 5 years old.
PEPFAR and HIV medication
The Lancet study concluded that the discontinuation of PEPFAR alone could cause as many as 10.75 million new HIV infections and as many as 2.93 million deaths related to HIV. The study warned that for low and middle income countries, "the resulting shock would be similar in scale to a global pandemic or a major armed conflict."
Another study published in March 2025 concluded that the suspension of PEPFAR could result in HIV-related deaths surging to as high as 630,000 per year. Christine Stegling, deputy executive director at UNAIDS, estimated that there could be a 400% increase in AIDS-related deaths around the world if PEPFAR was not formally reauthorized for USAID funding, which represents around 6.3 million AIDS-related deaths within four years. In 2024, PEPFAR funds accounted for 14% of the entire health budget of Zimbabwe.
In March 2025, experts from the Center for Global Development estimated that before the freeze, USAID programs annually prevented approximately 1,650,000 deaths from HIV/AIDS, 500,000 deaths from lack of vaccines, 310,000 deaths from tuberculosis and 290,000 deaths from malaria.
Breastfeeding and maternal health
USAID-funded breastfeeding programs to reduce malnutrition in Nepal were brought to a halt following the aid freeze on January 20, 2025.
According to Pio Smith, UNFPA's Asia-Pacific regional director, the USAID freeze could lead to 1,200 maternal deaths and 109,000 additional unwanted pregnancies in the next three years in Afghanistan.
March USAID memo
A USAID info memo written by Nicholas Enrich, Acting Assistant Administrator for Global Health, dated March 4, 2025, outlined the risks of the aid freeze. He stated that a permanent suspension of lifesaving humanitarian aid posed a direct threat to public health, economic stability, national security, and biothreat vulnerability. He concluded: "Any decision to halt or significantly reduce global health funding for lifesaving humanitarian assistance (LHA)—despite approved waivers—and USAID global health programming, despite congressional mandates, would have severe domestic and global consequences." Less than 30 minutes after the memo's publication, Enrich was notified that he had been placed on administrative leave, a decision that had reportedly been made a week earlier.
Cases refuting claim that no one has died
Pe Kha Lau, 71, died after she was discharged from a USAID-funded healthcare facility operated by the International Rescue Committee (IRC) while still relying on oxygen to survive. In the Umpiem Mai camp in Thailand, witnesses reported the deaths of multiple patients who also relied on oxygen. The IRC offered its condolences to the family and friends of Pe Kha Lau. Nicholas Kristof also documented evidence contradicting Elon Musk's claim that "No one has died as a result of a brief pause to do a sanity check on foreign aid funding. No one." Secretary of State Marco Rubio similarly claimed while testifying before Congress that no death resulted from the shutdown.
Incineration of emergency food
In July 2025 The Atlantic reported that the order had been given to incinerate nearly 500 metric tons of emergency food. Citing former and current government employees, The Atlantic wrote that USAID had already bought the food, some $800,000 of high-energy biscuits (a stopgap measure for feeding children under 5) for the World Food Programme to distribute in Afghanistan and Pakistan. Instead, it remained in a warehouse in Dubai for months, was set to expire the day after the report, and would deteriorate quickly and be incinerated at a cost of $130,000. Employees could no longer ship the food without the permission of the new heads of American foreign assistance, which had been requested repeatedly. The Atlantic cited the sources as saying that improper storage or delivery complications such as floods or terrorism might previously have cost the agency a few dozen tons of fortified foods a year at most, and that they'd never before seen the U.S. government give up on food that could have been put to good use.
Secretary of State Marco Rubio testified before the House Appropriations Committee in May that the food would be distributed before spoiling. The Atlantic's reporter stated she had reviewed the incineration order, which had been given by the time of Rubio's testimony. These high-energy biscuits are estimated to be able to feed 1.5 million children for a week. This food was previously under the authority of Pete Marocco, and then under the authority of Jeremy Lewin.
Waste of mpox vaccines
According to the Africa CDC, nearly 800,000 doses of mpox vaccine that the Biden administration had pledged to donate to African countries could not be shipped because, by the time they could have been sent, their remaining shelf life had fallen below six months, which Africa CDC described as the minimum required to ship vaccines, ensure arrival in good condition, and allow implementation. Politico noted that the loss of the shots came after the Trump administration cut back on foreign aid programs and closed USAID, and that although the U.S. had not disclosed their price, UNICEF described a price of "up to $65" per dose as "the lowest price in the market."
Incineration of contraceptives
In July 2025 Reuters, citing unnamed sources and a screenshot, reported that contraceptive implants, pills, and intrauterine devices worth $9.7 million would be incinerated.
Reuters had reported in June that contraceptives meant largely for vulnerable women in Sub-Saharan Africa, including young girls who face higher risks from pregnancy as well as those fleeing conflict or who could not otherwise afford or access contraceptives, had been warehoused in Belgium and Dubai for months following Trump's cuts to foreign aid and USAID. In its July article Reuters reported that the Belgian stockpile would be incinerated in France as medical waste, costing $160,000 and likely comprising "dozens of truckloads". The Belgian foreign ministry told Reuters it had "explored all possible options to prevent the destruction" and would keep trying. Reuters reported that the United Nations Population Fund (UNFPA) tried to buy the contraceptives, but that the USAID trademark embossed on the supplies was a problem, as was the U.S. government's inability to ensure that UNFPA would not share them with groups offering abortions, which would violate the Mexico City policy Trump had reinstated in January. Sarah Shaw, Associate Director of Advocacy at the nonprofit MSI Reproductive Choices, told Reuters the organization had volunteered to pay for repackaging the supplies to remove USAID branding, for shipping, and for import duties, but the U.S. government declined, saying it would only sell the supplies at full market value. Shaw added that this was "clearly not about saving money." A source in the Reuters article concluded that "Washington did not want any USAID-branded supplies to be rerouted elsewhere."
Citing unnamed sources and an internal document listing warehouse stocks, Reuters reported the contraceptives were due to expire between April 2027 and September 2031.
Counter which estimates preventable deaths
Brooke Nichols, an infectious disease modeler at Boston University, created an impact counter to estimate the death toll of funding cuts to various USAID health programs. The counter estimates that over 604,400 deaths have been caused by the funding discontinuation, over 408,200 of them children.
Malaria
The President's Malaria Initiative (PMI), launched under President George W. Bush, has contributed to a more than 60% reduction in malaria deaths, saved 7.6 million lives, and prevented 1.5 billion malaria cases globally between 2000 and 2019. PMI has supported malaria prevention and control for over 500 million at-risk people in Africa.
However, the USAID funding of PMI has been cut an estimated 47% as of June 2025. In countries such as the Democratic Republic of the Congo (DRC), these funds had supported the supply of antimalarial drugs to numerous health zones, including preventive treatments for pregnant women. Health officials in the DRC reported that the effects of these cuts were already being felt, with increased risk of severe illness and death from malaria among vulnerable populations. Former aid workers and experts also expressed concern that reduced funding undermined disease surveillance systems that help detect malaria and other outbreaks early. Such surveillance not only protects affected countries but also contributes to U.S. health security by limiting the global spread of disease. Aid organizations also highlighted how these cuts create a "vicious cycle", with malnutrition and malaria reinforcing one another. Reductions in U.S. support for nutrition programs increase children's vulnerability to malaria and other diseases, while higher malaria infections can worsen malnutrition.
Humanitarian crisis in Haiti
In July 2025, the United Nations reported that the cut of USAID funding to Haiti represents a halt of approximately 80% of US-funded programmes. Food security, access to drinking water, primary healthcare, education and protection are all affected. Children are among the hardest hit. Modibo Traore, United Nations OCHA's country director in Haiti, said, "The withdrawal of US funding has led to a multidimensional regression in the rights of women and girls in Haiti."
$8 billion "claw back" for USAID but not PEPFAR
In June 2025, the White House requested that Congress pass a package of rescissions, or "claw backs", of approximately $8 billion in foreign aid and $1 billion for the Corporation for Public Broadcasting.
The House of Representatives passed the cuts as requested. The Senate excluded the cuts to PEPFAR, the program started in 2003 during the presidency of George W. Bush to help provide HIV medicines to lower-income countries. The Senate passed two preliminary votes by a close 51-50 margin, with Vice President JD Vance casting the tie-breaking votes. A rescission is one of the exceptions to the Senate's 60-vote filibuster rule.
On July 10, President Trump focused on the public broadcasting portion of the package, criticizing CNN and "MSDNC", his portmanteau of MSNBC and DNC (Democratic National Committee). In a social media post, he wrote, "It is very important that all Republicans adhere to my Recissions Bill and, in particular, DEFUND THE CORPORATION FOR PUBLIC BROADCASTING (PBS and NPR), which is worse than CNN & MSDNC put together. Any Republican that votes to allow this monstrosity to continue broadcasting will not have my support or Endorsement. Thank you for your attention to this matter!"
On July 17, the Senate voted 51–48 in favor of the cuts. Later that same day, the House of Representatives voted 216–213 for the Senate version, meaning PEPFAR was protected in the amount of $400 million."Senate to vote to formalize DOGE cuts to public broadcasting, USAID", ABC News, July 14, 2025. In the debate over the cuts to public broadcasting, service to rural areas became one of the political issues.
July 2025 reports that PEPFAR had been cut 50% back in February
In July 2025, four congressional aides reported that cuts by the Trump administration in February effectively put many contracts on hold. The aides stated that many of the promised waivers did not translate to action and an estimated 50% of budgeted money did not flow to providers.
Range of avoidable deaths if all Trump-era cuts continue
An article in The Lancet updated in July 2025 estimated 4.5 million avoidable deaths among children under five years by 2030 if all spending cuts continue (cuts to both PEPFAR and broader aid), with estimates ranging between 3.1 and 5.9 million.
In an article published the same month, Kelsey Piper writing for Vox argues that it is difficult to predict the extent to which other nonprofits and governments will take the place of USAID cuts. She notes that even the most rigorous research must make large assumptions, and contends that if the numbers are over-estimated, opponents may be able to "dismiss the entire foreign aid project as one run by politically motivated liars." Additionally, Piper said, "The White House has repeatedly lost when seeking congressional approval to dismantle our best-performing life-saving programs. So the administration has resorted to doing it piecewise and, as much as possible, avoiding a public debate."
Another attempted $5 billion in cuts, in the form of a "pocket rescission", in September
In late August, President Trump informed House Speaker Mike Johnson that he would not spend $4.9 billion in foreign aid that Congress had already approved. This is a "pocket rescission", in which a president submits a rescission request shortly before the September 30 end of the fiscal year, so that the funds lapse before the 45-day period Congress has to act on the request runs out. The Guardian stated this was the first time in 50 years such a rescission had been made.
In early September, a district judge ordered the Trump administration to spend those funds. On appeal, Chief Justice John Roberts, who handles emergency petitions for Washington, D.C., gave the aid groups that had sued until September 12 to file a response.
Some of the money was scheduled to go to United Nations organizations and peacekeeping purposes, as well as projects of economic development assistance and democracy promotion.
Purposes
USAID's decentralized network of resident field missions was drawn on to manage U.S. government programs in low-income countries for various purposes.
Disaster relief
Poverty relief
Technical cooperation on global issues, including the environment
U.S. bilateral interests
Socioeconomic development
Disaster relief
Some of the U.S. government's earliest foreign aid programs provided relief in war-created crises. In 1915, U.S. government assistance through the Commission for Relief in Belgium headed by Herbert Hoover prevented starvation in Belgium after the German invasion. After 1945, the European Recovery Program championed by Secretary of State George Marshall (the "Marshall Plan") helped rebuild war-torn Western Europe.
Poverty relief
After 1945, many newly independent countries needed assistance to relieve the chronic deprivation afflicting their low-income populations. USAID and its predecessor agencies have continuously provided poverty relief in many forms, including assistance to public health and education services targeted at the poorest. USAID has also helped manage food aid provided by the U.S. Department of Agriculture. Also, USAID provides funding to NGOs to supplement private donations in relieving chronic poverty.
Global issues
Technical cooperation between nations is essential for addressing a range of cross-border concerns like communicable diseases, environmental issues, trade and investment cooperation, safety standards for traded products, money laundering, and so forth. The United States has specialized federal agencies dealing with such areas, such as the Centers for Disease Control and the Environmental Protection Agency. USAID's special ability to administer programs in low-income countries supported these and other U.S. government agencies' international work on global concerns.
Environment
Among these global interests, environmental issues attracted high attention. USAID assisted projects that conserved and protected threatened land, water, forests, and wildlife, as well as projects that reduced greenhouse gas emissions and built resilience to the risks associated with global climate change. U.S. environmental regulation laws required that programs sponsored by USAID be both economically and environmentally sustainable.
U.S. national interests
Congress appropriates exceptional financial assistance to allies to support U.S. geopolitical interests, mainly in the form of "Economic Support Funds" (ESF). USAID is called on to administer the bulk (90%) of ESF and is instructed: "To the maximum extent feasible, [to] provide [ESF] assistance ... consistent with the policy directions, purposes, and programs of [development assistance]."
Also, when U.S. troops were in the field, USAID could supplement the "Civil Affairs" programs that the U.S. military conducts to win the friendship of local populations. In these circumstances, USAID may be directed by specially appointed diplomatic officials of the State Department, as has been done in Afghanistan and Pakistan during operations against al-Qaeda.
U.S. commercial interests were served by U.S. law's requirement that most goods and services financed by USAID must be sourced from U.S. vendors. American farms supplied about 41 percent of the food aid according to a 2021 report by the Congressional Research Service.
Socioeconomic development
To help low-income nations achieve self-sustaining socioeconomic development, USAID assisted them in improving the management of their own resources. USAID's assistance for socioeconomic development mainly provides technical advice, training, scholarships, commodities, and financial assistance. Through grants and contracts, USAID mobilized the technical resources of the private sector and other U.S. government agencies, universities, and NGOs to participate in this assistance.
Programs of the various types above frequently reinforced one another. For example, the Foreign Assistance Act required USAID to use funds appropriated for geopolitical purposes ("Economic Support Funds") to support socioeconomic development to the maximum extent possible.
Modes of assistance
USAID delivered both technical and financial assistance:
Technical assistance
Technical assistance included technical advice, training, scholarships, construction, and commodities. USAID contracts or procures technical assistance and provides it in-kind to recipients. For technical advisory services, USAID draws on experts from the private sector, mainly from the assisted country's pool of expertise and from specialized U.S. government agencies. Many host-government leaders have drawn on USAID's technical assistance to develop IT systems and procure computer hardware to strengthen their institutions.
To build indigenous expertise and leadership, USAID financed scholarships to U.S. universities and assisted in strengthening developing countries' universities. Local universities' programs in developmentally important sectors were assisted directly and through USAID support for forming partnerships with U.S. universities.
The various forms of technical assistance were frequently coordinated as capacity-building packages for the development of local institutions.
Financial assistance
Financial assistance supplied cash to developing country organizations to supplement their budgets. USAID also provided financial assistance to local and international NGOs who in turn give technical assistance in developing countries. Although USAID formerly provided loans, all financial assistance is now provided in the form of non-reimbursable grants.
In recent years, the United States had increased its emphasis on financial rather than technical assistance. In 2004, the Bush administration created the Millennium Challenge Corporation as a new foreign aid agency that is mainly restricted to providing financial assistance. In 2009, the Obama administration initiated a major realignment of USAID's own programs to emphasize financial assistance, referring to it as "government-to-government" or "G2G" assistance.
Public–private partnerships
In April 2023, USAID and the Global Food Safety Initiative (GFSI) announced a memorandum of understanding (MOU) to improve food safety and sustainable food systems in Africa. GFSI's work in benchmarking and standard harmonisation aims to foster mutual acceptance of GFSI-recognized certification programmes for the food industry.
Organization
USAID is organized around country development programs managed by resident USAID offices in developing countries ("USAID missions"), supported by USAID's global headquarters in Washington, D.C.
Country development programs
USAID planned its work in each country around an individual country development program managed by a resident office called a "mission". The USAID mission and its U.S. staff were guests in the country, with a status that is usually defined by a "framework bilateral agreement" between the government of the United States and the host government. Framework bilaterals give the mission and its U.S. staff privileges similar to (but not necessarily the same as) those accorded to the U.S. embassy and diplomats by the Vienna Convention on Diplomatic Relations of 1961.
USAID missions work in over fifty countries, consulting with their governments and non-governmental organizations to identify programs that will receive USAID's assistance. As part of this process, USAID missions conduct socio-economic analysis, discuss projects with host-country leaders, design assistance to those projects, award contracts and grants, administer assistance (including evaluation and reporting), and manage flows of funds.
As countries developed and needed less assistance, USAID shrank and ultimately closed its resident missions. USAID had closed missions in a number of countries that had achieved a substantial level of prosperity, including South Korea, Turkey, and Costa Rica.
USAID also closed missions when requested by host countries for political reasons. In September 2012, the U.S. closed USAID/Russia at that country's request. Its mission in Moscow had been in operation for two decades. On May 1, 2013, the president of Bolivia, Evo Morales, asked USAID to close its mission, which had worked in the country for 49 years. The closure was completed on September 20, 2013.
USAID missions were led by mission directors and were staffed both by USAID Foreign Service officers and by development professionals from the country itself, with the host-country professionals forming the majority of the staff. The length of a Foreign Service officer's "tour" in most countries was four years, to provide enough time to develop in-depth knowledge about the country. (Shorter tours of one or two years were usual in countries of exceptional hardship or danger.)
The mission director was a member of the U.S. Embassy's "Country Team" under the direction of the U.S. ambassador. Because a USAID mission works in an unclassified environment with relatively frequent public interaction, most missions were initially located in independent offices in the business districts of capital cities. Since the passage of the Foreign Affairs Agencies Consolidation Act in 1998 and the bombings of U.S. Embassy chanceries in East Africa in the same year, missions have gradually been moved into U.S. Embassy chancery compounds.
USAID/Washington
The country programs were supported by USAID's headquarters in Washington, D.C., "USAID/Washington", where about half of USAID's Foreign Service officers work on rotation from foreign assignments, alongside USAID's Civil Service staff and top leadership.
USAID is headed by an administrator. Under the Biden administration, the administrator became a regular attendee of the National Security Council.
USAID/Washington helped define overall federal civilian foreign assistance policy and budgets, working with the State Department, Congress, and other U.S. government agencies. It was organized into "Bureaus" covering geographical areas, development subject areas, and administrative functions. Each bureau is headed by an assistant administrator appointed by the president.
(Some tasks similar to those of USAID's Bureaus were performed by what were termed "Independent Offices".)
Geographic bureaus
AFR: Africa
ASIA: Asia
LAC: Latin America & the Caribbean
E&E: Europe and Eurasia
ME: Middle East
Subject-area bureaus
GH: Global Health
Every year, the Global Health Bureau reported to the U.S. Congress through its Global Health Report to Congress. The Bureau also submitted a yearly report on the Call to Action to end preventable child and maternal deaths, part of USAID's follow-up to a 2012 commitment, A Promise Renewed, to end preventable child and maternal deaths within a generation.
E3: Economic Growth, Education, and the Environment
Economic Growth offices in E3 defined Agency policy and provided technical support to Mission assistance activities in the areas of economic policy formulation, international trade, sectoral regulation, capital markets, microfinance, energy, infrastructure, land tenure, urban planning and property rights, and gender equality and women's empowerment. The Engineering Division, in particular, drew on licensed professional engineers to support USAID Missions in a multibillion-dollar portfolio of construction projects, including medical facilities, schools, universities, roads, power plants, and water and sanitation plants.
The Education Office in E3 defines Agency policy and provides technical support to Mission assistance activities for both basic and tertiary education.
Environment offices in E3 define Agency policy and provide technical support to Mission assistance activities in the areas of climate change and biodiversity.
Bureau for Humanitarian Assistance
Bureau for Democracy, Human Rights and Governance
The mission of the DRG Bureau was to lead USAID's efforts to invigorate democracy, enhance human rights and justice, and bolster governance that advances the public interest and delivers inclusive development.
LAB: U.S. Global Development Lab
The Lab serves as an innovation hub, taking smart risks to test new ideas and partner within the Agency and with other actors to harness the power of innovative tools and approaches that accelerate development impact.
RFS: Resilience and Food Security
Headquarters bureaus
M: Management
OHCTM: Office of Human Capital and Talent Management
LPA: Legislative and Public Affairs
PPL: Policy, Planning, and Learning
BRM: Office of Budget and Resource Management
Independent oversight of USAID activities was provided by its Office of Inspector General, U.S. Agency for International Development, which conducted criminal and civil investigations, financial and performance audits, reviews, and inspections of USAID activities around the world.
Staffing
USAID's staffing reported to Congress in June 2016 totaled 10,235, including both field missions "overseas" (7,176) and the Washington, D.C. headquarters (3,059). Of this total, 1,850 were USAID Foreign Service officers who spend their careers mostly residing overseas (1,586 overseas in June 2016) and partly on rotation in Washington, D.C. (264). The Foreign Service officers stationed overseas worked alongside the 4,935 local staff of USAID's field missions.
Host-country staff normally worked under one-year contracts that were renewed annually. Formerly, host-country staff could be recruited as "direct hires" in career positions and at present many host-country staff continue working with USAID missions for full careers on a series of one-year contracts. In USAID's management approach, local staff may fill highly responsible, professional roles in program design and management.
U.S. citizens can apply to become USAID Foreign Service officers by competing for specific job openings based on academic qualifications and experience in development programs. Within five years of recruitment, most Foreign Service officers receive tenure for an additional 20+ years of employment before mandatory retirement. Some were promoted to the Senior Foreign Service with extended tenure, subject to the Foreign Service's mandatory retirement age of 65. (This recruitment system differs from the State Department's use of the "Foreign Service Officer Test" to identify potential U.S. diplomats. Individuals who pass the test become candidates for the State Department's selection process, which emphasizes personal qualities in thirteen dimensions such as "Composure" and "Resourcefulness". No specific education level is required.)
In 2008, USAID launched the "Development Leadership Initiative" to reverse the decline in USAID's Foreign Service officer staffing, which had fallen to a total of about 1,200 worldwide. Although USAID's goal was to double the number of Foreign Service officers to about 2,400 in 2012, actual recruitment net of attrition reached only 820 by the end of 2012. USAID's 2016 total of 1,850 Foreign Service officers compared with 13,000 in the State Department.
Field missions
While USAID can have as little presence in a country as a single person assigned to the U.S. embassy, a full USAID mission in a larger country may have twenty or more USAID Foreign Service officers and a hundred or more professional and administrative employees from the country itself.
The USAID mission's staff is divided into specialized offices in three groups: (1) assistance management offices; (2) the office of the mission director and the Program Office; and (3) the contracting, financial management, and facilities offices.
Assistance management offices
Called "technical" offices by USAID staff, these offices designed and managed the technical and financial assistance that USAID provides to their local counterparts' projects. The technical offices that were frequently found in USAID missions include Health and Family Planning, Education, Environment, Democracy, and Economic Growth.
Health and Family Planning
Examples of projects assisted by missions' Health and Family Planning offices were projects for the eradication of communicable diseases, strengthening of public health systems focusing on maternal-child health including family planning services, HIV-AIDS monitoring, delivery of medical supplies including contraceptives, and coordination of Demographic and Health Surveys. This assistance is primarily targeted to the poor majority of the population and corresponds to USAID's poverty relief objective, as well as strengthening the basis for socio-economic development.
Education
USAID's Education offices mainly assisted the national school system, emphasizing broadening the coverage of quality basic education to reach the entire population. Examples of projects often assisted by Education offices were projects for curriculum development, teacher training, and provision of improved textbooks and materials. Larger programs have included school construction. Education offices often manage scholarship programs for training in the U.S., while assistance to the country's universities and professional education institutions may be provided by Economic Growth and Health offices. The Education office's emphasis on school access for the poor majority of the population corresponds to USAID's poverty relief objective, as well as to the socioeconomic development objective in the long term.
Environment
Examples of projects assisted by environmental offices were projects for tropical forest conservation, protection of indigenous people's lands, regulation of marine fishing industries, pollution control, reduction of greenhouse gas emissions, and helping communities adapt to climate change. Environment assistance corresponds to USAID's objective of technical cooperation on global issues, as well as laying a sustainable basis for USAID's socioeconomic development objective in the long term.
USAID recently initiated the HEARTH (Health, Ecosystems and Agriculture for Resilient, Thriving Societies) program, which operated in 10 countries with 15 activities aimed at promoting conservation of threatened landscapes and enhancing community well-being by partnering with the private sector to align business goals with development objectives. Through HEARTH, USAID applied One Health principles in projects focused on livelihoods, well-being, conservation, biodiversity, and governance, seeking sustainable benefits for both people and the environment.
Democracy
Examples of projects assisted by Democracy offices were projects for the country's political institutions, including elections, political parties, legislatures, and human rights organizations. Counterparts include the judicial sector and civil society organizations that monitor government performance. Democracy assistance received its greatest impetus at the time of the creation of the successor states to the USSR starting in about 1990, corresponding both to USAID's objective of supporting U.S. bilateral interests and to USAID's socioeconomic development objective.
Economic Growth
Examples of projects often assisted by Economic Growth offices were projects for improvements in agricultural techniques and marketing (the mission may have had a specialized "Agriculture" office), development of microfinance industries, streamlining of customs administrations (to accelerate the growth of exporting industries), and modernization of government regulatory frameworks for industry in various sectors (telecommunications, agriculture, and so forth). In USAID's early years, and in some larger programs, Economic Growth offices financed economic infrastructure like roads and electrical power plants. Economic Growth assistance was thus quite diverse in the range of sectors where it worked. It corresponded to USAID's socioeconomic development objective and was the source of sustainable poverty reduction. Economic Growth offices also occasionally managed assistance to poverty relief projects, such as government programs providing "cash transfer" payments to low-income families.
Special assistance
Some USAID missions had specialized technical offices for areas like counter-narcotics assistance or assistance in conflict zones.
Disaster assistance on a large scale is provided through USAID's Office of U.S. Foreign Disaster Assistance. Rather than having a permanent presence in country missions, this office has supplies pre-positioned in strategic locations to respond quickly to disasters when and where they occur.
The Office of the Mission Director and the Program Office
The mission director's signature authorized technical offices to provide assistance according to the designs and budgets they proposed. With the help of the Program Office, the mission director ensured that designs were consistent with USAID policy for the country, including budgetary earmarks by which Washington directed that funds be used for certain general purposes such as public health or environmental conservation. The Program Office compiled combined reports to Washington to support budget requests to Congress and to verify that budgets were used as planned.
Contracting, financial management and management offices
While the mission director was the public face and key decision-maker for an impressive array of USAID technical capabilities, the offices that arguably made USAID preeminent among U.S. government agencies in its ability to follow through on assistance agreements in low-income countries were the "support" offices.
Contracting
Commitments of U.S. government funds to NGOs and firms that implemented USAID's assistance programs can only be made in compliance with carefully designed contracts and grant agreements executed by warranted Contracting and agreement officers. The mission director is authorized to commit financial assistance directly to the country's government agencies.
Financial management
Funds can be committed only when the Mission's Controller certifies their availability for the stated purpose. "FM" offices assisted technical offices in financial analysis and in developing detailed budgets for inputs needed by projects assisted. They evaluate potential recipients' management abilities before financial assistance can be authorized and then review implementers' expenditure reports with great care. This office often had the largest number of staff of any office in the mission.
Management
Called the "Executive Office" in USAID (sometimes leading to confusion with the Embassy's Executive Office, which is the office of the Ambassador), "EXO" provided operational support for mission offices, including human resources, information systems management, transportation, property, and procurement services. Increasing integration into Embassies' chancery complexes, and the State Department's recently increased role in providing support services to USAID, is expanding the importance of coordination between USAID's EXO and the embassy's Management section.
Budget
Countries receiving 1% or more of USAID-managed foreign assistance disbursed in fiscal year 2023:
Country | US$ billion | Share of total
Ukraine | 16.02 | 36.6%
Global funds | 6.06 | 13.8%
Ethiopia | 1.68 | 3.8%
Jordan | 1.20 | 2.7%
Afghanistan | 1.09 | 2.5%
Somalia | 1.05 | 2.4%
DR Congo | 0.94 | 2.1%
Syria | 0.89 | 2.0%
Nigeria | 0.82 | 1.9%
Yemen | 0.81 | 1.9%
South Sudan | 0.74 | 1.7%
Kenya | 0.68 | 1.6%
Uganda | 0.52 | 1.2%
Mozambique | 0.47 | 1.1%
Sudan | 0.46 | 1.1%
Tanzania | 0.45 | 1.0%
The Congressional Research Service (CRS) states that some USAID appropriations were programmed collaboratively with the Department of State, which makes any calculation of the USAID budget imprecise; the CRS generally refers to USAID-managed funds. The CRS stated that USAID managed more than $40 billion of combined appropriations in 2023 and had a workforce of more than 10,000. The average managed foreign assistance disbursed in fiscal years 2001 to 2024 was $22.9 billion, adjusted for inflation to 2023 dollars; 2023 was an exceptional year because of an extra $16 billion of funds for Ukraine.
The U.S. government USAspending.gov website included International Security Assistance, Special Assistance Initiatives and a small amount of other spending alongside direct USAID spending in its assessment of the 2023 $50.1 billion of budgetary resources available to USAID, about $10 billion more than the headline CRS assessment. International Security Assistance was budgeted about $9 billion in 2023, of which Foreign Military Financing to strengthen military support of key U.S. allies and partner governments was $6 billion.
In fiscal year 2022, the cost of supplying USAID's assistance included the agency's "Operating Expenses" of $1.97 billion and "Bilateral Economic Assistance" program costs of $25.01 billion (the vast bulk of which was administered by USAID). In fiscal year 2012, "Operating Expenses" were $1.53 billion and "Bilateral Economic Assistance" was $20.83 billion.
U.S. assistance budget totals were shown along with other countries' total assistance budgets in tables in a webpage of the Organization for Economic Cooperation and Development.
At the Earth Summit in Rio de Janeiro in 1992, most of the world's governments adopted a program for action under the auspices of the United Nations Agenda 21, which included an Official Development Assistance (ODA) aid target of 0.7% of gross national product (GNP) for rich nations, specified as roughly 22 members of the OECD known as the Development Assistance Committee (DAC). Most countries do not adhere to this target; the OECD's table indicates that the DAC average ODA in 2011 was 0.31% of GNP. The U.S. figure for 2011 was 0.20% of GNP, which still left the U.S. as the largest single source of ODA among individual countries. According to the OECD, the United States' total official development assistance (US$55.3 billion, preliminary data) increased in 2022, mainly due to support for Ukraine as well as increased costs for in-donor refugees from Afghanistan; ODA represented 0.22% of gross national income (GNI).
US public opinion
According to a 2010 poll, the median American believed that 25% of the federal budget goes to foreign aid and that it should be 10%. In reality, between 0.8% and 1.4% of the U.S. federal budget has gone to foreign aid since 2001. The USAID portion of the federal budget is even smaller, accounting for 0.6% in 2023.
In a 2019 poll of the American public, 35% said more money should be spent on foreign aid, 33% said spending should stay about the same, and 28% said less money should be spent.
A 2025 poll revealed that 50% of Americans believed that the US should play a major or leading role in improving health in developing countries, with 36% preferring a minor role and 14% preferring no role at all. However, the same poll also revealed that 43% of Americans thought that "too much" US funding was being given to these initiatives.
A February 2025 poll by the University of Maryland's Program for Public Consultation found that, after being presented with arguments for and against closure, 58% of Americans supported continuing USAID compared to 41% supporting abolition. Another poll by Ipsos found that just 37% supported Trump's efforts to dismantle the agency while 58% opposed the efforts.
Activities by region
Haiti
Following the January 2010 earthquake in Haiti, USAID helped provide safer housing for almost 200,000 displaced Haitians; supported vaccinations for more than 1 million people; cleared more than 1.3 million cubic meters of the approximately 10 million cubic meters of rubble generated; helped more than 10,000 farmers double the yields of staples like corn, beans, and sorghum; and provided short-term employment to more than 350,000 Haitians, injecting more than $19 million into the local economy. USAID has provided nearly $42 million to help combat cholera, helping to decrease the number of cases requiring hospitalization and reduce the case fatality rate.
Afghanistan
With American entry into Afghanistan in 2001, USAID worked with the Department of State and Department of Defense to coordinate reconstruction efforts.
Iraq
The interactions between USAID and other U.S. government agencies in the period of planning the Iraq operation of 2003 are described by the Office of the Special Inspector General for Iraq Reconstruction in its book Hard Lessons: The Iraq Reconstruction Experience.
Subsequently, USAID played a major role in the U.S. reconstruction and development effort in Iraq, investing approximately $6.6 billion in programs designed to stabilize communities; foster economic and agricultural growth; and build the capacity of the national, local, and provincial governments to represent and respond to the needs of the Iraqi people.
In June 2003, C-SPAN followed USAID administrator Andrew Natsios as he toured Iraq. The special program C-SPAN produced aired over four nights.
Lebanon
USAID has periodically supported the Lebanese American University and the American University of Beirut financially, with major contributions to the Lebanese American University's Campaign for Excellence.
Europe
Ukraine
In the twenty years prior to the 2022 Russian invasion of Ukraine, USAID disbursed modest funds in Ukraine, averaging $115 million. Following the invasion, Congress appropriated large sums for Ukraine through USAID to support the operation of its government and civil society. Nearly $9 billion was disbursed in fiscal year 2022 and $16 billion in 2023, making 2023 the highest total spending year for USAID, with 36.6% of its managed funds disbursed to Ukraine.
United Kingdom
USAID has donated funds to international charity BBC Media Action, with approximately $3.23 million (£2.6 million) given in 2024. This funding supports media development, journalism training, and public education initiatives in over 30 countries.
Cuba
A USAID subcontractor was arrested in Cuba in 2009 for distributing satellite equipment to provide Cubans with internet access. The subcontractor was released during Obama's second presidential term as part of the measures to improve relations between the two countries.
USAID has been used as a mechanism for "hastening transition", i.e., regime change in Cuba. Between 2009 and 2012, USAID ran a multimillion-dollar program, disguised as humanitarian aid and aimed at inciting rebellion in Cuba. The program consisted of two operations: one to establish an anti-regime social network called ZunZuneo, and the other to attract potential dissidents contacted by undercover operatives posing as tourists and aid workers.
USAID engineered a subversive program using social media aimed at fueling political unrest in Cuba to overthrow the Cuban government. On 3 April 2014 the Associated Press published an investigative report that revealed USAID was behind the creation of a social networking text messaging service aimed at creating political dissent and triggering an uprising against the Cuban government. The name of the messaging network was ZunZuneo, a Cuban slang term for a hummingbird's tweet and a play on "Twitter". According to the AP's report, the plan was to build an audience by initially presenting non-controversial content like sports, music and weather. Once a critical mass of users was reached the US government operators would change the content to spark political dissent and mobilize the users into organized political gatherings called "smart mobs" that would trigger an uprising against the Cuban government.
The messaging service was launched in 2010 and gained 40,000 followers at its peak. Extensive efforts were made to conceal the USAID involvement in the program, using offshore bank accounts, front companies and servers based overseas. According to a memo from one of the project's contractors, Mobile Accord: "There will be absolutely no mention of United States government involvement", "This is absolutely crucial for the long-term success of the service and to ensure the success of the Mission."
ZunZuneo's subscribers were never aware that it was created by the US government or that USAID was gathering their private data to gain useful demographics that would gauge their levels of dissent and help USAID "maximize our possibilities to extend our reach".
USAID officials realized they needed an exit strategy to conceal their involvement in the program, at one point seeking funding from Twitter cofounder Jack Dorsey as part of a plan for it to go independent. The service was abruptly closed down around mid-2012, which USAID said was due to the program running out of money.
The ZunZuneo operation was part of a program that included a second operation, which started in October 2009 and was financed jointly with ZunZuneo. In the second operation, USAID sent Venezuelan, Costa Rican and Peruvian children to Cuba to recruit Cubans into anti-regime political activities. The operatives posed as traveling aid workers and tourists. In one of the covert operations, the workers formed an HIV prevention workshop, which leaked memos called "the perfect excuse" for the programme's political goals. The Guardian said the operation could undermine US efforts to work toward improving health globally.
The operation was also criticized for putting the undercover operatives themselves at risk. The covert operatives were given limited training about evading Cuban authorities suspicious of their actions. After Alan Gross, a development specialist and USAID subcontractor, was arrested in Cuba, the US government warned USAID about the safety of covert operatives. Regardless of safety concerns, USAID refused to end the operation.
In light of the AP's report, Rajiv Shah, the head of USAID, testified before the Senate Appropriations State Department and Foreign Operations Subcommittee on 8 April 2014.
Bolivia
USAID operated in the coca-growing Chapare region, including under a 1983 agreement to support crop-substitution programs encouraging other crops. No later than 1998, this funding was conditional on farmers eradicating all their coca plants. In 2008, the coca growers union affiliated with Bolivian president Evo Morales ejected the 100 USAID employees and contractors working in the Chapare region, citing frustration with U.S. efforts to persuade them to switch to growing unviable alternatives. Other rules, such as the requirement under U.S. law that participating communities declare themselves "terrorist-free zones", irritated people, said Kathryn Ledebur, director of the Andean Information Network. "Eradicate all your coca and then you grow an orange tree that will get fruit in eight years but you don't have anything to eat in the meantime? A bad idea. The thing about kicking out USAID, I don't think it's an anti-American sentiment overall but rather a rejection of bad programs."
Also in 2008, USAID's Bolivian programs under the Office of Transitional Initiatives and the Democracy Program, as well as separate funding by the National Endowment for Democracy, were the subject of critical investigative reports that documented them supporting political initiatives in regions governed by separatist movements. During the September 2008 political crisis, President Evo Morales expelled US ambassador Philip S. Goldberg and spoke out against USAID interference. The US government had previously ended OTI spending in Bolivia and subsequently redirected Democracy Program funds to other purposes, while denying USAID had interfered in Bolivian politics.
President Evo Morales expelled USAID from Bolivia on May 1, 2013, for allegedly seeking to undermine his government following ten years of operations within the country. At the time, USAID had seven American staffers and 37 Bolivian staffers in the country, with an annual budget of $26.7 million. Morales explained that the expulsion was because USAID's objectives in Bolivia were to advance American interests, not the interests of the Bolivian people. More specifically, Morales pointed to American "counter-narcotic" programs that harmed the interests of Bolivian coca farmers caught in the middle of American operations.
Following the 2019 Bolivian political crisis that saw Jeanine Áñez's assumption of power, President Áñez invited USAID to return to Bolivia to provide "technical aid to the electoral process in Bolivia". In October 2020, USAID provided $700,000 in emergency assistance in fighting wildfires to the government of Luis Arce.
Brazil
During the Brazilian military dictatorship, USAID launched a program aimed at bringing Brazilian education policy closer to the U.S. model. USAID was also active in the country's public security: between 1960 and 1972, it trained police officers who were involved in political repression in Brazil. Folha de S.Paulo, Brazil's largest newspaper, accused USAID of trying to influence political reform in Brazil in a way that would have purposely benefited right-wing parties. USAID spent US$95,000 in 2005 on a seminar in the Brazilian Congress to promote a reform aimed at pushing for legislation punishing party infidelity. According to USAID papers acquired by Folha under the Freedom of Information Act, the seminar was planned to coincide with the eve of talks in that country's Congress on a broad political reform. The papers read that although the "pattern of weak party discipline is found across the political spectrum, it is somewhat less true of parties on the liberal left, such as the [ruling] Worker's Party." The papers also expressed a concern about the "'indigenization' of the conference so that it is not viewed as providing a U.S. perspective." The event's main sponsor was the International Republican Institute.
In February 2025, Michael Benz, a former State Department official, claimed in an interview with Steve Bannon on The War Room that Bolsonaro was seen within USAID as a "Tropical Trump" and that "if USAID didn't exist, Bolsonaro would still be the president of Brazil". On February 3, Eduardo Bolsonaro, a federal deputy and son of Jair Bolsonaro, responded to Benz on social media, accusing USAID of financing institutions involved in fighting fake news during the 2022 presidential elections, such as the International Center for Journalists, Sleeping Giants Brazil, and the Vero Institute created by the YouTuber Felipe Neto, with the objective of "manipulating narratives and interfering with Brazilian democracy". He and Gustavo Gayer also began collecting signatures to open a Parliamentary Inquiry Commission to investigate the supposed interference. His accusations have been widely described as false, and many of the accused institutions stated that they never received money from USAID. Shortly afterward, in a speech at the Ação Política Conservadora, Argentine president Javier Milei alleged without evidence that USAID had used millions of dollars to falsify the 2022 election.
East Africa
On September 19, 2011, USAID and the Ad Council launched the "Famine, War, and Drought" (FWD) campaign to raise awareness about that year's severe drought in East Africa. Through TV and internet ads as well as social media initiatives, FWD encouraged Americans to spread awareness about the crisis, support the humanitarian organizations that were conducting relief operations, and consult the Feed the Future global initiative for broader solutions. Celebrities Geena Davis, Uma Thurman, Josh Hartnett and Chanel Iman took part in the campaign via a series of public service announcements. Corporations like Cargill, General Mills, and PepsiCo also signed on to support FWD.
After the Trump administration's termination of most of USAID's programs in early 2025, during an Ebola outbreak in Uganda, USAID-funded research efforts into Ebola treatment and prevention were halted in Uganda. During the previous Ebola outbreak in Uganda in 2022, USAID had funded contact tracing efforts, the supply of protective equipment, safe burials, etc.
Palestinian territories
USAID halted its assistance to the West Bank and Gaza Strip on January 31, 2019, reportedly at the request of the Palestinian Authority. The request was related to new U.S. legislation, the Anti-Terrorism Clarification Act of 2018, that exposed foreign aid recipients to anti-terrorism lawsuits. USAID restarted assistance to Palestinians in April 2021 under President Biden. The agency increased assistance during the Israel–Gaza war that began in October 2023. Since October 7, 2023, USAID gave more than $2.1 billion in assistance to Palestinians. On November 10, 2023, more than 1,000 USAID employees signed an open letter calling for an immediate ceasefire in the war.
Vietnam
USAID, alongside the Departments of State and Defense, has supported NGOs in removing unexploded ordnance (UXO) and landmines and in remediating soil contaminated by Agent Orange in multiple regions of Vietnam, as well as in supporting victims of Agent Orange.
Personnel who died in the course of their work
Concerns and criticism
U.S. foreign economic assistance has been the subject of debate and criticism since at least the 1950s.
Claims of wasteful spending
In 2025, the Trump administration accused USAID of "wasting massive sums of taxpayer money" over several decades, including during Trump's first presidency from 2017 to 2021. The administration cited a number of projects, including $1.5 million for LGBT workplace inclusion in Serbia, $2.5 million to build electric vehicle chargers in Vietnam, $6 million for tourism promotion in Egypt, and "hundreds of millions of dollars" (the largest item) purportedly allocated to discourage Afghan farmers from growing poppies for opium, which allegedly ended up supporting poppy cultivation and benefiting the Taliban. Fact checkers found that these claims were largely false or "highly misleading". According to the World Health Organization, the closure of health clinics in 31 out of 34 provinces in Afghanistan has contributed to a growing humanitarian crisis, compounded by widespread poverty and the continued presence of infectious diseases such as measles, malaria, and polio.
On February 3, 2025, White House press secretary Karoline Leavitt criticized four expenditures putatively uncovered by DOGE. Fact-checkers found that several of the alleged wasteful grants were actually administered by the State Department, not USAID. U.S. District Judge Carl Nichols, in his February 2025 order blocking the Trump administration from placing certain USAID employees on leave, "noted that despite Trump's claim of massive 'corruption and fraud' in the agency, government lawyers had no support for that argument in court."
During Trump's first term, his daughter Ivanka Trump, who served as Advisor to the President, used over $11,000 from USAID in 2019 to purchase video recording and reproducing equipment for a White House event. Both Ivanka and then-First Lady Melania Trump had publicly praised USAID's work during the first Trump administration. Melania Trump visited Africa in 2018, speaking about USAID's efforts and stating, "We care, and we want to show the world that we care, and I've partnered and am working with USAID." Ivanka Trump also toured Africa on behalf of USAID, lauding her father's creation of the "Women's Global Development and Prosperity" initiative and emphasizing its alignment with U.S. national security interests.
In February 2025, following the allegations of fraud, the White House announced a plan to reduce USAID's staff from over 10,000 employees to fewer than 300. Critics, including former USAID administrators, decried this move, calling it "one of the worst and most costly foreign policy blunders in U.S. history", and have argued that the cuts will result in job losses, damage to American businesses, and harm to vulnerable populations worldwide. The inspector general for USAID issued a report on the spending pause and staff furloughs noting that these actions limited USAID's efforts to assure that its distributed funds "do not benefit terrorists and their supporters". The inspector general also warned that $489 million in humanitarian food aid was at risk of spoiling due to staff furloughs and unclear guidance. The Office of Presidential Personnel fired the inspector general the next day, despite a law requiring 30 days notice to Congress before firing an Inspector General.
Bribery scheme involving $550 million in contracts
In June 2025, a former USAID officer pleaded guilty to accepting bribes in exchange for manipulating the contracting process. Three executives of two separate companies, Apprio and Vistant, also pleaded guilty. The bribes began in 2013 and included such items as cash, laptops, NBA suite tickets, a country club wedding, mortgage down payments, phones, and jobs for relatives. These allegedly totaled more than $1 million. In exchange, the USAID officer used his position to recommend Apprio and Vistant for non-competitive awards, leaked sensitive information, provided favorable evaluations, and approved contract decisions. The total value of these contracts was approximately $550 million.
Non-career contracts
USAID frequently contracted with private firms or individuals for specialist services lasting from a few weeks to several years. It has long been asked whether USAID should more often assign such tasks to career U.S. government employees instead. United States government staff directly performed technical assistance in the earliest days of the program in the 1940s. It soon became necessary for federal government technical experts to plan and manage assistance programs larger than they could carry out by themselves. The global expansion of technical assistance in the early 1950s reinforced the need to draw on outside experts, a trend also accelerated by Congress's requirement of major reductions in U.S. government staffing in 1953. By 1955, observers commented on a perceived shift toward the use of shorter-term contracts (rather than relying on employees with career-length contracts).
Financial conflicts of interest
USAID stated that "U.S. foreign assistance has always had the twofold purpose of furthering America's foreign policy interests in expanding democracy and free markets while improving the lives of the citizens of the developing world." In 2008, a report found that approximately 40% of aid money spent in Afghanistan had returned to donor countries through corporate profits, consultants' salaries, and other costs.
Although USAID officially selects contractors on a competitive and objective basis, watchdog groups, politicians, foreign governments, and corporations have occasionally accused the agency of allowing its bidding process to be unduly influenced by the political and financial interests of its current presidential administration. Under the Bush administration, for instance, it emerged that all five implementing partners selected to bid on a $600 million Iraq reconstruction contract enjoyed close ties to the administration.
In 2020, one of the contractors for USAID, DAI Global, was sued by families of soldiers who had died in Afghanistan.
Political operations abroad
Critics have accused USAID of being a tool for US interventionism.
Additionally, the agency has been accused of covert political operations abroad, allegedly collaborating with the CIA on regime-change efforts and controversial funding decisions, leading to strained relations with some foreign governments.
William Blum has said that in the 1960s and early 1970s, USAID maintained "a close working relationship with the CIA, and Agency officers often operated abroad under USAID cover." The 1960s-era Office of Public Safety, a now-disbanded division of USAID, has been mentioned as an example of this, having served as a front for training foreign police in counterinsurgency methods (including torture techniques).
In 2008, Benjamin Dangl wrote in The Progressive that the Bush administration was using USAID to fund efforts in Bolivia to "undermine the Morales government and coopt the country's dynamic social movements – just as it has tried to do recently in Venezuela and traditionally throughout Latin America".
From 2010 to 2012, the agency operated ZunZuneo, a social media site similar to Twitter, in an attempt to instigate uprisings against the Cuban government. Its involvement was concealed in order to ensure mission success. The plan was to draw in users with non-controversial content until a critical mass was reached, after which more political messaging would be introduced. At its peak, more than 40,000 unsuspecting Cubans interacted on the platform.
In the summer of 2012, the ALBA bloc (Venezuela, Cuba, Ecuador, Bolivia, Nicaragua, Saint Vincent and the Grenadines, Dominica, Antigua and Barbuda) called on its member states to expel USAID from their countries.
A Macedonian political activist wrote in an editorial that USAID worked behind the scenes from 2012 to 2017 as part of an effort to get Macedonia to change its name to North Macedonia. The name change happened in 2019 and was a condition for Greece to support North Macedonia's membership in NATO.
Influence on the United Nations
Studies have found correlations between U.S. foreign aid levels and nations' membership on the United Nations Security Council, suggesting the use of aid to influence council votes.
In 1990, after Yemen voted against a resolution for a U.S.-led coalition to use force against Iraq, U.S. ambassador to the UN Thomas Pickering told Yemen's UN Ambassador Abdullah Saleh al-Ashtal, "That's the most expensive No vote you ever cast." Within days, USAID ceased operations and funding in Yemen.
State Department terrorist list
USAID required NGOs to sign a document renouncing terrorism, as a condition of funding. Issam Abdul Rahman, media coordinator for the Palestinian Non-Governmental Organizations' Network, a body representing 135 NGOs in the West Bank and Gaza Strip, said his organization "takes issue with politically conditioned funding". Also, the Popular Front for the Liberation of Palestine, listed as a terrorist organization by the US Department of State, said that the USAID condition was nothing more than an attempt "to impose political solutions prepared in the kitchens of Western intelligence agencies to weaken the rights and principles of Palestinians, especially the right of return."
Renouncing prostitution and sex trafficking
In 2003, Congress passed a law providing U.S. government funds to private groups to help fight AIDS and other diseases all over the world through USAID grants. One of the conditions imposed by the law on grant recipients was a requirement to have "a policy explicitly opposing prostitution and sex trafficking". In 2013, the U.S. Supreme Court ruled in Agency for International Development v. Alliance for Open Society International, Inc. that the requirement violated the First Amendment's prohibition against compelled speech.
Involvement in Peru's forced sterilizations
For three decades, USAID had been the principal foreign donor to family planning in Peru. Until the 1990s, the Peruvian government's commitment to providing family planning services was limited. In 1998, concerns arose regarding the involvement of USAID in forced sterilization campaigns in Peru. Some politicians in Washington opposed USAID's funding of family planning initiatives in the country. In January 1998, David Morrison, from the U.S.-based NGO Population Research Institute (PRI), traveled to Peru to investigate claims of human rights abuses related to these programs. During his visit, Morrison gathered testimony from Peruvian politicians and other figures opposed to family planning but did not meet with USAID officials in Peru. Upon his return to the United States, the PRI submitted its findings to U.S. congressman Chris Smith, a member of the Republican Party, urging the suspension of USAID's family planning efforts in Peru. Smith subsequently dispatched a member of his staff to Peru for further investigation.
In February 1998, another U.S. organization, the Latin American Alliance for the Family, sent its director to Peru to examine the situation, again without consulting USAID officials. On February 25, 1998, a subcommittee of the U.S. House Committee on International Relations, chaired by Smith, held a hearing on "the Peruvian population control program". Allegations that USAID was funding forced sterilizations in Peru prompted Congressman Todd Tiahrt to introduce the "Tiahrt Amendment" in 1998. However, the subcommittee concluded that USAID's funding had not supported the abuses committed by the Peruvian government.
Office of Inspector General investigation into alleged terror-linked funding
According to a February 2024 report, the USAID's Office of Inspector General launched an investigation in 2023 into the agency for awarding $110,000 in 2021 to Helping Hand for Relief and Development (HHRD), a charity in Michigan that Republicans on the House Foreign Affairs Committee have accused in recent years of sharing ties to terrorism organizations in South Asia. In August 2023, USAID's Vetting Support Unit cleared HHRD to receive the grant. In 2024, researchers at George Mason University reported that allegations against HHRD were part of a campaign targeting large American Muslim charities based on the manipulation of poorly-sourced information.
See also
Notes
References
Sources
Further reading
External links
Office of Inspector General, U.S. Agency for International Development
Agency for International Development in the Federal Register
Agency for International Development on USAspending.gov
Records of the Agency for International Development (1935–89) in the National Archives
Category:United States Agency for International Development
Category:1961 establishments in Washington, D.C.
Category:Civil affairs
Category:Foreign relations agencies of the United States
Category:Government agencies established in 1961
Category:Independent agencies of the United States government
Category:International development agencies
Category:Organizations based in Washington, D.C.
Category:Presidency of John F. Kennedy
Category:Second presidency of Donald Trump
politics_government | 12,840

143115 | Astrophotography | https://en.wikipedia.org/wiki/Astrophotography
Astrophotography, also known as astronomical imaging, is the photography or imaging of astronomical objects, celestial events, or areas of the night sky. The first photograph of an astronomical object (the Moon) was taken in 1839, but it was not until the late 19th century that advances in technology allowed for detailed stellar photography. Besides being able to record the details of extended objects such as the Moon, Sun, and planets, modern astrophotography has the ability to image objects outside of the visible spectrum of the human eye such as dim stars, nebulae, and galaxies. This is accomplished through long exposures, since both film and digital cameras can accumulate and sum photons over long periods of time, or by using specialized optical filters that limit the photons to a certain wavelength.
Photography using extended exposure-times revolutionized the field of professional astronomical research, recording hundreds of thousands of new stars, and nebulae invisible to the human eye. Specialized and ever-larger optical telescopes were constructed as essentially big cameras to record images on photographic plates. Astrophotography had an early role in sky surveys and star classification but over time it has used ever more sophisticated image sensors and other equipment and techniques designed for specific fields.
Since almost all observational astronomy today uses photography, the term "astrophotography" usually refers to its use in amateur astronomy, seeking aesthetically pleasing images rather than scientific data. Amateurs use a wide range of special equipment and techniques.
Methods
With a few exceptions, astronomical photography employs long exposures since both film and digital imaging devices can accumulate light photons over long periods of time. The amount of light hitting the film or detector is also increased by increasing the diameter of the primary optics (the objective) being used. Urban areas produce light pollution so equipment and observatories doing astronomical imaging are often located in remote locations to allow long exposures without the film or detectors being swamped with stray light.
Since the Earth is constantly rotating, telescopes and equipment are rotated in the opposite direction to follow the apparent motion of the stars overhead (called diurnal motion). This is accomplished by using either equatorial or computer-controlled altazimuth telescope mounts to keep celestial objects centered while Earth rotates. All telescope mount systems suffer from induced tracking errors due to imperfect motor drives, the mechanical sag of the telescope, and atmospheric refraction. Tracking errors are corrected by keeping a selected aiming point, usually a guide star, centered during the entire exposure. Sometimes (as in the case of comets) the object to be imaged is moving, so the telescope has to be kept constantly centered on that object. This guiding is done through a second co-mounted telescope called a "guide scope" or via some type of "off-axis guider", a device with a prism or optical beam splitter that allows the observer to view the same image in the telescope that is taking the picture. Guiding was formerly done manually throughout the exposure with an observer standing at (or riding inside) the telescope making corrections to keep a cross hair on the guide star. Since the advent of computer-controlled systems, this is accomplished by an automated system in professional and even amateur equipment.
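To make the need for tracking concrete, the sketch below is a rough, non-authoritative Python calculation of how far a star trails across an untracked sensor. It assumes the standard sidereal rate of roughly 15 arcseconds per second of time and the usual small-angle image-scale formula; the focal length, pixel size and declination in the example are arbitrary illustrative values, not figures from this article.

    import math

    SIDEREAL_RATE_ARCSEC_PER_S = 15.041  # apparent sky motion at the celestial equator

    def image_scale_arcsec_per_pixel(focal_length_mm, pixel_size_um):
        # Small-angle approximation: 206.265 arcseconds per radian, per mm of focal length.
        return 206.265 * pixel_size_um / focal_length_mm

    def untracked_trail_pixels(exposure_s, declination_deg, focal_length_mm, pixel_size_um):
        """Approximate star-trail length, in pixels, on a fixed (non-tracking) mount."""
        drift_arcsec = (SIDEREAL_RATE_ARCSEC_PER_S
                        * math.cos(math.radians(declination_deg))
                        * exposure_s)
        return drift_arcsec / image_scale_arcsec_per_pixel(focal_length_mm, pixel_size_um)

    # Illustrative values: 30 s exposure, declination +30 degrees, 400 mm lens, 4 micron pixels.
    print(round(untracked_trail_pixels(30, 30, 400, 4.0)))  # about 189 pixels of trailing

Even a half-minute exposure at a modest focal length would smear each star across hundreds of pixels, which is why tracking and guiding are treated as prerequisites rather than refinements.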
Astronomical photography was one of the earliest types of scientific photography and almost from its inception it diversified into subdisciplines that each have a specific goal including star cartography, astrometry, stellar classification, photometry, spectroscopy, polarimetry, and the discovery of astronomical objects such as asteroids, meteors, comets, variable stars, novae, and even unknown planets. These often require specialized equipment such as telescopes designed for precise imaging, for wide field of view (such as Schmidt cameras), or for work at specific wavelengths of light. Astronomical CCD cameras may cool the sensor to reduce thermal noise and to allow the detector to record images in other spectra such as in infrared astronomy. Specialized filters are also used to record images in specific wavelengths.
History
The development of astrophotography as a scientific tool was pioneered in the mid-19th century for the most part by experimenters and amateur astronomers, or so-called "gentleman scientists" (although, as in other scientific fields, these were not always men). Because of the very long exposures needed to capture relatively faint astronomical objects, many technological problems had to be overcome. These included making telescopes rigid enough so they would not sag out of focus during the exposure, building clock drives that could rotate the telescope mount at a constant rate, and developing ways to accurately keep a telescope aimed at a fixed point over a long period of time. Early photographic processes also had limitations. The daguerreotype process was far too slow to record anything but the brightest objects, and the wet plate collodion process limited exposures to the time the plate could stay wet.Memoir, Henry Draper 1837–1882, George F. Barker read before the National Academy, April 18, 1888.
The first known attempt at astronomical photography was by Louis Jacques Mandé Daguerre, inventor of the daguerreotype process which bears his name, who attempted in 1839 to photograph the Moon. Tracking errors in guiding the telescope during the long exposure meant the photograph came out as an indistinct fuzzy spot. John William Draper, New York University Professor of Chemistry, physician and scientific experimenter managed to make the first successful photograph of the Moon a year later on March 23, 1840, taking a 20-minute-long daguerreotype image using a reflecting telescope.
The Sun may have been first photographed in an 1845 daguerreotype by the French physicists Léon Foucault and Hippolyte Fizeau. A failed attempt to obtain a photograph of a total eclipse of the Sun was made by the Italian physicist Gian Alessandro Majocchi during an eclipse that took place in his home city of Milan on July 8, 1842. He later gave an account of his attempt and the daguerreotype photographs he obtained, in which he wrote:
The Sun's corona was first successfully imaged during the solar eclipse of July 28, 1851. Dr. August Ludwig Busch, the Director of the Königsberg Observatory, gave instructions for a local daguerreotypist named Johann Julius Friedrich Berkowski to image the eclipse. Busch himself was not present at Königsberg (now Kaliningrad, Russia), but preferred to observe the eclipse from nearby Rixhoft. The telescope used by Berkowski was attached to the Königsberg heliometer and had an aperture of only , and a focal length of . Commencing immediately after the beginning of totality, Berkowski exposed a daguerreotype plate for 84 seconds in the focus of the telescope, and on developing it an image of the corona was obtained. He also exposed a second plate for about 40 to 45 seconds, but it was spoiled when the Sun broke out from behind the Moon. More detailed photographic studies of the Sun were made by the British astronomer Warren De la Rue starting in 1861.
The first photograph of a star other than the Sun was a daguerreotype of the star Vega by astronomer William Cranch Bond and daguerreotype photographer and experimenter John Adams Whipple, on July 16 and 17, 1850, with Harvard College Observatory's 15-inch Great Refractor. In 1863 the English chemist William Allen Miller and English amateur astronomer Sir William Huggins used the wet collodion plate process to obtain the first ever photographic spectrogram of a star, Sirius and Capella (Spectrometers, ASTROLab of Mont-Mégantic National Park). In 1872 American physician Henry Draper, the son of John William Draper, recorded the first spectrogram of a star (Vega) to show absorption lines.
Astronomical photography did not become a serious research tool until the late 19th century, with the introduction of dry plate photography. It was first used by Sir William Huggins and his wife Margaret Lindsay Huggins, in 1876, in their work to record the spectra of astronomical objects. In 1880, Henry Draper used the new dry plate process with a photographically corrected refracting telescope made by Alvan Clark (loen.ucolick.org, Lick Observatory 12-inch Telescope) to make a 51-minute exposure of the Orion Nebula, the first photograph of a nebula ever made. A breakthrough in astronomical photography came in 1883, when amateur astronomer Andrew Ainslie Common used the dry plate process to record several images of the same nebula in exposures up to 60 minutes with a reflecting telescope that he constructed in the backyard of his home in Ealing, outside London. These images for the first time showed stars too faint to be seen by the human eye.
UCO Lick Observatory page on the Crossley telescope
The first all-sky photographic astrometry project, Astrographic Catalogue and Carte du Ciel, was started in 1887. It was conducted by 20 observatories all using special photographic telescopes with a uniform design called normal astrographs, all with an aperture of around and a focal length of , designed to create images with a uniform scale on the photographic plate of approximately 60 arcsecs/mm while covering a 2° × 2° field of view. The attempt was to accurately map the sky down to the 14th magnitude but it was never completed.
The beginning of the 20th century saw the worldwide construction of refracting telescopes and sophisticated large reflecting telescopes specifically designed for photographic imaging. Towards the middle of the century, giant telescopes such as the Hale Telescope and the Samuel Oschin telescope at Palomar Observatory were pushing the limits of film photography.
Some progress was made in the field of photographic emulsions and in the techniques of forming gas hypersensitization, cryogenic cooling (see, for example, U.S. Patent No. 4,038,669, Cryogenic Cameras, John M. Guerra, July 26, 1977), and light amplification, but starting in the 1970s, after the invention of the CCD, photographic plates were gradually replaced by electronic imaging in professional and amateur observatories. CCDs are far more light-sensitive, do not drop off in sensitivity over long exposures the way film does ("reciprocity failure"), have the ability to record in a much wider spectral range, and simplify storage of information. Telescopes now use many configurations of CCD sensors, including linear arrays and large mosaics of CCD elements equivalent to 100 million pixels, designed to cover the focal plane of telescopes that formerly used photographic plates.
The late 20th century saw advances in astronomical imaging take place in the form of new hardware, with the construction of giant multi-mirror and segmented mirror telescopes. It would also see the introduction of space-based telescopes, such as the Hubble Space Telescope. Operating outside the atmosphere's turbulence, scattered ambient light and the vagaries of weather allows the Hubble Space Telescope, with a mirror diameter of , to record stars down to the 30th magnitude, some 100 times dimmer than what the 5-meter Mount Palomar Hale Telescope could record in 1949.
Amateur astrophotography
Astrophotography is a popular hobby among photographers and amateur astronomers. Techniques range from basic film and digital cameras on tripods up to methods and equipment geared toward advanced imaging. Amateur astronomers and amateur telescope makers also use homemade equipment and modified devices.
Media
Images are recorded on many types of media and imaging devices including single-lens reflex cameras, 35 mm film, 120 film, digital single-lens reflex cameras, simple amateur-level, and professional-level commercially manufactured astronomical CCD and CMOS cameras, video cameras, and even off-the-shelf webcams used for Lucky imaging.
Conventional over-the-counter film has long been used for astrophotography. Film exposures range from seconds to over an hour. Commercially available color film stock is subject to reciprocity failure over long exposures, in which sensitivity to light of different wavelengths appears to drop off at different rates as the exposure time increases, leading to a color shift in the image and reduced sensitivity overall as a function of time. This is compensated for, or at least reduced, by cooling the film (see Cold camera photography). It can also be compensated for by using the same technique used in professional astronomy of taking photographs at different wavelengths that are then combined to create a correct color image. Since film is much slower than digital sensors, tiny errors in tracking can be corrected without much noticeable effect on the final image. Film astrophotography has become less popular as digital photography offers lower ongoing costs, greater sensitivity, and more convenience.
Since the late 1990s amateurs have been following the professional observatories in the switch from film to digital CCDs for astronomical imaging. CCDs are more sensitive than film, allowing much shorter exposure times, and have a linear response to light. Images can be captured in many short exposures to create a synthetic long exposure. Digital cameras also have minimal or no moving parts and the ability to be operated remotely via an infrared remote or computer tethering, limiting vibration. Simple digital devices such as webcams can be modified to allow access to the focal plane and even (after the cutting of a few wires), for long exposure photography. Digital video cameras are also used. There are many techniques and pieces of commercially manufactured equipment for attaching digital single-lens reflex (DSLR) cameras and even basic point and shoot cameras to telescopes. Consumer-level digital cameras suffer from image noise over long exposures, so there are many techniques for cooling the camera, including cryogenic cooling. Astronomical equipment companies also now offer a wide range of purpose-built astronomical CCD cameras complete with hardware and processing software. Many commercially available DSLR cameras have the ability to take long time exposures combined with sequential (time-lapse) images allowing the photographer to create a motion picture of the night sky. CMOS cameras are increasingly replacing CCD cameras in the amateur sector. Modern CMOS sensors offer higher quantum efficiency, lower thermal and read noise and faster readout speeds than commercially available CCD sensors.
Post-processing
Both digital camera images and scanned film images are usually adjusted in image processing software to improve the image in some way. Images can be brightened and manipulated in a computer to adjust color and increase the contrast. More sophisticated techniques involve capturing multiple images (sometimes thousands) and compositing them in an additive process to sharpen images, overcome atmospheric seeing, negate tracking issues, bring out faint objects with a poor signal-to-noise ratio, and filter out light pollution.
Digital camera images may also need further processing to reduce the image noise from long exposures, including subtracting a “dark frame” and a process called image stacking or "shift-and-add". Commercial, freeware and free software packages are available specifically for astronomical photographic image manipulation.
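As a rough illustration of the calibration and stacking step just described, the following Python/NumPy sketch subtracts a master dark frame and average-stacks the exposures. It is a minimal, non-authoritative example under simplifying assumptions: real stacking tools such as DeepSkyStacker, Siril or PixInsight also align (register) the frames, apply flat-field correction, and use outlier-rejecting combinations rather than a plain mean; the function name here is purely illustrative.

    import numpy as np

    def calibrate_and_stack(light_frames, dark_frames):
        """Minimal dark-frame subtraction and mean stack.

        light_frames, dark_frames: lists of 2-D NumPy arrays of identical shape,
        e.g. loaded from FITS files. Frame alignment and flat-fielding are omitted.
        """
        # Median-combine the darks so hot pixels and random read noise average out.
        master_dark = np.median(np.stack(dark_frames), axis=0)

        # Subtract the thermal/bias signal from every light frame.
        calibrated = [frame.astype(float) - master_dark for frame in light_frames]

        # Plain mean stack; sigma-clipped or median stacking is commonly used
        # instead to reject satellite trails and cosmic-ray hits.
        return np.mean(np.stack(calibrated), axis=0)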
"Lucky imaging" is a secondary technique that involves taking a video of an object rather than standard long exposure photos. Software can then select the highest quality images which can then be stacked. This is typically used when observing planetary objects and helps to overcome atmospheric conditions.
Color and brightness
Astronomical pictures, like observational astronomy and photography from space exploration, show astronomical objects and phenomena in different colors and brightness, and often as composite images. This is done to highlight different features or reflect different conditions, which makes it necessary to note those conditions alongside the image.
Images attempting to reproduce the true color and appearance of an astronomical object or phenomenon need to consider many factors, including how the human eye works. Producing analyzable or representative images, such as those returned by space missions from the surface of Mars, Venus or Titan, requires evaluating several factors, particularly the differing atmospheric conditions under which they are taken.
Hardware
Astrophotographic hardware among non-professional astronomers varies widely since the photographers themselves range from general photographers shooting some form of aesthetically pleasing images to very serious amateur astronomers collecting data for scientific research. As a hobby, astrophotography has many challenges that have to be overcome that differ from conventional photography and from what is normally encountered in professional astronomy.
Since most people live in urban areas, equipment often needs to be portable so that it can be taken far away from the lights of major cities or towns to avoid urban light pollution. Urban astrophotographers may use special light-pollution or narrow-band filters and advanced computer processing techniques to reduce ambient urban light in the background of their images. They may also stick to imaging bright targets like the Sun, Moon and planets. Another method used by amateurs to avoid light pollution is to set up, or rent time, on a remotely operated telescope at a dark sky location. Other challenges include setup and alignment of portable telescopes for accurate tracking, working within the limitations of “off the shelf” equipment, the endurance of monitoring equipment, and sometimes manually tracking astronomical objects over long exposures in a wide range of weather conditions.
Some camera manufacturers modify their products to be used as astrophotography cameras, such as Canon's EOS 60Da, based on the EOS 60D but with a modified infrared filter and a low-noise sensor with heightened hydrogen-alpha sensitivity for improved capture of red hydrogen emission nebulae.
There are also cameras specifically designed for amateur astrophotography based on commercially available imaging sensors. They may allow the sensor to be cooled to reduce thermal noise in long exposures, provide raw image readout, and be controlled from a computer for automated imaging. Raw image readout allows better image processing later by retaining all the original image data, which, along with stacking, can assist in imaging faint deep-sky objects.
With very low light capability, a few specific models of webcam are popular for solar, lunar, and planetary imaging. Mostly, these are manually focused cameras containing a CCD sensor instead of the more common CMOS. The lenses of these cameras are removed and the cameras are then attached to telescopes to record images, videos, or both. In newer techniques, videos of very faint objects are taken and the sharpest frames of the video are 'stacked' together to obtain a still image of respectable contrast. The Philips PCVC 740K and SPC 900 are among the few webcams liked by astrophotographers. Any smartphone that allows long exposures can be used for this purpose, but some phones have a specific mode for astrophotography that will stitch together multiple exposures.
Equipment setups
Fixed or tripod
The most basic types of astronomical photographs are made with standard cameras and photographic lenses mounted in a fixed position or on a tripod. Foreground objects or landscapes are sometimes composed in the shot. Objects imaged are constellations, interesting planetary configurations, meteors, and bright comets. Exposure times must be short (under a minute) to avoid having a star's point image become an elongated line due to the Earth's rotation. Camera lens focal lengths are usually short, as longer lenses will show image trailing in a matter of seconds. A rule of thumb called the 500 rule states that, to keep stars point-like,
Maximum exposure time in seconds = 500 / (crop factor × focal length in mm)
regardless of aperture or ISO setting. For example, with a 35 mm lens on an APS-C sensor, the maximum time is ≈ 9.5 s. A more accurate calculation takes into account pixel pitch and declination.
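As a quick, non-authoritative illustration of this rule of thumb, the Python snippet below reproduces the APS-C example above. The more accurate calculations mentioned in the text, which also factor in pixel pitch, aperture and declination (such as the NPF rule), are not implemented here; the function name is hypothetical.

    def max_untracked_exposure_s(focal_length_mm, crop_factor=1.0):
        """'500 rule' estimate of the longest fixed-tripod exposure, in seconds,
        before stars visibly trail. A heuristic only."""
        return 500.0 / (focal_length_mm * crop_factor)

    # Example from the text: 35 mm lens on an APS-C sensor (crop factor ~1.5).
    print(round(max_untracked_exposure_s(35, 1.5), 1))  # -> 9.5 seconds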
Allowing the stars to intentionally become elongated lines in exposures lasting several minutes or even hours, called "star trails", is an artistic technique sometimes used.
Tracking mounts
Telescope mounts that compensate for the Earth's rotation are used for longer exposures without objects being blurred. They include commercial equatorial mounts and homemade equatorial devices such as barn door trackers and equatorial platforms. Mounts can suffer from inaccuracies due to backlash in the gears, wind, and imperfect balance, and so a technique called auto guiding is used as a closed feedback system to correct for these inaccuracies.
Tracking mounts come in two forms: single axis and dual axis. Single-axis mounts are often known as star trackers. Star trackers have a single motor which drives the right ascension axis. This allows the mount to compensate for the Earth's rotation. Star trackers rely on the user ensuring the mount is polar-aligned with high accuracy, as the mount is unable to correct in the secondary declination axis, limiting exposure times.
Dual axis mounts use two motors to drive both the right ascension and the declination axis together. This mount will compensate for the Earth's rotation by driving the right ascension axis, similar to a star tracker. However using an auto-guiding system, the secondary declination axis can also be driven, compensating for errors in polar alignment, allowing for significantly longer exposure times.
"Piggyback" photography
Piggyback astronomical photography is a method where a camera/lens is mounted on an equatorially mounted astronomical telescope. The telescope is used as a guide scope to keep the field of view centered during the exposure. This allows the camera to use a longer exposure and/or a longer focal length lens or even be attached to some form of photographic telescope co-axial with the main telescope.
Telescope focal plane photography
In this type of photography, the telescope itself is used as the "lens" collecting light for the film or CCD of the camera. Although this allows for the magnification and light-gathering power of the telescope to be used, it is one of the most difficult astrophotography methods (Prime focus astrophotography – Prescott Astronomy Club). This is because of the difficulties in centering and focusing sometimes very dim objects in the narrow field of view, contending with magnified vibration and tracking errors, and the added expense of equipment (such as sufficiently sturdy telescope mounts, camera mounts, camera couplers, off-axis guiders, guide scopes, illuminated cross-hairs, or auto-guiders mounted on the primary telescope or the guide-scope). There are several different ways cameras (with removable lenses) are attached to amateur astronomical telescopes (Keith Mackay, Keith's Astrophotography and Astronomy site, Methods of Astrophotography), including:
Prime focus – In this method the image produced by the telescope falls directly on the film or CCD with no intervening optics or telescope eyepiece.
Positive projection – A method in which the telescope eyepiece (eyepiece projection) or a positive lens (placed after the focal plane of the telescope objective) is used to project a much more magnified image directly onto the film or CCD. Since the image is magnified with a narrow field of view this method is generally used for lunar and planetary photography.
Negative projection – This method, like positive projection, produces a magnified image. A negative lens, usually a Barlow or a photographic teleconverter, is placed in the light cone before the focal plane of the telescope objective.
Compression – Compression uses a positive lens (also called a focal reducer), placed in the converging cone of light before the focal plane of the telescope objective, to reduce overall image magnification. It is used on very long focal length telescopes, such as Maksutovs and Schmidt–Cassegrains, to obtain a wider field of view, or to reduce the focal ratio of the setup thereby increasing the speed of the system.
When the camera lens is not removed (or cannot be removed) a common method used is afocal photography, also called afocal projection. In this method, both the camera lens and the telescope eyepiece are attached. When both are focused at infinity the light path between them is parallel (afocal), allowing the camera to basically photograph anything the observer can see. This method works well for capturing images of the moon and brighter planets, as well as narrow field images of stars and nebulae. Afocal photography was common with early 20th-century consumer-level cameras since many models had non-removable lenses. It has grown in popularity with the introduction of point and shoot digital cameras since most models also have non-removable lenses.
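The arithmetic behind these coupling methods is straightforward; the sketch below is a rough Python illustration, not a prescribed procedure, of how the effective focal length, focal ratio and image scale change with a Barlow, a focal reducer, or an afocal camera-plus-eyepiece combination. The telescope, reducer and pixel-size figures in the example are made-up illustrative values, and the function names are hypothetical.

    def effective_focal_length_mm(native_fl_mm, amplifier=1.0):
        """amplifier > 1 models a Barlow/teleconverter (negative projection);
        amplifier < 1 models a focal reducer (compression); 1.0 is prime focus."""
        return native_fl_mm * amplifier

    def afocal_focal_length_mm(telescope_fl_mm, eyepiece_fl_mm, camera_lens_fl_mm):
        # Afocal coupling: system focal length = telescope FL x (camera lens FL / eyepiece FL).
        return telescope_fl_mm * camera_lens_fl_mm / eyepiece_fl_mm

    def image_scale_arcsec_per_pixel(focal_length_mm, pixel_size_um):
        # 206.265 arcseconds per radian, per millimetre of focal length.
        return 206.265 * pixel_size_um / focal_length_mm

    # Illustrative setup: a 200 mm aperture, 2000 mm (f/10) telescope with a 0.63x
    # focal reducer and a camera with 4 micron pixels.
    fl = effective_focal_length_mm(2000, 0.63)              # 1260 mm, i.e. about f/6.3
    print(round(image_scale_arcsec_per_pixel(fl, 4.0), 2))  # about 0.65 arcsec per pixel

A reducer therefore widens the field and shortens the required exposure at the cost of image scale, while a Barlow or eyepiece projection does the opposite, which is why the latter are favoured for lunar and planetary work.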
Filters
Astronomical filters usually come as sets and are manufactured to specific standards, in order to allow different observatories to make observations to the same standard. A common filter standard in the astronomy community is the Johnson–Morgan UBV system, designed to match a CCD’s color response to that of photographic film. However, there are over 200 standards available.
Remote telescopes
Fast Internet access in the last part of the 20th century, along with advances in computer-controlled telescope mounts and CCD cameras, allows the use of remote telescopes, letting amateur astronomers not affiliated with major telescope facilities take part in research and deep-sky imaging. This enables the imager to control a telescope far away in a dark location. The observers can image through the telescopes using CCD cameras.
Imaging can be done regardless of the location of the user or the telescopes they wish to use. The digital data collected by the telescope is then transmitted and displayed to the user by means of the Internet. An example of a digital remote telescope operation for public use via the Internet is The Bareket Observatory.
Gallery
See also
Astrophotographers
References
Further reading
WikiHOW - How to Photograph the Night Sky (Astrophotography)
External links
Large collection of astronomical photographs taken from the Lick Observatory from the Lick Observatory Records Digital Archive, UC Santa Cruz Library’s Digital Collections
History of Astrophotography Timeline – 1800–1860, 1861–1900
One of the first photos of the Sun (taken in 1845)
Peter Abrahams: The Early History of Astrophotography
Ricky Leon Murphy: CCD's Versus Professional Plate Film (astronomyonline.org)
The History of Astrophotography (astrosurf.com)
Astrophotography Techniques – Astropix.com
Description of the types of images used in astrophotography.
The Beauty of Space Photography Documentary produced by Off Book (web series)
Beginners Guide to Astrophotography (Skies & Scopes)
What is Astrophotography?
Category:Astronomical imaging
Category:Space art
Category:Photographic techniques
Category:Photography by genre
Category:Articles containing video clips
arts_entertainment | 4,179

143284 | Restoration comedy | https://en.wikipedia.org/wiki/Restoration_comedy
Restoration comedy is English comedy written and performed in the Restoration period of 1660–1710. Comedy of manners is used as a synonym for it (George Henry Nettleton, Arthur, British Dramatists from Dryden to Sheridan, p. 149). After public stage performances were banned for 18 years by the Puritan regime, the reopening of the theatres in 1660 marked a renaissance of English drama (see also Antitheatricality). Sexually explicit language was encouraged by King Charles II (1660–1685) personally and by the rakish style of his court. Historian George Norman Clark argues:
The socially diverse audiences included aristocrats, their servants and hangers-on and a major middle-class segment. They were attracted to the comedies by up-to-the-minute topical writing, crowded and bustling plots, introduction of the first professional actresses, and the rise of the first celebrity actors. The period saw the first professional female playwright, Aphra Behn.
Theatre companies
Original patent companies, 1660–1682
Charles II was an active and interested patron of drama. Soon after his restoration in 1660 he granted exclusive staging rights, so-called Royal patents, to the King's Company and the Duke's Company, led by two middle-aged Caroline playwrights, Thomas Killigrew and William Davenant. The patentees scrambled for performance rights to the previous generation's Jacobean and Caroline plays, as the first necessity for economic survival before any new plays existed.Hume, 19–21.
Their next priority was to build splendid patent theatres in Drury Lane and Dorset Gardens, respectively. Striving to outdo each other, Killigrew and Davenant ended with quite similar theatres, both designed by Christopher Wren, both optimally providing music and dancing, and both fitted with moveable scenery and elaborate machines for thunder, lightning, and waves.Hume, 19–21.
The Restoration dramatists renounced the tradition of satire recently embodied by Ben Jonson, devoting themselves to the comedy of manners.Hodgart (2009) pp. 194 and 189.
The audience of the early Restoration period was not exclusively courtly, as has sometimes been supposed, but it was quite small and could barely support two companies. There was no untapped reserve of occasional playgoers. Ten consecutive performances constituted a success. This closed system forced playwrights to respond strongly to popular taste. Fashions in drama changed almost week by week rather than season by season, as each company responded to the offerings of the other, and new plays were urgently sought. In this hectic climate the new genres of heroic drama, pathetic drama and Restoration comedy were born and flourished.Hume, 17, 23.
United Company, 1682–1695
Both the quantity and quality of drama suffered in 1682 when the more successful Duke's Company absorbed the struggling King's Company to form the United Company. Production of new plays dropped off sharply in the 1680s, affected by the monopoly and the political situation (see Decline of comedy below). The influence and incomes of actors dropped too.Milhous, 38–48. In the late 1680s, predatory investors ("adventurers") converged on the United Company. Management was taken over by the lawyer Christopher Rich, who tried to finance a tangle of "farmed" shares and sleeping partners by slashing salaries and dangerously by abolishing traditional perks of senior performers, who were stars with the clout to fight back.Milhous, pp. 51–55.
War of the theatres, 1695–1700
The company owners, wrote the young United Company employee Colley Cibber, "had made a monopoly of the stage, and consequently presum'd they might impose what conditions they pleased upon their people. [They] did not consider that they were all this while endeavouring to enslave a set of actors whom the public were inclined to support."Milhous, p. 66. Performers like the legendary Thomas Betterton, the tragedienne Elizabeth Barry and the rising young comedian Anne Bracegirdle had the audience on their side, and confident of this, walked out.Milhous, pp. 68–74.
The actors gained a Royal "licence to perform", so bypassing Rich's ownership of the original Duke's and King's Company patents from 1660 and forming their own cooperative company. This venture was set up with detailed rules for avoiding arbitrary managerial authority, regulating the ten actors' shares, setting the conditions of salaried employees and the sickness and retirement benefits of both categories. In 1695, the cooperative had the good luck to open with the première of William Congreve's famous Love For Love and the skill to make it a huge box-office success.Milhous, pp. 52–55.
London again had two competing companies. Their dash to attract audiences briefly revitalised Restoration drama, but also set it on a fatal slope to the lowest common denominator of public taste. Rich's company notoriously offered Bartholomew Fair-type attractions – high kickers, jugglers, rope dancers, and performing animals. The co-operating actors, while appealing to snobbery by setting themselves up as the one legitimate theatre company in London, were not above retaliating with "prologues recited by boys of five and epilogues declaimed by ladies on horseback".Dobrée, xxi.
Actors
First actresses
Restoration comedy was strongly influenced by the first professional actresses. Before the closing of the theatres, all female roles had been taken by boy players. The predominantly male audiences of the 1660s and 1670s were curious, censorious and delighted at the novelty of seeing real women engage in risqué repartee and take part in physical seduction scenes. Samuel Pepys refers many times in his diary to visiting the playhouse to watch or re-watch performances by particular actresses and to his enjoyment of these.
Daringly suggestive comedy scenes involving women became especially common, although Restoration actresses were, just like male actors, expected to do justice to all kinds and moods of plays. Their role in the development of Restoration tragedy is also important, compare She-tragedy.
A speciality introduced almost as early as actresses themselves was the breeches role: an actress appearing in male clothes (breeches, the tight-fitting knee-length pants that were the standard male garment of the time), for instance to play a witty heroine who disguises herself as a boy to hide or to engage in escapades disallowed to girls. A quarter of the plays produced on the London stage between 1660 and 1700 contained breeches roles. Women playing them behaved with the freedom society allowed to men.
Some feminist critics such as Jacqueline Pearson saw them as subverting conventional gender roles and empowering female members of the audience. Elizabeth Howe has objected that the male disguise, when studied in relation to play texts, prologues, and epilogues, comes out as "little more than yet another means of displaying the actress as a sexual object" to male patrons, by showing off her body, normally hidden by a skirt, outlined by the male outfit (see also Antitheatricality#Restoration theatre).
Successful Restoration actresses included Charles II's mistress Nell Gwyn, the tragedienne Elizabeth Barry, famous for an ability to "move the passions" and make whole audiences cry, and the 1690s comedian Anne Bracegirdle. Susanna Mountfort (Susanna Verbruggen) had many roles written specially for her in the 1680s and 1690s. Letters and memoirs of the period show men and women in the audience relishing Mountfort's swaggering, roistering impersonations of young women breeched to enjoy the social and sexual freedom of male Restoration rakes.
First celebrity actors
Male and female actors on the London stage in the Restoration period became for the first time public celebrities. Documents of the period show audiences attracted to performances by the talents of specific actors as much as by specific plays, and more than by authors, who seem to have been the least important draw, no performance being advertised with the author's name until 1699. Although playhouses were built for large audiences – the second Drury Lane theatre from 1674 held 2,000 patrons – they were compact in design and an actor's charisma could be intimately projected from the thrust stage.
With two companies competing for their services from 1660 to 1682, star actors could negotiate star deals, comprising company shares and benefit nights as well as salaries. This advantage changed when the two companies were amalgamated in 1682. The way the actors rebelled and took command of a new company in 1695 is an illustration of how far their status and power had developed since 1660.
The greatest fixed stars among Restoration actors were Elizabeth Barry ("Famous Mrs Barry" who "forc'd Tears from the Eyes of her Auditory") and Thomas Betterton, both active in running the actors' revolt in 1695 and both original patent-holders in the resulting actors' cooperative.
Betterton played every great male part there was from 1660 into the 18th century. After watching Hamlet in 1661, Pepys reports in his diary that the young beginner Betterton "did the prince's part beyond imagination." Such expressive performances seem to have attracted playgoers as magnetically as did the novelty of seeing women on the stage. He was soon established as the leading man in the Duke's Company, and played Dorimant, the seminal irresistible Restoration rake, at the première of George Etherege's Man of Mode (1676).
Betterton's position remained unassailed through the 1680s, both as leading man of the United Company and as its stage manager and de facto day-to-day leader. He remained loyal to Rich longer than many of his co-workers, but eventually he headed an actors' walkout in 1695 and became the acting manager of the new company.
Comedies
Variety and dizzying fashion changes are typical of Restoration comedy. Though the "Restoration drama" unit taught to college students is likely to be telescoped in a way that makes the plays all sound contemporary, scholars now have a strong sense of the rapid evolution of English drama over these 40 years and its social and political causes. The influence of theatre-company competition and playhouse economics is also acknowledged.
Restoration comedy peaked twice. The genre came to marked maturity in the mid-1670s with an extravaganza of aristocratic comedies. Twenty lean years followed this short golden age, though the achievement of Aphra Behn in the 1680s can be noted. In the mid-1690s, a brief second Restoration comedy renaissance arose, aimed at a wider audience. The comedies of the golden 1670s and the 1690s peak times are extremely different from each other.
An attempt is made below to illustrate the generational taste shift by describing The Country Wife (1675) and The Provoked Wife (1697) in some detail. The two plays differ in some typical ways, just as a Hollywood movie of the 1950s differs from one of the 1970s. The plays are not offered as "typical" of their decades. There exist no typical comedies of the 1670s or the 1690s. Even within these two short peak-times, comedy types kept mutating and multiplying.
Aristocratic comedy, 1660–1680
The drama of the 1660s and 1670s was vitalised by the competition between the two patent companies created at the Restoration, and by the personal interest of Charles II, while comic playwrights arose to the demand for new plays. They stole freely from the contemporary French and Spanish stage, from English Jacobean and Caroline plays, and even from Greek and Roman classical comedies, combining the looted plotlines in adventurous ways. Resulting differences of tone in a single play were appreciated rather than frowned on: audiences prized "variety" within as well as between plays.
Early Restoration audiences had little enthusiasm for structurally simple, well-shaped comedies such as those of Molière. They demanded bustling, crowded multi-plot action and fast pace. Even a splash of high heroic drama might be thrown in to enrich the comedy mix, as in George Etherege's Love in a Tub (1664), which has one heroic verse "conflict between love and friendship" plot, one urbane wit comedy plot, and one burlesque pantsing plot. Such incongruities contributed to the low esteem in which Restoration comedy was held in the 18th, 19th and early 20th centuries. Today, such total theatre experience is again valued on the stage and by postmodern academic critics.
The unsentimental or "hard" comedies of John Dryden, William Wycherley, and George Etherege reflected the atmosphere at Court. They celebrated with frankness an aristocratic macho lifestyle of unremitting sexual intrigue and conquest. The Earl of Rochester, a real-life Restoration rake, courtier and poet, is flatteringly portrayed in Etherege's The Man of Mode (1676) as a riotous, witty, intellectual, sexually irresistible aristocrat, a template for posterity's idea of the glamorous Restoration rake, never actually a very common character in Restoration comedy.
Wycherley's The Plain Dealer (1676), a variation on the theme of Molière's Le Misanthrope, was highly regarded for uncompromising satire. It earned Wycherley the appellation "Plain Dealer" Wycherley or "Manly" Wycherley, after the play's main character Manly. The single play that does most to support the charge of obscenity levelled then and now at Restoration comedy is probably Wycherley's The Country Wife (1675).
Example. William Wycherley, The Country Wife (1675)
The Country Wife has three interlinked but distinct plots, which each project sharply different moods:
1. Horner's impotence trick provides the main plot and the organising principle. The upper-class rake Horner mounts a campaign to seduce as many respectable ladies as possible, first spreading a false rumour of his own impotence, so as to be allowed where no other men might go. The trick is a great success and Horner has sex with many married ladies of virtuous reputation, whose husbands are happy to leave them alone with him. In the famously outrageous "China scene", sexual intercourse is assumed to take place repeatedly just off stage, where Horner and his mistresses carry on a sustained double entendre dialogue purportedly about Horner's china collection. The Country Wife is driven by a succession of near-discoveries of the truth about Horner's sexual prowess (and so the truth about the respectable ladies), from which he extricates himself by quick thinking and luck. Horner never reforms, but keeps his secret to the end and is seen to go on merrily reaping the fruits of his planted misinformation past the last act and beyond.
2. The married life of Pinchwife and Margery draws on Molière's School for Wives. Middle-aged Pinchwife has married an ignorant young country girl in the hope that she will not know to cuckold him. Horner teaches her, and Margery cuts a swathe through the sophistications of London marriage without even noticing them. She is enthusiastic about the virile handsomeness of town gallants, rakes, and especially theatre actors (such self-referential stage jokes were nourished by the new higher status of actors), and keeps Pinchwife in a state of continual horror with her plain-spokenness and interest in sex. A running joke is the way Pinchwife's pathological jealousy always leads him into supplying Margery with the very information he wishes her not to have.
3. The courtship of Harcourt and Alithea is a comparatively uplifting love story, in which the witty Harcourt wins the hand of Pinchwife's sister Alithea from the hands of the upper-class town snob Sparkish, to whom she was engaged until discovering he loved her only for her money.
Decline of comedy, 1678–1690
When the two companies merged in 1682 and the London stage became a monopoly, both the number and the variety of new plays dropped sharply. There was a swing away from comedy to serious political drama, reflecting preoccupations and divisions after the Popish Plot (1678) and Exclusion Crisis (1682). The few comedies produced tended to be political in focus, the Whig dramatist Thomas Shadwell sparring with the Tories John Dryden and Aphra Behn. Behn's achievement as an early professional woman writer has been the subject of much recent study.
Comedy renaissance, 1690–1700
During a second wave of Restoration comedy in the 1690s, the "softer" comedies of William Congreve and John Vanbrugh reflected mutating cultural perceptions and great social change. The playwrights of the 1690s set out to appeal to more socially mixed audiences with a strong middle-class element, and to female spectators, for instance by moving the war between the sexes from the arena of intrigue into that of marriage. The focus in comedy is less on young lovers outwitting the older generation, and more on marital relations after the wedding bells.
Thomas Southerne's dark The Wives' Excuse (1691) is not yet "soft": it shows a woman miserably married to the fop Friendall, everybody's friend, whose follies and indiscretions undermine her social worth, as her honour is bound up in his. Mrs Friendall is pursued by a would-be lover, a matter-of-fact rake devoid of all the qualities that made Etherege's Dorimant charming. She is kept from action and choice by the unattractiveness of all her options. The humour of this "comedy" is in the subsidiary love-chase and fornication plots, none in the main plot.
In Congreve's Love for Love (1695) and The Way of the World (1700), the "wit duels" between lovers typical of 1670s comedy are underplayed. The give-and-take set pieces of couples still testing their attraction for each other have mutated into witty prenuptial debates on the eve of marriage, as in the famous "Proviso" scene in The Way of the World (1700). Vanbrugh's The Provoked Wife (1697) follows in the footsteps of Southerne's Wives' Excuse, with a lighter touch and more humanly recognisable characters.
Example. John Vanbrugh, The Provoked Wife (1697)
The Provoked Wife is something of a Restoration problem play in its attention to the subordinate legal position of married women and the complexities of "divorce" and separation, issues that had been highlighted in the mid-1690s by some notorious cases before the House of Lords (see Stone).
Sir John Brute in The Provoked Wife is tired of matrimony. He comes home drunk every night and is continually rude and insulting to his wife. She is meanwhile tempted to embark on an affair with the witty and faithful Constant. Divorce is no option for either of the Brutes at this time, but forms of legal separation have recently arisen and would entail separate maintenance for the wife. Such an arrangement would prevent remarriage. Still, muses Lady Brute, in one of many discussions with her niece Bellinda, "These are good times. A woman may have a gallant and a separate maintenance too."
Bellinda is meanwhile grumpily courted by Constant's friend Heartfree, who is surprised and dismayed to find himself in love with her. The bad example of the Brutes is a constant warning to Heartfree not to marry.
The Provoked Wife is a talk play, with the focus less on love scenes and more on discussions between friends, female (Lady Brute and Bellinda) and male (Constant and Heartfree). These are full of jokes, but are also thoughtful, with a dimension of melancholy and frustration.
After a forged-letter complication, the play ends with marriage between Heartfree and Bellinda and stalemate between the Brutes. Constant continues to pay court to Lady Brute, and she continues to shilly-shally.
End of comedy
The tolerance for Restoration comedy even in its modified form was running out by the end of the 17th century, as public opinion turned to respectability and seriousness faster than playwrights did. Interconnected causes for this shift in taste were demographic change, the Glorious Revolution of 1688, William's and Mary's dislike of the theatre, and the lawsuits brought against playwrights by the Society for the Reformation of Manners, founded in 1692.
In 1698, when Jeremy Collier attacked Congreve and Vanbrugh in his Short View of the Immorality and Profaneness of the English Stage, he was confirming a shift in audience taste that had taken place. At the much-anticipated all-star première in 1700 of The Way of the World, Congreve's first comedy for five years, the audience showed only moderate enthusiasm for that subtle and almost melancholy work. The comedy of sex and wit was about to be replaced by the drama of obvious sentiment and exemplary morality.
After Restoration comedy
Stage history
During the 18th and 19th centuries, the sexual frankness of Restoration comedy ensured that theatre producers cannibalised it or adapted it with a heavy hand, rather than actually performed it. Today Restoration comedy is again appreciated on stage. The classics – Wycherley's The Country Wife and The Plain-Dealer, Etherege's The Man of Mode, and Congreve's Love For Love and The Way of the World – have competition not only from Vanbrugh's The Relapse and The Provoked Wife, but from such dark, unfunny comedies as Thomas Southerne's The Wives Excuse. Aphra Behn, once considered unstageable, has had a renaissance, with The Rover now a repertory favourite.
Literary criticism
Distaste for sexual impropriety long kept Restoration comedy off the stage, locked in a critical poison cupboard. Nineteenth-century critics such as William Hazlitt, though valuing the linguistic energy and "strength" of the canonical writers Etherege, Wycherley and Congreve, found it necessary to temper aesthetic praise with heavy moral condemnation. Aphra Behn received the condemnation without the praise, as outspoken sex comedy was seen as particularly offensive from a woman author. At the turn of the 20th century, an embattled minority of academic Restoration comedy enthusiasts began to appear, such as the editor Montague Summers, whose work ensured that the plays of Restoration comedy authors remained in print.
"Critics remain astonishingly defensive about the masterpieces of this period," wrote Robert D. Hume as late as 1976. It is only over the last few decades that the statement has become untrue, with Restoration comedy acknowledged as a rewarding subject for high theory analysis, and Wycherley's The Country Wife, long branded the most obscene play in the English language, becoming something of an academic favourite. "Minor" comic writers are getting a fair share of attention, especially the post-Aphra Behn generation of women playwrights around the turn of the 18th century: Delarivier Manley, Mary Pix, Catharine Trotter, and Susanna Centlivre. A broad study of most never-reprinted Restoration comedies has been made possible by internet access (by subscription only) to the first editions at the British Library.
List of Restoration comedies
George Etherege – The Comical Revenge (1664), She Would If She Could (1668), The Man of Mode (1676)
John Dryden – An Evening's Love (1668), Marriage a la Mode (1672)
Charles Sedley – The Mulberry-Garden (1668), Bellamira: or, The Mistress (1687)
George Villiers, 2nd Duke of Buckingham – The Rehearsal (1671)
William Wycherley – Love in a Wood (1671), The Country Wife (1675), The Plain Dealer (1676)
Thomas Shadwell – Epsom Wells (1672), The Virtuoso (1676), A True Widow (1678), The Woman Captain (1679), The Squire of Alsatia (1688), Bury Fair (1689), The Volunteers (1692)
Edward Ravenscroft – The Careless Lovers (1673), The London Cuckolds (1681), Dame Dobson (1683), The Canterbury Guests (1694)
John Crowne – The Country Wit (1676), City Politiques (1683), Sir Courtly Nice (1685), The English Friar (1690), The Married Beau (1694)
Thomas Rawlins – Tom Essence (1676), Tunbridge Wells (1678)
Aphra Behn – The Counterfeit Bridegroom (1677), The Rover (1677), The Roundheads (1681), The Revenge (1680), The City Heiress (1682), The Lucky Chance (1686)
Thomas D'Urfey – A Fond Husband (1677), Squire Oldsapp (1678), The Virtuous Wife (1679), Sir Barnaby Whigg (1681), The Royalist (1682), A Commonwealth of Women (1685), A Fool's Preferment (1688), Love for Money (1691), The Marriage-Hater Matched (1692), The Campaigners (1698)
Thomas Otway – Friendship in Fashion (1678)
Thomas Southerne – Sir Anthony Love (1690), The Wives Excuse (1691), The Maid's Last Prayer (1693)
William Congreve – The Old Bachelor (1693), Love For Love (1695), The Way of the World (1700)
John Vanbrugh – The Relapse (1696), The Provoked Wife (1697)
George Farquhar – Love and a Bottle (1698), The Constant Couple (1699), Sir Harry Wildair (1701), The Recruiting Officer (1706), The Beaux' Stratagem (1707)
Susanna Centlivre – The Perjured Husband (1700), The Gamester (1705), The Busie Body (1709)
Film adaptations
The Country Wife, starring Helen Mirren (1977)
See also
Essay of Dramatick Poesie
John Rich (producer)
Restoration style
Notes
References
Colley Cibber, first published 1740, 1976, An Apology for the Life of Colley Cibber. London: J. M. Dent & Sons
Bonamy Dobrée, 1927, Introduction to The Complete Works of Sir John Vanbrugh, vol. 1. Bloomsbury: The Nonesuch Press
Elizabeth Howe, 1992, The First English Actresses: Women and Drama 1660–1700. Cambridge: Cambridge University Press
Robert D. Hume, 1976, The Development of English Drama in the Late Seventeenth Century. Oxford: Clarendon Press
Judith Milhous, 1979, Thomas Betterton and the Management of Lincoln's Inn Fields 1695–1708. Carbondale, Illinois: Southern Illinois University Press
Fidelis Morgan, 1981, The Female Wits – Women Playwrights on the London Stage 1660–1720. London: Virago
Jacqueline Pearson, 1988, The Prostituted Muse: Images of Women and Women Dramatists 1642–1737. New York: St. Martin's Press
Lawrence Stone, 1990, Road to Divorce: England 1530–1987. Oxford: Oxford University Press
William Van Lennep, ed., 1965, The London Stage 1660–1800: A Calendar of Plays, Entertainments & Afterpieces Together with Casts, Box-Receipts and Contemporary Comment Compiled From the Playbills, Newspapers and Theatrical Diaries of the Period, Part 1: 1660–1700. Carbondale, Illinois: Southern Illinois University Press
Further reading
Selected seminal critical studies:
Douglas Canfield, 1997, Tricksters and Estates: On the Ideology of Restoration Comedy. Lexington, Kentucky: The University Press of Kentucky
Thomas H. Fujimura, 1952, The Restoration Comedy of Wit. Princeton: Princeton University Press
Norman N. Holland, 1959, The First Modern Comedies: The Significance of Etherege, Wycherley and Congreve. Cambridge, Massachusetts: Harvard University Press
Robert Markley, 1988, Two-Edg'd Weapons: Style and Ideology in the Comedies of Etherege, Wycherley, and Congreve. Oxford: Clarendon Press
Montague Summers, 1935, Playhouse of Pepys. London: Kegan Paul
Harold Weber, 1986, The Restoration Rake-Hero: Transformations in Sexual Understanding in Seventeenth-Century England. Madison: University of Wisconsin Press
Rose Zimbardo, 1965, Wycherley's Drama: A Link in the Development of English Satire. New Haven: Yale University Press
External links
Restoration playhouses (archived 11 March 2007)
Links to e-texts of Restoration plays, Univ. of Oldenburg, 2007
17th Century Database
Aphra Behn, The Rover
William Congreve, Love For Love
William Congreve, The Way of the World
George Etherege, The Man of Mode (archived 31 October 2000)
John Vanbrugh, The Provoked Wife. Use with caution: this is an abridged and bowdlerised text.
William Wycherley, The Country Wife
William Wycherley, The Gentleman Dancing-Master (archived 4 December 2004)
Category:British drama
Category:Comedy
Category:Literature of England
Category:The Restoration
Green fluorescent protein
https://en.wikipedia.org/wiki/Green_fluorescent_protein
The green fluorescent protein (GFP) is a protein that exhibits green fluorescence when exposed to light in the blue to ultraviolet range. The label GFP traditionally refers to the protein first isolated from the jellyfish Aequorea victoria and is sometimes called avGFP. However, GFPs have been found in other organisms including corals, sea anemones, zoanthids, copepods and lancelets.
The GFP from A. victoria has a major excitation peak at a wavelength of 395 nm and a minor one at 475 nm. Its emission peak is at 509 nm, which is in the lower green portion of the visible spectrum. The fluorescence quantum yield (QY) of GFP is 0.79. The GFP from the sea pansy (Renilla reniformis) has a single major excitation peak at 498 nm. GFP makes for an excellent tool in many forms of biology due to its ability to form an internal chromophore without requiring any accessory cofactors, gene products, or enzymes / substrates other than molecular oxygen.
In cell and molecular biology, the GFP gene is frequently used as a reporter of expression. It has been used in modified forms to make biosensors, and many animals have been created that express GFP, which demonstrates a proof of concept that a gene can be expressed throughout a given organism, in selected organs, or in cells of interest. GFP can be introduced into animals or other species through transgenic techniques, and maintained in their genome and that of their offspring. GFP has been expressed in many species, including bacteria, yeasts, fungi, fish and mammals, including in human cells. Scientists Roger Y. Tsien, Osamu Shimomura, and Martin Chalfie were awarded the 2008 Nobel Prize in Chemistry on 10 October 2008 for their discovery and development of the green fluorescent protein.
Most commercially available genes for GFP and similar fluorescent proteins are around 730 base-pairs long. The natural protein has 238 amino acids. Its molecular mass is 27 kDa. Therefore, fusing the GFP gene to the gene of a protein of interest can significantly increase the protein's size and molecular mass, and can impair the protein's natural function or change its location or trajectory of transport within the cell.
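As a rough consistency check of the figures above, the following Python sketch uses two textbook approximations that are not taken from this article (3 bp per codon plus a stop codon, and an average residue mass of about 110 Da):

```python
# Rough consistency check of the GFP figures quoted above.
# Assumptions (not from the article): 3 bp per codon plus one stop codon,
# and an average amino-acid residue mass of roughly 110 Da.

N_RESIDUES = 238           # amino acids in the natural protein
BP_PER_CODON = 3
AVG_RESIDUE_MASS_DA = 110  # approximate average residue mass, in daltons

coding_length_bp = (N_RESIDUES + 1) * BP_PER_CODON  # +1 for the stop codon
approx_mass_kda = N_RESIDUES * AVG_RESIDUE_MASS_DA / 1000

print(f"Minimal coding sequence: ~{coding_length_bp} bp "
      f"(commercial constructs are ~730 bp)")
print(f"Approximate molecular mass: ~{approx_mass_kda:.0f} kDa (reported: 27 kDa)")
```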
Background
Wild-type GFP (wtGFP)
In the 1960s and 1970s, GFP, along with the separate luminescent protein aequorin (an enzyme that catalyzes the breakdown of luciferin, releasing light), was first purified from the jellyfish Aequorea victoria and its properties studied by Osamu Shimomura. In A. victoria, GFP fluorescence occurs when aequorin interacts with Ca2+ ions, inducing a blue glow. Some of this luminescent energy is transferred to the GFP, shifting the overall color towards green. However, its utility as a tool for molecular biologists did not begin to be realized until 1992 when Douglas Prasher reported the cloning and nucleotide sequence of wtGFP in Gene. The funding for this project had run out, so Prasher sent cDNA samples to several labs. The lab of Martin Chalfie expressed the coding sequence of wtGFP, with the first few amino acids deleted, in heterologous cells of E. coli and C. elegans, publishing the results in Science in 1994. Frederick Tsuji's lab independently reported the expression of the recombinant protein one month later. Remarkably, the GFP molecule folded and was fluorescent at room temperature, without the need for exogenous cofactors specific to the jellyfish. Although this near-wtGFP was fluorescent, it had several drawbacks, including dual peaked excitation spectra, pH sensitivity, chloride sensitivity, poor fluorescence quantum yield, poor photostability and poor folding at 37 °C.
The first reported crystal structure of a GFP was that of the S65T mutant by the Remington group in Science in 1996. One month later, the Phillips group independently reported the wild-type GFP structure in Nature Biotechnology. These crystal structures provided vital background on chromophore formation and neighboring residue interactions. Researchers have modified these residues by directed and random mutagenesis to produce the wide variety of GFP derivatives in use today. Further research into GFP has shown that it is resistant to detergents, proteases, guanidinium chloride (GdmCl) treatments, and drastic temperature changes.
GFP derivatives
Due to the potential for widespread usage and the evolving needs of researchers, many different mutants of GFP have been engineered. The first major improvement was a single point mutation (S65T) reported in 1995 in Nature by Roger Tsien. This mutation dramatically improved the spectral characteristics of GFP, resulting in increased fluorescence, photostability, and a shift of the major excitation peak to 488 nm, with the peak emission kept at 509 nm. This matched the spectral characteristics of commonly available FITC filter sets, increasing the practicality of use by the general researcher. A 37 °C folding efficiency (F64L) point mutant to this scaffold, yielding enhanced GFP (EGFP), was discovered in 1995 by the laboratories of Thastrup and Falkow. EGFP allowed the practical use of GFPs in mammalian cells. EGFP has an extinction coefficient (denoted ε) of 55,000 M−1cm−1. The fluorescence quantum yield (QY) of EGFP is 0.60. The relative brightness, expressed as ε•QY, is 33,000 M−1cm−1.
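The relative brightness quoted above is simply the product of the two preceding figures; a minimal sketch of the calculation:

```python
# Relative molecular brightness of EGFP, computed as the product of the
# extinction coefficient and the fluorescence quantum yield quoted above.

extinction_coeff = 55_000  # M^-1 cm^-1
quantum_yield = 0.60       # dimensionless

relative_brightness = extinction_coeff * quantum_yield
print(f"EGFP relative brightness: {relative_brightness:,.0f} M^-1 cm^-1")
# -> 33,000 M^-1 cm^-1, matching the value given in the text
```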
Superfolder GFP (sfGFP), a series of mutations that allow GFP to rapidly fold and mature even when fused to poorly folding peptides, was reported in 2006.
Many other mutations have been made, including color mutants; in particular, blue fluorescent protein (EBFP, EBFP2, Azurite, mKalama1), cyan fluorescent protein (ECFP, Cerulean, CyPet, mTurquoise2), and yellow fluorescent protein derivatives (YFP, Citrine, Venus, YPet). BFP derivatives (except mKalama1) contain the Y66H substitution. They exhibit a broad absorption band in the ultraviolet centered close to 380 nanometers and an emission maximum at 448 nanometers. A green fluorescent protein mutant (BFPms1) that preferentially binds Zn(II) and Cu(II) has been developed. BFPms1 has several important mutations, including the BFP chromophore substitution (Y66H), Y145F for higher quantum yield, H148G for creating a hole into the beta-barrel, and several other mutations that increase solubility. Zn(II) binding increases fluorescence intensity, while Cu(II) binding quenches fluorescence and shifts the absorbance maximum from 379 to 444 nm. Therefore, it can be used as a Zn(II) biosensor.
More color variants are possible via chromophore binding. The critical mutation in cyan derivatives is the Y66W substitution, which causes the chromophore to form with an indole rather than phenol component. Several additional compensatory mutations in the surrounding barrel are required to restore brightness to this modified chromophore due to the increased bulk of the indole group. In ECFP and Cerulean, the N-terminal half of the seventh strand exhibits two conformations. These conformations both have a complex set of van der Waals interactions with the chromophore. The Y145A and H148D mutations in Cerulean stabilize these interactions and allow the chromophore to be more planar, better packed, and less prone to collisional quenching.
Additional site-directed random mutagenesis in combination with fluorescence lifetime based screening has further stabilized the seventh β-strand resulting in a bright variant, mTurquoise2, with a quantum yield (QY) of 0.93. The red-shifted wavelength of the YFP derivatives is accomplished by the T203Y mutation and is due to π-electron stacking interactions between the substituted tyrosine residue and the chromophore. These two classes of spectral variants are often employed for Förster resonance energy transfer (FRET) experiments. Genetically encoded FRET reporters sensitive to cell signaling molecules, such as calcium or glutamate, protein phosphorylation state, protein complementation, receptor dimerization, and other processes provide highly specific optical readouts of cell activity in real time.
Semirational mutagenesis of a number of residues led to pH-sensitive mutants known as pHluorins, and later super-ecliptic pHluorins. By exploiting the rapid change in pH upon synaptic vesicle fusion, pHluorins tagged to synaptobrevin have been used to visualize synaptic activity in neurons.
Redox sensitive GFP (roGFP) was engineered by introduction of cysteines into the beta barrel structure. The redox state of the cysteines determines the fluorescent properties of roGFP.
Nomenclature
The nomenclature of modified GFPs is often confusing due to overlapping mapping of several GFP versions onto a single name. For example, mGFP often refers to a GFP with an N-terminal palmitoylation that causes the GFP to bind to cell membranes. However, the same term is also used to refer to monomeric GFP, which is often achieved by the dimer interface breaking A206K mutation. Wild-type GFP has a weak dimerization tendency at concentrations above 5 mg/mL. mGFP also stands for "modified GFP", which has been optimized through amino acid exchange for stable expression in plant cells.
In nature
alt=Live lancelet (B. floridae) under a fluorescent microscope.|thumb|Live lancelet (B. floridae) under a fluorescent microscope.
The purpose of both the (primary) bioluminescence (from aequorin's action on luciferin) and the (secondary) fluorescence of GFP in jellyfish is unknown. GFP is co-expressed with aequorin in small granules around the rim of the jellyfish bell. The secondary excitation peak (480 nm) of GFP does absorb some of the blue emission of aequorin, giving the bioluminescence a more green hue. The serine 65 residue of the GFP chromophore is responsible for the dual-peaked excitation spectra of wild-type GFP. It is conserved in all three GFP isoforms originally cloned by Prasher. Nearly all mutations of this residue consolidate the excitation spectra to a single peak at either 395 nm or 480 nm. The precise mechanism of this sensitivity is complex, but, it seems, involves donation of a hydrogen from serine 65 to glutamate 222, which influences chromophore ionization. Although a single mutation can dramatically enhance the 480 nm excitation peak, making GFP a much more efficient partner of aequorin, A. victoria appears evolutionarily to prefer the less efficient, dual-peaked excitation spectrum. Roger Tsien has speculated that varying hydrostatic pressure with depth may affect serine 65's ability to donate a hydrogen to the chromophore and shift the ratio of the two excitation peaks. Thus, the jellyfish may change the color of its bioluminescence with depth. However, a collapse in the population of jellyfish in Friday Harbor, where GFP was originally discovered, has hampered further study of the role of GFP in the jellyfish's natural environment.
Most species of lancelet are known to produce GFP in various regions of their body. Unlike A. victoria, lancelets do not produce their own blue light, and the origin of their endogenous GFP is still unknown. Some speculate that it attracts plankton towards the mouth of the lancelet, serving as a passive hunting mechanism. It may also serve as a photoprotective agent in the larvae, preventing damage caused by high-intensity blue light by converting it into lower-intensity green light. However, these theories have not been tested.
GFP-like proteins have been found in multiple species of marine copepods, particularly from the Pontellidae and Aetideidae families. GFP isolated from Pontella mimocerami has shown high levels of brightness with a quantum yield of 0.92, making it nearly two-fold brighter than the commonly used EGFP isolated from A. victoria.
Other fluorescent proteins
alt=A rack of test tubes showing solutions glowing in different colors|thumb|Different proteins produce different fluorescent colors when exposed to ultraviolet light.
There are many GFP-like proteins that, despite being in the same protein family as GFP, are not directly derived from Aequorea victoria. These include dsRed, eqFP611, Dronpa, TagRFPs, KFP, EosFP/IrisFP, Dendra, and so on. Having been developed from proteins in different organisms, these proteins can sometimes display unanticipated approaches to chromophore formation. Some of these, such as KFP, are developed from naturally non- or weakly-fluorescent proteins to be greatly improved upon by mutagenesis. When GFP-like barrels of different spectral characteristics are used, the excitation spectrum of one chromophore can be used to power another chromophore (FRET), allowing for conversion between wavelengths of light.
FMN-binding fluorescent proteins (FbFPs) were developed in 2007 and are a class of small (11–16 kDa), oxygen-independent fluorescent proteins that are derived from blue-light receptors. They are intended especially for use under anaerobic or hypoxic conditions, since the formation and binding of the flavin chromophore does not require molecular oxygen, as is the case with the synthesis of the GFP chromophore.
Fluorescent proteins with other chromophores, such as UnaG with bilirubin, can display unique properties like red-shifted emission above 600 nm or photoconversion from a green-emitting state to a red-emitting state. They can have excitation and emission wavelengths far enough apart to achieve conversion between red and green light.
A new class of fluorescent protein was engineered from α-allophycocyanin, a phycobiliprotein found in the cyanobacterium Trichodesmium erythraeum, and was named small ultra red fluorescent protein (smURFP) in 2016. smURFP autocatalytically incorporates the chromophore biliverdin without the need for an external protein known as a lyase. Jellyfish- and coral-derived GFP-like proteins require oxygen and produce a stoichiometric amount of hydrogen peroxide upon chromophore formation. smURFP does not require oxygen or produce hydrogen peroxide. smURFP has a large extinction coefficient (180,000 M−1 cm−1) and a modest quantum yield (0.20), which gives it a biophysical brightness comparable to eGFP and ~2-fold brighter than most red or far-red fluorescent proteins derived from coral. smURFP's spectral properties are similar to those of the organic dye Cy5.
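The comparison with eGFP can be checked with the same ε·QY product, using the smURFP values above and the EGFP values given earlier in this article:

```python
# Compare relative brightness (extinction coefficient x quantum yield)
# using the values quoted in this article.

proteins = {
    "EGFP":   {"ec": 55_000,  "qy": 0.60},   # values from the EGFP section
    "smURFP": {"ec": 180_000, "qy": 0.20},   # values quoted above
}

for name, props in proteins.items():
    brightness = props["ec"] * props["qy"]
    print(f"{name}: {brightness:,.0f} M^-1 cm^-1")
# EGFP:   33,000 M^-1 cm^-1
# smURFP: 36,000 M^-1 cm^-1  -> comparable, as stated in the text
```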
Reviews on new classes of fluorescent proteins and applications can be found in the cited reviews.
Structure
GFP has a beta barrel structure consisting of eleven β-strands with a pleated sheet arrangement, with an alpha helix containing the covalently bonded chromophore 4-(p-hydroxybenzylidene)imidazolidin-5-one (HBI) running through the center. Five shorter alpha helices form caps on the ends of the structure. The beta barrel structure is a nearly perfect cylinder, 42Å long and 24Å in diameter (some studies have reported a diameter of 30Å), creating what is referred to as a "β-can" formation, which is unique to the GFP-like family. HBI, the spontaneously modified form of the tripeptide Ser65–Tyr66–Gly67, is nonfluorescent in the absence of the properly folded GFP scaffold and exists mainly in the un-ionized phenol form in wtGFP. Inward-facing sidechains of the barrel induce specific cyclization reactions in Ser65–Tyr66–Gly67 that induce ionization of HBI to the phenolate form and chromophore formation. This process of post-translational modification is referred to as maturation. The hydrogen-bonding network and electron-stacking interactions with these sidechains influence the color, intensity and photostability of GFP and its numerous derivatives. The tightly packed nature of the barrel excludes solvent molecules, protecting the chromophore fluorescence from quenching by water. In addition to the auto-cyclization of the Ser65–Tyr66–Gly67, a 1,2-dehydrogenation reaction occurs at the Tyr66 residue. Besides the three residues that form the chromophore, residues such as Gln94, Arg96, His148, Thr203, and Glu222 all act as stabilizers. The residues Gln94, Arg96, and His148 stabilize the chromophore by delocalizing its charge. Arg96 is the most important stabilizing residue because it prompts the structural realignments necessary for the HBI ring to form. Any mutation to the Arg96 residue would decrease the rate of chromophore development because the proper electrostatic and steric interactions would be lost. Tyr66 is the recipient of hydrogen bonds and does not ionize, which produces favorable electrostatics.
Blue fluorescent protein (BFP) is the blue variant of green fluorescent protein (GFP). BFP has a very similar structure to GFP. In the BFP structure, two substitution mutations in the amino acid sequence change its fluorescence from green to blue. The first mutation occurs inside the chromophore of GFP at position 66, which changes a tyrosine to a histidine. The other mutation in BFP is on the tyrosine at position 145, which mutates to phenylalanine. The autocatalytic cyclization and oxidation of the serine, tyrosine, and glycine form the GFP chromophore. These three residues at positions 65–67 make up the green fluorescent chromophore. When the tyrosine in the chromophore is substituted by a histidine, it changes the folding structure of the protein and the emission spectra. The Y145F mutation is also added to increase the stability of the protein as well as intensify the fluorescence. These mutations are what change GFP to BFP.
Autocatalytic formation of the chromophore in wtGFP
Mechanistically, the process involves base-mediated cyclization followed by dehydration and oxidation. The reaction of 7a to 8 involves the formation of an enamine from the imine, while in the reaction of 7b to 9 a proton is abstracted. In the accompanying reaction scheme, the formed HBI fluorophore is highlighted in green.
The reactions are catalyzed by residues Glu222 and Arg96. An analogous mechanism is also possible with threonine in place of Ser65.
Applications
Reporter assays
Green fluorescent protein may be used as a reporter gene.
For example, GFP can be used as a reporter for environmental toxicity levels. It has been shown to be an effective way to measure the toxicity of various chemicals, including ethanol, p-formaldehyde, phenol, triclosan, and paraben. GFP is well suited as a reporter protein because it has no effect on the host when introduced into the host's cellular environment, and no external visualization stain, ATP, or cofactors are needed. To gauge the effect of pollutants on the host cell, both the fluorescence and the cell density of the host culture are measured. In the study by Song, Kim, & Seo (2016), both fluorescence and cell density decreased as pollutant levels increased, indicating reduced cellular activity. More research into this specific application is needed to determine the mechanism by which GFP acts as a pollutant marker. Similar results have been observed in zebrafish: zebrafish injected with GFP were approximately twenty times more sensitive in detecting cellular stresses than zebrafish that were not injected with GFP.
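As an illustration of how such a reporter readout might be analysed, the sketch below normalises fluorescence by cell density at each pollutant concentration; all numbers are invented for the example and are not data from the cited study:

```python
# Illustrative analysis of a GFP-based toxicity reporter assay.
# Both fluorescence and cell density fall as pollutant concentration rises;
# dividing one by the other gives a per-cell expression readout.
# All values below are made up for illustration only.

measurements = [
    # (pollutant concentration, fluorescence in a.u., optical density)
    (0.0, 12000.0, 0.80),
    (0.5,  9000.0, 0.65),
    (1.0,  5500.0, 0.45),
    (2.0,  2000.0, 0.20),
]

baseline_per_cell = measurements[0][1] / measurements[0][2]

for conc, fluorescence, density in measurements:
    per_cell = fluorescence / density
    relative = per_cell / baseline_per_cell
    print(f"{conc:.1f} units: {relative:.2f} of untreated per-cell fluorescence")
```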
Advantages
The biggest advantage of GFP is that it can be heritable, depending on how it was introduced, allowing for continued study of the cells and tissues it is expressed in. Visualizing GFP is noninvasive, requiring only illumination with blue light. GFP alone does not interfere with biological processes, but when it is fused to proteins of interest, careful design of linkers is required to maintain the function of the protein of interest. Moreover, if a monomeric variant is used, it is able to diffuse readily throughout cells.
Fluorescence microscopy
The availability of GFP and its derivatives has thoroughly redefined fluorescence microscopy and the way it is used in cell biology and other biological disciplines. While most small fluorescent molecules such as FITC (fluorescein isothiocyanate) are strongly phototoxic when used in live cells, fluorescent proteins such as GFP are usually much less harmful when illuminated in living cells. This has triggered the development of highly automated live-cell fluorescence microscopy systems, which can be used to observe cells over time expressing one or more proteins tagged with fluorescent proteins.
There are many techniques to utilize GFP in a live cell imaging experiment. The most direct way of utilizing GFP is to attach it directly to a protein of interest. For example, GFP can be included in a plasmid expressing other genes to indicate a successful transfection of a gene of interest. Another method is to use a GFP that contains a mutation where the fluorescence will change from green to yellow over time, which is referred to as a fluorescent timer. With the fluorescent timer, researchers can study the state of protein production such as recently activated, continuously activated, or recently deactivated based on the color reported by the fluorescent protein. In yet another example, scientists have modified GFP to become active only after exposure to irradiation, giving researchers a tool to selectively activate certain portions of a cell and observe where proteins tagged with the GFP move from the starting location. These are only a few examples from a burgeoning field of fluorescence microscopy, and more complete reviews of biosensors utilizing GFP and other fluorescent proteins can be found in the literature.
For example, GFP had been widely used in labelling the spermatozoa of various organisms for identification purposes as in Drosophila melanogaster, where expression of GFP can be used as a marker for a particular characteristic. GFP can also be expressed in different structures enabling morphological distinction. In such cases, the gene for the production of GFP is incorporated into the genome of the organism in the region of the DNA that codes for the target proteins and that is controlled by the same regulatory sequence; that is, the gene's regulatory sequence now controls the production of GFP, in addition to the tagged protein(s). In cells where the gene is expressed, and the tagged proteins are produced, GFP is produced at the same time. Thus, only those cells in which the tagged gene is expressed, or the target proteins are produced, will fluoresce when observed under fluorescence microscopy. Analysis of such time lapse movies has redefined the understanding of many biological processes including protein folding, protein transport, and RNA dynamics, which in the past had been studied using fixed (i.e., dead) material. Obtained data are also used to calibrate mathematical models of intracellular systems and to estimate rates of gene expression. Similarly, GFP can be used as an indicator of protein expression in heterologous systems. In this scenario, fusion proteins containing GFP are introduced indirectly, using RNA of the construct, or directly, with the tagged protein itself. This method is useful for studying structural and functional characteristics of the tagged protein on a macromolecular or single-molecule scale with fluorescence microscopy.
The Vertico SMI microscope using the SPDM Phymod technology uses the so-called "reversible photobleaching" effect of fluorescent dyes like GFP and its derivatives to localize them as single molecules in an optical resolution of 10 nm. This can also be performed as a co-localization of two GFP derivatives (2CLM).
Another powerful use of GFP is to express the protein in small sets of specific cells. This allows researchers to optically detect specific types of cells in vitro (in a dish), or even in vivo (in the living organism). GFP is considered to be a reliable reporter of gene expression in eukaryotic cells when the fluorescence is measured by flow cytometry. Genetically combining several spectral variants of GFP is a useful trick for the analysis of brain circuitry (Brainbow). Other interesting uses of fluorescent proteins in the literature include using FPs as sensors of neuron membrane potential, tracking of AMPA receptors on cell membranes, viral entry and the infection of individual influenza viruses and lentiviral viruses, etc.
It has also been found that new lines of transgenic GFP rats can be relevant for gene therapy as well as regenerative medicine. By using "high-expresser" GFP, transgenic rats display high expression in most tissues, and many cells that have not been characterized or have been only poorly characterized in previous GFP-transgenic rats.
GFP has been shown to be useful in cryobiology as a viability assay. Its correlation with viability as measured by trypan blue assays was 0.97. Another application is the use of GFP co-transfection as an internal control for transfection efficiency in mammalian cells.
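For context, an agreement figure like the one above corresponds to a Pearson correlation between the two viability measures; a toy sketch with invented paired measurements (the data are purely hypothetical, not from the cited work):

```python
# Toy illustration: correlating a GFP-based viability readout with
# trypan blue viability across samples.  The paired values are invented
# solely for the example.  Requires Python 3.10+ for statistics.correlation.
from statistics import correlation

gfp_viability    = [0.95, 0.88, 0.72, 0.60, 0.41, 0.25]
trypan_viability = [0.97, 0.85, 0.75, 0.57, 0.44, 0.22]

r = correlation(gfp_viability, trypan_viability)
print(f"Pearson r = {r:.2f}")  # a value near 1 indicates close agreement
```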
A novel possible use of GFP includes using it as a sensitive monitor of intracellular processes via an eGFP laser system made out of a human embryonic kidney cell line. The first engineered living laser was made by placing an eGFP-expressing cell inside a reflective optical cavity and hitting it with pulses of blue light. At a certain pulse threshold, the eGFP's optical output becomes brighter and completely uniform in color, a pure green with a wavelength of 516 nm. Before being emitted as laser light, the light bounces back and forth within the resonator cavity and passes the cell numerous times. By studying the changes in optical activity, researchers may better understand cellular processes.
GFP is used widely in cancer research to label and track cancer cells. GFP-labelled cancer cells have been used to model metastasis, the process by which cancer cells spread to distant organs.
Split GFP
GFP can be used to analyse the colocalization of proteins. This is achieved by "splitting" the protein into two fragments which are able to self-assemble, and then fusing each of these to the two proteins of interest. Alone, these incomplete GFP fragments are unable to fluoresce. However, if the two proteins of interest colocalize, then the two GFP fragments assemble together to form a GFP-like structure which is able to fluoresce. Therefore, by measuring the level of fluorescence it is possible to determine whether the two proteins of interest colocalize.
Macro-photography
Macro-scale biological processes, such as the spread of virus infections, can be followed using GFP labeling. In the past, mutagenic ultraviolet (UV) light was used to illuminate living organisms to detect and photograph GFP expression. Recently, a technique using non-mutagenic LED lights has been developed for macro-photography. The technique uses an epifluorescence camera attachment based on the same principle used in the construction of epifluorescence microscopes.
Transgenic pets
Alba, a green-fluorescent rabbit, was created by a French laboratory commissioned by Eduardo Kac using GFP for purposes of art and social commentary. The US company Yorktown Technologies markets to aquarium shops green fluorescent zebrafish (GloFish) that were initially developed to detect pollution in waterways. NeonPets, a US-based company, has marketed green fluorescent mice to the pet industry as NeonMice. Green fluorescent pigs, known as Noels, were bred by a group of researchers led by Wu Shinn-Chih at the Department of Animal Science and Technology at National Taiwan University. A Japanese-American team created green-fluorescent cats as proof of concept to use them potentially as model organisms for diseases, particularly HIV. In 2009 a South Korean team from Seoul National University bred the first transgenic beagles with fibroblast cells from sea anemones. The dogs give off a red fluorescent light, and they are meant to allow scientists to study the genes that cause human diseases like narcolepsy and blindness.
Art
Julian Voss-Andreae, a German-born artist specializing in "protein sculptures", created sculptures based on the structure of GFP, including the tall "Green Fluorescent Protein" (2004) and the tall "Steel Jellyfish" (2006). The latter sculpture is located at the place of GFP's discovery by Shimomura in 1962, the University of Washington's Friday Harbor Laboratories.
See also
Protein tag
pGLO
Yellow fluorescent protein
Genetically encoded voltage indicator
References
Further reading
Popular science book describing history and discovery of GFP
External links
A comprehensive article on fluorescent proteins at Scholarpedia
Brief summary of landmark GFP papers
Interactive Java applet demonstrating the chemistry behind the formation of the GFP chromophore
Video of 2008 Nobel Prize lecture of Roger Tsien on fluorescent proteins
Excitation and emission spectra for various fluorescent proteins
Green Fluorescent Protein Chem Soc Rev themed issue dedicated to the 2008 Nobel Prize winners in Chemistry, Professors Osamu Shimomura, Martin Chalfie and Roger Y. Tsien
Molecule of the Month, June 2003: an illustrated overview of GFP by David Goodsell.
Molecule of the Month, June 2014: an illustrated overview of GFP-like variants by David Goodsell.
Green Fluorescent Protein on FPbase, a fluorescent protein database
Category:Protein methods
Category:Recombinant proteins
Category:Cell imaging
Category:Protein imaging
*
Category:Bioluminescence
Category:Cnidarian proteins
Secession
https://en.wikipedia.org/wiki/Secession
Secession (from ) is a term and concept of the formal withdrawal of a group from a political entity.
In international law, secession is understood as a process in which an integral part of a state's territory unilaterally withdraws without the consent of the original state.
The process begins once a group proclaims an act of secession (such as a declaration of independence). A secession attempt might be violent or peaceful, but the goal is the creation of a new state or entity independent of the group or territory from which it seceded. Threats of secession can be a strategy for achieving more limited goals.Allen Buchanan, "Secession", Stanford Encyclopedia of Philosophy, 2007. There is some academic debate about this definition, and in particular how it relates to separatism.
Secession theory
There is no consensus on the definition of political secession despite many political theories on the subject.
According to the 2017 book Secession and Security, by political scientist Ahsan Butt, states respond violently to secessionist movements if the potential state poses a greater threat than the would-be secessionist movement. States perceive a future war with a potential new state as likely if the ethnic group driving the secessionist struggle has deep identity division with the central state, and if the regional neighborhood is violent and unstable.
Explanations for the 20th century increase in secessionism
According to political scientist Bridget L. Coggins, the academic literature contains four potential explanations for the drastic increase in secessions during the 20th century:
Ethnonational mobilization, where ethnic minorities have been increasingly mobilized to pursue states of their own.
Institutional empowerment, where the growing inability of empires and ethnic federations to maintain colonies and member states increases the likelihood of success.
Relative strength, where increasingly powerful secessionist movements are more likely to achieve statehood.
Negotiated consent, where home states and the international community increasingly consent to secessionist demands.
Other scholars have linked secession to resource discoveries and extraction. David B. Carter, H. E. Goemans, and Ryan Griffiths find that border changes among states tend to conform to the borders of previous administrative units.
Several scholars argue that changes in the international system have made it easier for small states to survive and prosper. Tanisha Fazal and Ryan Griffiths link increased numbers of secessions to an international system that is more favorable for new states. For example, new states can obtain assistance from international organizations such as the International Monetary Fund, World Bank, and the United Nations. Alberto Alesina and Enrico Spolaore argue that greater levels of free trade and peace have reduced the benefits of being part of a larger state, thus motivating nations within larger states to seek secession.
Woodrow Wilson's proclamations on self-determination in 1918 created a surge in secessionist demands.
Philosophy of secession
The political philosophy of the rights and moral justification for secession began to develop as recently as the 1980s. American philosopher Allen Buchanan offered the first systematic account of the subject in the 1990s and contributed to the normative classification of the literature on secession. In his 1991 book Secession: The Morality of Political Divorce From Fort Sumter to Lithuania and Quebec, Buchanan outlined limited rights to secession under certain circumstances, mostly related to oppression by people of other ethnic or racial groups, and especially those previously conquered by other people.Allen Buchanan, Secession: The Morality of Political Divorce From Fort Sumter to Lithuania and Quebec, West View Press, 1991. In his collection of essays from secession scholars, Secession, State, and Liberty, professor David Gordon challenges Buchanan, making a case that the moral status of the seceding state is unrelated to the issue of secession itself.
Justifications for secession
Some theories of secession emphasize a general right of secession for any reason ("Choice Theory"), while others emphasize that secession should be considered only to rectify grave injustices ("Just Cause Theory") (Allen Buchanan, "How Can We Construct a Political Theory of Secession?", paper presented October 5, 2006 to the International Studies Association). Some theories do both. A list of justifications supporting the right to secede may be drawn from the work of Allen Buchanan, Robert McGee, Anthony Birch ("Another Liberal Theory of Secession", Political Studies 32, 1984, 596–602), Jane Jacobs (Cities and the Wealth of Nations, Vintage, 1985), Frances Kendall and Leon Louw (After Apartheid: The Solution for South Africa, Institute for Contemporary Studies, 1987, one of several popular books they wrote about canton-based constitutional alternatives that include an explicit right to secession), Leopold Kohr (The Breakdown of Nations, Routledge & K. Paul, 1957), Kirkpatrick Sale (Human Scale, Coward, McCann & Geoghegan, 1980), Donald W. Livingston, and various authors in David Gordon's Secession, State and Liberty. It includes:
United States President James Buchanan, Fourth Annual Message to Congress on the State of the Union December 3, 1860: "The fact is that our Union rests upon public opinion, and can never be cemented by the blood of its citizens shed in civil war. If it cannot live in the affections of the people, it must one day perish. Congress possesses many means of preserving it by conciliation, but the sword was not placed in their hand to preserve it by force."
Former President Thomas Jefferson, in a letter to William H. Crawford, Secretary of War under President James Madison, on June 20, 1816: "In your letter to Fisk, you have fairly stated the alternatives between which we are to choose: 1, licentious commerce and gambling speculations for a few, with eternal war for the many; or, 2, restricted commerce, peace, and steady occupations for all. If any State in the Union will declare that it prefers separation with the first alternative, to a continuance in union without it, I have no hesitation in saying, 'let us separate.' I would rather the States should withdraw, which are for unlimited commerce and war, and confederate with those alone which are for peace and agriculture."
Economic enfranchisement of an economically oppressed class that is regionally concentrated within the scope of a larger national territory.
The right to liberty, freedom of association and private property
Recognition of the will of the majority to secede, in keeping with consent as an important democratic principle
Increased ease for states to join with others in an experimental union
Dissolution of such a union when goals for which it was constituted are not achieved
Self-defense when larger group presents lethal threat to minority or the government cannot adequately defend an area
Self-determination of peoples
Preservation of culture, language, etc. from assimilation or destruction by a larger or more powerful group
Furtherance of diversity by allowing diverse cultures to keep their identity
Rectification of past injustices, especially past conquest by a larger power
Escape from "discriminatory redistribution", i.e. tax schemes, regulatory policies, economic programs, and similar policies that distribute resources away to another area, especially in an undemocratic fashion
Enhanced efficiency when the state or empire becomes too large to administer efficiently
Preservation of "liberal purity" (or "conservative purity") by allowing less (or more) liberal regions to secede
Provision of superior constitutional systems which allow flexibility of secession
Minimizing the size of political entities and the human scale through right to secession
Political scientist Aleksandar Pavkovic describes five justifications for a general right of secession within liberal political theory (Aleksandar Pavkovic, "Secession, Majority Rule and Equal Rights: A Few Questions", Macquarie University Law Journal, 2003):
Anarcho-Capitalism: individual liberty to form political associations and private property rights together justify right to secede and to create a "viable political order" with like-minded individuals.
Democratic Secessionism: the right of secession, as a variant of the right of self-determination, is vested in a "territorial community" which wishes to secede from "their existing political community"; the group wishing to secede then proceeds to delimit "its" territory by the majority.
Communitarian Secessionism: any group with a particular "participation-enhancing" identity, concentrated in a particular territory, which desires to improve its members' political participation has a prima facie right to secede.
Cultural Secessionism: any group which was previously in a minority has a right to protect and develop its own culture and distinct national identity through seceding into an independent state.
The Secessionism of Threatened Cultures: if a minority culture is threatened within a state that has a majority culture, the minority needs a right to form a state of its own which would protect its culture.
Arguments against secession
Allen Buchanan, who supports secession under limited circumstances, lists arguments that might be used against secession (Secession: The Morality of Political Divorce From Fort Sumter to Lithuania and Quebec, Chapter 3, pp. 87–123):
"Protecting legitimate expectations" of those who now occupy territory claimed by secessionists, even in cases where that land was stolen
"Self defense" if losing part of the state would make it difficult to defend the rest of it
"Protecting majority rule" and the principle that minorities must abide by them
"Minimization of strategic bargaining" by making it difficult to secede, such as by imposing an exit tax
"Soft paternalism" because secession will be bad for secessionists or others
"Threat of anarchy" because smaller and smaller entities may choose to secede until there is chaos, although this is not the true meaning of the political and philosophical concept
"Preventing wrongful taking" such as the state's previous investment in infrastructure
"Distributive justice" arguments posit that wealthier areas cannot secede from poorer ones
Types of secession
Secession theorists have described a number of ways in which a political entity (city, county, canton, state) can secede from the larger or original state (Steven Yates, "When Is Political Divorce Justified", in David Gordon, 1998):
Secession from federation or confederation (political entities with substantial reserved powers which have agreed to join) versus secession from a unitary state (a state governed as a single unit with few powers reserved to sub-units)
Colonial wars of independence from an imperial state, although this is decolonisation rather than secession.
Recursive secession, such as India decolonising from the British Empire, then Pakistan seceding from India, or Georgia seceding from the Soviet Union, then South Ossetia seceding from Georgia.
National secession (seceding entirely from the national state) versus local secession (seceding from one entity of the national state into another entity of the same state)
Central or enclave secession (seceding entity is completely surrounded by the original state) versus peripheral secession (along a border of the original state)
Secession by contiguous units versus secession by non-contiguous units (exclaves)
Separation or partition (although an entity secedes, the rest of the state retains its structure) versus dissolution (all political entities dissolve their ties and create several new states)
Irredentism where secession is sought in order to annex the territory to another state because of common ethnicity or prior historical links
Minority secession (a minority of the population or territory secedes) versus majority secession (a majority of the population or territory secedes)
Secession of better-off regions versus secession of worse-off regions
The threat of secession is sometimes used as a strategy to gain greater autonomy within the original state
Rights to secession
Most sovereign states do not recognize the right to self-determination through secession in their constitutions. Many expressly forbid it. However, there are several existing models of self-determination through greater autonomy and through secession.Andrei Kreptul, The Constitutional Right of Secession in Political Theory and History, Journal of Libertarian Studies, Ludwig von Mises Institute, Volume 17, no.4 (Fall 2003), pp. 39–100.
In liberal constitutional democracies the principle of majority rule has dictated whether a minority can secede. In the United States, Abraham Lincoln acknowledged that secession might be possible through amending the United States Constitution. The Supreme Court in Texas v. White held secession could occur "through revolution, or through consent of the States" (Aleksandar Pavković and Peter Radan, Creating New States: Theory and Practice of Secession, p. 222, Ashgate Publishing, Ltd., 2007; Texas v. White, 74 U.S. 700 (1868), at the Cornell University Law School Supreme Court collection). The British Parliament in 1933 held that Western Australia could secede from the Commonwealth of Australia only upon vote of a majority of the country as a whole; the previous two-thirds majority vote for secession via referendum in Western Australia was insufficient.
The Chinese Communist Party followed the Soviet Union in including the right of secession in its 1931 constitution in order to entice ethnic nationalities and Tibet into joining. However, the Party eliminated the right to secession in later years, and had an anti-secession clause written into the constitution both before and after the founding of the People's Republic of China. The 1947 Constitution of the Union of Burma contained an express state right to secede from the union under a number of procedural conditions. It was eliminated in the 1974 constitution of the Socialist Republic of the Union of Burma (officially the "Union of Myanmar"). Burma still allows "local autonomy under central leadership".
As of 1996, the constitutions of Austria, Ethiopia, France, and Saint Kitts and Nevis have express or implied rights to secession. Switzerland allows for the secession from current and the creation of new cantons. In the case of proposed Quebec separation from Canada, the Supreme Court of Canada in 1998 ruled that only both a clear majority of the province and a constitutional amendment confirmed by all participants in the Canadian federation could allow secession.
The European Union is not a sovereign state but an association of sovereign states formed by treaty; as such, leaving it, which is possible by simply denouncing the treaty, is not secession. Nonetheless, the 2003 draft of the European Union Constitution allowed for the voluntary withdrawal of member states from the union, although the representatives of the member state which wanted to leave could not participate in the withdrawal discussions of the European Council or of the Council of Ministers. There was much discussion about such self-determination by minorities (Xenophon Contiades, Sixth Scholarly Panel: Cultural Identity in the New Europe, 1st Global Conference on Federalism and the Union of European Democracies, March 2004) before the final document underwent the unsuccessful ratification process in 2005. In 2007, the Treaty of Lisbon introduced Article 50 of the Treaty on European Union, establishing a mechanism for withdrawal from the EU.
As a result of the successful constitutional referendum held in 2003, every municipality in the Principality of Liechtenstein has the right to secede from the Principality by a vote of a majority of the citizens residing in that municipality.
Indigenous peoples have a range of different forms of indigenous sovereignty and have the right of self-determination, but under the current understanding of international law they have a mere "remedial" right to secession in extreme cases of abuse of their rights, because independence and sovereign statehood are territorial and diplomatic claims rather than matters of self-determination and self-government, generally leaving rights to secession to the internal legislation of sovereign states.
Secession movements
National secessionist movements advocate that a population has the right to form its own nation-state. Movements that work towards political secession may describe themselves as being autonomy, separatist, independence, self-determination, partition, devolution, decentralization, sovereignty, self-governance or decolonization movements instead of, or in addition to, being secession movements.
Notable examples of secession, and secession attempts, include:
The United Provinces of the Netherlands breaking away from the Spanish Empire during the Eighty Years' War (1566-1648);
The Thirteen Colonies (the later United States) revolting from the British Empire during the American Revolutionary War (1775-83);
Hispanic America gaining independence from the Spanish Empire during Spanish American wars of independence;
Texas leaving Mexico, during the Texas Revolution (1835-36);
the Confederate States of America seceding from the Union, setting off the American Civil War;
Panama seceding from Colombia in 1903, during United States acquisition of the Panama Canal;
the Irish Republic leaving the United Kingdom;
Finland voting to leave Soviet Russia in 1917, setting off the Finnish Civil War;
Biafra leaving Nigeria (and returning, after losing the Nigerian Civil War);
the former Soviet republics leaving the Soviet Union in 1991, causing its dissolution;
the former republics leaving Yugoslavia during the 1990s, causing its dissolution.
Australia
During the 19th century, the single British colony in eastern mainland Australia, New South Wales (NSW), was progressively divided up by the British government as new settlements were formed and spread. Victoria (Vic) was formed in 1851 and Queensland (Qld) in 1859.
However, settlers agitated to divide the colonies further throughout the later part of the century, particularly in central Queensland (centered in Rockhampton) in the 1860s and 1890s, and in North Queensland (with Bowen as a potential colonial capital) in the 1870s. Other secession (or territorial separation) movements arose, advocating the secession of New England in northern central New South Wales, of Deniliquin in the Riverina district, also in NSW, and of Mount Gambier in the eastern part of South Australia.
Western Australia
Secession movements have surfaced several times in Western Australia (WA), where a 1933 referendum for secession from the Federation of Australia passed with a two-thirds majority. The referendum had to be ratified by the British Parliament, which declined to act on the grounds that doing so would contravene the Australian Constitution.
The Principality of Hutt River claimed to have seceded from Australia in 1970, although its status was not recognised by Australia or any other country.
Azerbaijan
The Karabakh movement, also known as the Artsakh movement, was a national liberation movement in Armenia and Nagorno-Karabakh between 1988 and 1991 that advocated for the reunification ("miatsum") of Nagorno-Karabakh – formally an autonomous enclave in Soviet Azerbaijan – with Soviet Armenia. The movement was motivated by fears of cultural and physical erasure under government policies from Azerbaijan. Throughout the Soviet period, Azerbaijani authorities implemented policies aimed at suppressing Armenian culture and diluting the Armenian majority in Nagorno-Karabakh through various means, including border manipulations (as one account describes: "The borders were to be drawn before 15 August by a mixed commission...but without the participation of either Yerevan or Moscow. All would be presided over by Karaiev. Under such circumstances, the Armenians could expect to be grossly disappointed. On the one hand...they excluded, on the west, the 'corridor' made up of Lachin, Kelbajar, and Kedabek, which had been carefully emptied of its Armenian population to separate Mountainous Karabagh from Armenian Zangezur. On the other hand, in the north, without any justification, they removed the districts of Shamkhor, Khanlar, Dashkesan and Shahumian... where the Armenian population was predominant (about 90 per cent)... From Shamkhor in the north to Shahumian in the south, Armenian villages in these districts have been systematically emptied...Mountainous Karabagh delimited in this way is only a portion of what had always been Armenian Karabagh, which itself is only a part of what was included in the ancient Armenian provinces of Artsakh and Utik...The spectre of 'Nakhichevanization' haunts...Mountainous Karabagh, which had 125,000 inhabitants in 1926 who were 89 percent Armenian. This region has become an 'enclave' since the 'cleansing' of the Hagaru Valley in order to separate Karabagh from Zangezur by a narrow strip emptied of Armenians...Azerbaijan still contained a large Armenian minority. Aside from the 'bastion' of the Autonomous Region of Mountainous Karabagh, Armenians were numerous in Baku and in the region north of the Autonomous Region, up to Shamkhor, where the Armenian villages had been deliberately left outside the frontiers drawn in 1923 and, thereby, subjected to direct Azerbaijani authority. From north to south, these areas had already largely been 'swept clean,' with the exception of the area of Shahumian, the northern gateway to the Autonomous Region. The Azerbaijani plan was clearly described in the declaration by the Karabagh Committee on 2 December 1988: 'Exploiting the anarchic situation, the Azerbaijani authorities are about to unleash a monstrous programme: to expel Armenians from their several millennia old homes in Gandzak and the areas north of Artsakh, in preparation for an invasion of Mountainous Karabagh.' Already about 120,000 Armenians have left Azerbaijan, and 50,000 have sought refuge in Armenia and the others in the North Caucasus and Central Asia."), encouraging the exodus of Armenians, and settling Azerbaijanis in the region. In the 1960s, 1970s, and 1980s, Armenians protested against Azerbaijan's cultural and economic marginalization. A referendum was held in 1988 to transfer the region to Soviet Armenia, citing self-determination laws in the Soviet constitution. In 1991, both Armenia and Nagorno-Karabakh declared independence from the Soviet Union.
The Karabakh movement was met with a series of pogroms against Armenians across Azerbaijan, and in November 1991, the Azerbaijani government passed a motion aimed at abolishing the autonomy of the Nagorno-Karabakh Autonomous Oblast (NKAO) and prohibiting the use of Armenian placenames in the region.
Austria
After being liberated by the Red Army and the U.S. Army, Austria seceded from Nazi Germany on April 27, 1945. This came after seven years under Nazi rule, which had begun with the annexation of Austria into Nazi Germany in March 1938, and took place only as Nazi Germany was being defeated by the Allies.
Bangladesh
The Banga Sena (Bangabhumi) is a separatist Hindu organisation that supports the creation of Bangabhumi, a separate homeland for Bengali Hindus, within the People's Republic of Bangladesh. The group is led by Kalidas Baidya.
The Shanti Bahini ("Peace Force") is the military wing of the Parbatya Chattagram Jana Sanghati Samiti (the United People's Party of the Chittagong Hill Tracts), which aims to create an indigenous, Buddhist-oriented Chakma state within southeastern Bangladesh.
Belgium and the Netherlands
On August 25, 1830, during the reign of William I, the nationalistic opera La muette de Portici was performed in Brussels. Soon after, the Belgian Revolt occurred, which resulted in the Belgian secession from the Kingdom of the Netherlands.
Brazil
In 1825, soon after the Empire of Brazil defeated the Cortes-Gerais and the Portuguese Empire in its war of independence, Platine nationalists in Cisplatina declared independence and joined the United Provinces, leading to a stalemated war between the two, as both were weakened, short of manpower and politically fragile. The peace treaty accepted Uruguay's independence, reaffirmed both nations' rule over their own territory and settled other important points such as free navigation on the Río de la Plata.
Three rather disorganized secessionist rebellions broke out in Grão-Pará, Bahia, and Maranhão, where the people were unhappy with the Empire (these provinces had been Portuguese bastions in the war of independence). The Malê Revolt, in Bahia, was an Islamic slave revolt. All three rebellions were bloodily crushed by the Empire of Brazil.
Pernambuco was one of the most nativist of all Brazilian regions. Over a series of five revolts (1645–1654, 1710, 1817, 1824, 1848), the province ousted the Dutch West India Company and tried to secede from the Portuguese and Brazilian Empires. In each later attempt, the rebels were crushed, the leaders shot and the province's territory divided. Nevertheless, they kept revolting until Pernambuco's territory was a small fraction of what it had been before.
In the Ragamuffin War, the Province of Rio Grande do Sul was undergoing a liberal-versus-conservative "cold" war, common at the time. After Emperor Pedro II of Brazil favoured the conservatives, the liberals took the provincial capital and declared an independent republic, fighting their way into the Province of Santa Catarina and declaring the Juliana Republic there. Eventually they were slowly forced back and made a reunification peace with the Empire. This is not usually considered a secessionist war, even though it could have resulted in an independent republic had the Empire been defeated. After the Empire agreed to aid the region's economy by taxing Argentine products (such as dried meat), the rebels reunited with the Empire and joined its military ranks.
In modern times, the South Region of Brazil has been the centre of a secessionist movement led since the 1990s by an organization called The South Is My Country. Reasons cited for the movement include taxation, the South being one of the wealthiest regions in the country; political disputes with the northernmost states of Brazil; the 2016 scandal surrounding the Workers' Party's involvement in a kickback scheme with the state-owned oil company Petrobras; and the impeachment of then-President Dilma Rousseff. Additionally, there is an ethnic divide: the South Region is predominantly of European descent, settled primarily by Germans, Italians, Portuguese and other European groups, whereas the rest of Brazil is a multicultural melting pot. In 2016 the South Region voted in an unofficial referendum called "Plebisul", in which 95% of voters supported secession and the creation of an independent South Region.
There is also a secession movement in the state of São Paulo, which seeks to make the state a country independent from the rest of Brazil.
Cameroon
In October 2017, Ambazonia declared its independence from Cameroon. Less than a month beforehand, tensions had escalated into open warfare between separatists and the Cameroon Armed Forces.
The conflict, known as the "Anglophone Crisis", is deeply rooted in the October 1, 1961 incomplete decolonization of the former British Southern Cameroons (UNGA Resolution 1608). On January 1, 1960, French Cameroon was granted independence from France as the Republic of Cameroon and was admitted into the United Nations. The more advanced democratic and self-ruling people of British Cameroon were instead limited to two choices. Through a UN plebiscite, they were directed to either join the federation of Nigeria or the independent Republic of Cameroon as a federation of two equal states. While the Northern Cameroons voted to join Nigeria, the Southern Cameroons voted to integrate into the Republic of Cameroon, but they did so without a formal Treaty of Union on record at the UN. In 1972, Cameroon used its majority population to abolish the federation and implement a system which resulted in the occupation of the former South Cameroons territory by French-speaking Cameroon administrators. In 1984, Cameroon heightened tensions by returning to its name at independence, "Republic of Cameroun", which did not include the territory of the former British Southern Cameroons or Ambazonia.
For more than fifty years, the English-speaking people of the former British Southern Cameroons made multiple attempts, both nationally and internationally, to get the Cameroon government to address these issues and possibly return to the federation agreed at independence. In 2016, after all these attempts had failed, Cameroon engaged in a military crackdown, including cutting off the internet in the English-speaking regions. In response, the people of Southern Cameroons declared on October 1, 2017, the restoration of the statehood of the former UN trust territory of Southern Cameroons, which they called the "Federal Republic of Ambazonia".
Canada
Throughout Canada's history, there has been tension between English-speaking and French-speaking Canadians. Under the Constitutional Act of 1791, the Province of Quebec (including parts of what are today Quebec, Ontario and Newfoundland and Labrador) was divided in two: Lower Canada (which retained French law and institutions and is now part of the provinces of Quebec and Newfoundland and Labrador) and Upper Canada (a new colony intended to accommodate the many new English-speaking settlers, including the United Empire Loyalists, and now part of Ontario). The intent was to provide each group with its own colony. In 1841, the two Canadas were merged into the Province of Canada. The union proved contentious, however, resulting in a legislative deadlock between English and French legislators. The difficulties of the union, among other factors, led in 1867 to the formation of the Canadian Confederation, a federal system that united the Province of Canada, Nova Scotia and New Brunswick (later joined by other British colonies in North America). The federal framework did not eliminate all tensions, however, leading to the Quebec sovereignty movement in the latter half of the 20th century.
Other occasional secessionist movements have included anti-Confederation movements in the 19th century Atlantic Canada (see Anti-Confederation Party), the North-West Rebellion of 1885, and various small separatist movements in Alberta particularly (see Alberta separatism) and Western Canada generally (see, for example, Western Canada Concept).
Central America
After the 1823 collapse of the First Mexican Empire, the former Captaincy-General of Guatemala was organized into a new Federal Republic of Central America. In 1838, Nicaragua seceded. The Federal Republic was formally dissolved in 1840, all but one of the states having seceded amidst general disorder.
China
The People's Republic of China government claims sovereignty over Taiwan and describes the political status of Taiwan as an issue of secession, despite having never governed the territory. The Republic of China (Taiwan) government administers Taiwan and outlying islands but lacks widespread official international recognition. The Anti-Secession Law, passed in 2005, formalized the long-standing policy of the People's Republic of China to use military means against Taiwan independence should peaceful means prove impossible.
The western regions of Xinjiang (East Turkistan) and Tibet are the focus of secessionist calls by the Tibetan independence movement and the East Turkestan independence movement. The East Turkistan Government in Exile does not view East Turkistan as a part of China but rather as an occupied country, so it does not view independence from China as "secession" but rather as "decolonization".
The Special Administrative Region of Hong Kong has a secessionist movement, the Hong Kong independence movement, which the Chinese Communist Party placed on its national security agenda in 2017.
Congo
In 1960, the State of Katanga declared independence from the Democratic Republic of the Congo. United Nations troops crushed it in Operation Grand Slam.
Cyprus
In 1974, Greek irredentists launched a coup d'état in Cyprus in an attempt to unite the island with Greece. Almost immediately, the Turkish Army invaded northern Cyprus to protect the interests of the ethnic Turkish minority, which in the following year formed the Turkish Federated State of Cyprus and in 1983 declared independence as the Turkish Republic of Northern Cyprus, recognized only by Turkey.
East Timor
The Democratic Republic of Timor-Leste (also known as East Timor) has been described as having "seceded" from Indonesia (Santosh C. Saha, Perspectives on Contemporary Ethnic Conflict, p. 63, Lexington Books, 2006; Paul D. Elliot, "The East Timor Dispute", The International and Comparative Law Quarterly, Vol. 27, No. 1, Jan. 1978; James J. Fox and Dionísio Babo Soares, Out of the Ashes: Destruction and Reconstruction of East Timor, p. 175, ANU E Press, 2003). After Portuguese sovereignty was terminated in 1975, East Timor was occupied by Indonesia. However, the United Nations and the International Court of Justice refused to recognize this incorporation. Therefore, the resulting civil war and the eventual 1999 East Timorese vote for complete separation are better described as an independence movement (Thomas D. Musgrave, Self-Determination and National Minorities, p. xiii, Oxford University Press, 2000).
Ethiopia
Following the May 1991 victory of Eritrean People's Liberation Front forces over the communist Derg regime in the Eritrean War of Independence, Eritrea (historically known as "Medri Bahri") gained de facto independence from Ethiopia. Following the United Nations-observed 1993 Eritrean independence referendum, Eritrea gained de jure independence.
European Union
Before the Treaty of Lisbon entered into force on 1 December 2009, no provision in the treaties or law of the European Union outlined the ability of a state to voluntarily withdraw from the EU. The European Constitution did propose such a provision and, after the failure to ratify the Treaty establishing a Constitution for Europe, the provision was included in the Lisbon Treaty.
The treaty introduced an exit clause for members who wish to withdraw from the Union. This formalised the procedure by stating that a member state may notify the European Council that it wishes to withdraw, upon which withdrawal negotiations begin; if no other agreement is reached, the treaty ceases to apply to the withdrawing state two years after such notification.
On June 23, 2016, the United Kingdom voted to leave the European Union in a referendum authorised by Parliament, and it formally left the European Union on January 31, 2020. This is informally known as Brexit.
Finland
Finland successfully and peacefully seceded from the newly formed and unstable Russian Soviet Federative Socialist Republic in 1917. The latter was led by Lenin, who had sought refuge in Finland during the Russian Revolution. Unsuccessful attempts at greater autonomy or peaceful secession had already been made under the preceding Russian Empire but had been denied by the Russian emperor. With the country still at war and under great pressure, however, Lenin allowed Finland to secede: its peripheral location made it difficult to defend and less strategically important than Russia's other territories, so he conceded sovereignty to the Finns rather than try to hold it.
France
France was one of the European great powers with a populous overseas empire. Like the others (the United Kingdom, Spain, Portugal, Italy, Belgium, the Netherlands, and formerly Germany and the Ottoman Empire), its populous territories abroad have all seceded, in most cases by being granted independence. These secessionist movements generally took place at similar stages on each continent. See the decolonization of the Ottoman Empire, the Americas, Asia and Africa.
As for metropolitan France itself, secessionist movements have few present representatives at the national level; see:
Alsace independence movement
Breton independence
Corsican nationalism
Occitan nationalism
Gran Colombia
After a decade of tumultuous federalism, Ecuador and Venezuela seceded from Gran Colombia in 1830, leaving the similarly tumultuous remainder (successively reorganized as the Republic of New Granada, the United States of Colombia and today's Republic of Colombia), which also lost Panama in 1903.
India
Pakistan seceded from the British Indian Empire in 1947, in what is known as the Partition.
Today, the Constitution of India does not allow Indian states to secede from the Union.
The Indian union territory of Jammu and Kashmir hosts paramilitary nationalists who advocate for a Muslim state in opposition to the Indian establishment. They have operated mostly in the Kashmir Valley since 1989, where the Indian Army patrols and maintains bases along the nearby international border. According to the Indian Research and Analysis Wing, they are supported by Pakistan, which has allegedly funded many terrorist and separatist outfits with the goal of destabilizing India, though Pakistan denies any direct involvement. The Kashmir insurgency reached its peak influence in the 1990s.
Other secessionist movements exist in Nagaland, Assam, Manipur, Punjab (the Khalistan movement), Mizoram, Tripura and Tamil Nadu. The violent Naxalite–Maoist insurgency, which operates in eastern rural India, is rarely considered secessionist, as its goal is to overthrow the government of India; the commanders of the Communist Party of India (Maoist) envision a communist republic made up of swathes of India.
Iran
Active secession movements in Iran include Iranian Azeri movements; the Assyrian independence movement; the Bakhtiari Lurs movement, dating to 1876; Kurdish movements in Iranian Kurdistan, notably the Kurdistan Democratic Party of Iran (KDPI); Arab nationalist movements in Khūzestān Province, such as the Al-Ahwaz Arab People's Democratic Popular Front and the Democratic Solidarity Party of Al-Ahwaz (see Politics of Khūzestān Province: Arab politics and separatism); and Baloch separatism, which seeks a free, separate Balochistan and is supported by the Balochistan People's Party (BPP).
Italy
The Movement for the Independence of Sicily (Movimento Indipendentista Siciliano, MIS) has its roots in the Sicilian independence movement of the late 1940s and was active for around 60 years. Today the MIS no longer exists, though many other parties have emerged. One is Nation Sicily (Sicilia Nazione), which still holds that Sicily, because of its distinct and ancient history, should be a sovereign country. Moreover, a common ideology shared by all the Sicilian independentist movements is to fight against Cosa Nostra and all the other Mafia organizations, which have a very deep influence over Sicily's public and private institutions. The Sicilian branch of the Five Star Movement, which polls show is Sicily's most popular party, has also publicly expressed the intention to start working for a possible secession from Italy if the central government does not collaborate in shifting the nation's administrative organization from a unitary country to a federal state.
In Southern Italy, several movements have expressed a will to secede from Italy. This recent ideology is called neo-Bourbonism, because the Kingdom of the Two Sicilies was under the control of the House of Bourbon. The Kingdom of the Two Sicilies was created in 1816 after the Congress of Vienna and comprised both Sicily and continental Southern Italy. The Kingdom came to an end in 1861, when it was annexed to the newborn Kingdom of Italy. However, the patriotic feeling shared among the southern Italian population is older, going back to 1130 and the Kingdom of Sicily, which was composed of both the island and southern Italy. According to the neo-Bourbon movements, the Italian regions that should secede are Sicily, Calabria, Basilicata, Apulia, Molise, Campania, Abruzzo, and Lazio's provinces of Rieti, Latina and Frosinone. The major movements and parties that follow this ideology are Unione Mediterranea, Mo! and Briganti.
Lega Nord has been seeking the independence of the region known to separatists as Padania, which includes lands along the Po Valley in northern Italy. Other organizations separately work for the independence of Venetia (Veneto) or for the secession or reunification of South Tyrol with Austria. The Lega Nord government of Lombardy has expressed a will to turn the region into a sovereign country. The island of Sardinia is also home to a notable nationalist movement.
Japan
The ethnic Ryukyuan people, of whom Okinawans are the largest group, historically had their own state, the Ryukyu Kingdom. Although some Okinawans have sought independence from Japan since the islands were annexed by Japan in 1879, and especially after 1972, when the islands were returned from U.S. rule to Japan, such activism has consistently been supported by only a single-digit percentage of Okinawans.
Malaysia
When racial and partisan strife erupted, Singapore was expelled from the Malaysian federation in 1965.
Mexico
Texas seceded from Mexico in 1836 (see Texas Revolution), after animosity between the Mexican government and the American settlers of the Coahuila y Tejas State. It was later annexed by the United States in 1845.
The Republic of the Rio Grande seceded from Mexico on January 17, 1840. It rejoined Mexico on November 6 of the same year.
After the federal system was abandoned by President Santa Anna, the Congress of Yucatán approved in 1840 a declaration of independence, establishing the Republic of Yucatán. The Republic rejoined Mexico in 1843.
Netherlands
The United Provinces of the Netherlands, commonly referred to historiographically as the Dutch Republic, was a federal republic formally established in 1581 by several Dutch provinces that had seceded from Spain.
New Zealand
Secession movements have surfaced several times in the South Island of New Zealand. A Premier of New Zealand, Sir Julius Vogel, was amongst the first people to make this call, which was voted on by the Parliament of New Zealand as early as 1865. The desire for South Island independence was one of the main factors in moving the capital of New Zealand from Auckland to Wellington in the same year.
The NZ South Island Party, with a pro-South agenda, fielded only five candidates (4.20% of electoral seats) in the 1999 general election and achieved only 0.14% (2,622 votes) of the general vote. The reality today is that although South Islanders have a strong identity rooted in their geographic region, secession does not command any real constituency; the party was unable to field any candidates in the 2008 election, as it had fewer than 500 paying members, the minimum required by the New Zealand Electoral Commission. The party is treated more as a "joke" party than as any real political force.
Nigeria
Between 1967 and 1970, the Eastern Region seceded from Nigeria and established the Republic of Biafra, which led to a war that ended with the region returning to Nigeria. In 1999, at the beginning of a new democratic regime, other secessionist movements emerged, including the Indigenous People of Biafra, led by Nnamdi Kanu, which campaigns for the restoration of the Republic of Biafra.
Norway and Sweden
Sweden, having left the Kalmar Union with Denmark–Norway in the 16th century, entered into a loose personal union with Norway in 1814. Following a constitutional crisis, on June 7, 1905, the Norwegian Storting declared that King Oscar II had failed to fulfil his constitutional duties and removed him as King of Norway. Because the union depended on the two countries sharing a king, it was thereby dissolved. After negotiations, Sweden recognized Norway's independence on October 26, 1905.
Pakistan
After the Awami League won the 1970 national elections, negotiations to form a new government floundered, resulting in the Bangladesh Liberation War, through which East Pakistan seceded and became Bangladesh. The Balochistan Liberation Army (also Baloch Liberation Army) (BLA) is a Baloch nationalist militant secessionist organization. Its stated goals include the establishment of an independent Balochistan free of Pakistani, Iranian and Afghan control. The name Baloch Liberation Army first became public in the summer of 2000, after the organization claimed credit for a series of bomb attacks in markets and the removal of railway lines.
Papua New Guinea
The island of Bougainville has made several efforts to secede from Papua New Guinea.
Somalia
Somaliland is an autonomous region ("No Winner Seen in Somalia's Battle With Chaos", New York Times, June 2, 2009) which is part of the Federal Republic of Somalia (The Transitional Federal Charter of the Somali Republic: "The Somali Republic shall have the following boundaries. (a) North; Gulf of Aden. (b) North West; Djibouti. (c) West; Ethiopia. (d) South south-west; Kenya. (e) East; Indian Ocean."). Those who call the area the Republic of Somaliland consider it to be the successor state of the former British Somaliland protectorate. The region established its own local government in 1991, but its self-declared independence remains unrecognized by any country or international organization.
South Africa
In 1910, following the Boer republics' defeat by the British Empire in the Boer Wars, four self-governing colonies in the south of Africa were merged into the Union of South Africa. The four regions were the Cape Colony, the Orange Free State, Natal and the Transvaal. Three other territories, the High Commission Territories of Bechuanaland (now Botswana), Basutoland (now Lesotho) and Swaziland (now Eswatini), later became independent states in the 1960s. Following the election of the Nationalist government in 1948, some English-speaking whites in Natal advocated either secession or a loose federation ("SOUTH AFRICA: Cry of Secession", TIME, May 11, 1953). There were also calls for secession, with Natal and the eastern part of the Cape Province breaking away ("Secession Talked by Some Anti-Republicans", Saskatoon Star-Phoenix, 11 October 1960), following the 1960 referendum on establishing a republic. In 1993, prior to South Africa's first elections under universal suffrage and the end of apartheid, some Zulu leaders in KwaZulu-Natal (Launching Democracy in South Africa: The First Open Election, April 1994, R. W. Johnson and Lawrence Schlemmer, Yale University Press, 1996) again considered secession, as did some politicians in the Cape Province ("Party Wants the Cape to Secede", Business Day, December 24, 1993).
In 2008, a political movement calling for the return to independence of the Cape resurged in the shape of a political organisation, the Cape Party. The Cape Party contested its first elections on 22 April 2009 (Cape Party website). In the 2019 Western Cape provincial elections it received 9,331 votes, or 0.45% of the vote, gaining no seats.
The idea gained popularity in the early 2020s, with a July 2021 poll suggesting that 58% of Western Cape voters wanted a referendum on independence.
South Sudan
A referendum took place in Southern Sudan from 9 to 15 January 2011 on whether the region should remain a part of Sudan or become independent. The referendum was one of the consequences of the 2005 Naivasha Agreement between the Khartoum central government and the Sudan People's Liberation Army/Movement (SPLA/M).
On 7 February 2011, the referendum commission published the final results, with 98.83% voting in favour of independence. Although ballots were suspended in 10 of the 79 counties for exceeding 100% voter turnout, the number of votes was still well over the 60% turnout requirement, and the majority vote for secession was not in question.
A simultaneous referendum was supposed to be held in Abyei on whether to join Southern Sudan but it has been postponed because of conflict over demarcation and residency rights. In October 2013, a symbolic referendum was held in which 99.9% of voters in Abyei voted to join Southern Sudan. However, this resolution was non-binding. As of February 2024, an official referendum still has not taken place. Abyei currently holds "special administrative status".
The predetermined date for the creation of an independent state was 9 July 2011.
Soviet Union
Following the Bolsheviks' Declaration of the Rights of the Peoples of Russia of November 15, 1917, Finland seceded: the non-Socialist Senate proposed that Parliament declare Finland's independence, which Parliament voted to do on 6 December 1917. On December 18, 1917, Finnish independence was recognized by the Council of People's Commissars. The secession was followed by the Finnish Civil War.
The Constitution of the Soviet Union guaranteed all SSRs the right to secede from the Union. In 1990, after free elections, the Lithuanian Soviet Socialist Republic declared independence and other republics, including certain break-away polities, soon followed. Despite the Soviet central government's refusal to recognize the independence of the republics, the Soviet Union dissolved in 1991.
Spain
Present-day Spain (officially the Kingdom of Spain) was assembled as a centralized state on the French model between the 18th and 19th centuries from various component kingdoms with differing languages, cultures and legal systems. Spain has several secessionist movements, the most notable being in Catalonia, the Basque Country and Galicia.
Sri Lanka
The Liberation Tigers of Tamil Eelam operated a de facto independent state for Tamils, called Tamil Eelam, in eastern and northern Sri Lanka until 2009.
Switzerland
In 1847, seven disaffected Catholic cantons formed a separate alliance in response to moves to change Switzerland from a confederation of cantons into a more centralized federation. The effort was crushed in the Sonderbund War and a new Swiss Federal Constitution was created (A Brief Survey of Swiss History, Swiss Federal Department of Foreign Affairs).
Ukraine
In 2014, after the start of the Russian intervention in Ukraine, several groups declared the independence of Ukrainian regions:
The Donetsk People's Republic was declared independent from Ukraine on 7 April 2014, claiming the territory of the Donetsk Oblast. There have been military confrontations between the Ukrainian Army and the forces of the Donetsk People's Republic as the Ukrainian government attempted to reassert control over the oblast.
The Lugansk Parliamentary Republic was proclaimed on 27 April 2014, before being succeeded by the Lugansk People's Republic. Lugansk separatist forces had occupied vital buildings in Lugansk since 8 April, and controlled the City Council, the prosecutor's office and the police station from 27 April. The government of the Luhansk Oblast announced its support for a referendum and granted the governorship to independence leader Valeriy Bolotov.
United Kingdom
Irish republicans attempted to withdraw Ireland from the United Kingdom during the Easter Rising of 1916. Ireland gained independence as the Irish Free State in 1922, except for six Ulster counties, which chose to remain in the United Kingdom as Northern Ireland. The United Kingdom has a number of secession movements:
In Northern Ireland, Irish republicans and nationalists have long called for the secession of Northern Ireland to join the Republic of Ireland. This is opposed by Unionists. A minority have supported the independence of Northern Ireland from the United Kingdom without joining the Republic of Ireland.
In Scotland, the Scottish National Party (SNP) campaigns for Scottish independence and direct Scottish membership in the European Union. It has representation at all levels of Scottish politics and forms the devolved Scottish government. Other pro-independence parties, most prominently the Scottish Greens and the Scottish Socialist Party, have had less electoral success. All independence movements and parties are opposed by unionists. A referendum on independence, in which voters were asked "Should Scotland be an independent country?", took place in September 2014; "No" won, with 55.3% of voters voting against independence.
In Wales, Plaid Cymru (Party of Wales) stands for Welsh independence within the European Union. It is also represented at all levels of Welsh politics and has often been the second largest party in the Senedd (Welsh Parliament).
England:
In Cornwall, supporters of Mebyon Kernow call for the creation of a Cornish Assembly and separation from England, giving the county significant self-government, whilst remaining within the United Kingdom as a fifth home nation.
London has had supporters of an independent or semi-autonomous city-state since the 2016 EU referendum, in which Londoners voted overwhelmingly to remain in the EU. A London independence party, known as Londependence, was established in June 2019. Such calls increased after the 2019 general election, in which most Londoners voted for the Labour Party, which gained a seat in the capital, bucking the national trend.
The Northern Independence Party is a party formed in 2020 that seeks to make Northern England an independent state under the name Northumbria.
United States
Discussions and threats of secession often surfaced in American politics during the first half of the 19th century, and secession was declared by the Confederate States of America in the South during the American Civil War. However, in 1869, the United States Supreme Court ruled in Texas v. White that unilateral secession was not permitted, saying that the union between a state (Texas, in the case at bar) and the other states "was as complete, as perpetual, and as indissoluble as the union between the original States. There was no place for reconsideration or revocation, except through revolution or through consent of the States." Current secession movements still exist, the most notable example being the Hawaiian sovereignty movement, which formed after the illegal annexation of the Kingdom of Hawaii by the United States under the Newlands Resolution, passed by Congress in 1898. Many international organizations consider Hawaii to be under American occupation.
Yemen
North Yemen and South Yemen merged in 1990; tensions led to a southern secession attempt in 1994, which was crushed in a civil war.
Yugoslavia
On June 25, 1991, Croatia and Slovenia seceded from the Socialist Federal Republic of Yugoslavia. Bosnia and Herzegovina and North Macedonia also declared independence, after which the federation broke up, leaving the remaining two republics, Serbia and Montenegro, to form the Federal Republic of Yugoslavia. Several wars ensued between the Federal Republic of Yugoslavia and the seceding entities, and among other ethnic groups in Slovenia, Croatia, Bosnia and Herzegovina, and later Kosovo. Montenegro peacefully separated from its union with Serbia in 2006.
Kosovo unilaterally declared independence from Serbia on February 17, 2008, and has been recognized by around 100 countries, with the rest considering it to remain under United Nations administration.
See also
Lists
Lists of historical separatist movements
Lists of active separatist movements
List of unrecognized countries
List of U.S. state secession proposals
List of U.S. county secession proposals
Topics
Autonomy
Bioregionalism
City state
Decentralization
Dissolution
Homeland
Independence
Intersectionality
Irredentism
Micronation
Nullification (U.S. Constitution)
Partition
Schism (religion)
Separatism
Urban secession
Movements
Balochistan Liberation Army
Black Liberation Army
Cape Independence
Cascadia
East Turkestan Independence Movement
Essex Junto
European Free Alliance
Free State Project
Hartford Convention
Kurdistan
League of the South
New York City secession
Orania, Northern Cape
Secession of Quebec
Scottish Secession Church
Second Vermont Republic
South Carolina Exposition and Protest
Texas Secession Movement
Tibetan Independence Movement
Unrepresented Nations and Peoples Organization
Further reading
Buchanan, Allen, Justice, Legitimacy, and Self-Determination: Moral Foundations for International Law, Oxford University Press, 2007.
Buchanan, Allen, Secession: The Morality Of Political Divorce From Fort Sumter To Lithuania And Quebec, Westview Press, 1991.
Coppieters, Bruno and Sakwa, Richard (eds.), Contextualizing Secession: Normative Studies in Comparative Perspective, Oxford University Press, 2003.
Kohen, Marcelo G. (ed.), Secession: International Law Perspectives, Cambridge University Press, 2006.
Kohr, Leopold, The Breakdown of Nations, Routledge & K. Paul, 1957.
Lehning, Percy, Theories of Secession, Routledge, 1998.
López Martín, Ana Gemma and Perea Unceta, José Antonio, Statehood and Secession: Lessons from Spain and Catalonia, Routledge, 2021
Norman, Wayne, Negotiating Nationalism: Nation-Building, Federalism, and Secession in the Multinational State, Oxford University Press, 2006.
Roeder, Philip G., National Secession: Persuasion and Violence in Independence Campaigns, Cornell University Press, 2018.
Sorens, Jason, Secessionism: Identity, Interest, and Strategy, McGill-Queen's University Press, 2012.
Spencer, Metta, Separatism: Democracy and Disintegration, Rowman & Littlefield, 1998.
Weller, Marc, Autonomy, Self Governance and Conflict Resolution (Kindle Edition), Taylor & Francis, 2007.
Wellman, Christopher Heath, A Theory of Secession, Cambridge University Press, 2005.
Secession and International Law: Conflict Avoidance – Regional Appraisals, United Nations Publications, 2006.
External links
Secession (Stanford Encyclopedia of Philosophy)
Category:International law
Category:Separatism
Category:Sovereignty
Category:Changes in political power
Category:Partition (politics)
Mexican Revolution

https://en.wikipedia.org/wiki/Mexican_Revolution
The Mexican Revolution (Spanish: Revolución mexicana) was an extended sequence of armed regional conflicts in Mexico from 20 November 1910 to 1 December 1920. It has been called "the defining event of modern Mexican history".Joseph, Gilbert and Jürgen Buchenau (2013). Mexico's Once and Future Revolution. Durham: Duke University Press, 1 It saw the destruction of the Federal Army, its replacement by a revolutionary army, and the transformation of Mexican culture and government. The northern Constitutionalist faction prevailed on the battlefield and drafted the present-day Constitution of Mexico, which aimed to create a strong central government. Revolutionary generals held power from 1920 to 1940. The revolutionary conflict was primarily a civil war, but foreign powers, having important economic and strategic interests in Mexico, figured in the outcome of Mexico's power struggles; U.S. involvement was particularly significant. The conflict led to the deaths of around one million people, mostly non-combatants.
Although the decades-long regime of President Porfirio Díaz (1876–1911) was increasingly unpopular, there was no foreboding in 1910 that a revolution was about to break out. The aging Díaz failed to find a controlled solution to presidential succession, resulting in a power struggle among competing elites and the middle classes, which occurred during a period of intense labor unrest, exemplified by the Cananea and Río Blanco strikes.Womack, John Jr. "The Mexican Revolution, 1910–1920". Mexico Since Independence. New York: Cambridge University Press 1991, 128. When wealthy northern landowner Francisco I. Madero challenged Díaz in the 1910 presidential election and Díaz jailed him, Madero called for an armed uprising against Díaz in the Plan of San Luis Potosí. Rebellions broke out first in Morelos (immediately south of the nation's capital city) and then to a much greater extent in northern Mexico. The Federal Army could not suppress the widespread uprisings, showing the military's weakness and encouraging the rebels. Díaz resigned in May 1911 and went into exile, an interim government was installed until elections could be held, the Federal Army was retained, and revolutionary forces demobilized. The first phase of the Revolution was relatively bloodless and short-lived.
Madero was elected President, taking office in November 1911. He immediately faced the armed rebellion of Emiliano Zapata in Morelos, where peasants demanded rapid action on agrarian reform. Politically inexperienced, Madero headed a fragile government, and further regional rebellions broke out. In February 1913, prominent army generals from the former Díaz regime staged a coup d'état in Mexico City, forcing Madero and Vice President Pino Suárez to resign. Days later, both men were assassinated on the orders of the new President, Victoriano Huerta. This initiated a new and bloody phase of the Revolution, as a coalition of northerners opposed to the counter-revolutionary regime of Huerta, the Constitutionalist Army led by the Governor of Coahuila, Venustiano Carranza, entered the conflict. Zapata's forces continued their armed rebellion in Morelos. Huerta's regime lasted from February 1913 to July 1914, when the Federal Army was defeated by revolutionary armies. The revolutionary armies then fought each other, with the Constitutionalist faction under Carranza defeating the army of former ally Francisco "Pancho" Villa by the summer of 1915.
Carranza consolidated power, and a new constitution was promulgated in February 1917. The Mexican Constitution of 1917 established universal male suffrage; promoted secularism, workers' rights, economic nationalism, and land reform; and enhanced the power of the federal government.Gentleman, Judith. "Mexico since 1910". Encyclopedia of Latin American History and Culture, vol. 4, 15. Carranza became President of Mexico in 1917, serving a term ending in 1920. He attempted to impose a civilian successor, prompting northern revolutionary generals to rebel. Carranza fled Mexico City and was killed. From 1920 to 1940, revolutionary generals held the office of president, each completing their terms (except during 1928–1934). This was a period in which state power became more centralized and revolutionary reforms were implemented, bringing the military under the civilian government's control. The Revolution was a decade-long civil war, with new political leadership that gained power and legitimacy through its participation in the revolutionary conflicts. The political party those leaders founded in 1929, which would become the Institutional Revolutionary Party (PRI), ruled Mexico until the presidential election of 2000. When the Revolution ended is not well defined, and even the conservative winner of the 2000 election, Vicente Fox, contended that his election was heir to the 1910 democratic election of Francisco Madero, thereby claiming the heritage and legitimacy of the Revolution.Bantjes, Adrien A. "The Mexican Revolution". In A Companion to Latin American History, London: Wiley-Blackwell 2011, 330
Prelude to revolution: Porfiriato and the 1910 election
Liberal general and war veteran Porfirio Díaz came to the presidency of Mexico in 1876 and remained almost continuously in office until 1911, in an era now called the Porfiriato.Garza, James A. "Porfirio Díaz", in Encyclopedia of Mexico, 406 Having come to power after a coup opposing the re-election of Sebastián Lerdo de Tejada, he could not himself run for re-election in 1880. His close ally, General Manuel González, was elected president (1880–1884). Díaz saw himself as indispensable, and after that interruption, he ran for the presidency again and served in office continuously until 1911. The constitution had been amended to allow unlimited presidential re-election.Garner, Paul. Porfirio Díaz. New York: Pearson 2001, p. 98. During the Porfiriato, there were regular elections, widely considered sham exercises, marked by contentious irregularities.
In his early years in the presidency, Díaz consolidated power by playing opposing factions against each other and by expanding the rurales, a force of armed and mounted police directly under his control that seized land from local peasants. Peasants were forced to make futile attempts to win back their land through courts and petitions. By 1900, over ninety percent of Mexico's communal lands had been sold, with an estimated 9.5 million peasants forced into the service of wealthy landowners or hacendados. Díaz rigged elections, arguing that only he knew what was best for his country, and he enforced his belief with a strong hand. "Order and Progress" were the watchwords of his rule.
Díaz's presidency was characterized by the promotion of industry and the development of infrastructure, achieved by opening the country to foreign investment. Díaz suppressed opposition and promoted stability to reassure foreign investors. Farmers and peasants both complained of oppression and exploitation. The situation was further exacerbated by the drought that lasted from 1907 to 1909. The economy took a great leap during the Porfiriato, through the construction of factories, industries and infrastructure such as railroads and dams, as well as improvements in agriculture. Foreign investors bought large tracts of land to cultivate crops and raise cattle for export. The cultivation of exportable goods such as coffee, tobacco, henequen for cordage, and sugar replaced the domestic production of wheat, corn and livestock that peasants had lived on. Wealth, political power and access to education were concentrated among a handful of elite landholding families, mainly of European and mixed descent, who controlled vast swaths of the country through their huge estates (for example, the Terrazas family had one estate in Chihuahua that alone comprised more than a million acres). Many Mexicans became landless peasants laboring on these vast estates or industrial workers toiling long hours for low wages. Foreign companies (mostly from the United Kingdom, France, and the U.S.) also exercised influence in Mexico.
Díaz and the military
Díaz had legitimacy as a leader through his battlefield accomplishments. He knew that the long tradition of military intervention in politics, and the military's resistance to civilian control, would prove challenging to his remaining in power. He set about curbing the power of the military, reining in provincial military chieftains and making them subordinate to the central government. He contended with a whole new group of generals who had fought for the liberal cause and who expected rewards for their services. He dealt with them systematically, providing some rivals with opportunities to enrich themselves, ensuring the loyalty of others with high salaries, and buying off still others with landed estates, redirecting their political ambitions. Military rivals who did not accept these alternatives often rebelled and were crushed. It took him some 15 years to complete the transformation, reducing the army by 500 officers and 25 generals and creating an army subordinate to central power. He also created the military academy to train officers, though their training was aimed at repelling foreign invasions. Díaz expanded the rural police force, the rurales, as an elite guard, including many former bandits, under the direct control of the president.Vanderwood, Paul. Disorder and Progress: Bandits, Police, and Mexican Development. Lincoln: University of Nebraska Press 1981. With these forces, Díaz sought to pacify the Mexican countryside under a stable, nominally civilian government and to create the conditions for developing the country economically with the infusion of foreign investment.
During Díaz's long tenure in office, the Federal Army became overstaffed and top-heavy with officers, many of them elderly men who had last seen active military service against the French in the 1860s. Some 9,000 officers commanded the 25,000 rank-and-file on the books, with some 7,000 nonexistent soldiers padding the rosters so that officers could receive the subsidies for the numbers they supposedly commanded. Officers used their positions for personal enrichment through salary and opportunities for graft. Although Mexicans had enthusiastically volunteered in the war against the French, the ranks were now filled by draftees. There was a vast gulf between officers and the lower ranks. "The officer corps epitomized everything the masses resented about the Díaz system." With multiple rebellions breaking out in the wake of the fraudulent 1910 election, the military was unable to suppress them, revealing the regime's weakness and leading to Díaz's resignation in May 1911.
Political system
Although the Díaz regime was authoritarian and centralizing, it was not a military dictatorship. His first presidential cabinet was staffed with military men, but over successive terms as president, important posts were held by able and loyal civilians.Camp, Roderic Ai. Political Recruitment Across Two Centuries, Mexico 1884–1991. Austin: University of Texas Press 1995, 62 He did not create a personal dynasty, excluding family from the realms of power, although his nephew Félix attempted to seize power after the fall of the regime in 1911. Díaz created a political machine, first working with regional strongmen and bringing them into his regime, then replacing them with political bosses who were loyal to him. He skillfully managed political conflict and reined in tendencies toward autonomy. He appointed several military officers to state governorships, including General Bernardo Reyes, who became governor of the northern state of Nuevo León, but over the years military men were largely replaced by civilians loyal to Díaz.
As a military man himself, and one who had intervened directly in politics to seize the presidency in 1876, Díaz was acutely aware that the Federal Army could oppose him. He augmented the rurales, a police force created by Benito Juárez, making them his private armed force. The rurales numbered only 2,500, as opposed to the 30,000 in the army and another 30,000 in the federal auxiliaries, irregulars and National Guard.Womack, John Jr. "The Mexican Revolution", in Mexico Since Independence, Leslie Bethell, ed. Cambridge: Cambridge University Press, 1991, p. 130. Despite their small numbers, the rurales were highly effective in controlling the countryside, especially along the 12,000 miles of railway lines. They were a mobile force, often sent on trains with their horses to put down rebellions in relatively remote areas of Mexico.Vanderwood, Paul. Disorder and Progress: Bandits, Police, and Mexican Development. Wilmington, Delaware: SR Books, rev. ed. 1992.
The construction of railways had been transformative in Mexico (as well as elsewhere in Latin America), accelerating economic activity and increasing the power of the Mexican state. The isolation from the central government that many remote areas had enjoyed or suffered was ending. Telegraph lines constructed next to the railroad tracks meant instant communication between distant states and the capital.Coatsworth, John. Growth Against Development: The Economic Impact of Railroads in Porfirian Mexico. DeKalb: Northern Illinois University Press, 1981. P. 47
The political acumen and flexibility Díaz had exhibited in his early years in office began to decline after 1900. He brought the state governors under his control, replacing them at will. The Federal Army, while large, was increasingly an ineffective force with aging leadership and troops conscripted into service. Díaz attempted with business interests the same kind of manipulation he had executed with the Mexican political system, showing favoritism to European interests over those of the U.S.Baldwin, Deborah J. Protestants and the Mexican Revolution. Urbana: University of Illinois Press, 1990, p. 68.
Rival interests, particularly those of the foreign powers with a presence in Mexico, further complicated an already complex system of favoritism. As economic activity increased and industries thrived, industrial workers began organizing for better conditions. Díaz enacted policies that encouraged large landowners to intrude upon the villagers' land and water rights. With the expansion of Mexican agriculture, landless peasants were forced to work for low wages or move to the cities. Peasant agriculture was under pressure as haciendas expanded, such as in the state of Morelos, just south of Mexico City, with its burgeoning sugar plantations. There was what one scholar has called "agrarian compression", in which "population growth intersected with land loss, declining wages and insecure tenancies to produce widespread economic deterioration", but the regions under the greatest stress were not the ones that rebelled.Tutino, John. From Insurrection to Revolution: Social Bases of Agrarian Violence in Mexico, 1750–1940. Princeton: Princeton University Press 1986.
Opposition to Díaz
Díaz effectively suppressed strikes, rebellions, and political opposition until the early 1900s. Mexicans began to organize in opposition to Díaz, who had welcomed foreign capital and capitalists, suppressed nascent labor unions, and consistently moved against peasants as agriculture flourished. In 1905, the group of Mexican intellectuals and political agitators who had created the Mexican Liberal Party (Partido Liberal Mexicano, PLM) drew up a radical program of reform, specifically addressing what they considered to be the worst aspects of the Díaz regime. Most prominent in the PLM were Ricardo Flores Magón and his two brothers, Enrique and Jesús. They, along with Luis Cabrera and Antonio Díaz Soto y Gama, were connected to the anti-Díaz press. Political cartoons by José Guadalupe Posada lampooned politicians and cultural elites with mordant humor, portraying them as skeletons. The Liberal Party of Mexico founded the anti-Díaz anarchist newspaper Regeneración, which appeared in both Spanish and English. In exile in the United States, Práxedis Guerrero began publishing an anti-Díaz newspaper, Alba Roja ("Red Dawn"), in San Francisco, California. Although leftist groups were small, they became influential through their publications, articulating their opposition to the Díaz regime. Francisco Bulnes described these men as the "true authors" of the Mexican Revolution for agitating the masses.Claudio Lomnitz citing Francisco Bulnes, in Claudio Lomnitz, The Return of Ricardo Flores Magón. New York: Zone Books, 2014, p. 55 and fn. 6, p. 533. As the 1910 election approached, Francisco I. Madero, an emerging political figure and member of one of Mexico's richest families, funded a newspaper opposing the continual re-election of Díaz.
Organized labor conducted strikes for better wages and just treatment. Demands for better labor conditions were central to the Liberal Party program, drawn up in 1905. Mexican copper miners in the northern state of Sonora took action in the 1906 Cananea strike. Starting June 1, 1906, 5,400 miners began organizing labor strikes. Among other grievances, they were paid less than U.S. nationals working in the mines. In the state of Veracruz, textile workers rioted in January 1907 at the huge Río Blanco factory, the world's largest, protesting against unfair labor practices. They were paid in credit that could be used only at the company store, binding them to the company.
These strikes were ruthlessly suppressed, with factory owners receiving support from government forces. In the Cananea strike, mine owner William Cornell Greene received support from Díaz's rurales in Sonora as well as Arizona Rangers called in from across the U.S. border; the Rangers were ordered to use violence to combat labor unrest. In the state of Veracruz, the Mexican army gunned down Río Blanco textile workers and put the bodies on train cars that transported them to Veracruz, "where the bodies were dumped in the harbor as food for sharks".
Since the press was censored in Mexico under Díaz, little was published that was critical of the regime. Newspapers barely reported on the Rio Blanco textile strike, the Cananea strike or harsh labor practices on plantations in Oaxaca and Yucatán. Leftist Mexican opponents of the Díaz regime, such as Ricardo Flores Magón and Práxedis Guerrero, went into exile in the relative safety of the United States, but cooperation between the U.S. government and Díaz's agents resulted in the arrest of some radicals.
Presidential succession in 1910
Díaz had ruled continuously since 1884. The question of presidential succession was an issue as early as 1900, when he turned 70.Garner, Paul. Porfirio Díaz. New York: Pearson, 2001, p. 209. Díaz re-established the office of vice president in 1906, choosing Ramón Corral. Rather than grooming Corral as a successor, Díaz marginalized him, keeping him away from decision-making. In an interview with journalist James Creelman for Pearson's Magazine, Díaz publicly announced that he would not run in the 1910 election, by which time he would be 80. The announcement set the scene for a possible peaceful transition of the presidency and set off a flurry of political activity. To the dismay of potential candidates to replace him, Díaz then reversed himself and ran again, setting off further agitation among opposition groups.
Díaz seems to have initially considered Finance Minister José Yves Limantour as his successor. Limantour was a key member of the Científicos, the circle of technocratic advisers steeped in positivist political science. Another potential successor was General Bernardo Reyes, Díaz's Minister of War, who also served as governor of Nuevo León. Reyes, an opponent of the Científicos, was a moderate reformer with a considerable base of support. Díaz became concerned about him as a rival and forced him to resign from his cabinet. He attempted to marginalize Reyes by sending him on a "military mission" to Europe,Garner, Porfirio Díaz p. 210. distancing him from Mexico and potential political supporters. "The potential challenge from Reyes would remain one of Díaz's political obsessions through the rest of the decade, which ultimately blinded him to the danger of the challenge of Francisco Madero's anti-re-electionist campaign."
In 1910, Francisco I. Madero, a young man from a wealthy landowning family in the northern state of Coahuila, announced his intent to challenge Díaz for the presidency in the next election, under the banner of the Anti-Reelectionist Party. Madero chose as his running mate Francisco Vázquez Gómez, a physician who had opposed Díaz.Mark Wasserman, "Francisco Vázquez Gómez", in Encyclopedia of Mexico, 151. Madero campaigned vigorously and effectively. To ensure Madero did not win, Díaz had him jailed before the election. He escaped and fled for a short period to San Antonio, Texas. Díaz was announced the winner of the election by a "landslide".
End of the Porfiriato: November 1910 – May 1911
On 5 October 1910, Madero issued a "letter from jail", known as the Plan de San Luis Potosí, with its main slogan Sufragio efectivo, no reelección ("effective voting, no re-election"). It declared the Díaz presidency illegal and called for a revolt against him, starting on 20 November 1910. Madero's political plan did not outline a major socioeconomic revolution, but it offered hopes of change for many disadvantaged Mexicans. The plan strongly opposed militarism in Mexico as it was constituted under Díaz, calling on Federal Army generals to resign before true democracy could prevail in Mexico. Madero realized he needed a revolutionary armed force, and he enticed men to join with the promise of formal rank and of promotion within the revolutionary forces.
Madero's plan was aimed at fomenting a popular uprising against Díaz, but he also understood that the support of the United States and American financiers would be of crucial importance in undermining the regime. The rich and powerful Madero family drew on its resources to make regime change possible, with Madero's brother Gustavo A. Madero hiring, in October 1910, the firm of Washington lawyer Sherburne Hopkins, the "world's best rigger of Latin-American revolutions", to encourage support in the U.S. A strategy to discredit Díaz with American business and the U.S. government achieved some success, with Standard Oil representatives engaging in talks with Gustavo Madero. More importantly, the American government "bent neutrality laws for the revolutionaries".Womack, The Mexican Revolution, p. 131.
In late 1910, revolutionary movements arose in response to Madero's Plan de San Luis Potosí. Still, their ultimate success resulted from the Federal Army's weakness and inability to suppress them. Madero's vague promises of land reform attracted many peasants throughout the country. Spontaneous rebellions arose in which ordinary farm laborers, miners and other working-class Mexicans, along with much of the country's population of indigenous peoples, fought Díaz's forces with some success. Madero attracted the forces of rebel leaders such as Pascual Orozco, Pancho Villa, Emiliano Zapata, and Venustiano Carranza. A young and able revolutionary, Orozco—along with Chihuahua Governor Abraham González—formed a powerful military union in the north and, although they were not especially committed to Madero, took Mexicali and Chihuahua City. These victories encouraged alliances with other revolutionary leaders, including Villa. Against Madero's wishes, Orozco and Villa fought for and won Ciudad Juárez, bordering El Paso, Texas, on the south side of the Rio Grande. Madero's call to action had unanticipated results, such as the Magonista rebellion of 1911 in Baja California.Taylor, Laurence D. "The Magonista Revolt in Baja California". The Journal of San Diego History, 45 (1) 1999.
Interim presidency: May–November 1911
With the Federal Army defeated in several battles with irregular, voluntary forces, Díaz's government began negotiations with the revolutionaries in the north. In historian Edwin Lieuwen's assessment, "Victors always attribute their success to their own heroic deeds and superior fighting abilities ... In the spring of 1911, armed bands under self-appointed chiefs arose all over the republic, drove Díaz officials from the vicinity, seized money and stamps, and staked out spheres of local authority. Towns, cities, and the countryside passed into the hands of the Maderistas."
Díaz sued for peace with Madero, who himself did not want a prolonged and bloody conflict. The result was the Treaty of Ciudad Juárez, signed on 21 May 1911. The treaty stipulated that Díaz and his vice president, Ramón Corral, would resign by the end of May 1911, to be replaced by an interim president, Francisco León de la Barra, until elections were held. Díaz, his family, and a number of top supporters were allowed to go into exile.Cumberland, Charles C. Mexican Revolution: Genesis Under Madero. Austin: University of Texas Press 1952, p. 150. When Díaz left for exile in Paris, he was reported as saying, "Madero has unleashed a tiger; let us see if he can control it."quoted in Cumberland, Mexican Revolution, p. 151.
With Díaz in exile, the power structure of the old regime remained firmly in place. Francisco León de la Barra became interim president, pending an election to be held in October 1911. Madero considered De la Barra an acceptable figure for the interim presidency since he was not a Científico or a politician, but rather a Catholic lawyer and diplomat. He appeared to be a moderate, but the German ambassador to Mexico, Paul von Hintze, who associated with the interim president, said of him that "De la Barra wants to accommodate himself with dignity to the inevitable advance of the ex-revolutionary influence, while accelerating the widespread collapse of the Madero party." The Federal Army, despite its numerous defeats by the revolutionaries, remained intact as the government's force. Madero called on revolutionary fighters to lay down their arms and demobilize, which Emiliano Zapata and the revolutionaries in Morelos refused to do.
The cabinet of De la Barra and the Mexican congress were filled with supporters of the Díaz regime. Madero campaigned vigorously for the presidency during this interim period, but revolutionaries who had supported him and brought about Díaz's resignation were dismayed that the sweeping reforms they sought were not immediately instituted. The interim government did introduce some progressive reforms: improved funding for rural schools; promotion of some aspects of agrarian reform to increase the amount of productive land; and labor reforms, including workman's compensation and the eight-hour day; but it also defended the right of the government to intervene in strikes. According to historian Peter V. N. Henderson, De la Barra's and congress's actions "suggests that few Porfirians wished to return to the status quo of the dictatorship. Rather, the thoughtful, progressive members of the Porfirian meritocracy recognized the need for change."Henderson, Peter V. N. "Francisco de la Barra" in Encyclopedia of Mexico, 397. De la Barra's government sent General Victoriano Huerta to fight the Zapatistas in Morelos, burning villages and wreaking havoc. Huerta's actions drove a wedge between Zapata and Madero, which widened when Madero was inaugurated as president.Ross, Stanley R. Francisco I. Madero: Apostle of Democracy, pp. 188–202. Zapata remained in arms continuously until his assassination in 1919.
Madero won the 1911 election decisively and was inaugurated as president in November 1911, but his movement had lost crucial momentum and revolutionary supporters during the months of the interim presidency, which had also left the Federal Army in place.
Madero presidency: November 1911 – February 1913
Madero had drawn some loyal and militarily adept supporters who brought down the Díaz regime by force of arms. Madero himself was not a natural soldier, and his decision to dismiss the revolutionary forces that brought him to power isolated him politically. He was an inexperienced politician, who had never held office before. He firmly held to democratic ideals, which many consider evidence of naivete. His election as president in October 1911 raised high expectations among many Mexicans for positive change. The Treaty of Ciudad Juárez guaranteed that the essential structure of the Díaz regime, including the Federal Army, was kept in place. Madero fervently held to his position that Mexico needed real democracy, which included regime change by free elections, a free press, and the right of labor to organize and strike.
The rebels who brought him to power were demobilized and Madero called on these men of action to return to civilian life. According to a story told by Pancho Villa, a leader who had defeated Díaz's army and forced his resignation and exile, he told Madero at a banquet in Ciudad Juárez in 1911, "You [Madero], sir, have destroyed the revolution ... It's simple: this bunch of dandies have made a fool of you, and this will eventually cost us our necks, yours included." Ignoring the warning, Madero increasingly relied on the Federal Army as armed rebellions broke out in Mexico in 1911–12, with particularly threatening insurrections led by Emiliano Zapata in Morelos and Pascual Orozco in the north. Both Zapata and Orozco had led revolts that had put pressure on Díaz to resign, and both felt betrayed by Madero once he became president.
The press embraced its newfound freedom and Madero became a target of its criticism. Organized labor, which had been suppressed under Díaz, could and did stage strikes, which foreign entrepreneurs saw as threatening their interests. Although there had been labor unrest under Díaz, labor's new freedom to organize also came with anti-American currents. The anarcho-syndicalist Casa del Obrero Mundial (House of the World Worker) was founded in September 1912 by Antonio Díaz Soto y Gama, Manuel Sarabia, and Lázaro Gutiérrez de Lara and served as a center of agitation and propaganda, but it was not a formal labor union.Cumberland, Charles C. Mexican Revolution: The Constitutionalist Years. Austin: University of Texas Press 1972, pp. 252–253.Lear, John. "Casa del Obrero Mundial" in Encyclopedia of Mexico, 206–207
Political parties proliferated. One of the most important was the National Catholic Party, which was particularly strong in several regions of the country. Several Catholic newspapers were in circulation during the Madero era, only to be suppressed later under the Victoriano Huerta regime (1913–1914). Under Díaz, relations between the Roman Catholic Church and the Mexican government were stable: the anticlerical laws of the Mexican Constitution of 1857 remained in place but were not enforced, so conflict was muted. During Madero's presidency, Church-state conflict was channeled peacefully, and the National Catholic Party became an important political opposition force. In the June 1912 congressional elections, in "militarily quiescent states ... the Catholic Party (PCN) did conspicuously well." During that period, the Catholic Association of Mexican Youth (ACJM) was founded. Although the National Catholic Party was an opposition party to the Madero regime, "Madero clearly welcomed the emergence of a kind of two-party system (Catholic and liberal); he encouraged Catholic political involvement, echoing the exhortations of the episcopate." In effect, "Díaz's old policy of Church-state detente was being continued, perhaps more rapidly and on surer foundations." The Catholic Church in Mexico was working within the new democratic system promoted by Madero, but it had its own interests to promote: the old conservative Church remained a force, while a newer, progressive current supported the social Catholicism of the 1891 papal encyclical Rerum Novarum. When Madero was overthrown in February 1913 by counter-revolutionaries, the conservative wing of the Church supported the coup.
Madero did not have the experience or the ideological inclination to reward men who had helped bring him to power. Some revolutionary leaders expected personal rewards, such as Pascual Orozco of Chihuahua. Others wanted major reforms, most especially Emiliano Zapata and Andrés Molina Enríquez, who had long worked for land reform. Madero met personally with Zapata, telling the guerrilla leader that the agrarian question needed careful study. His meaning was clear: Madero, a member of a rich northern family, was not about to implement comprehensive agrarian reform for aggrieved peasants.
In response to this lack of action, Zapata promulgated the Plan de Ayala in November 1911, declaring himself in rebellion against Madero. He renewed guerrilla warfare in the state of Morelos. Madero sent the Federal Army to deal with Zapata, unsuccessfully. Zapata remained true to the demands of the Plan de Ayala and in rebellion against every central government up until his assassination by an agent of President Venustiano Carranza in 1919.
The northern revolutionary General Pascual Orozco, a leader in taking Ciudad Juárez, had expected to become governor of Chihuahua. In 1911, although Orozco was "the man of the hour", Madero gave the governorship instead to Abraham González, a respectable revolutionary, with the explanation that Orozco had not reached the legal age to serve as governor, a tactic that was "a useful constitutional alibi for thwarting the ambitions of young, popular, revolutionary leaders". Madero had put Orozco in charge of the large force of rurales in Chihuahua, but to a gifted revolutionary fighter who had helped bring about Díaz's fall, Madero's reward was insulting. After Madero refused to agree to social reforms calling for better working hours, pay, and conditions, Orozco organized his own army, the Orozquistas, also called the Colorados ("Red Flaggers"), and issued his Plan de la Empacadora on 25 March 1912, enumerating why he was rising in revolt against Madero.Meyer, Michael C. Mexican Rebel: Pascual Orozco and the Mexican Revolution, 138–147.
In April 1912, under pressure from his cabinet, Madero dispatched General Victoriano Huerta of the Federal Army to put down Orozco's dangerous revolt. Madero had kept the army intact as an institution, using it to put down domestic rebellions against his regime. Huerta was a professional soldier who continued to serve in the army under the new commander-in-chief, but his loyalty lay with General Bernardo Reyes rather than with the civilian Madero. With Huerta's success against Orozco, he emerged as a powerful figure for conservative forces opposing the Madero regime.Richmond, Douglas W. "Victoriano Huerta". In Encyclopedia of Mexico, vol. 1, p. 656. During the Orozco revolt, the governor of Chihuahua mobilized the state militia to support the Federal Army, and Pancho Villa, now a colonel in the militia, was called up. In mid-April, at the head of 400 irregular troops, he joined the forces commanded by Huerta. Huerta, however, viewed Villa as an ambitious competitor. During a visit to Huerta's headquarters in June 1912, after an incident in which Villa refused to return a number of stolen horses, he was imprisoned on charges of insubordination and robbery and sentenced to death. Raúl Madero, the President's brother, intervened to save Villa's life. Jailed in Mexico City, Villa escaped and fled to the United States, later to return and play a major role in the civil wars of 1913–1915.
There were other rebellions, one led by Bernardo Reyes and another by Félix Díaz, nephew of the former president, that were quickly put down and the generals jailed. Both were held in Mexico City prisons and, despite being confined separately, were able to foment yet another rebellion in February 1913. This period came to be known as the Ten Tragic Days (La Decena Trágica), which ended with Madero's resignation and assassination and Huerta assuming the presidency. Although Madero had reason to distrust Victoriano Huerta, Madero placed him in charge of suppressing the Mexico City revolt as interim commander. He did not know that Huerta had been invited to join the conspiracy, but had initially held back. During the fighting that took place in the capital, the civilian population was subjected to artillery exchanges, street fighting and economic disruption, perhaps deliberately caused by the coup plotters to demonstrate that Madero was unable to keep order.Tuñon Pablos, Esperanza. "Mexican Revolution: February 1913 – October 1915", in Encyclopedia of Mexico, vol. 2, pp. 855–856.
A military coup overthrows Madero: 9–22 February 1913
The Madero presidency was unravelling, to no one's surprise except perhaps Madero's, whose support continued to deteriorate, even among his political allies. Madero's supporters in congress before the coup, the so-called renovadores ("the renewers"), criticized him, saying, "The revolution is heading toward collapse and is pulling the government to which it gave rise down with it, for the simple reason that it is not governing with revolutionaries. Compromises and concessions to the supporters of the old [Díaz] regime are the main causes of the unsettling situation in which the government that emerged from the revolution finds itself ... The regime appears relentlessly bent on suicide."
Huerta, formally in charge of the defense of Madero's regime, allowed the rebels to hold the armory in Mexico City—the Ciudadela—while he consolidated his political power. He changed allegiance from Madero to the rebels under Félix Díaz (Bernardo Reyes having been killed on the first day of the open armed conflict). U.S. Ambassador Henry Lane Wilson, who had done all he could to undermine American confidence in Madero's presidency, brokered the Pact of the Embassy, which formalized the alliance between Félix Díaz and Huerta, with the backing of the United States.Tuñon Pablos, Mexican Revolution: February 1913 – October 1915, p. 855 Huerta was to become provisional president following the resignations of Madero and his vice president, José María Pino Suárez. Rather than being sent into exile with their families, the two were murdered while being transported to prison—a shocking event, but one that did not prevent the Huerta regime's recognition by most world governments, with the notable exception of the U.S.
Historian Friedrich Katz considers that Madero's retention of the Federal Army, which the revolutionary forces had defeated to bring about Díaz's resignation, "was the basic cause of his fall". His failure is also attributable to "the failure of the social class to which he belonged and whose interests he considered to be identical to those of Mexico: the liberal hacendados" (owners of large estates). Madero had created no political organization that could survive his death and had alienated and demobilized the revolutionary fighters who had helped bring him to power. In the aftermath of his assassination and Huerta's seizure of power via a military coup, former revolutionaries had no formal organization through which to raise opposition to Huerta.
Huerta regime and civil war: February 1913 – July 1914
Madero's "martyrdom accomplished what he was unable to do while alive: unite all the revolutionists under one banner."Ross, Francisco I. Madero, Apostle of Democracy, 340 Within 16 months, revolutionary armies defeated the Federal Army and the Huerta regime fell. Like Porfirio Díaz, Huerta went into exile. The Federal Army was disbanded, leaving only revolutionary military forces.
Upon taking power, Huerta had moved swiftly to consolidate his hold in the North, having learned the lesson from Díaz's fall that the north was a crucial region to hold. Within a month of the coup, rebellions began to spread throughout Mexico, most prominently led by the governor of the state of Coahuila, Venustiano Carranza, along with Pablo González. Huerta expected state governors to fall into line with the new government, but Carranza and Abraham González, Governor of Chihuahua, did not. Carranza issued the Plan of Guadalupe, a strictly political plan to reject the legitimacy of the Huerta government, and called on revolutionaries to take up arms. Revolutionaries who had brought Madero to power only to be dismissed in favor of the Federal Army eagerly responded to the call, most prominently Pancho Villa. Álvaro Obregón of Sonora, a successful rancher and businessman who had not participated in the Madero revolution, now joined the revolutionary forces in the north, the Constitutionalist Army under the Primer Jefe ("First Chief") Venustiano Carranza. Huerta had Governor González arrested and murdered, for fear he would foment rebellion. When northern General Pancho Villa became governor of Chihuahua in 1914, following the defeat of Huerta, he located González's bones and had them reburied with full honors. In Morelos, Emiliano Zapata continued his rebellion under the Plan of Ayala (while expunging the name of counter-revolutionary Pascual Orozco from it), calling for the expropriation of land and redistribution to peasants. Huerta offered peace to Zapata, who rejected it.Richmond, Douglas W., "Victoriano Huerta", in Encyclopedia of Mexico, vol. 1, p. 657. The Huerta government was thus challenged by revolutionary forces in the north of Mexico and the strategic state of Morelos, just south of the capital.
Huerta's presidency is usually characterized as a dictatorship. From the point of view of revolutionaries at the time and the construction of historical memory of the Revolution, it is without any positive aspects. "Despite recent attempts to portray Victoriano Huerta as a reformer, there is little question that he was a self-serving dictator."Richmond, Douglas W., "Victoriano Huerta", in Encyclopedia of Mexico, vol. 1, p. 655. There are few biographies of Huerta, but one strongly asserts that Huerta should not be labeled simply as a counter-revolutionary, arguing that his regime consisted of two distinct periods: from the coup in February 1913 to October 1913, when he attempted to legitimize his regime and demonstrate its legality by pursuing reformist policies; and after October 1913, when he dropped all attempts to rule within a legal framework and began murdering political opponents while battling revolutionary forces that had united in opposition to his regime.Tuñon Pablos, Esperanza. "Mexican Revolution: February 1913 – October 1915". In Encyclopedia of Mexico, vol. 2, p. 6.
Supporting the Huerta regime initially were business interests in Mexico, both foreign and domestic; landed elites; the Roman Catholic Church; and the German and British governments. The U.S. President Woodrow Wilson did not recognize the Huerta regime, since it had come to power by coup. Huerta and Carranza were in contact for two weeks immediately after the February coup, but they did not come to an agreement. Carranza then declared himself opposed to Huerta and became the leader of the anti-Huerta forces in the north.Knight, Alan. "Venustiano Carranza". In Encyclopedia of Latin American History and Culture, vol. 1, p. 573. Huerta gained the support of revolutionary general Pascual Orozco, who had helped topple the Díaz regime, then rebelled against Madero because of his lack of action on agrarian issues. Huerta's first cabinet comprised men who had supported the February 1913 Pact of the Embassy, among them some who had supported Madero, such as Jesús Flores Magón; supporters of General Bernardo Reyes; supporters of Félix Díaz; and former Interim President Francisco León de la Barra.
During the counter-revolutionary regime of Huerta, the Catholic Church in Mexico initially supported him. "The Church represented a force for reaction, especially in the countryside." However, when Huerta cracked down on political parties and conservative opposition, he had "Gabriel Somellera, president of the [National] Catholic Party arrested; , which, like other Catholic papers, had protested Congress's dissolution and the rigged elections [of October 1913], locked horns with the official press and was finally closed down. , the main Catholic newspaper, survived for a time."
Huerta was even able to briefly muster the support of Andrés Molina Enríquez, author of The Great National Problems (Los grandes problemas nacionales), a key work urging land reform in Mexico. Huerta was seemingly deeply concerned with the issue of land reform, since it was a persistent spur of peasant unrest. Specifically, he moved to restore "ejido lands to the Yaquis and Mayos of Sonora and [advanced] proposals for distribution of government lands to small-scale farmers." When Huerta refused to move faster on land reform, Molina Enríquez disavowed the regime in June 1913, later going on to advise the 1917 constitutional convention on land reform.
U.S. President Taft left the decision of whether to recognize the new government to the incoming president, Woodrow Wilson. Despite the urging of American ambassador Henry Lane Wilson, who had played a key role in the coup d'état, President Wilson declined to recognize Huerta's government; he sidelined the ambassador by sending a "personal representative", John Lind, a progressive who sympathized with the Mexican revolutionaries, and eventually recalled Ambassador Wilson. The United States lifted the arms embargo imposed by Taft in order to supply weapons to the landlocked rebels; under the complete embargo, Huerta had still been able to receive shipments from the British by sea. Wilson urged European powers not to recognize Huerta's government and attempted to persuade Huerta to call prompt elections "and not present himself as a candidate". The United States offered Mexico a loan on the condition that Huerta accept the proposal. He refused. Lind "clearly threatened a military intervention in case the demands were not met".
In the summer of 1913, Mexican conservatives who had supported Huerta sought a constitutionally elected, civilian alternative to Huerta, brought together in a body called the National Unifying Junta. Political parties proliferated in this period, a sign that democracy had taken hold, and there were 26 by the time of the October congressional elections. From Huerta's point of view, the fragmentation of the conservative political landscape strengthened his own position. For the country's conservative elite, "there was a growing disillusionment with Huerta, and disgust at his strong-arm methods." Huerta closed the legislature on 26 October 1913, having the army surround its building and arrest congressmen perceived to be hostile to his regime. Despite that, congressional elections went ahead, but given that congress was dissolved and some members were in jail, opposition candidates' fervor disappeared. The sham election "brought home to [Woodrow] Wilson's administration the fatuity of relying on elections to demonstrate genuine democracy." The October 1913 elections were the end of any pretension to constitutional rule in Mexico, with civilian political activity banned. Prominent Catholics were arrested and Catholic newspapers were suppressed.
Huerta militarized Mexico to a greater extent than it already was. When Huerta seized power in 1913, the army had on the books approximately 50,000 men, but Huerta mandated that the number rise to 150,000, then 200,000 and, finally, in spring 1914, 250,000. Raising that many men in so short a time could not be done with volunteers, and the army resorted to the leva, forced conscription. The revolutionary forces, by contrast, had no problem with voluntary recruitment. Most Mexican men avoided government conscription at all costs, and the ones dragooned into the forces were sent to areas far away from home and were reluctant to fight. Conscripts deserted, mutinied, and attacked and murdered their officers.
In April 1914 American opposition to Huerta culminated in the seizure and occupation of the port of Veracruz by U.S. Marines and sailors. Initially intended to prevent a German merchant vessel from delivering a shipment of arms to the Huerta regime, the muddled operation evolved into a seven-month stalemate resulting in the deaths of 193 Mexican soldiers, 19 American servicemen and an unknown number of civilians. The German ship landed its cargo—largely American-made rifles—in a deal brokered by American businessmen (at a different port). American forces eventually left Veracruz in the hands of the Carrancistas, but with lasting damage to U.S.-Mexican relations.Alan McPherson (2013). Encyclopedia of U.S. Military Interventions in Latin America, p. 393. ABC-CLIO. Susan Vollmer (2007). Legends, Leaders, Legacies, p. 79.
In Mexico's south, Zapata took Chilpancingo, Guerrero, in mid-March; soon afterward he captured the Pacific coast port of Acapulco, as well as Iguala, Taxco, and Buenavista de Cuéllar. He confronted the federal garrisons in Morelos, the majority of which defected to him with their weapons. Finally he moved against the capital, sending his subordinates into Mexico state.
Constitutionalist forces made major gains against the Federal Army. In early 1914 Pancho Villa had moved against the Federal Army in the border town of Ojinaga, Chihuahua, sending the federal soldiers fleeing across the border to Fort Bliss, Texas. In mid-March he took Torreón, a well-defended railway hub city. After bitter fighting for the hills surrounding Torreón, and later point-blank bombardment, on April 3 Villa's troops entered the devastated city. The Federal Army made a last stand at San Pedro de las Colonias, only to be undone by squabbling between the two commanding officers, General Velasco and General Maas, over who had the higher rank. As of mid-April, Mexico City sat undefended before Constitutionalist forces under Villa. Obregón moved south from Sonora along the Pacific Coast. When his way was blocked by federal gunboats, Obregón attacked these boats with an airplane, an early use of aircraft for military purposes. In early July he defeated federal troops at Orendain, Jalisco, leaving 8,000 federals dead and capturing a large trove of armaments. He was now in a position to arrive at Mexico City ahead of Villa, who was diverted by orders from Carranza to take Saltillo. Carranza, the civilian First Chief, and Villa, the bold and successful commander of the Division of the North, were on the verge of splitting. Obregón, the other highly successful Constitutionalist general, sought to keep the northern coalition intact.
The Federal Army's defeats caused Huerta's position to continue to deteriorate and in mid-July 1914, he stepped down and fled to the Gulf Coast port of Puerto México, seeking to get himself and his family out of Mexico rather than face the fate of Madero. He turned to the German government, which had generally supported his presidency. The Germans were not eager to allow him to be transported into exile on one of their ships, but relented. Huerta carried "roughly half a million marks in gold with him" as well as paper currency and checks. In exile, Huerta sought to return to Mexico via the United States. American authorities arrested him and he was imprisoned in Fort Bliss, Texas. He died in January 1916, six months after going into exile.Richmond, Douglas W., "Victoriano Huerta", in Encyclopedia of Mexico, vol. 1, p. 658.
Huerta's resignation marked the end of an era. The Federal Army, a spectacularly ineffective fighting force against the revolutionaries, ceased to exist.Archer, Christon I. "Military, 1821–1914", in Encyclopedia of Mexico, vol. 2, p. 910. The revolutionary factions that had united in opposition to Huerta's regime now faced a new political landscape with the counter-revolutionaries decisively defeated. The revolutionary armies now contended for power and a new era of civil war began after an attempt at an agreement among the winners at a Convention of Aguascalientes.
Meeting of the winners, then civil war: 1914–1915
With Huerta's ouster in July 1914 and the dissolution of the Federal Army in August, the revolutionary factions agreed to meet and make "a last-ditch effort to avert more intense warfare than that which unseated Huerta".Hart, Revolutionary Mexico, 276. The commander of the Division of the North, Pancho Villa, and the commander of the Division of the Northeast, Pablo González, had drawn up the Pact of Torreón in early July, pushing for a more radical agenda than Carranza's Plan of Guadalupe. It also called for a meeting of revolutionary generals to decide Mexico's political future.
Carranza called for a meeting in October 1914 in Mexico City, which he now controlled with Obregón, but other revolutionaries opposed to Carranza's influence successfully moved the venue to Aguascalientes. The Convention of Aguascalientes did not, in fact, reconcile the various victorious factions in the Mexican Revolution. The break between Carranza and Villa became definitive during the Convention. "Carranza spurned it, and Villa effectively hijacked it. Mexico's lesser caudillos were forced to choose" between those two forces.Knight, "Venustiano Carranza", 573. It was a brief pause in revolutionary violence before another all-out period of civil war ensued.
Carranza had expected to be confirmed in his position as First Chief of revolutionary forces, but his supporters "lost control of the proceedings".Tuñon Pablos, Esperanza. "Mexican Revolution: February 1913 – October 1915" in Encyclopedia of Mexico, 858 Opposition to Carranza was strongest in areas where there were popular and fierce demands for reform, particularly in Chihuahua where Villa was powerful, and in Morelos where Zapata held sway. The Convention of Aguascalientes brought that opposition out in an open forum.
The revolutionary generals of the Convention called on Carranza to resign executive power. Although he agreed to do so, he laid out conditions: he would resign if both Pancho Villa and Emiliano Zapata, his main rivals for power, also resigned and went into exile, and if a so-called pre-constitutionalist government were established "that would take charge of carrying out the social and political reforms the country needs before a fully constitutional government is re-established."Carranza quoted in Enrique Krauze, Mexico: Biography of Power, 349.
Rather than First Chief Carranza being named president of Mexico at the convention, General Eulalio Gutiérrez was chosen for a term of 20 days. The Convention declared Carranza in rebellion against it. Civil war resumed, this time between revolutionary armies that had fought in a united cause to oust Huerta in 1913–1914. During the Convention, Constitutionalist General Álvaro Obregón had attempted to be a moderating force and had been the one to convey the Convention's call for Carranza to resign.
The lines were now drawn. When the Convention forces declared Carranza in rebellion against it, Obregón supported Carranza rather than Villa and Zapata. Villa and Zapata went into a loose alliance. Their forces moved separately on Mexico City, and took it when Carranza's forces evacuated it in December 1914 for Veracruz. The famous picture of Zapata and Villa in the National Palace, with Villa sitting in the presidential chair, is a classic image of the Revolution. Villa is reported to have said to Zapata that the presidential chair "is too big for us".
In practice, the alliance between Villa and Zapata as the Army of the Convention did not function beyond this initial victory against the Constitutionalists. Villa and Zapata left the capital, with Zapata returning to his southern stronghold in Morelos, where he continued to engage in warfare under the Plan of Ayala. Lacking a firm center of power and leadership, the Convention government was plagued by instability. Villa was the real power emerging from the Convention, and he prepared to strengthen his position by winning a decisive victory against the Constitutionalist Army.
Villa had a well-earned reputation as a fierce and successful general, and the combination of forces arrayed against Carranza by Villa, other northern generals and Zapata was larger than the Constitutionalist Army, so it was not at all clear that Carranza's faction would prevail. He did have the advantage of the loyalty of General Álvaro Obregón. Despite Obregón's moderating actions at the Convention of Aguascalientes, even trying to persuade Carranza to resign his position, he ultimately sided with Carranza.Cumberland, Mexican Revolution: Constitutionalist Years, 180.
Another advantage of Carranza's position was the Constitutionalists' control of Veracruz, even though the United States still occupied it. The United States had concluded that both Villa and Zapata were too radical and hostile to its interests and sided with the moderate Carranza in the factional fighting.Cumberland, Mexican Revolution: Constitutionalist Years, 181. The U.S. timed its exit from Veracruz, brokered at the Niagara Falls peace conference, to benefit Carranza and allowed munitions to flow to the Constitutionalists. The U.S. granted Carranza's government diplomatic recognition in October 1915.
The rival armies of Villa and Obregón clashed in April 1915 in the Battle of Celaya, which lasted from 6 to 15 April. The frontal cavalry charges of Villa's forces were met by the shrewd, modern military tactics of Obregón. The victory of the Constitutionalists was complete, and Carranza emerged as the political leader of Mexico with a victorious army to keep him in that position. Villa retreated north. Carranza and the Constitutionalists consolidated their position as the winning faction, with Zapata remaining a threat until his assassination in 1919. Villa also remained a threat to the Constitutionalists, complicating their relationship with the United States when elements of Villa's forces raided Columbus, New Mexico, in March 1916, prompting the U.S. to launch a punitive expedition into Mexico in an unsuccessful attempt to capture him.
Constitutionalists in power under Carranza: 1915–1920
Carranza's 1913 Plan of Guadalupe was narrowly political, designed to unite the anti-Huerta forces in the north. But once Huerta was ousted, the Federal Army dissolved, and former Constitutionalist Pancho Villa defeated, Carranza sought to consolidate his position. The Constitutionalists retook Mexico City, which had been held by the Zapatistas, and held it permanently. He did not take the title of provisional or interim President of Mexico, since in doing so he would have been ineligible to become the constitutional president. Until the promulgation of the 1917 Constitution, his government was framed as the "pre-constitutional government."
In October 1915, the U.S. recognized Carranza's government as the de facto ruling power, following Obregón's victories. This gave Carranza's Constitutionalists legitimacy internationally and access to the legal flow of arms from the U.S. The Carranza government still had active opponents, including Villa, who retreated north.Knight, Alan. "Venustiano Carranza" in Encyclopedia of Latin American History and Culture, vol. 1, pp. 573–575 Zapata remained active in the south and, even though he was losing support, remained a threat to the Carranza regime until his assassination on Carranza's orders on 10 April 1919.Brunk, Samuel. "Emiliano Zapata". In Encyclopedia of Latin American History and Culture, vol. 5, p. 494. Disorder and violence in the countryside were largely due to anti-Carranza forces, but banditry as well as military and police misconduct contributed to the unsettled situation. The government's inability to keep order gave an opening to supporters of the old order headed by Félix Díaz (nephew of former President Porfirio Díaz). Some 36 generals of the dissolved Federal Army stood with Díaz.
The Constitutionalist Army was renamed the "Mexican National Army" and Carranza sent some of its most able generals to eliminate threats. In Morelos, he sent General Pablo González to fight Zapata's Liberating Army of the South.Matute, Álvaro. "Mexican Revolution: May 1917 – December 1920" in Encyclopedia of Mexico, 862. Morelos was very close to Mexico City, so Zapata's control of it and parts of the adjacent state of Puebla made Carranza's government vulnerable. Constitutionalist Army soldiers assassinated Zapata in an ambush in 1919, after their commanding officer tricked Zapata by pretending that he intended to defect to Zapata's side. Carranza sent General Francisco Murguía and General Manuel M. Diéguez to track down and eliminate Villa, but they were unsuccessful. They did capture and execute one of Villa's top men, General Felipe Angeles, the only general of the old Federal Army to join the revolutionaries.Matute, "Mexican Revolution: May 1917 – December 1920", Encyclopedia of Mexico, 863. Revolutionary generals asserted their "right to rule", having been victorious in the Revolution, but "they ruled in a manner which was a credit neither to themselves, their institution, nor the Carranza government. More often than not, they were predatory, venal, cruel and corrupt." The system of central government control over states that Díaz had created over decades had broken down during the revolutionary fighting. Autonomous fiefdoms arose in which governors simply ignored orders by the Carranza government. One of these was Governor of Sonora, General Plutarco Elías Calles, who later joined in the 1920 successful coup against Carranza.
The 1914 Pact of Torreón had contained far more radical language and promises of land reform and support for peasants and workers than Carranza's original plan. Carranza issued the "Additions to the Plan of Guadalupe", which for the first time promised significant reform. He also issued an agrarian reform law in 1915, drafted by Luis Cabrera, sanctioning the return of all village lands illegally seized in contravention of an 1856 law passed under Benito Juárez. The Carranza reform declared village lands were to be divided among individuals, aiming at creating a class of small holders, and not to revive the old structure of communities of communal landholders. In practice, land was transferred not to villagers, but rather redistributed to Constitutional army generals, and created new large-scale enterprises as rewards to the victorious military leaders.Gilly, Adolfo. The Mexican Revolution. New York: The New Press 2005, 185–187
Carranza did not move on land reform, despite his rhetoric. Rather, he returned confiscated estates to their owners. Not only did he oppose large-scale land reform, he vetoed laws that would have increased agricultural production by giving peasants temporary access to lands not under cultivation.Markiewicz, Dana. The Mexican Revolution and the Limits of Agrarian Reform, 1915–1946. Boulder: Lynne Rienner Publisher 1993, p. 31. In places where peasants had fought for land reform, Carranza's policy was to repress them and deny their demands. In the southeast, where hacienda owners remained strong, Carranza sent the most radical of his supporters, Francisco José Múgica in Tabasco and Salvador Alvarado in Yucatán, to mobilize peasants and act as a counterweight to the hacienda owners. After taking control of Yucatán in 1915, Salvador Alvarado organized a large Socialist Party and carried out extensive land reform. He confiscated the large landed estates and redistributed the land in smaller plots to the liberated peasants.Busky, Donald F. Democratic Socialism: A Global Survey. Earlier, Máximo Castillo, a revolutionary brigadier general from Chihuahua, had been frustrated by the slow pace of land reform under the Madero presidency; he ordered the subdivision of six haciendas belonging to Luis Terrazas, which were given to sharecroppers and tenants.
Carranza's relationship with the United States had initially benefited from its recognition of his government, with the Constitutionalist Army being able to buy arms. There is evidence that in 1915 and early 1916 Carranza was seeking a loan from the U.S. with the backing of American bankers, as well as a formal alliance with the U.S. Nationalists in Mexico, however, were seeking a stronger stance against the colossus of the north, taxing foreign holdings and limiting their influence. Villa's raid against Columbus, New Mexico, in March 1916 ended the possibility of a closer relationship with the U.S. Under heavy pressure from public opinion in the U.S. to punish the attackers (stoked mainly by the papers of ultra-conservative publisher William Randolph Hearst, who owned a large estate in Mexico), American President Woodrow Wilson sent General John J. Pershing and around 5,000 troops into Mexico in an attempt to capture Villa.
The U.S. Army intervention, known as the Punitive Expedition, was limited to the western sierras of Chihuahua. From the Mexican perspective, as much as Carranza sought the elimination of his rival Villa, as a Mexican nationalist he could not countenance an extended U.S. incursion into sovereign territory. Villa knew the inhospitable terrain intimately and, operating with guerrilla tactics and deeply entrenched in the mountains of northern Mexico, had little trouble evading his U.S. Army pursuers. American General John J. Pershing could not continue with his unsuccessful mission; declaring victory, the troops returned to the U.S. after nearly a year and were shortly thereafter deployed to Europe when the U.S. entered World War I on the side of the Allies. The Punitive Expedition not only damaged the fragile United States-Mexico relationship but also caused a rise in anti-American sentiment among Mexicans. Carranza asserted Mexican sovereignty and forced the U.S. to withdraw in 1917.
With the outbreak of World War I in Europe in 1914, foreign powers with significant economic and strategic interests in Mexico—particularly the U.S., Great Britain and Germany—made efforts to sway Mexico to their side, but Mexico maintained a policy of neutrality. In the Zimmermann Telegram, a coded cable from the German government to Carranza's government, Germany attempted to draw Mexico into war with the United States, which was itself neutral at the time. Germany hoped to draw American troops away from deployment to Europe and, in the event of a German victory, to reward Mexico by returning the territory it had lost to the U.S. in the Mexican–American War. Carranza did not pursue this policy, but the leaking of the telegram pushed the U.S. into war against Germany in 1917.
1917 Constitution
The Constitutionalist Army fought in the name of the 1857 Constitution, which had been promulgated by liberals during the Reform era and had sparked a decade-long armed conflict between liberals and conservatives. In contrast, the 1917 Constitution came at the culmination of revolutionary struggle. Drafting a new constitution was not a given at the outbreak of the Revolution. Carranza's 1913 Plan of Guadalupe was a narrow political plan to unite Mexicans against the Huerta regime and named Carranza as the head of the Constitutionalist Army. Increasingly, revolutionaries called for radical reform. Carranza had consolidated power, and his advisers persuaded him that a new constitution would accomplish the incorporation of major reforms better than a piecemeal revision of the 1857 constitution.Niemeyer, E. V. Revolution at Querétaro: The Mexican Constitutional Convention of 1916–1917. Austin: University of Texas Press 1974, 26–27
In 1916 Carranza was only acting president, and the expectation was that presidential elections would be held. He called for a constituent congress to draft a new document based on liberal and revolutionary principles. Labor had supported the Constitutionalists, and its Red Battalions had fought against the Zapatistas, the peasant revolutionaries of Morelos. As revolutionary violence subsided in 1916, leaders of the Constitutionalist faction met in Querétaro to revise the 1857 constitution. The delegates were elected by jurisdiction and population, excluding those who had served the Huerta regime or who had continued to follow Villa after the split with Carranza, as well as Zapatistas. The election of delegates was intended to frame the creation of the new constitution as the result of popular participation. Carranza provided a draft revision for the delegates to consider.
Once the convention was in session, following disputes over the seating of delegates, the delegates reviewed Carranza's draft constitution. That document was a minor revision of the 1857 constitution and included none of the social, economic, and political demands for which revolutionary forces had fought and died. The convention was divided between conservatives, mostly politicians who had supported Madero and then Carranza, and progressives, who were soldiers who had fought in revolutionary battles. The progressives, deemed radical Jacobins by the conservatives, "sought to integrate deep political and social reforms into the political structure of the country",Gilly, The Mexican Revolution, 232. making into law principles for which many of the revolutionaries had fought.
The Mexican Constitution of 1917 was strongly nationalist, giving the government the power to expropriate foreign ownership of resources and enabling land reform (Article 27). It also had a strong code protecting organized labor (Article 123) and extended state power over the Roman Catholic Church in Mexico in its role in education (Article 3).
Villistas and Zapatistas were excluded from the Constituent Congress, but their political challenge pushed the delegates to radicalize the Constitution, which in turn was far more radical than Carranza himself. While he was elected constitutional president in 1917, he did not implement its most revolutionary elements, particularly those dealing with land reform. Carranza came from the old Porfirian landowning class and was repulsed by peasants' demands for redistribution of land and their expectation that seized land would not revert to its previous owners.
Although revolutionary generals were not formal delegates to the convention, Álvaro Obregón indirectly, then directly, sided with the progressives against Carranza. In historian Frank Tannenbaum's assessment, "The Constitution was written by the soldiers of the Revolution, not by the lawyers, who were there [at the convention], but were generally in opposition."Tannenbaum, Frank. Peace by Revolution, 166. The constitution was drafted and ratified quickly, in February 1917. In December 1916, Villa had captured the major northern city of Torreón, with Obregón especially realizing that Villa was a continuing threat to the Constitutionalist regime. Zapata and his peasant followers in Morelos also never put down their guns and remained a threat to the government in Mexico City. Incorporating radical aspects of Villa's program and the Zapatistas' Plan of Ayala, the constitution became a way to outflank the two opposing revolutionary factions.
Carranza was elected president under the new constitution, and once formally in office, largely ignored or actively undermined the more radical aspects of the constitution. Obregón returned to Sonora and began building a power base that would launch his presidential campaign in 1919, which included the new labor organization headed by Luis N. Morones, the Regional Confederation of Mexican Workers (CROM). Carranza increasingly lost support of labor, crushing strikes against his government. Carranza did not move forward on land reform, fueling increasing opposition from peasants. In an attempt to suppress the continuing armed opposition conflict in Morelos, Carranza sent General Pablo González with troops. Going further, Carranza ordered the assassination of Emiliano Zapata in 1919. It was a huge blow, but Zapatista General Genovevo de la O continued to lead the armed struggle there.
Emiliano Zapata and the Revolution in Morelos
From the late Porfiriato until his assassination by an agent of President Carranza in 1919, Emiliano Zapata played an important role in the Mexican Revolution, the only revolutionary of first rank from southern Mexico.Harris and Sadler, The Secret War in El Paso, ix His home territory in Morelos was of strategic importance just south of Mexico City. Of the revolutionary factions, his was the most homogeneous, with most fighters being free peasants and only a few peons from haciendas. With no industry to speak of in Morelos, there were no industrial workers in the movement and no middle-class participants, though a few intellectuals supported the Zapatistas. The Zapatistas' armed opposition movement just south of the capital had to be heeded by those in power in Mexico City. Unlike northern Mexico, which was close to the U.S. border and had access to arms purchased there, the Zapatista territory in Morelos was geographically isolated from sources of arms. The Zapatistas did not appeal for support to international interests nor play a role in international politics the way Pancho Villa, the other major populist leader, did. The movement's goal was land reform in Morelos and the restoration of the rights of communities. The Zapatistas were divided into guerrilla fighting forces that joined together for major battles before returning to their home villages. Zapata was not a peasant himself but led peasants in his home state in regionally concentrated warfare to regain village lands and return to subsistence agriculture. Morelos was the only region where land reform was enacted during the years of fighting.
Zapata initially supported Madero, since his Plan de San Luis Potosí had promised land reform. But Madero negotiated a settlement with the Díaz regime that left much of its power structure intact. Once in office in November 1911, Madero did not move on land reform, prompting Zapata to rebel against him and draft the Plan of Ayala (1911).Womack, John Jr., Zapata and the Mexican Revolution (1968)McNeely, John H. "Origins of the Zapata revolt in Morelos." Hispanic American Historical Review (1966): pp. 153–169.
After Madero's overthrow and murder, Zapata disavowed his previous admiration for Pascual Orozco and directed warfare against the Huerta government, as did northern states of Mexico in the Constitutionalist movement, but Zapata did not ally or coordinate with it. With the defeat of Huerta in July 1914, Zapata loosely allied with Pancho Villa, who had split from Venustiano Carranza and the Constitutionalist Army. The loose Zapata-Villa alliance lasted until Obregón decisively defeated Villa in a series of battles in 1915, including the Battle of Celaya. Zapata continued to oppose the Constitutionalists, but lost support in his own area and attempted to entice defectors back to his movement. That was a fatal error. He was ambushed and killed on 10 April 1919 by agents of now President Venustiano Carranza.Brunk, Samuel. "Emiliano Zapata" vol. 5, p. 494. Photos were taken of his corpse, demonstrating that he had indeed been killed.
Although Zapata was assassinated, the agrarian reforms that peasants themselves had enacted in Morelos were impossible to reverse, and the central government came to terms with that state of affairs. Zapata had fought for land and for those who tilled it in Morelos, and he had succeeded. His credentials as a steadfast revolutionary made him an enduring hero of the Revolution, and his name and image were invoked in the 1994 uprising in Chiapas by the Zapatista Army of National Liberation.
The last successful coup: 1920
Even as Carranza's political authority was waning, he attempted to impose Mexico's ambassador to the U.S., Ignacio Bonillas, as his successor. Under the Plan of Agua Prieta, a triumvirate of Sonoran generals, Álvaro Obregón, Plutarco Elías Calles, and Adolfo de la Huerta, backed by elements of the military and labor supporters in the CROM, rose in rebellion against Carranza, the last successful coup of the revolution.Knight, "Venustiano Carranza", vol. 1, pp. 574–575 Carranza fled Mexico City by train toward Veracruz, then continued on horseback and died in an ambush, perhaps an assassination, though possibly by suicide. His attempt to impose his choice of successor was considered a betrayal of the Revolution, and his remains were not placed in the Monument to the Revolution until 1942.Benjamin, La Revolución, p. 91.
"Obregón and the Sonorans, the architects of Carranza's rise and fall, shared his hard headed opportunism, but they displayed a better grasp of the mechanisms of popular mobilization, allied to social reform, that would form the bases of a durable revolutionary regime after 1920."Knight, "Venustiano Carranza", vol. 1, p. 574. The interim government of Adolfo de la Huerta negotiated Pancho Villa's surrender in 1920, rewarding him with an hacienda where he lived in peace until he floated political interest in the 1924 election. Villa was assassinated in July 1923.Wasserman, Mark. "Francisco "Pancho" Villa" in Encyclopedia of Latin American History and Culture, vol. 5, p. 416. Álvaro Obregón was elected president in October 1920, the first of a string of revolutionary generals – Calles, Rodríguez, Cárdenas, and Ávila Camacho—to hold the presidency until 1946, when Miguel Alemán, the son of a revolutionary general, was elected.
Consolidation of the Revolution: 1920–1940
The period 1920–1940 is generally considered to be one of revolutionary consolidation, with the leaders seeking to return Mexico to the level of development it had reached in 1910, but under new parameters of state control. Authoritarian tendencies rather than liberal democratic principles characterized the period, with generals of the revolution holding the presidency and designating their successors.Gentleman, Judith, "Revolutionary Consolidation, 1920–1940". Encyclopedia of Latin American History and Culture, v. 4, 16–17 Revolutionary generals continued to revolt against the new political arrangements, particularly at the juncture of an election. General Adolfo de la Huerta rose in rebellion in 1923, contesting Obregón's choice of Calles as his successor; Generals Arnulfo Gómez and Francisco Serrano revolted in 1927, contesting Obregón's bid for a second term as president; and General José Gonzalo Escobar revolted in 1929 against Calles, who remained the power behind the presidency after the assassination of Obregón in 1928. All these revolts were unsuccessful. In the late 1920s, anticlerical provisions of the 1917 Constitution were stringently enforced, leading to a major grassroots uprising against the government, the bloody Cristero War of 1926–1929. Although the period is characterized as one of consolidation of the Revolution, disputes over who ruled Mexico and over the policies the government pursued were still settled with violence.
Sonoran generals in the presidency: 1920–1928
There is no consensus on when the Revolution ended, but the majority of scholars consider the 1920s and 1930s to lie on the continuum of revolutionary change.Meyer, Jean. "Revolution and Reconstruction in the 1920s." Mexico since Independence, Cambridge: Cambridge University Press 1991, 201–240Benjamin, Thomas. "Rebuilding the Nation". The Oxford History of Mexico. New York: Oxford University Press 2000, 467–502 The end date of revolutionary consolidation has also been set at 1946, with the last general serving as president and the political party morphing into the Institutional Revolutionary Party.Knight, Alan. "The rise and fall of Cardenismo, c. 1930–1946". Mexico since Independence, Cambridge: Cambridge University Press 1991, 241–320
In 1920, Sonoran revolutionary general Álvaro Obregón was elected President of Mexico and inaugurated in December 1920, following the coup he engineered with revolutionary generals Plutarco Elías Calles and Adolfo de la Huerta. The coup was supported by other revolutionary generals against the civilian Carranza, who had attempted to impose another civilian, Ignacio Bonillas, as his successor. Obregón no longer had to contend with two major revolutionary leaders: De la Huerta had persuaded revolutionary general Pancho Villa to lay down his arms against the regime in return for a large estate in Durango, in northern Mexico, and Carranza's agents had assassinated Emiliano Zapata in 1919, removing a consistent and effective opponent. Some counterrevolutionaries in Chiapas laid down their arms. The only pro-Carranza governor to resist the regime change was Esteban Cantú in Baja California, suppressed by northern revolutionary general Abelardo Rodríguez,Matute, "Mexican Revolution: May 1917–1920". Encyclopedia of Mexico. Chicago: Fitzroy Dearborn 1997, 864 later to become president of Mexico. Although the 1917 Constitution was not fully implemented and parts of the country were still controlled by local strongmen, caciques, Obregón's presidency did begin to consolidate parts of the revolutionary agenda, including expanded rights for labor and the peasantry.
Obregón was a pragmatist rather than an ideologue, so domestically he had to appeal to both the left and the right to ensure Mexico would not fall back into civil war. Securing labor rights built on Obregón's existing relationship with urban labor: the Constitutionalists had made an alliance with labor during the revolution, mobilizing the Red Battalions against Zapata's and Villa's forces, and this alliance continued under Obregón's and Calles's terms as president. Obregón also focused on land reform, having governors in various states push forward the reforms promised in the 1917 constitution. These were, however, quite limited. Former Zapatistas still had strong influence in the post-revolutionary government, so most of the reforms began in Morelos, the birthplace of the Zapatista movement.
Obregón's government faced the task of stabilizing Mexico after a decade of civil war. With the revolutionary armies having defeated the old federal army, Obregón now dealt with military leaders who were used to wielding power violently. Enticing them to leave the political arena in exchange for material rewards was one tactic; De la Huerta had already used it successfully with Pancho Villa. Not trusting Villa to remain on the sidelines, Obregón had him assassinated in 1923.Dulles, John F.W. Yesterday in Mexico: A Chronicle of the Revolution, 1919–1936. Austin: University of Texas Press 1961, 177–180 In 1923 De la Huerta rebelled against Obregón and his choice of Calles as successor to the presidency, leading to a split in the military. The rebellion was suppressed, and Obregón began to professionalize the military, reducing the number of troops by half and forcing officers to retire. Obregón (1920–24), followed by Calles (1924–28), viewed bringing the armed forces under state control as essential to stabilizing Mexico.Serrano, Mónica. "Military, 1914–1996". Encyclopedia of Mexico. Chicago: Fitzroy Dearborn 1997, 911 Downsizing the military freed up state funds for other priorities, especially education.Matute, "Álvaro Obregón", 1032. Obregón's Minister of Education, José Vasconcelos, initiated innovative and broad educational and cultural programs.
Obregón sought diplomatic recognition from the U.S. in order to be considered the legitimate holder of power. He believed that once U.S. recognition was secured, other nations would follow suit. The U.S. and other foreign interests were alarmed by provisions in the new constitution empowering the government to expropriate private property, and foreigners also had claims against Mexico for damage to their property during the decade of turmoil. American and British entrepreneurs had developed the petroleum industry in Mexico and had claims to oil still in the ground, and foreigners held extensive agricultural land that was now at risk of being distributed to landless Mexicans. Obregón and the U.S. entered into talks to sort out these issues, resulting in the Bucareli Treaty, concluded in 1923, under which the U.S. recognized Obregón's government.Dulles, Yesterday in Mexico, 158–172 In Mexico the agreement was controversial, perceived as making major concessions to the U.S. and undermining revolutionary goals, but Obregón pushed it through the legislature and gained American recognition. When his fellow Sonoran general De la Huerta rebelled later in 1923, the U.S. supplied Obregón with arms to put down the challenge.Matute, Álvaro. "Álvaro Obregón". Encyclopedia of Mexico. Chicago: Fitzroy Dearborn 1997, 1032–1033
In an attempt to buffer his regime against further coups, Calles began arming peasants and factory workers with surplus weapons. He continued other reforms pushed by his predecessor, but Calles was virulently anticlerical, and unlike Obregón, who largely avoided direct conflict with the Catholic Church, Calles as president enforced the anticlerical provisions of the 1917 Constitution. In late 1924 he also put into effect a largely secular national school system to combat church influence. After two years of the state crackdown, the Catholic Church protested by going on its version of a strike, refusing to baptize, marry, give last rites, or give communion to parishioners. Many peasants joined in opposition to the state's crackdown on religion, beginning the Cristero War, named for the rebels' clarion call Viva Cristo Rey ("long live Christ the King"). It was a lengthy, major uprising against the revolutionary vision of the Mexican state in central Mexico, not a short-lived, localized rebellion. Calles's stringent enforcement of anticlerical laws had an impact on the presidential succession: his ally and chosen successor, ex-president and president-elect Obregón, was assassinated by a religious fanatic in 1928, plunging the political system into a major crisis. By law Calles could not be re-elected, but a solution needed to be found to keep political power in the hands of the revolutionary elite and prevent the country from reverting to civil war.
Political crisis and the founding of the revolutionary party
With the 1917 Constitution enshrining the principle of "no re-election", revolutionaries who had fought for the principle could not ignore it. Elections, as periods of political transition, were when disgruntled aspirants to the presidency made their move. The Sonoran triumvirate had done so in 1920, and in 1923 De la Huerta rebelled against Obregón's choice of Calles rather than himself as candidate. When Calles designated ex-president Obregón to succeed him, permitted by a constitutional amendment, the principle of no re-election was technically adhered to, but there was the clear possibility of an endless alternation between the two powerful men. Another rebellion of revolutionary generals broke out in 1927, led by Francisco Serrano and Arnulfo R. Gómez; it was suppressed and the leaders executed. Obregón was elected, but assassinated before he took office, plunging the country into a political crisis over presidential succession. Since the Mexican Revolution had been sparked by the 1910 re-election of Díaz, Calles and others were well aware that the situation could spiral out of control. This political crisis came while the bloody Cristero War raged across central Mexico. A managed political solution to the crisis of presidential succession had to be found. The answer was the founding of the Partido Nacional Revolucionario in 1929, when Calles brought together the various factions, mainly regional strongmen. Calles himself could not become president again, but he remained a powerful figure, the Jefe Máximo, in a period called the Maximato (1928–34). Three men (Emilio Portes Gil, Pascual Ortiz Rubio, and Abelardo L. Rodríguez) held the presidency in what would have been Obregón's second term. To prevent alternation of the presidency by men who had previously held the office, the constitution was revised, reverting to the strict principle of no re-election.Matute, "Álvaro Obregón", 1032–1033
An achievement in this period was the 1929 peace agreement between the Catholic Church and the Mexican state, brokered by Dwight Morrow, U.S. Ambassador to Mexico. The church-state conflict went into hibernation following the designation of General Manuel Ávila Camacho to succeed President Lázaro Cárdenas in 1940.
Revitalization under Lázaro Cárdenas: 1934–1940
In 1934, Calles chose Lázaro Cárdenas as the PNR's presidential candidate. Unlike his three predecessors, who were controlled by Calles, Cárdenas threw off the jefe máximo's power and set about implementing a revitalized revolutionary agenda. He vastly expanded agrarian reform and expropriated commercial landed estates; nationalized the railways and the petroleum industry; kept the peace with the Catholic Church as an institution; put down a major rebellion by Saturnino Cedillo; founded a new political party with sectoral representation for industrial workers, peasants, urban office workers, and the army; engineered the succession of his hand-picked candidate; and then, perhaps the most radical act of all, stepped away from presidential power, letting his successor, General Manuel Ávila Camacho, exercise full presidential power.
Cárdenas came from the state of Michoacán, but during the revolution he fought in the north, rising to the rank of general and becoming part of the northern dynasty. He returned to Michoacán after the revolution and implemented a number of reforms that were precursors of those he enacted as president. With Calles's founding of the PNR, Cárdenas became part of the party apparatus. Calles had no idea that Cárdenas was as politically savvy as he turned out to be, managing to oust Calles from his role as the power behind the presidency and forcing him into exile. Calles had increasingly moved to the political right, abandoning support for land reform. Peasants had joined the revolution in the hope that land reform would be enacted, and the constitution had empowered the state to expropriate land and other resources. During his presidency, Cárdenas expropriated and distributed land and organized peasant leagues, incorporating them into the political system. Although in theory peasants and workers could come together as a single powerful sector, the PNR ruled that peasant organizations were to be separate from industrial labor and that organizing the countryside should be under the control of the party.Knight, Alan. "The Rise and Fall of Cardenismo", 275.
Cárdenas encouraged working class organizations and sought to bring them into the political system under state control. The CROM, an umbrella labor organization, had declined in power with the ouster of Calles. Radical labor leader Vicente Lombardo Toledano helped create the Confederation of Mexican Workers (CTM), a nationalist, autonomous, non-politically affiliated organization. Communists in the labor movement were aligned with the Moscow-controlled Communist International, and Cárdenas sought to strengthen the Mexican labor organization aligned with the Mexican revolutionary state.
His first acts of reform in 1935 were aimed at the peasantry. Former strongmen within the landowning community were losing political power, so he increasingly sided with the peasants. He also tried to further centralize the government's power by removing regional caciques, allowing him to push reforms through more easily. To fill the political vacuum, Cárdenas helped form PNR-sponsored peasant leagues, empowering both peasants and the government. Other reforms included nationalization of key industries such as petroleum and the railroads. To appease workers, Cárdenas furthered provisions to end debt peonage and company stores, which were largely eliminated under his rule except in the most remote areas of Mexico. To prevent conservative factions in the military from plotting and to put idle soldiers to work, Cárdenas mobilized the military to build public works projects. That same year another Cristero revolt occurred, caused in part by Cárdenas's mandate for secular education early in his presidency, in 1934. The Cristeros were not supported by the Catholic hierarchy, which told the rebels to surrender themselves to the government, and Cárdenas quashed the revolt.
In the next year, 1936, to further stabilize his rule, Cárdenas further armed the peasants and workers and began to organize them into formal militias. This proved useful later in his presidency, when the militias came to his aid during an attempted military coup in 1938. Seeing no opposition from the bourgeoisie, the generals, or conservative landlords, in 1936 Cárdenas began building collective agricultural enterprises called ejidos to give peasants access to land, mostly in southern Mexico. These appeased some agriculturalists, but many peasants would have preferred individual plots of land to which they held title. The aim of the ejidos was to replace the large-scale landed estates, many of which were foreign owned. Andrés Molina Enríquez, the intellectual father of Article 27 of the constitution, which empowered the state to expropriate property, criticized the move, saying that the state itself was replacing private landowners while the peasants remained tied to the land. Ejidos were not very effective at feeding large populations, causing an urban food crisis. To alleviate this, Cárdenas co-opted the support of capitalists to build large commercial farms to feed the urban population. This put the final nail in the coffin of the feudal hacienda system, making Mexico by 1940 a mixed economy combining agrarian socialism and industrial capitalism.
Cárdenas dissolved the revolutionary party founded by Calles and established a new party, the Partido de la Revolución Mexicana, organized by sectors. There were four: industrial workers, peasants, middle-class workers (largely employed by the government), and the army. Bringing the military into the party structure was controversial and privately opposed by General Manuel Ávila Camacho, who succeeded Cárdenas and, in the final reformulation of the party, removed the military sector.Camp, Mexico's Military on the Democratic Stage, 22 Cárdenas calculated that this would allow the military to be managed politically, keep it from intervening independently in politics, and prevent it from becoming a separate caste. The new party organization was a resurrection of corporatism, essentially organization by estates or interest groups.Weston, Charles H., Jr. "The Political Legacy of Lázaro Cárdenas", The Americas vol. 39, no. 3 (Jan. 1963), 388. The party was reorganized once again in 1946 as the Institutional Revolutionary Party, which kept sectoral representation but eliminated the military as a sector.
Cárdenas left office in 1940 at age 45. His departure marked the end of the social revolution and ushered in half a century of relative stability. However, in the assessment of historian Alan Knight, the 1940 election was "a requiem for Cardenismo: it revealed that hopes of a democratic succession were illusory; that electoral endorsement of the regime had to be manufactured; and that the Cardenista reforms, while creating certain loyal clienteles (some loyal from conviction, some by virtue of co-optation) had also raised up formidable opponents who now looked to take the offensive."Knight, "The Rise and Fall of Cardenismo", 301–302 Cárdenas had a long and illustrious post-presidency, remaining influential in political life and considered "the moral conscience of the Revolution".Krauze, Enrique, Mexico: Biography of Power, 480 He and his supporters carried "reforms further than any of their predecessors in Mexico or their counterparts in other Latin American countries."Hamilton, Nora. "Lázaro Cárdenas". Encyclopedia of Mexico, 195.
Characteristics
Violence in the Revolution
The most obvious acts of violence during the Revolution involved soldiers in combat or summary executions. The fighting during the Maderista phase of the Revolution (1910–11) did not result in a large number of casualties, but during the Huerta era the Federal Army summarily executed rebel soldiers, and the Constitutionalist Army executed Federal Army officers. There were no prisoner-of-war internment camps; often rank-and-file soldiers of a losing faction were incorporated as troops by those who had defeated them. The revolutionaries were not ideologically driven, so they did not target their rivals for reprisals, nor did they wage a "revolutionary terror" against them after they triumphed, in contrast to the French and Russian Revolutions. An exception to this pattern of behavior in the history of Mexico occurred in the aftermath of its nineteenth-century wars against indigenous rebels.
The death toll of the combatants was not as large as it might have been because the opposing armies rarely engaged in open-field combat. The revolutionaries initially operated as guerrilla bands, launching hit-and-run strikes against the enemy. They drew the Federal Army into combat on terms favorable to themselves, avoiding open battle and attacks on heavily defended positions. They acquired weapons and ammunition abandoned by Federal forces and commandeered resources from landed estates to feed their men. The Federal Army was unable to stray from the railway lines that transported it to contested areas and could not pursue the revolutionaries when attacked.
The death toll and the displacement of the population due to the Revolution are difficult to calculate. Out of a population of about 15 million, the losses were high, but numerical estimates vary greatly. Perhaps 1.5 million people died, and nearly 200,000 refugees fled abroad, especially to the United States.Robert McCaa (2001). "Missing millions: the human cost of the Mexican Revolution" . Mexican Studies 19(2). The violence of the Revolution caused Mexican immigration to the United States to increase five-fold from 1910 to 1920, with 100,000 Mexicans entering the United States by 1920, seeking better economic conditions and social and political stability.
The violence of the Revolution did not involve only the largely male combatants; it also affected civilian populations of men, women, and children. Some ethnic groups were deliberately targeted, most particularly the Chinese in northern Mexico. During the Maderista campaign in northern Mexico, there was anti-Chinese violence, notably the May 1911 massacre at Torreón, a major railway hub.Jacques, Leo M. Dambourges. Autumn 1974 "The Chinese Massacre in Torreon (Coahuila) in 1911". Arizona and the West, University of Arizona Press, volume 16, no. 3 1974, pp. 233–246 Anti-Chinese sentiment had already been espoused in the Liberal Party Program of 1905.
Landed estates, many of which were owned by foreigners, were targeted for looting; their crops and animals were sold or used by the revolutionaries, and the owners of some estates were killed. In the wake of the Revolution, a joint American-Mexican Claims Commission assessed the damage and the compensation due.Feller, A.H. The Mexican Claims Commissions, 1823–1934: A Study in the Law and Procedure of International Tribunals. New York: The MacMillan Company, 1935
Cities were the prizes in revolutionary clashes, and many of them were severely damaged. A notable exception was Mexico City, which sustained damage only during the days leading up to the ouster and murder of Madero, when rebels shelled the central core of the capital, killing many civilians and animals. The rebels launched the attack in an attempt to convince observers in Mexico and abroad that Madero had completely lost control. The capital changed hands several times during the post-Huerta period. When the Conventionists held power, Villa and his men committed acts of violence with impunity against major supporters of Huerta and those considered revolutionary traitors. Villa's terror was not on the scale of the reigns of terror of the French and Bolshevik Revolutions, but the assassinations and the kidnappings of wealthy people for ransom damaged Villa's reputation and cooled the U.S. government's enthusiasm for him.
Political assassination became a frequent way to eliminate rivals both during and after the Revolution. All of the major leaders of the Revolution were eventually assassinated: Madero in 1913, Zapata in 1919, Carranza in 1920, Villa in 1923, and Obregón in 1928. Porfirio Díaz, Victoriano Huerta, and Pascual Orozco had gone into exile. Believing that he too would be sent into exile, Madero turned himself over to Huerta's custody, but Huerta considered that too dangerous a course, since Madero could have become a rallying point. Huerta did not want to execute Madero publicly; the cover story of Madero and Pino Suárez being caught in crossfire gave Huerta plausible deniability. He needed it, since he had only a thin veil of legitimacy in his ascent to the presidency. The bodies of Madero and Pino Suárez were neither photographed nor displayed, but pictures of Madero's clothing were taken, showing bullet holes in the back. Zapata's death in 1919 was at the hands of Carranza's military. There was no need for a coverup, since Zapata had remained a threat to the Carranza regime. Photos of the dead Zapata were taken and published as proof of his demise, but Carranza was tainted by the deed.
The economic damage caused by the revolution lasted for years. Population losses from military and civilian casualties, the displacement of populations that migrated to safer areas, and the damage to infrastructure all had significant impacts. The nation would not regain the level of development it had reached in 1910 for another twenty years.Wasserman, Mark. "Mexican Revolution". Encyclopedia of Latin American History and Culture, v. 4, 36
The railway lines constructed during the Porfiriato facilitated the movement of men, horses, and artillery and were used extensively by all of the factions. This was far more the case in northern Mexico than in the areas controlled by Zapata. When men and horses were transported by rail, the soldiers rode on the tops of boxcars. Railway lines, engines, and rolling stock were targeted for sabotage, and the rebuilding of tracks and bridges was an ongoing task. Major battles in the north were fought along railway lines or at railway junctions, such as Torreón. Early on, northern revolutionaries also added hospital cars so the wounded could be treated. Horses remained important in troop movements; they were either ridden directly to combat zones or loaded on trains. Infantry also still played a role. Arms purchases, mainly from the United States, gave northern armies almost inexhaustible access to rifles and ammunition so long as they had the means to pay for them. New military technology, particularly machine guns, mechanized death on a large scale. El Paso, Texas, became a major supplier of weaponry to the Constitutionalist Army.Harris and Sadler, The Secret War in El Paso, 87–105
Cultural aspects of the Mexican Revolution
There was considerable cultural production during the Revolution itself, including printmaking, music, and photography, while in the post-revolutionary era, revolutionary themes in painting and literature shaped historical memory and understanding of the Revolution.
Journalism and propaganda
Anti-Díaz publications before the outbreak of the Revolution helped galvanize opposition to him, and he cracked down with censorship. As president, Madero believed in freedom of the press, which helped galvanize opposition to his own regime. The Constitutionalists had an active propaganda program, paying writers to draft appeals to opinion in the U.S. and to disparage the reputations of Villa and Zapata as reactionaries, bandits, and unenlightened peasants. El Paso, Texas, just across from Ciudad Juárez, was an important site for revolutionary journalism in English and Spanish; Mariano Azuela wrote Los de Abajo ("The Underdogs") in El Paso and published it there in serial form.Dorado Romo, David. "Charting the Legacy of the Revolution: How the Mexican Revolution Transformed El Paso's Cultural and Urban Landscape" in Open Borders to a Revolution, Washington D.C. 2013, 156–157 The alliance Carranza made with the Casa del Obrero Mundial helped fund the publication Vanguardia, which appealed to the urban working class, particularly in early 1915 before Obregón's victories over Villa and González's over Zapata. Once the armed opposition was less of a threat, Carranza dissolved Vanguardia as a publication.Lear, John. (2017) Picturing the Proletariat: Artists and Labor in Revolutionary Mexico, 1908–1940. Austin: University of Texas Press, 60
Meanwhile, in the United States, Mexican-Americans created newspapers that denounced Díaz's regime and professed support for the revolution. Several were written in Spanish, most notably La Crónica ("The Chronicle"), founded by Nicasio Idar and his family in Laredo, Texas, a border town that saw much action. La Crónica and other such newspapers mostly covered stories about the Mexican-American and Tejano communities of the border regions while also supporting the revolution. These papers were known as fronterizos ("of the border"), dedicated to describing life in the border regions and the long-rooted history and culture of Mexican-Americans there; people living along the international border were themselves called fronterizos (border-dwellers). The fronterizo press set out with two goals: to decry the racism and discrimination experienced by Mexicans and Mexican-Americans in the United States, and to support the ongoing reforms in Mexico, equating the tyranny of Porfirio Díaz with that of white Texan politicians. A month after the start of the conflict, Idar argued in La Crónica that Mexican immigrants and American-born Mexican-Americans should be inspired by the revolution's promise of land reform to fight for more civil rights in the United States. The fronterizo press worked to produce a nationalistic perspective that placed the borderlands as an integral part of Mexican culture and history and as crucial to the revolution, since the borderlands and their communities had been ignored by both the United States and Mexican governments.
Prints and cartoons
During the late Porfiriato, political cartooning and printmaking developed as popular art forms. The best-known printmaker of the period is José Guadalupe Posada, whose satirical prints, particularly those featuring skeletons, circulated widely.Barajas, Rafael. Myth and Mitote: The Political Caricature of José Guadalupe Posada and Manuel Alfonso Manila. Mexico City: Fondo de Cultura Económica, 2009 Posada died in early 1913, so his caricatures depict only the early Revolution. One, published in El Vale Panchito and entitled "oratory and music", shows Madero, in a dapper suit, atop a pile of papers and the Plan of San Luis Potosí, haranguing a dark-skinned Mexican whose large sombrero bears the label pueblo (people). The caption reads "offerings to the people to rise to the presidency."Ades, Dawn and Alison McClean, Revolution on Paper: Mexican Prints 1910–1960. Austin: University of Texas Press 2009, p. 18. Political cartoons by Mexicans as well as Americans caricatured the situation in Mexico for a mass readership.Britton, John A. Revolution and Ideology Images of the Mexican Revolution in the United States. Louisville: The University Press of Kentucky, 1995. Political broadsides, including songs of the revolutionary period, were also a popular form of visual art. After 1920, Mexican muralism and printmaking were two major forms of revolutionary art. Prints were easily reproducible and circulated widely, while murals commissioned by the Mexican government required a journey to view them. Printmaking "emerged as a favored medium, alongside government sponsored mural painting among artists ready to do battle for a new aesthetic as well as a new political order."Ades, Dawn. "The Mexican Printmaking Tradition, c. 1900–1930" in Revolution on Paper, p. 11. Diego Rivera, better known for his painting than his printmaking, reproduced in a 1932 print his depiction of Zapata from the murals in the Cortés Palace in Cuernavaca.Ades, Revolution on Paper, catalogue 22, pp. 76–77
Photography, motion pictures, and propaganda
The Mexican Revolution was extensively photographed as well as filmed, so that there is a large, contemporaneous visual record. "The Mexican Revolution and photography were intertwined."Chilcote, Ronald H. "Introduction" Mexico at the Hour of Combat, p. 9. There was a large foreign viewership for still and moving images of the Revolution. The photographic record is by no means complete, since much of the violence took place in relatively remote places, but the Revolution was a media event covered by photographers, photojournalists, and professional cinematographers. Those behind the lens were hampered by large, heavy cameras that impeded capturing action images, but written text alone was no longer enough: photographs illustrated and verified the written word.
The revolution "depended heavily, from its inception, on visual representations and, in particular, on photographs."Debroise, Olivier. Mexican Suite, p. 177. The large number of Mexican and foreign photographers followed the action and stoked public interest in it. Among the foreign photographers were Jimmy Hare, Otis A. Aultman, Homer Scott, and Walter Horne. Images appeared in newspapers and magazines, as well as postcards.Vanderwood, Paul J. and Frank N. Samponaro. Border Fury: A Picture Postcard Record of Mexico's Revolution and U.S. War Preparedness, 1910–1917. Albuquerque: University of New Mexico Press 1988. Horne was associated with the Mexican War Postcard Company.Debroise, Mexican Suite, p. 178.
The most prominent of the documentary filmmakers were Salvador Toscano and Jesús H. Abitía, and some 80 cameramen from the U.S. filmed as freelancers or as employees of film companies. The footage has been edited and reconstructed into documentary films, Memories of a Mexican (Carmen Toscano de Moreno, 1950) and Epics of the Mexican Revolution (Gustavo Carrera).Pick, Constructing the Image of the Mexican Revolution, p. 2 Principal leaders of the Revolution were well aware of the propaganda element of documentary film making, and Pancho Villa contracted with an American film company to record his leadership on the battlefield for viewers in the U.S. The film has been lost, but the story of its making was interpreted in the scripted HBO film And Starring Pancho Villa as Himself.Pick, Constructing the Image of the Revolution, pp. 41–54 The largest collection of still photographs of the Revolution is the Casasola Archive, named for photographer Agustín Casasola (1874–1938), with nearly 500,000 images held by the Fototeca Nacional in Pachuca. A multivolume history of the Revolution, Historia Gráfica de la Revolución Mexicana, 1900–1960, contains hundreds of images from the era, along with explanatory text.Casasola, Gustavo. Historia Gráfica de la Revolución Mexicana, 1900–1960. 5 vols. Mexico: Editorial F. Tillas, S.A. 1967.
Painting
Venustiano Carranza attracted artists and intellectuals to the Constitutionalist cause. The painter, sculptor, and essayist Gerardo Murillo, known as Dr. Atl, was ardently involved in producing art in the cause of the revolution. He was involved with the anarcho-syndicalist labor organization, the Casa del Obrero Mundial, and there met and encouraged José Clemente Orozco and David Alfaro Siqueiros in producing political art.Lear, Picturing the Proletariat, 56–67 The government of Álvaro Obregón (1920–24) and his Minister of Education, José Vasconcelos, commissioned artists to decorate colonial-era government buildings with murals depicting Mexico's history, many of which focused on aspects of the Revolution. The "Big Three" of Mexican muralism, Diego Rivera, Orozco, and Siqueiros, produced narratives of the Revolution, shaping historical memory and interpretation.Coffey, Mary. How a Revolutionary Art Became Official Culture: Murals, Museums, and the Mexican State. Durham: Duke University Press 2012.Folgarait, Leonard. Mural Painting and Social Revolution in Mexico, 1920–1940. Cambridge: Cambridge University Press, 1998.
Music
A number of traditional Mexican songs, or corridos, were written at the time; serving as a kind of news report, they also functioned as propaganda and memorialized aspects of the Mexican Revolution.Herrera Sobek, María, The Mexican Corrido: A Feminist Analysis. Bloomington: Indiana University Press 1990Simmons, Merle. The Mexican corrido as a source of interpretive study of modern Mexico, 1900–1970. Bloomington: Indiana University Press, 1957 The term Adelitas, an alternative word for soldaderas, comes from a corrido titled "La Adelita". The song "La Cucaracha", with its numerous verses, was popular at the time of the Revolution and has remained so ever since. Published corridos often carried images of particular revolutionary heroes along with the verses.
Literature
Few novels of the Mexican Revolution were written at the time: Mariano Azuela's Los de Abajo (translated as The Underdogs) is a notable one, originally published in serial form in newspapers. Literature is a lens through which to see the Revolution.Rutherford, John D. Mexican society during the Revolution: a literary approach. Oxford: Oxford University Press, 1971. Nellie Campobello is one of the few women writers of the Revolution; her Cartucho (1931) is an account of the Revolution in northern Mexico, emphasizing the role of Villistas, when official discourse was erasing Villa's memory and emphasizing nationalist and centralized ideas of the Revolution.Klahn, Norma. "Nellie Campobello" in Encyclopedia of Mexico. Chicago: Fitzroy Dearborn 1997, p. 187. Martín Luis Guzmán's El águila y el serpiente (1928) and La sombra del caudillo (1929) drew on his experiences in the Constitutionalist Army.Camp, Roderic Ai. "Martín Luis Guzmán" in Encyclopedia of Latin American History and Culture, vol. 3, p. 157. New York: Charles Scribner's Sons, 1996.Perea, Héctor. "Martín Luis Guzmán Franco" in Encyclopedia of Mexico. Chicago: Fitzroy Dearborn, 1997, pp. 622–623. In the fiction of Carlos Fuentes, particularly The Death of Artemio Cruz, the Revolution and its perceived betrayal are key factors in driving the narrative.
Gender
The revolution of 1910 greatly affected gender roles in Mexico, yet it also reinforced a strict separation between the sexes, even though both men and women took part in it. Women were involved by promoting political reform as well as by enlisting in the military. Women engaged in political reform drew up reports outlining the changes people wanted to see in their areas, a type of activism found both inside and outside the cities. Women also enlisted in the military and became teachers in order to contribute to the change they wanted to see after the revolution. At the same time, many men in the military regarded women as prizes; military service gave men a greater sense of superiority over women, and that attitude often led to violence against women, which increased during the conflict. After the revolution, the ideas women had contributed were put on hold for many years. Women had often promoted the establishment of a stronger justice system and ideals of democracy, but the revolution led many to reassert the idea that women belonged in the household, and women were relegated to a lower social position as a result.
Female soldiers during the revolution
Women who had been cast off by their families often joined the military. Their involvement drew scrutiny from some male participants. To avoid sexual abuse, many women made themselves appear more masculine, and they also dressed in a more masculine way in order to gain more experience handling weapons and to learn more about military jobs.
María de Jesús González
One example is María de Jesús González, a secret agent in Carranza's army. She often presented herself as a man in order to complete certain tasks assigned to her, returning to her feminine appearance once they were done.
Rosa Bobadilla
Rosa Bobadilla, by contrast, maintained her feminine appearance throughout her military career. She joined Zapata's forces with her husband, and when he died she was given his title, becoming "Colonel Rosa Bobadilla, widow of Casas". She gave orders to men while continuing to dress as a woman.
Amelio Robles
After the revolution, Amelio Robles continued to look like and identify as a man for the rest of his life. Robles had left home to join the Zapatista forces, and over the course of the war assumed an increasingly masculine identity. After the war, he did not return to his former appearance as other female soldiers had; he carried on with his life as Amelio, continuing to look and act masculine. He re-established himself in the community as a man and was recognized as male on his military documents.
Interpreting the history of the revolution
There is a vast historiography on the Mexican Revolution, offering many different interpretations that have become more fragmented over time. There is consensus that the revolution began in 1910, but no consensus on when it ended. The Constitutionalists defeated their major rivals and called the constitutional convention that drafted the 1917 Constitution, but they did not effectively control all regions. The year 1920 saw the last successful military rebellion, bringing the northern revolutionary generals to power. According to Álvaro Matute, "By the time Obregón was sworn in as president on December 1, 1920, the armed stage of the Mexican Revolution was effectively over."Matute, Álvaro, "Mexican Revolution: May 1917 – December 1920". Encyclopedia of Mexico. Chicago: Fitzroy Dearborn 1997, 862. The year 1940 saw revolutionary general and President Lázaro Cárdenas choose Manuel Ávila Camacho, a moderate, to succeed him. A 1966 anthology by scholars of the revolution was entitled Is the Mexican Revolution Dead?Ross, Stanley R. Is the Mexican Revolution Dead? New York: Knopf 1966. Historian Alan Knight has identified an "orthodox" interpretation of the revolution as a monolithic, popular, nationalist revolution, while revisionism has focused on regional differences and challenged its credentials as a revolution.Knight, Alan, "Mexican Revolution: Interpretations". Encyclopedia of Mexico, 869. One scholar classifies the conflict as a "great rebellion" rather than a revolution.Ruiz, Ramon Eduardo. The Great Rebellion: Mexico, 1905–1924. New York: W.W. Norton 1980
Major leaders of the Revolution have been the subject of biographies, including the martyred Francisco I. Madero. There are many biographies of Zapata and Villa, whose movements did not achieve power, along with studies of the presidential career of revolutionary general Lázaro Cárdenas. In recent years, biographies of the victorious northerners Carranza, Obregón, and Calles have reassessed their roles in the Revolution. Sonorans in the Mexican Revolution have not yet collectively been the subject of a major study.
Often studied as an event solely of Mexican history, or one also involving Mexico's northern neighbor, scholars now recognize that "From the beginning to the end, foreign activities figured crucially in the Revolution's course, not simple antagonism from the U.S. government, but complicated Euro-American imperialist rivalries, extremely intricate during the first world war."Womack, "The Mexican Revolution", 128. A key work illuminating the international aspects of the Revolution is Friedrich Katz's 1981 work The Secret War in Mexico: Europe, the United States, and the Mexican Revolution.
Historical memory
The centennial of the Mexican Revolution was another occasion for constructing historical memory of its events and leaders. In 2010, the centennial of the Revolution coincided with the bicentennial of independence, an occasion to take account of Mexico's history. The centennial of independence in 1910 had been the swan song of the Porfiriato. Under President Felipe Calderón (2006–2012) of the conservative National Action Party, there was considerably more emphasis on the bicentennial of independence than on the Mexican Revolution.
Heroes and villains
The popular heroes of the Mexican Revolution are the two radicals who lost: Emiliano Zapata and Pancho Villa. As early as 1921, the Mexican government began appropriating the memory and legacy of Zapata for its own purposes.Brunk, Samuel, The Posthumous Career of Emiliano Zapata: Myth, Memory, and Mexico's Twentieth Century Pancho Villa fought against those who won the Revolution and was excluded from the revolutionary pantheon for a considerable time, but his memory and legend remained alive among the Mexican people. The government recognized his continued potency and, after considerable controversy, had his remains reburied in the Monument to the Revolution.
With the exception of Zapata, who rebelled against him in 1911, Francisco Madero was revered as "the apostle of democracy". Madero's murder in the 1913 counterrevolutionary coup elevated him to "martyr" of the Revolution, whose memory unified the Constitutionalist coalition against Huerta. Venustiano Carranza gained considerable legitimacy as the civilian leader of the Constitutionalists, having supported Madero in life and led the successful coalition that ousted Huerta. Carranza then downplayed Madero's role in the revolution in order to put himself forward as the origin of the true revolution. Carranza owned "the bullets taken from the body of Francisco I. Madero after his murder. Carranza had kept them in his home, perhaps because they were a symbol of a fate and a passive denouement he had always hoped to avoid."Enrique Krauze, Mexico: Biography of Power. New York: HarperCollins, 1997, p. 373.
Huerta remains the enduring villain of the Mexican Revolution for his coup against Madero. Díaz is still popularly and officially reviled, although there was an attempt to rehabilitate his reputation in the 1990s by President Carlos Salinas de Gortari, who was implementing the North American Free Trade Agreement and amending the constitution to eliminate further land reform. Pascual Orozco, who with Villa captured Ciudad Juárez in May 1911, continues to have an ambiguous status, since he led a major rebellion against Madero in 1912 and then threw in his lot with Huerta. Orozco, much more than Madero, was considered a manly man of action.
Monuments
The most permanent manifestations of historical memory are in the built landscape, especially the Monument to the Revolution in Mexico City and the statues and monuments to particular leaders. The Monument to the Revolution was created from the partially built Palacio Legislativo, a major project of Díaz's government whose construction was abandoned with the outbreak of the Revolution in 1910. In 1933, during the Maximato of Plutarco Elías Calles, the shell was repurposed to commemorate the Revolution. Buried in its four pillars are the remains of Francisco I. Madero, Venustiano Carranza, Plutarco Elías Calles, Lázaro Cárdenas, and Francisco [Pancho] Villa.The Green Guide: Mexico, Guatemala and Belize. London: Michelin, 2011, p. 149. In life, Villa fought Carranza and Calles, but his remains were transferred to the monument in 1979 during the administration of President José López Portillo.Rubén Osorio Zúñiga, "Francisco (Pancho) Villa" in Encyclopedia of Mexico, vol. 2. p. 1532. Chicago: Fitzroy Dearborn, 1997. Prior to the construction of that monument, one was built in 1935 to the amputated arm of General Álvaro Obregón, lost in victorious battle against Villa in the 1915 Battle of Celaya. The monument stands on the site of the restaurant La Bombilla, where he was assassinated in 1928. The arm was cremated in 1989, but the monument remains.Buchenau, Jürgen, "The Arm and Body of the Revolution: Remembering Mexico's Last Caudillo, Álvaro Obregón" in Lyman L. Johnson, ed. Body Politics: Death, Dismemberment, and Memory in Latin America. Albuquerque: University of New Mexico Press, 2004, pp. 179–207.Fabrizio Mejía Madrid, "Insurgentes" in The Mexico City Reader, ed. Rubén Gallo. Madison: University of Wisconsin Press, 2004, p. 63.
Naming
Names are a standard way governments commemorate people and events, and many towns and cities of Mexico recall the revolution. In Mexico City, there are delegaciones (boroughs) named for Álvaro Obregón, Venustiano Carranza, and Gustavo A. Madero, brother of the murdered president. A portion of the old colonial street Calle de los Plateros, leading to the capital's main square, the Zócalo, is named Francisco I. Madero.
The Mexico City Metro has stations commemorating aspects of the Revolution and the revolutionary era. When the system opened in 1969 with Line 1 (the "Pink Line"), two stations alluded to the revolution. The most direct reference was Metro Pino Suárez, named after Francisco I. Madero's vice president, who was murdered with him in February 1913; there is no Metro stop named for Madero himself. The other was Metro Balderas, whose icon is a cannon, alluding to the Ciudadela armory where the coup against Madero was launched. In 1970, Metro Revolución opened, with the station at the Monument to the Revolution. As the Metro expanded, further stations with names from the revolutionary era opened. In 1980, two popular heroes of the Revolution were honored: Metro Zapata explicitly commemorated the peasant revolutionary from Morelos, while a sideways commemoration was Metro División del Norte, named after the army that Pancho Villa commanded until its demise in the Battle of Celaya in 1915. The year 1997 saw the opening of the Metro Lázaro Cárdenas station. In 1988, Metro Aquiles Serdán opened, honoring the first martyr of the Revolution, Aquiles Serdán. In 1994, Metro Constitución de 1917 opened, as did Metro Garibaldi, named after the grandson of the Italian fighter for independence Giuseppe Garibaldi; the grandson had been a participant in the Mexican Revolution. In 1999, the radical anarchist Ricardo Flores Magón was honored with the Metro Ricardo Flores Magón station. Also opening in 1999 was Metro Romero Rubio, named after the leader of Porfirio Díaz's Científicos, whose daughter Carmen Romero Rubio became Díaz's second wife; perhaps enough time had passed since the Revolution that Romero Rubio was just a name with no historical significance to ordinary Mexicans. In 2000, the Institutional Revolutionary Party lost the presidential election to the candidate of the National Action Party. In 2012, a new Metro line opened with a Metro Hospital 20 de Noviembre stop, a hospital named after the date that Madero set in 1910 for the rebellion against Díaz. There are no Metro stops named for the revolutionary generals and presidents Carranza, Obregón, or Calles, and only an oblique reference to Villa in Metro División del Norte.
Role of women
The role of women in the Mexican Revolution has not been an important aspect of official historical memory, although the situation is changing. Carranza pushed for the rights of women, and gained women's support. During his presidency he relied on his personal secretary and close aide, Hermila Galindo de Topete, to rally and secure support for him. Through her efforts he was able to gain the support of women, workers and peasants. Carranza rewarded her efforts by lobbying for women's equality. He helped change and reform the legal status of women in Mexico.Mirande, Alfredo; Enriquez, Evangelina. La Chicana: The Mexican-American Woman. United States: University of Chicago Press, 1981, pp. 217–219. . In the Historical Museum of the Mexican Revolution, there is a recreation of Adelita, the idealized female revolutionary combatant or soldadera. The typical image of a soldadera is of a woman with braids, wearing female attire, with ammunition belts across her chest. There were a few revolutionary women, known as coronelas, who commanded troops, some of whom dressed and identified as male; they do not fit the stereotypical image of soldadera and are not celebrated in historical memory at present.Cano, Gabriela. "Soldaderas and Coronelas" in Encyclopedia of Mexico, vol. 1, pp. 1357–1360. Chicago: Fitzroy Dearborn 1997.
Legacies
Strong central government, civilian subordination of military
Although the ignominious end of Venustiano Carranza's presidency in 1920 cast a shadow over his legacy in the Revolution, and he is sometimes viewed as a conservative revolutionary, he and his northern allies laid "the foundation of a more ambitious, centralizing state dedicated to national integration and national self-assertion." In the assessment of historian Alan Knight, "a victory of Villa and Zapata would probably have resulted in a weak, fragmented state, a collage of revolutionary fiefs of varied political hues presided over by a feeble central government." Porfirio Díaz had successfully centralized power during his long presidency, and Carranza, an old politico of the Díaz regime, was a kind of bridge between the old Porfirian order and the new revolutionary order. The northern generals seized power in 1920, with the "Sonoran hegemony prov[ing] complete and long lasting."Meyer, Jean. "Revolution and Reconstruction in the 1920s" in Mexico since Independence, Leslie Bethell, ed. New York: Cambridge University Press, 1991, p. 201 The Sonorans, particularly Álvaro Obregón, were battle-tested leaders and pragmatic politicians able to consolidate centralized power immediately after 1920. The revolutionary struggle had destroyed the professional army and brought to power men who had joined the Revolution as citizen-soldiers. Once in power, the successive revolutionary generals holding the presidency, Obregón, Calles, and Cárdenas, systematically downsized the army and instituted reforms to create a professionalized force subordinate to civilian politicians. By 1940, the government had curbed the power of the revolutionary generals, making the Mexican military subordinate to a strong central government and breaking the cycle of military intervention in politics dating to the independence era. This stands in contrast to the pattern of military power in many Latin American countries.
Constitution of 1917
An important element of the revolution's legacy is the 1917 Constitution. The document brought numerous reforms demanded by populist factions of the revolution, with Article 27 empowering the state to expropriate resources deemed vital to the nation. These powers included expropriation of hacienda lands for redistribution to peasants and of the holdings of foreign companies, most prominently in the 1938 expropriation of oil. In Article 123 the constitution codified major labor reforms, including an eight-hour workday, the right to strike, equal pay laws for women, and an end to exploitative practices such as child labor and company stores. The constitution strengthened restrictions on the Catholic Church in Mexico, which, when enforced by the Calles government, resulted in the Cristero War and a negotiated settlement of the conflict. The restrictions on religion in the Constitution remained in place until the early 1990s, when the Salinas government introduced reforms that rolled back the government's power to expropriate property and its restrictions on religious institutions, as part of its policy of joining the free trade agreement with the U.S. and Canada.Blancarte, Roberto "Recent Changes in Church-State Relations in Mexico: An Historical Approach". Journal of Church & State, Autumn 1993, vol. 35. No. 4. Just as the government of Carlos Salinas de Gortari was amending significant provisions of the constitution, the Metro Constitución de 1917 station was opened.
Institutional Revolutionary Party
The Institutional Revolutionary Party (PRI) was created as a way to manage political power and succession without resorting to violence. It was established in 1929 by President Calles in the wake of the assassination of President-elect Obregón and two rebellions by disgruntled revolutionary generals with presidential ambitions. Initially, Calles remained the power behind the presidency during a period known as the Maximato, but his hand-picked presidential candidate, Lázaro Cárdenas, won a power struggle with Calles and expelled him from the country. Cárdenas reorganized the party that Calles had founded, creating formal sectors for interest groups, including one for the Mexican military. The reorganized party was named the Party of the Mexican Revolution. In 1946 the party again changed its name, to the Institutional Revolutionary Party. Under its various names the party held the presidency uninterruptedly from 1929 to 2000, and again from 2012 to 2018 under President Enrique Peña Nieto. In 1988, Cuauhtémoc Cárdenas, son of President Lázaro Cárdenas, broke with the PRI and formed an independent leftist party, the Party of the Democratic Revolution (PRD). It is not by chance that the party used the word "Revolution" in its name, challenging the Institutional Revolutionary Party's appropriation of the Mexican Revolution.
The PRI was built as a big-tent corporatist party, bringing many political factions and interest groups (peasantry, labor, urban professionals) together while excluding conservatives and Catholics, who eventually formed the opposition National Action Party in 1939. To incorporate the populace into the party, Presidents Calles and Cárdenas created an institutional structure organized into agrarian, labor, and popular sectors. Cárdenas reorganized the party in 1938, controversially bringing in the military as a sector; his successor, President Avila Camacho, reorganized the party into its final form, removing the military. This structure both channeled political patronage and limited the political options of those sectors, strengthening the power of the PRI and the government. Union and peasant leaders themselves gained powers of patronage, and the discontent of the membership was channeled through them. If organizational leaders could not resolve a situation or gain benefits for their members, it was they who were blamed as ineffective brokers. There was the appearance of power in the unions and peasant leagues, but effective power lay in the hands of the PRI. Under PRI leadership before the 2000 elections, which saw the conservative National Action Party win the presidency, most power came from a Central Executive Committee, which budgeted all government projects; this in effect turned the legislature into a rubber stamp for the PRI's leadership. The party's name expresses the Mexican state's incorporation of the idea of revolution, and especially of a continuous, nationalist, anti-imperialist Mexican revolution, into political discourse, and its legitimization as a popular, revolutionary party. According to historian Alan Knight, the memory of the revolution became a sort of "secular religion" that justified the party's rule.Knight, Alan. "The Myth of the Mexican Revolution", Past & Present, no. 209 (November 2010), pp. 223–273, at 226–227.
Social changes
The Mexican Revolution brought about various social changes. First, the leaders of the Porfiriato lost their political power (but kept their economic power), and the middle class started to enter the public administration. "At this moment the bureaucrat, the government officer, the leader were born […]". The army opened up the sociopolitical system, and the leaders of the Constitutionalist faction, particularly Álvaro Obregón and Plutarco Elías Calles, controlled the central government for more than a decade after the military phase ended in 1920. The creation of the PNR in 1929 brought generals into the political system, but as an institution the army's power as an interventionist force was tamed, most directly under Lázaro Cárdenas, who in 1938 incorporated the army as a sector in the new iteration of the party, the Party of the Mexican Revolution (PRM). The old federal army had been destroyed during the revolution, and the new collection of revolutionary fighters was brought under state control.
Although the proportion between rural and urban population, and the numbers of workers and of the middle class, remained practically the same, the Mexican Revolution brought substantial qualitative changes to the cities. Large rural landlords moved to the city to escape the chaos in the countryside, and some poor farmers also migrated to the cities, settling in neighborhoods where the Porfirian elite used to live. The standard of living in the cities rose, and the urban contribution to national GDP grew from 42% to 60% by 1940. However, social inequality remained.
The greatest change occurred among the rural population. The agrarian reform gave some revolutionary veterans access to land in the form of ejidos, which remained under the control of the government. However, the structure of land ownership for ejidatarios did not promote rural development and impoverished the rural population even further.Appendini, Kirsten. "Ejido" in Encyclopedia of Mexico, p. 450. "From 1934 to 1940 wages fell 25% in rural areas, while for city workers wages increased by 20%". "There was a lack of food, there was not much to sell and even less to buy. […] the habit of sleeping on the floor remains, […] diet is limited to beans, tortilla, and chili pepper; clothing is poor". Peasants temporarily migrated to other regions to work in the production of certain crops, where they were frequently exploited and abused and suffered from various diseases. Others decided to migrate to the United States.
A modern legacy of the revolution in the rural sphere is the Chiapas insurgency of the 1990s, led by the Zapatista Army of National Liberation (EZLN), which takes its name from Emiliano Zapata. The neo-Zapatista revolt began in Chiapas, which had been heavily reliant on and supportive of the revolutionary reforms, especially the ejido system, which it had pioneered before Cárdenas took power. Most revolutionary gains were reversed in the early 1990s by President Salinas, who moved away from the agrarian policies of the late post-revolutionary period in favor of modern capitalism. This culminated in the dismantling of the ejido system in Chiapas, removing many landless peasants' hope of gaining access to land. Appealing to Mexico's revolutionary heritage, the EZLN draws heavily on early revolutionary rhetoric and is inspired by many of Zapata's policies, including a call for decentralized local rule.
Reaction of Mexican Americans
While the war raged in Mexico, Mexicans and Mexican Americans living in the United States responded to it in many different ways. These responses were not unified: class, race, regional origins, and political ideologies produced a wide range of reactions within the Mexican diaspora in the United States. Furthermore, not all Mexicans had the same citizenship status; some were immigrants, refugees, or exiles, while others belonged to families that had lived in the southwestern states from Texas to California since before the Mexican–American War. Among Mexicans and Mexican Americans there was a wide political spectrum, from radical anarchists to conservative counterrevolutionaries. These groups included Tejano progressives, who supported the revolution and actively helped by raising awareness of social justice issues, and border anarchists, a more radical group that engaged in violence.
Memory and myth of the Revolution
The violence of the Revolution is a powerful memory. Mexican survivors of the Revolution desired a lasting peace and were willing to accept a level of "political deficiencies" to maintain peace and stability.Camp, Mexico's Military on the Democratic Stage, p. 17. The memory of the revolution was used as justification for the Institutional Revolutionary Party's policies on economic nationalism, education, labor, and land reform.Garrard, Virginia; Henderson, Peter; McCann, Bryan. Latin America in the Modern World. Oxford University Press, 2022. https://oup-bookshelf.vitalsource.com/reader/books/9780197574102 Mexico commemorates the Revolution in monuments, statues, school textbooks, and the naming of cities, neighborhoods, and streets, as well as in images on peso notes and coins.
See also
United States involvement in the Mexican Revolution
Mexican Border War (1910–1919)
Military history of Mexico
List of factions in the Mexican Revolution
List of wars involving Mexico
List of Mexican Revolution and Cristero War films
Partido Revolucionario Institucional
Sonora in the Mexican Revolution
Bourgeois revolution
References
Many portions of this article are translations of excerpts from the article Revolución Mexicana in the Spanish Wikipedia.
Bibliography
Further reading
There is a huge bibliography of works in Spanish on the Mexican Revolution. Below are works in English, some of which have been translated from Spanish. Some of the works in English have been translated to Spanish.
Mexican Revolution – general histories
Brenner, Anita. The Wind that Swept Mexico. New Edition. Austin, TX: University of Texas Press, 1984.
Cumberland, Charles C. Mexican Revolution: Genesis under Madero. Austin, TX: University of Texas Press, 1952.
Cumberland, Charles C. Mexican Revolution: The Constitutionalist Years. Austin, TX: University of Texas Press, 1972.
Gilly, A. The Mexican Revolution. London, 1983. Translated from Spanish.
Gonzales, Michael J. The Mexican Revolution: 1910–1940. Albuquerque, NM: University of New Mexico Press, 2002.
Hart, John Mason. Revolutionary Mexico: The Coming and Process of the Mexican Revolution. Berkeley and Los Angeles: University of California Press, 1987.
Joseph, Gilbert M. and Jürgen Buchenau. Mexico's Once and Future Revolution: Social Upheaval and the Challenge of Rule since the Late Nineteenth Century. Durham: Duke University Press, 2013.
Krauze, Enrique. Mexico: Biography of Power. New York: HarperCollins, 1997. Translated from Spanish.
Niemeyer, Victor E. Revolution at Querétaro: The Mexican Constitutional Convention of 1916–1917. Austin: University of Texas Press, 1974.
Quirk, Robert E. The Mexican Revolution, 1914–1915: The Convention of Aguascalientes. New York: The Citadel Press, 1981.
Quirk, Robert E. The Mexican Revolution and the Catholic Church 1910–1919. Bloomington: Indiana University Press, 1973
Ruiz, Ramón Eduardo. The Great Rebellion: Mexico, 1905–1924. New York: Norton, 1980.
Tutino, John. From Insurrection to Revolution. Princeton: Princeton University Press, 1985.
Wasserman, Mark. The Mexican Revolution: A Brief History with Documents. (Bedford Cultural Editions Series) first edition, 2012.
Womack, John, Jr. "The Mexican Revolution" in The Cambridge History of Latin America, vol. 5, ed. Leslie Bethell. Cambridge: Cambridge University Press, 1986.
Biography and social history
Baldwin, Deborah J. Protestants and the Mexican Revolution: Missionaries, Ministers, and Social Change. Urbana: University of Illinois Press 1990.
Beezley, William H. Insurgent Governor: Abraham González and the Mexican Revolution in Chihuahua. Lincoln, NE: University of Nebraska Press, 1973.
Brunk, Samuel. Emiliano Zapata: Revolution and Betrayal in Mexico. Albuquerque: University of New Mexico Press 1995.
Buchenau, Jürgen. Plutarco Elías Calles and the Mexican Revolution. Lanham, MD: Rowman and Littlefield, 2007.
Buchenau, Jürgen. The Last Caudillo: Alvaro Obregón and the Mexican Revolution. Malden MA: Wiley-Blackwell 2011.
Cockcroft, James D. Intellectual Precursors of the Mexican Revolution. Austin: University of Texas Press 1968.
Fisher, Lillian Estelle. "The Influence of the Present Mexican Revolution upon the Status of Mexican Women," Hispanic American Historical Review, Vol. 22, No. 1 (Feb. 1942), pp. 211–228.
Garner, Paul. Porfirio Díaz. New York: Pearson 2001.
Guzmán, Martín Luis. Memoirs of Pancho Villa. Translated by Virginia H. Taylor. Austin: University of Texas Press 1966.
Hall, Linda. Alvaro Obregón, Power, and Revolution in Mexico, 1911–1920. College Station: Texas A&M Press 1981.
Henderson, Peter V. N. In the Absence of Don Porfirio: Francisco León de la Barra and the Mexican Revolution. Wilmington, DE: Scholarly Resources, 2000
Lomnitz, Claudio. The Return of Comrade Ricardo Flores Magón. Brooklyn NY: Zone Books 2014.
Lucas, Jeffrey Kent. The Rightward Drift of Mexico's Former Revolutionaries: The Case of Antonio Díaz Soto y Gama. Lewiston, New York: Edwin Mellen Press, 2010.
McCaa, Robert. "Missing millions: The demographic costs of the Mexican Revolution." Mexican Studies 19.2 (2003): 367–400. online
Macias, Anna. "Women and the Mexican Revolution, 1910–1920". The Americas, 37:1 (Jul. 1980), 53–82.
Meyer, Michael. Mexican Rebel: Pascual Orozco and the Mexican Revolution, 1910–1915. Lincoln, NE: University of Nebraska Press, 1967.
Poniatowska, Elena. Las Soldaderas: Women of the Mexican Revolution. El Paso, TX: Cinco Puntos Press, 2006.
Reséndez, Andrés. "Battleground Women: Soldaderas and Female Soldiers in the Mexican Revolution." The Americas 51, 4 (April 1995).
Ross, Stanley R. Francisco I. Madero: Apostle of Democracy. New York: Columbia University Press 1955.
Richmond, Douglas W. Venustiano Carranza's Nationalist Struggle: 1893–1920. Lincoln, NE: University of Nebraska Press, 1983.
Smith, Stephanie J. Gender and the Mexican Revolution: Yucatán Women and the Realities of Patriarchy. North Carolina: University of North Carolina Press, 2009
Womack, John, Jr. Zapata and the Mexican Revolution. New York: Vintage Press 1970.
Regional histories
Benjamin, Thomas and Mark Wasserman, eds. Provinces of the Revolution. Albuquerque: University of New Mexico Press, 1990.
Blaisdell, Lowell. The Desert Revolution, Baja California 1911. Madison: University of Wisconsin Press, 1962.
Brading, D. A., ed. Caudillo and Peasant in the Mexican Revolution. Cambridge: Cambridge University Press, 1980.
Buchenau, Jürgen and William H. Beezley, eds. State Governors in the Mexican Revolution, 1910–1952. Lanham MD: Rowman and Littlefield 2009.
Joseph, Gilbert. Revolution from Without: Yucatán, Mexico, and the United States, 1880–1924. Cambridge: Cambridge University Press, 1982.
Harris, Charles H. III. The Secret War in El Paso: Mexican Revolutionary Intrigue, 1906–1920. Albuquerque: University of New Mexico Press, 2009.
Jacobs, Ian. Ranchero Revolt: The Mexican Revolution in Guerrero. Austin: University of Texas Press, 1983.
LaFrance, David G. The Mexican Revolution in Puebla, 1908–1913: The Maderista Movement and Failure of Liberal Reform. Wilmington, DE: Scholarly Resources, 1989.
Lear, John. Workers, Neighbors, and Citizens: The Revolution in Mexico City. Lincoln: University of Nebraska Press 2001.
Snodgrass, Michael. Deference and Defiance in Monterrey: Workers, Paternalism, and Revolution in Mexico, 1890–1950. Cambridge University Press, 2003.
Wasserman, Mark. Capitalists, Caciques, and Revolution: The Native Elites and Foreign Enterprise in Chihuahua, Mexico, 1854–1911. Chapel Hill: University of North Carolina Press, 1984.
International dimensions
Clendenen, Clarence C. The United States and Pancho Villa: A Study in Unconventional Diplomacy. Ithaca, NY: Cornell University Press, 1981.
Frank, Lucas N. "Playing with Fire: Woodrow Wilson, Self‐Determination, Democracy, and Revolution in Mexico." Historian 76.1 (2014): 71–96. online
Gilderhus, M. T. Diplomacy and Revolution: U.S.-Mexican Relations under Wilson and Carranza. Tucson: University of Arizona Press, 1977.
Grieb, K. J. The United States and Huerta. Lincoln, NE: University of Nebraska Press, 1969.
Haley, P. E. Revolution and Intervention: The diplomacy of Taft and Wilson with Mexico, 1910–1917. Cambridge, 1970.
Hart, John Mason. Empire and Revolution: The Americans in Mexico since the Civil War. Berkeley and Los Angeles: University of California Press, 2002.
Katz, Friedrich. The Secret War in Mexico: Europe, the United States, and the Mexican Revolution. Chicago: The University of Chicago Press, 1981.
Meyer, Lorenzo. The Mexican Revolution and the Anglo-Saxon Powers. LaJolla: Center for U.S.-Mexico Studies. University of California San Diego, 1985.
Quirk, Robert E. An Affair of Honor: Woodrow Wilson and the Occupation of Veracruz. Louisville: University of Kentucky Press 1962.
Rinke, Stefan and Michael Wildt, eds. Revolutions and Counter-Revolutions: 1917 and Its Aftermath from a Global Perspective. Campus, 2017.
Smith, Robert Freeman. The United States and Revolutionary Nationalism in Mexico 1916–1932. Chicago: University of Chicago Press, 1972.
Teitelbaum, Louis M. Woodrow Wilson and the Mexican Revolution. New York: Exposition Press, 1967.
Memory and cultural dimensions
Benjamin, Thomas. La Revolución: Mexico's Great Revolution as Memory, Myth, and History. Austin: University of Texas Press, 2000.
Brunk, Samuel. The Posthumous Career of Emiliano Zapata: Myth, Memory, and Mexico's Twentieth Century. Austin: University of Texas Press, 2008.
Buchenau, Jürgen. "The Arm and Body of a Revolution: Remembering Mexico's Last Caudillo, Álvaro Obregón" in Lyman L. Johnson, ed. Body Politics: Death, Dismemberment, and Memory in Latin America. Albuquerque: University of New Mexico Press, 2004, pp. 179–207
Foster, David W., ed. Mexican Literature: A History. Austin: University of Texas Press, 1994.
Hoy, Terry. "Octavio Paz: The Search for Mexican Identity". The Review of Politics 44:3 (July 1982), 370–385.
Gonzales, Michael J. "Imagining Mexico in 1921: Visions of the Revolutionary State and Society in the Centennial Celebration in Mexico City", Mexican Studies/Estudios Mexicanos vol. 25. No 2, summer 2009, pp. 247–270.
Herrera Sobek, María, The Mexican Corrido: A Feminist Analysis. Bloomington: Indiana University Press, 1990.
Oles, James, ed. South of the Border, Mexico in the American Imagination, 1914–1947. New Haven: Yale University Art Gallery, 1993.
O'Malley, Ilene V. The Myth of the Revolution: Hero Cults and the Institutionalization of the Mexican State, 1920–1940. Westport: Greenwood Press, 1986.
Ross, Stanley, ed. Is the Mexican Revolution Dead?. Philadelphia: Temple University Press, 1975.
Rutherford, John D. Mexican society during the Revolution: a literary approach. Oxford: Oxford University Press, 1971.
Simmons, Merle. The Mexican corrido as a source of interpretive study of modern Mexico, 1900–1970. Bloomington: Indiana University Press, 1957.
Vaughn, Mary K. Negotiating Revolutionary Culture: Mexico, 1930–1940. Tucson: University of Arizona Press, 1997.
Weinstock, Herbert. "Carlos Chavez". The Musical Quarterly 22:4 (October 1936), 435–445.
Visual culture: prints, painting, film, photography
Barajas, Rafael. Myth and Mitote: The Political Caricature of José Guadalupe Posada and Manuel Alfonso Manila. Mexico City: Fondo de Cultura Económica, 2009
Britton, John A. Revolution and Ideology Images of the Mexican Revolution in the United States. Louisville: University Press of Kentucky, 1995.
Coffey, Mary. How a Revolutionary Art Became Official Culture: Murals, Museums, and the Mexican State. Durham, NC: Duke University Press, 2012.
Doremus, Anne T. Culture, Politics, and National Identity in Mexican Literature and Film, 1929–1952. New York: Peter Lang Publishing Inc., 2001.
Flores, Tatiana. Mexico's Revolutionary Avant-Gardes: From Estridentismo to ¡30–30!. New Haven: Yale University Press, 2013.
Folgarait, Leonard. Mural Painting and Social Revolution in Mexico, 1920–1940. Cambridge: Cambridge University Press, 1998.
Ittman, John, ed. Mexico and Modern Printmaking, A Revolution in the Graphic Arts, 1920 to 1950. Philadelphia: Philadelphia Museum of Art, 2006.
Lear, John. Picturing the Proletariat: Artists and Labor in Revolutionary Mexico, 1908–1940. Austin: University of Texas Press, 2017.
McCard, Victoria L. "Soldaderas of the Mexican revolution" (The Evolution of War and Its Representation in Literature and Film), West Virginia University Philological Papers 51 (2006), 43–51.
Mora, Carl J., Mexican Cinema: Reflections of a Society 1896–2004. Berkeley: University of California Press, 3rd edition, 2005
Mraz, John. Photographing the Mexican Revolution: Commitments, Testimonies, Icons. Austin: University of Texas Press 2012.
Noble, Andrea, Photography and Memory in Mexico: Icons of Revolution. Manchester: Manchester University Press, 2010.
Orellana, Margarita de, Filming Pancho Villa: How Hollywood Shaped the Mexican Revolution: North American Cinema and Mexico, 1911–1917. New York: Verso Books, 2007.
Ortiz Monasterio, Pablo. Mexico: The Revolution and Beyond: Photographs by Agustín Victor Casasola, 1900–1940. New York: Aperture 2003.
Pick, Zuzana M. Constructing the Image of the Mexican Revolution: Cinema and the Archive. Austin: University of Texas Press, 2010.
Pineda Franco, Adela. The Mexican Revolution on the World Stage: Intellectuals and Film in the Twentieth Century. SUNY Press, 2019.
¡Tierra y Libertad! Photographs of Mexico 1900–1935 from the Casasola Archive. Oxford: Museum of Modern Art, 1985.
Historiography
Bailey, D. M. "Revisionism and the recent historiography of the Mexican Revolution." Hispanic American Historical Review 58#1 (1978), 62–79. online
Bantjes, Adrien A. "The Mexican Revolution" in A Companion to Latin American History, Thomas Holloway, ed. London: Wiley-Blackwell 2011, 330–346.
Brunk, Samuel. The Posthumous Career of Emiliano Zapata. (U of Texas Press 2008)
Golland, David Hamilton. "Recent Works on the Mexican Revolution." Estudios Interdisciplinarios de América Latina y el Caribe 16.1 (2014). online
Knight, Alan. "Mexican Revolution: Interpretations" in Encyclopedia of Mexico, vol. 2, pp. 869–873. Chicago: Fitzroy Dearborn 1997.
Knight, Alan. "The Mexican Revolution: Bourgeois? Nationalist? Or Just a 'Great Rebellion'?" Bulletin of Latin American Research (1985) 4#2 pp. 1–37 in JSTOR
Knight, Alan. "Viewpoint: Revisionism and Revolution", Past and Present 134 (1992).
McNamara, Patrick J. "Rewriting Zapata: Generational Conflict on the Eve of the Mexican Revolution." Mexican Studies-Estudios Mexicanos 30.1 (2014): 122–149.
Wasserman, Mark. "You Can Teach An Old Revolutionary Historiography New Tricks: Regions, Popular Movements, Culture, and Gender in Mexico, 1820–1940", Latin American Research Review (2008) 43#2 260–271 in Project MUSE
Womack, John Jr. "Mexican Revolution: Bibliographical Essay" in Mexico Since Independence, Leslie Bethell, ed. Cambridge: Cambridge University Press, 1991, pp. 405–414.
Primary sources
Angelini, Erin. "The Bigger Truth About Mexico"
Bulnes, Francisco. The Whole Truth About Mexico: The Mexican Revolution and President Wilson's Part Therein, as seen by a Cientifico. New York: M. Bulnes Book Company 1916.
O'Shaughnessy, Edith. A Diplomat's Wife in Mexico. New York: Harper, 1916.
Reed, John. Insurgent México. New York: International Publishers, 1969.
Wasserman, Mark. The Mexican Revolution: A Brief History with Documents. (Bedford Cultural Editions Series) first edition, 2012.
Online
Brunk, Samuel. "'The Sad Situation of Civilians and Soldiers': The Banditry of Zapatismo in the Mexican Revolution." The American Historical Review 101, no. 2 (1996): 331–353.
Brunk, Samuel. "Zapata and the City Boys: In Search of a Piece of Revolution". Hispanic American Historical Review. Duke University Press, 1993.
"From Soldaderas to Comandantes" Zapatista Direct Solidarity Committee. University of Texas.
Gilbert, Dennis. "Emiliano Zapata: Textbook Hero." Mexican Studies/Estudios Mexicanos, Winter 2003, Volume 19, Issue 1, Page 127.
Hardman, John. "Soldiers of Fortune" in the Mexican Revolution; "Postcards of the Mexican Revolution".
Merewether, Charles (Collections Curator, Getty Research Institute). "Mexico: From Empire to Revolution", January 2002.
Rausch, George Jr. "The Exile and Death of Victoriano Huerta", The Hispanic American Historical Review, Vol. 42, No. 2 (May 1963), pp. 133–151.
Tuck, Jim. "Zapata and the Intellectuals." Mexico Connect, 1996–2006.
External links
Library of Congress – Hispanic Reading Room portal, Distant Neighbors: The U.S. and the Mexican Revolution
Mexican Revolution – Encyclopædia Britannica
U.S. Library of Congress Country Study: Mexico
Mexican Revolution of 1910 and Its Legacy, latinoartcommunity.org
Stephanie Creed, Kelcie McLaughlin, Christina Miller, Vince Struble, Mexican Revolution 1910–1920 , Latin American Revolutions, course material for History 328, Truman State University (Missouri)
Mexican Revolution, ca. 1910–1917 Photos and postcards in color and in black and white, some with manuscript letters, postmarks, and stamps from the collection at the Beinecke Rare Book and Manuscript Library at Yale University
Mexican Revolution, in the "Children in History" website. This is an overview of the Revolution with a treatment of the impact on children.
Mexico: Photographs, Manuscripts, and Imprints from the DeGolyer Library contains photographs related to the Mexican Revolution.
Timeline of the Mexican Revolution
Elmer and Diane Powell Collection on Mexico and the Mexican Revolution from the DeGolyer Library, SMU.
Category:Wars involving Mexico
Category:1910s in Mexico
Category:20th-century revolutions
Category:Revolution-based civil wars
Category:History of socialism
Category:Military history of Mexico
Category:Wars fought in Arizona
Category:Wars fought in Texas
Category:Civil wars in Mexico
Category:Proxy wars
Category:Articles containing video clips