The Collapse of the USSR and the Soviet Bloc
In 1991 the Soviet Union collapsed. Like the proverbial sorcerer’s apprentice who unleashed an enchantment he could not control, Mikhail Gorbachev, the last Soviet premier (as it turned out), had begun a series of reforms in 1986 that ultimately resulted in the dismantling of the Soviet state. In 1989, the communist regimes of the Eastern Bloc crumbled as it became clear that the USSR would not intervene militarily to prop them up as it had in the past. Nationalist independence movements exploded across the USSR and, finally, the entire system fell apart, replaced by sovereign nations. In the process, Russia itself reemerged in 1991 as a distinct country rather than just the most powerful part of a larger union.
Since the collapse of the USSR, some Eastern European countries (e.g. the Czech Republic, Poland) have enjoyed at least some success in modernizing their economies and keeping political corruption at bay. In Russia itself, the 1990s were an unmitigated economic and social disaster as the entire country tried to lurch into a market economy without the slightest bit of planning or oversight. Western consultants, generally associated with international banking firms, convinced Russian politicians in the infancy of the new democracy to institute “shock therapy,” dismantling social programs and government services. While foreign loans accompanied these steps, new industries did not suddenly materialize to fill the enormous gaps left in the Russian economy by the disappearance of state agencies. Unemployment skyrocketed and the distinctions between legitimate business and illegal or extra-legal trade all but vanished.
The result was an economy that was often synonymous with the black market, gigantic and powerful organized crime syndicates, and the rise of a small number of people to stratospheric levels of wealth and power. One shocking statistic is that fewer than forty individuals controlled about 25% of the Russian economy by the late 1990s. Just as networks of contacts among the Soviet apparatchiks had once been the means of securing a job or accessing state resources, it now became imperative for ordinary Russian citizens to make connections with either oligarch-controlled companies or criminal organizations.
An oligarch, as the term is used in Russia, is a wealthy business magnate with substantial political influence.
Stability only began to return because of a new political strongman. Vladimir Putin, a former agent of the Soviet secret police force (the KGB), was elected president of Russia in 2000. Since that time, Putin has proved a brilliant political strategist, playing on anti-western resentment and Russian nationalism to buoy popular support for his regime, run by “his” political party, United Russia. While opposition political parties are not illegal, and indeed consistently try to make headway in elections, United Russia has been in firm control of the entire Russian political apparatus since shortly after Putin’s election. Opposition figures are regularly harassed or imprisoned, and many have been murdered (although the state itself maintains plausible deniability in cases of outright assassination). Some of the most egregious excesses of the oligarchs of the 1990s were also reined in, while some oligarchs were instead incorporated into the United Russia power structure.
Unlike many of the authoritarian rulers of Russia in the past, Putin was (and remains as of 2021) hugely popular among Russians. Media control has played a large part in that popularity, of course, but much of Putin’s popularity is also tied to the wealth that flooded into Russia after 2000 as oil prices rose. While most of that wealth went to enrich the existing Russian elites (along with some of Putin’s personal friends, who made fortunes in businesses tied to the state), it also served as a source of pride for many Russians who saw little direct benefit. Further boosts to his popularity came from Russia’s invasion of the small republic of Georgia in 2008 and, especially, its invasion and subsequent annexation of the Crimean Peninsula from Ukraine in 2014. While the latter prompted western sanctions and protests, it was successful in supporting Putin’s power in Russia itself.
During the mid-to-late 1980s, the weakened Soviet Union gradually stopped interfering in the internal affairs of Eastern Bloc nations and numerous independence movements took place.
In 1989, a wave of revolutions, sometimes called the “Autumn of Nations”, swept across the Eastern Bloc.
“As the former satellite states turned away from communism and Soviet influence, some of them shifted toward democracy in an orderly way, and some descended into violence and bloodshed and ethnic recrimination. In many ways, this collapse is still playing out today. In this video you’ll learn how countries like Czechoslovakia, Yugoslavia, Poland and many others moved into the post-Soviet world.” (Quote from Crash Course video description)
The Middle East
The Middle East has been one of the most conflicted regions in the world in the last century, following the collapse of the Ottoman Empire in World War I. In the recent past, much of that instability has revolved around three interrelated factors: the Middle East’s role in global politics, the Israeli-Palestinian conflict, and the vast oil reserves of the region. In turn, the United States played an outsize role in shaping the region’s politics and conflicts.
During the Cold War the Middle East was constantly implicated in American policies directed to curtail the threat of Soviet expansion. The US government tended to support political regimes that could serve as reliable clients regardless of the political orientation of the regime in question or that regime’s relationship with its neighbors. First and foremost, the US drew close to Israel.
Simultaneously, however, the US supported Arab and Persian regimes that were anything but democratic. The Iranian regime under the Pahlavi dynasty was restored to power through an American-sponsored coup in 1953. The Iranian Shah Muhammad Reza Pahlavi ruled Iran as a loyal American client for the next 26 years while suppressing dissent through a brutal secret police force. The Iranian regime purchased enormous quantities of American arms (50% of American arms sales were to Iran in the mid-1970s) and kept the oil flowing to the global market. Iranian society was highly educated and its economy thrived, but its government was an oppressive autocracy.
The word autocrat comes from the Greek autokrates, meaning ruling by oneself.
An autocrat is a ruler who has absolute power. “My way or the highway.”
Some autocratic rulers of the past:
King Henry VIII
Likewise, the equally autocratic monarchy of Saudi Arabia emerged as the third “pillar” in the US’s Middle Eastern clientage system. Despite its religious policy being based on Wahhabism, the most puritanical and rigid interpretation of Islam in the Sunni world, Saudi Arabia was welcomed by American politicians as another useful foothold in the region that happened to produce a vast quantity of oil.
Practices that have been forbidden and sometimes “punished by flogging” during Wahhabi history include performing or listening to music, dancing, television programs (unless religious), smoking, playing backgammon, chess, or cards, drawing human or animal figures, acting in a play or writing fiction (both are considered forms of lying), dissecting cadavers (even in criminal investigations and for the purposes of medical research), recorded music played over telephones on hold or the sending of flowers to friends or relatives who are in the hospital. Common Muslim practices Wahhabis believe are contrary to Islam include listening to music in praise of Muhammad, praying to God while visiting tombs (including the tomb of Muhammad), and celebrating the birthday of the Prophet.
This status quo was torn apart in 1979 by the Iranian Revolution. What began as a coalition of intellectuals, students, workers, and clerics opposed to the oppressive regime of the Shah was overtaken by the most fanatical branch of the Iranian Shia clergy under the leadership of the Ayatollah (“sign of God”) Ruhollah Khomeini.
When the dust settled from the revolution, the Ayatollah had become the official head of state and Iran had become a hybrid democratic-theocratic state: the Islamic Republic of Iran. The new government featured an elected parliament and equality before the law (significantly, women enjoy full political rights in Iran, unlike in some other Middle Eastern nations like Saudi Arabia), but the Ayatollah had final say in directing politics, intervening when he felt that Shia principles were threatened. Deep-seated resentment among Iranians toward the US for the latter’s long support of the Shah’s regime became official policy in the new state, and in turn the US was swift to vilify the new regime.
You can read about the Iranian Revolution in the graphic novel: Persepolis: The Story of a Childhood
“In powerful black-and-white comic strip images, Satrapi tells the story of her life in Tehran from ages six to fourteen, years that saw the overthrow of the Shah’s regime, the triumph of the Islamic Revolution, and the devastating effects of war with Iraq. The intelligent and outspoken only child of committed Marxists and the great-granddaughter of one of Iran’s last emperors, Marjane bears witness to a childhood uniquely entwined with the history of her country.”
The 1980s and 1990s saw a botched Israeli invasion of Lebanon, an ongoing military debacle for the Soviet Union in Afghanistan, and a full-scale war between the new Islamic Republic of Iran and its neighbor, Iraq. Ruled by a secular nationalist faction, the Ba’ath Party, since 1968, Iraq represented yet another form of autocracy in the region. Saddam Hussein, the military leader at the head of the Ba’ath Party, launched the Iran-Iraq War in 1980 as a straightforward territorial grab. The United States supported both sides during the war at different points despite its avowed opposition to the Iranian regime. In the end, the war sputtered to a bloody stalemate in 1988 after over a million people had lost their lives.
Just two years later, Iraq invaded the neighboring country of Kuwait and the United States (fearing the threat to oil supplies and now regarding Hussein’s regime as dangerously unpredictable) led a coalition of United Nations forces to expel it. The subsequent Gulf War was an easy victory for the US and its allies, even as the USSR spiraled toward its messy demise as communism collapsed in Eastern Europe.
The early 1990s thus saw the United States in a position of unparalleled power and influence in the Middle East, with every country either its client and ally (e.g. Saudi Arabia, Israel) or hostile but impotent to threaten US interests (e.g. Iran, Iraq). American elites subscribed to what President George H. W. Bush described as the “New World Order”: America would henceforth be the world’s policeman, overseeing a global market economy and holding rogue states in check with the vast strength of American military power.
Instead, the world was shocked when fundamentalist Muslim terrorists, not agents of a nation, hijacked and crashed airliners into the World Trade Center towers in New York City and into the Pentagon (the headquarters of the American military) on September 11, 2001. Fueled by hatred toward the US for its ongoing support of Israel against the Palestinian demand for sovereignty and for the decades of US meddling in Middle Eastern politics, the terrorist group Al Qaeda succeeded in the most audacious and destructive terrorist attacks in modern history.
President George W. Bush (son of the first President Bush) vowed a global “War on Terror.” The American military swiftly invaded Afghanistan, ruled by an extremist Sunni Muslim faction known as the Taliban, for sheltering Al Qaeda. American forces easily toppled the Taliban but failed to destroy it or Al Qaeda.
The Bush administration then turned to Iraq, launching a full-scale invasion in 2003 to topple Hussein. That much was easily accomplished, as once again the Iraqi military proved completely unable to hold back American forces. Within months, however, Iraq devolved into a state of murderous anarchy as former leaders of the Ba’ath Party (thrown out of office by American forces), local Islamic clerics, and members of different tribal or ethnic groups led rival insurgencies against both the occupying American military and their own Iraqi rivals. The Iraq War thus became a costly military occupation rather than an easy regime change, and in the following years the internecine violence and American attempts to suppress Iraqi insurgents led to well over a million deaths (estimates are notoriously difficult to verify, but the death toll might actually be over two million). A 2018 US Army analysis of the war glumly concluded that the closest thing to a winner to emerge from the Iraq War was, ironically, Iran, which used the anarchic aftermath of the invasion to exert tremendous influence in the region.
While Europe has suffered from economic and, to a lesser extent, political instability since the 1980s, that instability pales in comparison to the instability of other world regions. In particular, the Middle East entered into a period of outright bloodshed and chaos as the twenty-first century began. In turn, the shock waves of Middle Eastern conflict have reverberated around the globe, inspiring the growth of international terrorist groups.
The Arab Spring of 2010 led to a brief moment of hope that new democracies might take the place of military dictatorships in countries like Libya, Egypt, and Syria, only to see authoritarian regimes or parties reassert control.
Syria in particular spiraled into a horrendously bloody civil war in 2011, prompting millions of Syrian civilians to flee the country.
This is an older video, but explains the beginning of Syria’s war:
Turkey, one of the most venerable democracies in the region since its foundation as a modern state in the aftermath of World War I, saw its president Recep Tayyip Erdoğan steadily assert greater authority over the press and the judiciary. The two other regional powers, Iran and Saudi Arabia, carried on a proxy war in Yemen and funded rival paramilitary (often considered terrorist) groups across the region. Israel, meanwhile, continued to face both regional hostility and internal threats from desperate Palestinian insurgents.
In Europe, refugees fleeing the Middle East (and, to a lesser extent, Africa) in search of the far greater stability and opportunity available abroad have brought about a resurgence of far-right and, in many cases, openly neo-fascist politics. While fascistic parties like France’s National Front have existed since the early 1970s, they remained basically marginal and demonized for most of their history. Since 2010, far right parties have grown steadily in importance, seeing their share of each country’s electorate increase as worries about the impact of immigration drive voters to embrace nativist, crypto-racist political messages. Even some citizens who do not harbor openly racist views have come to be attracted to the new right, since mainstream political parties often seem to represent only the interests of out-of-touch social elites.
A widespread sense of anger, disillusionment, and resentment haunts politics not just in Europe, but in much of the world.
Globalization

At its most basic, globalization refers to business activities and the actions of governments beginning to be conducted on a worldwide scale. The European colonial project in the Americas, the Atlantic Slave Trade, and the activities of the British East India Company in India and China were all conducted at a worldwide scale, and all had commercial elements. Conflicts such as the Seven Years War, the War of 1812, and World Wars I and II have also involved multiple continents. And events like the Columbian Exchange, which made American staple crops available to feed growing world populations, and the “Spanish Flu” pandemic which killed up to 500 million people throughout the world, also had global consequences.
One of the important forces driving globalization has been the removal of protectionist trade policies around the world. Trade protectionism is a policy that shields domestic industries from foreign competition, usually through tariffs or import restrictions. Free trade is the opposite: it gives people the freedom to buy cheaper products from anywhere in the world. The downside to free trade is that companies are less likely to buy locally made products, which can mean a loss of domestic jobs.
Transnational corporations are uniquely suited to take advantage of free trade and the world economy. There are about 50,000 global corporations, but the number of corporations that are as important as states in the world economy is a bit smaller. The 2020 Fortune Magazine Global 500 was topped by Walmart, Sinopec Group (a Beijing-based oil and gas company), State Grid (the Chinese national electric company), China National Petroleum (another Beijing-based oil and gas company), and Royal Dutch Shell. The next five on the list are Saudi Aramco (oil company), Volkswagen, BP, Amazon, and Toyota. Placement on this list is based on revenues, which is similar to the GDP used to measure the size of national economies. If Walmart were a nation, it would be larger than all but 23 of the 211 members of the United Nations.
An important element of the shift away from a U.S.-centered globalization is the growing economic power of Asia. Japan’s economy was jump-started after WWII by U.S. aid, including a $2 billion direct investment and letting Japan off the hook for war reparations. Japanese goods were also given preferential access to U.S. consumer markets, so the Japanese economy focused on low-wage industries producing products for export to America. The United States no longer considered Japan a threat, but rather a potential ally against communist China. The Japanese people, already quite accustomed to austerity, complied with their government’s new industrial policy, and Japan reinvested its earnings and rapidly grew from a producer of cheap knock-off copies of American products to an innovator in high technology.
Other Asian nations like Singapore, South Korea, and Taiwan followed in Japan’s footsteps in the 1960s and 70s, often also with aid from the U.S. designed to slow the spread of communism during the Cold War. After the death of Mao Zedong in 1976, Deng Xiaoping gained power in 1978 and China began shifting toward a market economy in which the government would direct development with incentives rather than decrees and directives. In addition to a plentiful supply of cheap labor, China had high savings rates and Deng’s devaluation of the nation’s currency allowed Chinese savings and foreign exchange surpluses to be invested in securities like American government bonds. This made China the world’s bank, as nations like the U.S. fell deeper into debt. Finally, a rising standard of living in China created a new middle class and a huge consumer market.
In 2002, ninety percent of the Chinese population lived in poverty, with seven percent listed as middle class, two percent upper-middle class, and one percent considered affluent by world standards. By 2012, the proportion of poor in China had been reduced to twenty-nine percent. Two thirds of the poor (nearly a billion people) saw their standards of living improve, in one of the most momentous shifts in world history. Fifty-four percent of Chinese in 2012 were considered middle class, and that fifty-four percent is expected to rise to upper-middle class status by 2022, with another twenty-two percent moving from poverty into the middle class, leaving only sixteen percent of Chinese people in poverty. This is nearly the same income demographic we see in nations like the U.S., which has a ten percent poverty rate. China is becoming a dominant force in the world economy once again, and the increased spending power of the Chinese people will soon drive the global market. Chinese demand for items like automobiles is expected to outpace the rest of the world for the foreseeable future. Foxconn, which began as a contract manufacturer of low-tech items like computer cases, has become a multibillion-dollar manufacturer of the highest-tech items like Apple iPhones and computers. Lenovo, which began as a PC clone company in 1984, has been the world’s largest personal computer maker since 2013. Lenovo acquired IBM’s PC division in 2005, and the famous IBM ThinkPad became a Chinese product. Lenovo does about $45 billion in annual revenue and was one of the world’s largest cell-phone makers until it fell behind rivals like Apple and Samsung after 2016.
As Chinese purchasing power increases, world industry will be challenged to produce consumer goods without exhausting finite resources or destroying the environment. Chinese cities have been known for their pollution, especially for their poor air quality. An increasingly affluent population may become less willing to tolerate environmental destruction, which might be a positive change. Hopefully, Chinese interest in projects such as the Belt and Road Initiative, which seeks to connect China with the rest of Asia, Europe, and Africa in a “New Silk Road”, will include a commitment to the environments of the places China finds its natural resources and markets for consumer goods, rather than the approach to hinterlands taken by earlier world economic powers, in which out of sight often meant out of mind.
Computers and the Internet
In addition to the increase in international trade, global culture has been permanently changed by communications technology. Computer networks and cell phones continued a process begun with the printing press, the telegraph, radio, and television. Each of these technologies has been used to spread ideas to wider audiences, often against the wishes of those in power. More recent inventions like fax machines, data communication via modems, the internet, and most recently smart phones and social networks have been used to spread news of events like the Tiananmen Square protests in China, the Arab Spring, and the Egyptian Revolution of 2011. In spite of the efforts of some nations like China and Saudi Arabia to censor media and limit internet access, it is increasingly difficult to firewall societies from the global media culture.
Because the virtual world is becoming as important to the global economy as the physical world of “bricks and mortar” commerce and communication, let’s take a moment to review the technological, government, and business changes that enabled it.
One of the first computer networks was the Semi-Automated Business Research Environment (SABRE), launched by IBM in 1960, which initially connected two mainframe systems and grew into an airline reservation system. In 1963, American psychologist and computer scientist J. C. R. Licklider, who had become the first director of the Information Processing Techniques Office at the Pentagon’s Advanced Research Projects Agency (ARPA), proposed a concept he called the “Intergalactic Computer Network.” Licklider described it as “an electronic commons open to all, the main and essential medium of informational interaction for governments, institutions, corporations, and individuals.”
ARPANET, begun in 1969, joined government facilities and research universities on a system dedicated to official communications. It was permissible for researchers and users to occasionally communicate personally with each other using email. Commercial and political communications, however, were strictly forbidden. A computer scientist named Ted Nelson developed the basic ideas that became hypertext and the web between 1965 and 1972.
Nelson’s version of hypertext was based on the idea that there would be a “master” record of any document on the network. Exact copies of that document (which Nelson called transclusions in his 1980 book, Literary Machines) would point back to the original. Ideally, rather than just referring to the original, they would actually call up the original document wherever possible, eliminating the proliferation of copies.
This ideal was never really achieved. Storage was expensive, but bandwidth was even scarcer, and two-way linkage was much more difficult to implement than a one-way hyperlink that simply launched the user to a new place on the web. This is unfortunate, because bi-directional links would have allowed the owner of a document to know where and when it was used, and to receive compensation for that use. Apple co-founder Steve Wozniak compared the two approaches in a speech about Nelson in the 1990s, saying that one-way linking was a cool hack, while two-way linking required computer science.
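The tradeoff Wozniak described can be sketched in a few lines of code. The page names below are purely hypothetical; the point is that a one-way link is just a stored pointer, while two-way linking requires building and maintaining a reverse index of everything that points at a document:

```python
# A sketch of one-way vs. two-way linking, using hypothetical page names.

# One-way hyperlinks: each page stores only where it points.
links = {
    "essay.html": ["source.html", "notes.html"],
    "review.html": ["source.html"],
}

# Following a link is a single lookup -- the "cool hack".
assert "source.html" in links["essay.html"]

# Two-way linking requires a reverse index, so that a document's
# owner can discover every page that points at it.
backlinks = {}
for page, targets in links.items():
    for target in targets:
        backlinks.setdefault(target, []).append(page)

# Now source.html "knows" who uses it -- Nelson's transclusion ideal.
print(sorted(backlinks["source.html"]))  # ['essay.html', 'review.html']
```

Keeping the reverse index consistent as pages appear, change, and vanish across an entire network is what made two-way linking so much harder than the one-way version the web adopted.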
The Apple Macintosh and the IBM Personal Computer were introduced in the 1980s, and online communication and file-sharing grew through services such as CompuServe and Prodigy in the early 1990s. In 1992, an online game provider called Quantum Link, which had renamed itself America Online, offered a Windows version of its free access software. AOL free trial CDs became ubiquitous; CEO Steve Case claimed that at one point in the 90s half the CDs produced worldwide had an AOL logo. By the mid-1990s, AOL had passed both Prodigy and CompuServe, and in 1997 more than half of all U.S. homes with internet access got it through AOL. The economic power of online access was becoming apparent: in 1998 AOL acquired Netscape, in 1999 MapQuest, and in 2000 AOL merged with Time Warner.
The commercial nature of subscriptions like AOL stood in sharp contrast, for a while, to the early internet. The inventor of the World Wide Web, Tim Berners-Lee, was a scientist at CERN in Switzerland when he wrote “Information Management: A Proposal” in March 1989. His supervisor jotted on the front page of the paper, “Vague but exciting”, and Tim was given time to work out the details on a NeXT computer in his lab. By October 1990, Berners-Lee had written the three basic technologies of the web: HTML (Hypertext Markup Language), the formatting language of the web; the URI (Uniform Resource Identifier, AKA URL), which contains the protocol (http, ftp, etc.), the domain name (example.com), and folder and file names (like /blogs/index); and HTTP (Hypertext Transfer Protocol), which allows retrieval of linked resources.
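The anatomy of a URI described above can be seen by parsing an example address with Python’s standard library (the address is assembled from the illustrative pieces in the text):

```python
from urllib.parse import urlparse

# An illustrative address, broken into the parts Berners-Lee defined.
uri = "http://example.com/blogs/index"
parts = urlparse(uri)

print(parts.scheme)  # the protocol: 'http'
print(parts.netloc)  # the domain name: 'example.com'
print(parts.path)    # folder and file names: '/blogs/index'
```

Every web address, however long, decomposes into these same pieces, which is what lets any browser retrieve any resource from any server.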
Because CERN was a government-funded research facility, it decided to make the protocols freely available, but it was the development of the Mosaic browser in 1993 by Marc Andreessen, then a student working at the University of Illinois’ National Center for Supercomputing Applications, that made Berners-Lee’s inventions the solid basis of the web. Andreessen graduated, moved to California and met Jim Clark, who had recently left Silicon Graphics.
They formed Netscape and made their browser, called Navigator, available for free to non-commercial users. Netscape Navigator was destroyed by Microsoft’s decision to bundle its own browser, Internet Explorer, with Windows 95. Microsoft made it very difficult for PC manufacturers or even users to uninstall IE and use Netscape and Java, which led to a federal antitrust case in which the court ruled in 2000 that Microsoft had abused its monopoly power. Netscape never recovered from losing the “first browser war,” however, and was acquired by AOL in 1999.
From 1991 (when there was only one at CERN) to 1994, when Yahoo launched, the number of websites rose to 2,738. The following year, when Altavista, Amazon, and AuctionWeb began, the number of websites had increased nearly tenfold to 23,500. In 1998, when Google launched, the number of websites had jumped roughly a hundredfold, to 2,410,000. The early years of the web, known as Web 1.0, were a period when people with modest skills could acquire a domain and build a website. One of the first powerful and intuitive apps for building websites and pages was Microsoft’s FrontPage. It was a Windows app that provided a WYSIWYG design interface and output usable HTML code. Millions of people used the program to build personal and small commercial websites. Discontinued after its 2003 version, FrontPage was never replaced by anything with similar power and ease of use. Partly this is because in Web 2.0 the do-it-yourself (DIY) element of the web has largely disappeared.
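A quick back-of-the-envelope check of those growth rates, using only the website counts given above:

```python
# Website counts cited in the text, by year.
sites = {1994: 2_738, 1995: 23_500, 1998: 2_410_000}

# Growth factor from 1994 to 1995 -- nearly tenfold.
print(round(sites[1995] / sites[1994], 1))  # 8.6

# Growth factor from 1995 to 1998 -- roughly a hundredfold.
print(round(sites[1998] / sites[1995], 1))  # 102.6
```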
In 1999, a new generation of the web called Web 2.0 was first described, one that claimed to focus on participation by users rather than people simply viewing content passively. One example of this participatory nature of the new web is the proliferation of social media. Another is people posting videos on YouTube. The web has also become a site of commerce, though, so the most important instance of “participation” by web users is as consumers buying stuff – either web content like Netflix videos or iTunes music, or real-world goods on e-commerce sites like Amazon.
By 2001, when Wikipedia began, there were over a half billion internet users and over 29 million websites. There were a billion users in 2005, when YouTube and Reddit began, but growth had slowed to only 64,780,000 websites and a much larger percentage of them were commercial rather than personal. By 2010, when Pinterest and Instagram launched, there were 2 billion web users and the number of websites had actually declined from the previous year for the first time, to about 207 million.
In the 2010s the rest of the world caught up to the US in web use and website building. By 2015 there were well over 3 billion people using the web and by 2017 there were 1.7 billion websites. Since then the number of websites has decreased, dropping by nearly 10% per year. And about three quarters of these websites aren’t active, but are parked domains or redirects. The actual number of sites in active use is probably closer to 200 million.
For people who wanted a presence on the web but didn’t have the skills or interest to own a domain or code a website, what some have called web 2.5 saw the beginning of social networking sites. The biggest of these from 2003 to 2008 was called Myspace. People could create a profile page, post images and multimedia, and see what their friends were up to. It was much less structured than what we’re used to today, allowing users a lot of flexibility to personalize their pages. Myspace was overtaken by a service, Facebook, that provided even more ease of use and uniformity. Facebook is extremely easy to use, which may be why it has recently become the place for grandparents to stalk millennials.
The final element in the story of computing and networks involves the battle between free, open resources and commerce, which we’ve already seen in the growth of the web. Operating systems in early mainframes and personal computers were tightly controlled by manufacturers like IBM, Digital Equipment Corporation, and Hewlett Packard, by software businesses like Microsoft (DOS and Windows), and by workstation companies like Sun Microsystems and Silicon Graphics. Each of the workstation makers owned a proprietary version of an operating system that had originally been developed by researchers at AT&T Bell Labs (AT&T being barred by an antitrust ruling from entering the computer business) and the University of California, Berkeley. The Uniplexed Information and Computing Service, called UNIX, had originally been more or less open, but became commercialized when AT&T sold its rights to a network software company, Novell, which later sold those rights to the Santa Cruz Operation (SCO). In time, other organizations released versions that ran only on their own hardware (and often cost thousands of dollars), including IBM (AIX), Microsoft (Xenix), Sun Microsystems (Solaris), and SGI (IRIX) – all proprietary distributions with similar functionality.
In 1991, Finnish graduate student Linus Torvalds became frustrated with the high cost of UNIX. He wrote an operating system kernel in C, which he planned on calling Freax. Instead, early users called it Linux. The open-source operating system rapidly gained popularity among hackers due to its free distribution and its easy configurability. A programmer could configure the Linux kernel with just the features desired, which led to an explosion of both OS distributions (Red Hat, Debian, Ubuntu, Android) and uses in embedded systems, which were becoming popular. PC manufacturers like IBM and Dell adopted Linux as an option to reduce the cost of their systems and break Microsoft’s monopoly on the OS. Linux became especially popular for running file servers and internet routers, replacing expensive proprietary systems like IRIX, Windows NT, and Cisco’s IOS. And organizations like NASA discovered that clusters of networked off-the-shelf PCs running Linux could rival the computing power of proprietary supercomputers. Companies like Sun and SGI found the markets for their workstations and servers disappearing overnight. As of 2021, all the systems on the Top-500 supercomputer list run Linux.
Fifteen years ago, China had no computers on the TOP500 list; as of this writing, it owns the top two spots. The #2 machine, which was the world’s fastest supercomputer from 2013 to 2016, uses Intel Xeon CPUs. But in 2015, the U.S. government banned the sale of these processors to China. The official reason for the ban was national security concerns, though some suspected that a desire to reclaim the title of world’s fastest computer for the United States was a strong motivation as well. China responded by very rapidly shifting to its own Sunway CPUs, based on a new architecture and instruction set completed in 2016. The Sunway processors reportedly have 260 cores each, and the “TaihuLight” supercomputer built from them runs at up to 125.44 petaflops (1 petaflop = one quadrillion, or 10^15, floating point operations per second).
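To put that petaflop figure in perspective, here is a quick back-of-the-envelope calculation (a Python sketch; the 125.44-petaflop number is the machine’s reported peak, not a sustained benchmark):

```python
# One petaflop = 10**15 floating-point operations per second.
PETAFLOP = 10 ** 15

taihulight_peak = 125.44 * PETAFLOP  # reported peak, operations per second

# A workload requiring one quintillion (10**18) floating-point
# operations would finish in roughly eight seconds at that rate:
seconds = 10 ** 18 / taihulight_peak
print(f"{seconds:.2f} seconds")  # about 7.97 seconds
```

In other words, a computation that would take a desktop PC months can finish on such a machine before you look up from your keyboard.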
As computing power enables increasingly complex artificial intelligence (AI) systems that can control financial trading, power grids, and scientific research, the stakes of this national technology competition become apparent. But even the new web technology has its dark side. Social media has been implicated in helping fuel the genocide in Myanmar against the minority Rohingya population. Russian meddling and the manipulation of Facebook data by a company called Cambridge Analytica may have influenced the 2016 Brexit vote. Foreign hacking and social media manipulation were both alleged during the 2016 and 2020 U.S. presidential elections. In the course of investigating charges of Russian interference, details have come to light of just how compromised social media sites like Facebook have become and how much of their users’ data they hold. And in 2013, American whistleblower Edward Snowden released information to journalists showing that intelligence agencies such as the NSA and Britain’s GCHQ were systematically invading citizens’ privacy in a number of illegal ways. As a result of these disclosures, Snowden has been forced to live in exile in Russia.
Finally, even when there’s no adversary regime like Russia spreading disinformation, social media algorithms create “filter bubbles” in which people see only information that doesn’t threaten their worldviews. To generate greater advertising revenue, social media platforms and search engines routinely direct users to whatever will attract and hold their attention the longest. The objective of these algorithms is not necessarily to promote a certain worldview, but simply to keep the user engaged as long as possible so that more ads can be placed and sold. As a result, however, users are steered toward information that conforms to their “profile” of beliefs and biases. When information that contradicts a user’s preconceptions is presented at all, it is often framed in an adversarial way designed to generate anger (another reliable way to ensure engagement). News and information are thus tailored either to flatter audiences’ beliefs and prejudices or to outrage them. As time goes on, people on different sides of issues can find themselves living in effectively different worlds, basing their beliefs on different data and believing the other side is irrational and evil.
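The engagement-maximizing incentive described above can be sketched as a toy ranking function. This is not any real platform’s algorithm (those are closely guarded and vastly more complex); it is purely an illustration, in Python, of scoring content by predicted attention rather than by accuracy or balance:

```python
# Toy illustration of an engagement-maximizing feed ranker.
# NOT a real platform's algorithm -- just the incentive described above:
# score items by how long they will hold attention, not by their accuracy.

def rank_feed(items, user_profile):
    """Return items sorted by predicted engagement for this user."""
    def predicted_engagement(item):
        # Content that matches the user's existing views holds attention...
        affinity = 1.0 if item["stance"] == user_profile["stance"] else 0.0
        # ...but so does content that provokes outrage.
        return affinity + item["outrage_score"]
    return sorted(items, key=predicted_engagement, reverse=True)

feed = [
    {"title": "Balanced analysis",      "stance": "neutral", "outrage_score": 0.1},
    {"title": "You're right!",          "stance": "left",    "outrage_score": 0.2},
    {"title": "The other side is evil", "stance": "right",   "outrage_score": 0.9},
]
user = {"stance": "left"}
print([item["title"] for item in rank_feed(feed, user)])
# -> ["You're right!", "The other side is evil", "Balanced analysis"]
```

Notice that both the flattering item and the outrage-inducing item outrank the balanced one: the ranker never asks whether anything is true, only whether it will keep the user scrolling.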
Some have argued that global media access disproportionately benefits nations like the U.S., which has a multi-billion-dollar content-creation industry, and that it spreads values some societies disapprove of, including consumerism and pornography. Even in the U.S. and the developed world, the internet is changing from the democratic, peer-to-peer sharing institution it was designed to be into a platform for commerce and media consumption. In the early days of the internet, communication was text-based because bandwidth was low. The advent of fiber-optic network backbones in the 1990s and the worldwide web created the opportunity to communicate using images and, ultimately, streaming video. 4G and 5G cellular networks now allow media to be streamed to smartphones and tablets. This rapidly expanding bandwidth created an opportunity for the internet to replace broadcast television, just as it had replaced the analog landline telephone network. But open access may not be universally available for long.
As the technology exploded, many people expected a renaissance of DIY content creation, and the explosion of websites, blogs, vlogs, podcasts, Instagram accounts, Snapchat stories, and YouTube channels has definitely expanded the ability of regular people to be heard. Five billion YouTube videos are watched daily, and 300 hours of video are uploaded every minute. On the other hand, more and more of the web’s content is produced by global corporations, and the Federal Communications Commission (FCC) has moved to eliminate net neutrality, so corporations can buy “fast-lane” access that could turn the web into just another platform for corporate media.
The promise of the early internet was that even though corporations participated, it was basically a peer-to-peer platform. Eliminating net neutrality could potentially kill that, unless hackers can come up with a new disruptive technology that allows the people to stay ahead of the corporations. If corporations can pay to have certain types of data or media fast-tracked, they can also pay to have other types of information slow-tracked or even suppressed. Imagine if a group with deep pockets and a political agenda could start editing what you can see on the internet. Oh, wait. Don’t imagine it. It’s already happening.
The End of This Book
Predicting the future is a fool’s errand, and one historians in particular are generally loath to engage in. That said, history at least provides examples and counterexamples from the past that can, and should, serve as warnings for the present. As this text has demonstrated, much of history has been governed by greed, indifference to human suffering, and the lust for power. Studying history helps us avoid repeating the problems of the past and helps us understand the world’s current situation. You are part of history. You are creating history.
We hope you enjoyed this whirlwind tour of the past and hope you are able to take what you learned to help make a better future.
We want to know what you thought of what you just read and watched! Leave us a comment! Please also let us know if a link or video isn’t working. 😊
Next: There is no next! You’ve reached the end of Guest Hollow’s Whirlwind World History Textbook. Please leave a comment below and let us know what you thought of this book!