Sunday, June 27, 2010

The Wisdom of Dilbert

...just something I found on one of my random journeys around the 'Net.

A Steam-Powered Vegetarian Robot?


The Victorian Era (1837-1901) is a period of human history that I’ve always felt had an inimitable charm of its own, what with its delightful gas-lit streets, its tailcoats and top hats, and its neo-Gothic architecture (an example of which is the U.K.’s Houses of Parliament, which were rebuilt between 1840 and 1870 after the original palace was destroyed in a fire). But without doubt, one of the most enduring icons of the Victorian Era, and of the concomitant Industrial Revolution, was the ubiquitous steam engine.

Coal-fired steam engines drove everything from the great railways and mines to the textile mills and other factories, the pumping of the domestic water supply, and the irrigation of farmland. By the 20th century, though, advances in internal combustion engines (the kind that’s in your car) and electric motors (like the one in the ceiling fan above you), along with the adoption of oil as a fuel, spelled doom for the once-mighty steam engine.

Or did it? Maybe steam was just waiting for a comeback born of the drug-induced hallucinations of some crazy scientist at a government laboratory. I presume that that must be what happened because, seriously, no one in a normal state of mind could come up with something like EATR (Energetically Autonomous Tactical Robot): a steam-powered, vegetarian robot. It’s still primarily a concept, but a working prototype is being built.

The motive for creating something like this (apart from the obvious “Because we can!”) is that such a robot could theoretically operate indefinitely in environments where conventional fuel sources are hard to find. It’s perfect for the American Army, for instance, because it would allow them to dispatch teams of EATRs to perform reconnaissance missions in environments like forests. It could also allow human soldiers to rest while it forages for biofuels, recharges electrical devices, or even transports heavy machinery. Civilian versions of the EATR could be used for forestry patrol and for agricultural applications.

The EATR uses image-recognition software linked to a laser and a camera to recognize plants, leaves and wood. Once it identifies appropriate fuel, a robotic arm gathers and prepares the vegetation before feeding it through a shredder into a combustion chamber. The heat from combustion turns water into steam, which drives a six-piston steam engine; that, in turn, spins a generator whose electricity is stored in batteries and delivered to the EATR’s electric motors as needed. Its builders estimate that 68 kg of vegetation would provide enough electricity to travel around 160 km. A little circuitous? You bet.
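(Just for fun, here’s a rough back-of-the-envelope check of that figure in Python. The energy density and the overall efficiency of the combustion-to-motor chain are my own guesses, not numbers from the EATR’s builders.)

```python
# Back-of-the-envelope check of the "68 kg of vegetation -> ~160 km" estimate.
# The energy density and chain efficiency below are illustrative guesses,
# not figures published by the EATR's builders.

biomass_kg = 68                  # vegetation gathered
range_km = 160                   # estimated range on that much biomass
energy_density_mj_per_kg = 15    # rough value for dry woody biomass (assumed)
chain_efficiency = 0.05          # combustion -> steam -> generator -> battery -> motor (assumed)

usable_energy_mj = biomass_kg * energy_density_mj_per_kg * chain_efficiency

print(f"Biomass burned per km: {biomass_kg / range_km:.2f} kg")
print(f"Usable energy per km: {usable_energy_mj / range_km:.2f} MJ "
      f"(~{usable_energy_mj / range_km / 3.6:.2f} kWh)")
```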

As well as biomass, the EATR’s engine can run on petrol, diesel, kerosene, cooking oil or anything similar that could be scavenged. The ability to consume a wide range of fuels would be important if the vehicle found itself in areas like deserts, where vegetation may not be available and alternative fuel would be needed.

The robot is actually being developed by a private firm, Robotic Technology, Inc., but has received funding from DARPA- the Defense Advanced Research Projects Agency, a US government agency. DARPA is, of course, no stranger to outlandish research projects. If the ARPANET it built in 1969 to link research computers at a handful of institutions could turn into the behemoth that is the Internet today, who’s to say that we won’t soon be letting our cars out to graze at night instead of taking them to petrol stations?

[For a hilarious press release from Robotic Technology Inc, countering media claims that EATR feeds on the dead, go to http://www.wired.com/dangerroom/2009/07/company-denies-its-robots-feed-on-the-dead/]

[Most of the information in this article is from Technology Quarterly (June 12, 2010), a publication of The Economist]

Monday, May 17, 2010

This is Why I Love 'The Economist'



I’ve always maintained that newspapers are relatively boring. I mean, sure, they’re worth looking through if you’re really interested in current affairs, but the great majority of newspaper articles are simply a collection of facts thrown in your face- the writing itself is rarely worth remembering.  I realize that this unemotional presentation of the bare facts is part of how newspapers are supposed to work, but it does take something away from their overall appeal.
News magazines, on the other hand, can often be a whole lot of fun- and none more so than The Economist, if you ask me. (The Economist actually prefers to call itself a newspaper, but it’s really got more in common with news magazines like TIME, Forbes, Businessweek, etc.) Just check out this article to see for yourself that it’s possible to find laugh-out-loud moments in a serious analysis of the news. [It might help if I pointed out right away that the tone is meant to be ironic.]
Just in case you don’t get it because you’re not familiar with some of the illustrious (?) personalities mentioned, here’s a quick recap of what might be their greatest claims to fame.
Robert Mugabe: The current Zimbabwean president’s “land reform” efforts, which began in 2000, consisted largely of invading and seizing farmland belonging to whites and reallocating it to supporters of his regime. These actions caused agricultural production in Zimbabwe to plummet, leaving a once self-sufficient nation at the mercy of donations from the World Food Program in order to avoid starvation.
Than Shwe: As leader of the ruling military junta in Myanmar, Than Shwe is partly responsible for keeping opposition leader and democracy activist Aung San Suu Kyi (who is a woman) under house arrest for fourteen of the past twenty years.
Silvio Berlusconi: Famous for his extramarital sexual exploits. Most recently, in June 2009, he was accused of hiring 42-year-old escort Patrizia D’Addario to spend the night with him. Before then, in April 2009, there was outrage over his attendance at an eighteen-year-old girl’s birthday party. His wife noted that he’d missed his own sons’ 18th birthdays. Berlusconi, of course, claimed that he’d never had “spicy” relations with the girl.
One can’t help but feel that it says something about the Italians in general that they’ve allowed this man to become their longest serving Prime Minister.
Mahmoud Ahmadinejad: In the face of economic sanctions, democratic pressures, and outright threats from the rest of the world, the Iranian president has steadfastly stood by his country’s plans to develop civilian nuclear infrastructure. At least, he claims that it’s only for nonmilitary purposes- but not many people are willing to take his word for it. And so the brinkmanship continues, with Western powers continually trying to push Iran further into a corner, and Iran obdurately constructing secret nuclear facilities and keeping out nuclear regulators…
Saddam Hussein: Earns a mention here for his regime’s ‘multiculturalist’ efforts at eradicating the Kurdish people in northern Iraq and silencing Shia religious dissidents throughout the country. Attacks on the Kurds, in particular, made indiscriminate use of chemical weapons such as mustard gas and sarin. An estimated 180,000 Kurds were killed and 1.5 million were displaced during the Ba’ath Party’s rule.
Idi Amin: I actually had to look him up- and I’m glad I did. Idi Amin was the president and military dictator of Uganda between 1971 and 1979. He was famous for his egotistical behaviour, and enjoyed making provocative statements aimed at Western powers. He created and conferred upon himself the title of CBE- Conqueror of the British Empire- parodying the existing title of Commander of the Order of the British Empire, which is granted by the British monarch. The dig about his innovative culinary skills refers to a widespread rumour that, among his numerous other eccentricities, he was also a cannibal!
Dick Cheney: As Vice President, he was George W. Bush’s second-in-command, and he probably comes in second on the list of the most despised American political figures of recent times, as well. Alongside his political career, Cheney spent time working in the private sector, and even served as the CEO of a  Fortune 500 corporation called Halliburton between 1995 and 2000.
His ties with Halliburton, which offers services to support oil exploration and drilling, later became the subject of public scrutiny, as allegations arose that the company was receiving preferential treatment in the awarding of oil contracts in Iraq after the US invasion in 2003. Cheney was always a fervent supporter of the Iraq War, and it seemed possible to many  that part of the reason for this was that Halliburton stood to make huge gains from such an action.
In 2003, federal courts ordered the disclosure of documents detailing Cheney’s closed-door dealings with oil company executives. Cheney refused, arguing that the executive branch of the government had the right to keep such documents secret, and fought the demands all the way to the Supreme Court.
Kim Jong Il: Well, just take a look at the picture. And remember that he’s always dressed like that.
Hugo Chavez: He’s immensely popular in his home country, largely because of his programs to support Venezuela’s poor majority. In 2009, he won a nationwide referendum to eliminate term limits for the presidency, essentially making it possible for him to govern indefinitely. Chavez is a staunch opponent of American foreign policy, and has gained international recognition for his vocal (and verbose) tirades against the Americans, and on various other topics.
His Sunday show, Alo Presidente (Hello President), a largely unscripted monologue, often exceeds seven hours, amounting to 54,000 words, or 333,000 characters, about the length of a romance novel. He’s so fond of an audience for his political views that he even recently joined Twitter. Observers are extremely skeptical of his ability to say anything in under 140 characters, though.

Existential Anguish



I can't imagine a work of art more poignantly capturing an ill-defined but all-consuming sense of existential anguish than The Scream.

The lack of recognizable facial features on the principal subject somehow only serves to convey its emotions more acutely. It's as if everything that once made this creature human has been stripped away, leaving behind only a raw core of fear and pain. These emotions are of such crushing magnitude that they distort material reality around the subject, causing the skies to boil and meld with the land and the sea. There's no detail in the surroundings because it all pales into insignificance- even nonexistence- in the face of the subject's pain. 

Crucially, though, the two figures in the background are not distorted in the same way as the faceless subject. This fact, along with their physical distance from the subject, seems to drive home the idea that it is alone in its pain. There is no hope here of its misery being assuaged by the company of fellow sufferers. It suffers alone.

One of the most famous paintings of all time, The Scream is an example of Expressionist art, which sought to express emotional experience and the meaning of being alive rather than physical reality. It's been the victim of several high-profile art thefts, and was recovered more than once in sting operations by police forces from several countries.

Upon examination of his life, it seems almost fitting that Edvard Munch should have created such a powerful depiction of human misery. Munch's father was religious to the point of fanaticism, and forcibly imposed his values upon all five of his children. Of his father, Munch once said: "My father was temperamentally nervous and obsessively religious—to the point of psychoneurosis. From him I inherited the seeds of madness. The angels of fear, sorrow, and death stood by my side since the day I was born."

Munch himself was chronically ill as a child, and also had to contend with his poor family's constant moves from one sordid flat to another. One of his younger sisters was diagnosed with mental illness at an early age. Another quote attributed to Munch: "I inherited two of mankind's most frightful enemies—the heritage of consumption [tuberculosis] and insanity."

It is interesting that Munch took an open-minded view of the world- and of art, in particular- in contrast to his father's unwavering adherence to parochial religious dogma. 

NASA’s Spirit Rover Nears Death on the Red Planet


(From top: The mission badge of the Spirit rover, featuring Marvin the Martian; and a few pictures that were beamed back to Earth by the Spirit rover)

Spirit is dying. And, one might say, it’s about time, too. The intrepid little rover’s been exploring the Martian surface since January the 3rd, 2004. It shares the Red Planet’s surface with its twin, Opportunity; but since their landing sites were almost diametrically opposite one another, they probably don’t get into too many fights about who’s on whose side of the planet.

Spirit, which is about the size of a dune buggy, got stuck in a sand trap in May 2009, and since only four of its six wheels remained fully operational by then, it hasn’t been able to extricate itself yet. This leaves the rover in an extremely vulnerable situation, as it’s unable to orient its solar panels to take full advantage of the sun’s energy, or to allow the wind to brush dust off the panels’ surfaces. And now, with the onset of the harsh Martian winter, when even less of the sun’s energy reaches the planet’s surface, Spirit may run out of power completely.

It would be a sad end to what’s been a long and fruitful life of adventuring. Far longer than nearly anyone expected, as a matter of fact: when Spirit and Opportunity were launched, their expected lifetimes were only three months. They’ve both already outlived that estimate by a factor of around 24. They’ve survived paralyzing cold, blinding dust and long periods without sun, all of which occasionally left them silent and still, but only until conditions improved and they shook off the dust, stirred to life and puttered off to do more work.

Don’t you just wish all electronic appliances were that resilient? 

Friday, May 14, 2010

A Surprising Fact About the Hubble Space Telescope


The Hubble Space Telescope- like the more recently built Large Hadron Collider (LHC)- is one of those icons of scientific endeavour that capture the imagination of thousands all over the world. Soon after it was launched in 1990, it was discovered that the main mirror- despite having been ground so precisely that its surface deviated by no more than about 10 nanometres from the prescribed shape- had been figured to subtly the wrong shape because of a flaw in the test equipment, and was incapable of producing sharply defined images. It took an extraordinarily difficult servicing mission to correct the Hubble’s optical flaws; but in the end, it was a complete success.
After that first servicing mission in 1993, the Hubble went on to produce some of the finest and most captivating images of space ever seen. (“Pillars of Creation”, one of the most famous Hubble images, is included here.) The astonishing detail and the nuanced coloration of these images lend them an evocative beauty that often transcends a lack of understanding of their actual subject matter, in a way that few other scientific images do.
However, it may come as a surprise to learn that much of the appeal of these images comes not from the telescope itself, but from the astronomers and image processing specialists who- in a sense- “photoshop” the images before releasing them to the public. That’s because the Hubble only sends images in black and white!
Astronomers have to make choices about composition, colour and contrast in order to bring out specific aspects of the data that the Hubble beams down to Earth. And while these decisions often have scientific meaning (for example, hotter stars are usually bluish white, whereas cooler ones are redder), they are also occasionally made purely in order to enhance the visual appeal of the images.
For people who’ve never had access to the Hubble’s raw data, it might be hard to rein in a vague sense of disappointment over the fact that the universe may not be quite that pretty, after all; but, looked at another way, it’s a whole lot more mysterious…
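For the curious, here’s a minimal sketch (in Python) of the basic idea behind that “photoshopping”: separate grayscale exposures, each taken through a different filter, get assigned to colour channels and combined into a single composite. The filenames and the filter-to-channel choices below are made-up examples, not the actual Hubble pipeline.

```python
import numpy as np
from PIL import Image

# Toy colour compositing: combine grayscale frames (one per filter) into RGB.
# Filenames and the filter-to-channel mapping are hypothetical examples.

def load_normalized(path):
    """Load a grayscale frame and stretch its values to the 0-1 range."""
    frame = np.asarray(Image.open(path).convert("F"), dtype=np.float64)
    lo, hi = frame.min(), frame.max()
    return (frame - lo) / (hi - lo) if hi > lo else frame * 0.0

# Assigning each filter to a colour channel is partly a scientific choice,
# partly an aesthetic one.
red   = load_normalized("filter_sulphur.png")
green = load_normalized("filter_hydrogen_alpha.png")
blue  = load_normalized("filter_oxygen.png")

composite = np.dstack([red, green, blue])          # stack into an RGB image
Image.fromarray((composite * 255).astype(np.uint8)).save("composite.png")
```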

Terminator

No sudden, sharp boundary marks the passage of day into night on planet Earth. Instead, the shadow line, or terminator, is diffuse, showing the gradual transition to darkness that we experience as twilight. With the Sun illuminating the scene from the right, the cloud tops reflect gently reddened sunlight filtered through the dusty troposphere, the lowest layer of the planet’s atmosphere. A clear high-altitude layer, visible along the dayside’s upper edge, scatters blue sunlight and fades into the blackness of space. This picture is actually a single digital photograph, taken in June 2001 from the International Space Station, orbiting at an altitude of 211 nautical miles.

Wave-Particle Duality...?

An interesting take on wave-particle duality...  :P

Pale Blue Dot

“Pale Blue Dot” is a photograph of planet Earth taken in 1990 by the Voyager 1 space probe from a record distance (6 billion km), showing it against the vastness of space. Both the idea for taking the distant photo and the title came from scientist and astronomer Carl Sagan, who also wrote the 1994 book of the same name.
In a commencement address delivered May 11, 1996, Sagan related his thoughts on the deeper meaning of the photograph:

Look again at that dot. That’s here. That’s home. That’s us. On it everyone you love, everyone you know, everyone you ever heard of, every human being who ever was, lived out their lives. The aggregate of our joy and suffering, thousands of confident religions, ideologies, and economic doctrines, every hunter and forager, every hero and coward, every creator and destroyer of civilization, every king and peasant, every young couple in love, every mother and father, hopeful child, inventor and explorer, every teacher of morals, every corrupt politician, every “superstar,” every “supreme leader,” every saint and sinner in the history of our species lived there – on a mote of dust suspended in a sunbeam.
The Earth is a very small stage in a vast cosmic arena. Think of the rivers of blood spilled by all those generals and emperors so that, in glory and triumph, they could become the momentary masters of a fraction of a dot. Think of the endless cruelties visited by the inhabitants of one corner of this pixel on the scarcely distinguishable inhabitants of some other corner, how frequent their misunderstandings, how eager they are to kill one another, how fervent their hatreds.
Our posturings, our imagined self-importance, the delusion that we have some privileged position in the Universe, are challenged by this point of pale light. Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity, in all this vastness, there is no hint that help will come from elsewhere to save us from ourselves.
The Earth is the only world known so far to harbor life. There is nowhere else, at least in the near future, to which our species could migrate. Visit, yes. Settle, not yet. Like it or not, for the moment the Earth is where we make our stand.
It has been said that astronomy is a humbling and character-building experience. There is perhaps no better demonstration of the folly of human conceits than this distant image of our tiny world. To me, it underscores our responsibility to deal more kindly with one another, and to preserve and cherish the pale blue dot, the only home we’ve ever known.

Earthrise

“Earthrise” is the name given to NASA image AS8-14-2383, taken by astronaut William Anders during the historic Apollo 8 mission, the first manned voyage to orbit the Moon. The photograph was taken from lunar orbit on December 24, 1968 with a Hasselblad camera.

Calibri and Cambria- Meant to Reside Onscreen


As anyone who owns a low-to-mid range printer probably knows, printer cartridges are prohibitively expensive. In fact, they often cost just about as much as printers themselves do. Which is a very good reason to try to use them as economically as possible. And it seems that you can do that more effectively by making the right font choices.
Data from Printer.com, a Dutch company, reveals that Century Gothic and Times New Roman are the least ink-intensive fonts, with Arial, Calibri and Verdana coming in second place, and Trebuchet, Tahoma and Franklin Gothic Medium right after them.
The ink-intensiveness of a font depends primarily upon the thickness of its lines. Slightly counter-intuitively, “serif” fonts (ones with little horizontal strokes at the tops and bottoms of most letters) tend to use less ink than their “sans serif” counterparts. That’s because, despite the added serifs, these fonts tend to use thinner lines for the main bodies of the letters.
Unfortunately, there’s a trade-off to take into consideration here. Fonts that are less ink-intensive tend to be more “spaced out”, meaning you may need more paper to print with them. A document that fits comfortably on one page in Arial could spill onto a second page if switched to Century Gothic, for instance.
Microsoft Corporation, however, feels that the best thing to do is to avoid the trade-off altogether, and encourage people to avoid printing in the first place. The more pleasing a font looks on screen, the less likely it is that someone will feel the need to click Print, they say.
Screen-prettiness, therefore, is one of the criteria they use in deciding which fonts to include in software such as MS Word; and it’s why the defaults were changed from Times New Roman and Arial to Calibri and Cambria in Office 2007. They might even have swapped Calibri for Segoe UI in Office 2010 for the same reason.
[Info from: Technology Review; http://www.technologyreview.com/wire/25005/?nlid=2875&a=f]

A Game of Clue

In the parallel universe that most Hollywood movies are set in, global catastrophes are often masterminded by evil geniuses with sinister motives. The implicit rules of this nigh-incomprehensible world usually ensure that the audience is afforded a fleeting glimpse of the principal villain before the true nature and extent of the impending disaster is revealed. We find him in a darkened, opulent office, skulking in a high-backed leather chair; and apparently taking great care to make sure that nothing but the top of his head is visible over the back of his chair. In the portentous silence, a slowly curling wisp of smoke rises ponderously upwards from an expensive cigar held in a bejewelled hand.

We only really get to meet this villain once some hero has braved one unimaginable danger after another to arrive at his doorstep and have a word with him about what he’s doing to the world. The imposing leather chair now swings around smoothly, and we find ourselves face to face with a man whose appearance alone may have forced him into a career of extravagant criminal activities. Our villain sports slicked-back hair and a permanent sneer at the stupidity of the world around him; more often than not, a disfiguring scar of some sort graces his facial features. In Hollywood-land, if you’re the man behind the destruction of the world, you must look evil enough to be the man behind the destruction of the world.

But that’s just how Hollywood sees things.

In real life, global catastrophes sometimes just happen, in the complete absence of scheming criminal masterminds. Case in point: the global financial crisis that had begun to manifest itself in the world’s most powerful economies by around mid-2007. No one person caused the global financial crisis; rather, it was the complex interplay of the actions of several key individuals and institutions that led to the conditions of the crisis. Nevertheless, an examination of the facts and of the sequence of events could allow one to guess at which people were most culpable in bringing about the crisis. It’s a bit like a game of Clue, really.

Well. Let’s play, then.

Bubbles in the Economic Ocean

We begin our manhunt by contemplating the strange and (largely) inexplicable events that have come to be known as bubbles. For those completely uninitiated in the technical jargon that is usually used in discussing the global financial crisis, it should be pointed out right away that economic bubbles have about as much to do with the kind of bubbles you’d find in your bathtub as the physicist’s notion of work has to do with what you handed in last week as your homework.

An economic bubble has been defined as a condition where “trade in high volumes occurs at prices that are considerably at variance with intrinsic values”. What this basically means is that when an economic bubble is formed in the market for a particular commodity, a disproportionately large volume of that commodity is being produced and sold; and furthermore, the price at which the commodity is sold is considerably higher than its equilibrium value on the market. The equilibrium price of a commodity can be simply defined as a stable price at which the supply of the commodity consistently equals the demand for the commodity.
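(If it helps to see that definition in action, here’s a toy example with made-up, linear demand and supply curves; the equilibrium price is simply the price at which the two quantities coincide.)

```python
# Toy illustration of an equilibrium price: the price at which the quantity
# demanded equals the quantity supplied. All numbers are made up.

def demand(price):
    return 1000 - 40 * price   # buyers want less as the price rises

def supply(price):
    return 100 + 20 * price    # sellers offer more as the price rises

# Linear case solved directly: 1000 - 40p = 100 + 20p  =>  p = 900 / 60 = 15
equilibrium_price = 900 / 60

print(equilibrium_price)            # 15.0
print(demand(equilibrium_price))    # 400.0 units demanded
print(supply(equilibrium_price))    # 400.0 units supplied -- the market clears
```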

Bubbles are said to burst (or crash) when the market comes to its senses and “realizes” that too much of a commodity is being traded at inflated values. When this happens, both the price and the quantity of trade in the market fall drastically. Economic bubbles are notoriously difficult to identify (usually because the actual intrinsic values of assets in real-world markets are almost impossible to calculate). It is often only after a bubble has burst that economists are able to be absolutely certain that a bubble existed in the market in the first place.

Further adding to their mystique is the fact that no one really knows what causes economic bubbles. And interestingly enough, some economists even deny their existence. Nonetheless, an examination of the recent history of a few major markets in the developed world in terms of bubbles goes a long way in explaining what caused the global financial crisis. Of particular interest are the Dot-Com Bubble that burst in 2000, and the U.S. Housing Bubble that’s arguably still “bursting”. Let’s take a look at each one of these in turn.

The Internet Age Hits Adolescence

Chaos in the Stock Markets

Starting around the year 1998, the meteoric rise of a myriad of IT-related companies (collectively referred to as dot-coms) boosted the economies of nations throughout the developed world. Unfortunately, much of the economic value that these companies represented turned out to be- well, worthless. The rapid growth of many dot-coms was subsequently matched only by their sudden and spectacular failures. And while the dot-com collapses had widespread repercussions, it was in the stock markets that the blow to the economy was especially obvious. That’s where the Dot-Com Bubble had been residing, quietly biding its time, waiting for the opportunity to snatch the Internet Age out of its carefree childhood years.

According to the NASDAQ Composite index (which is a complicated tool that’s used to measure the performance of stocks on the NASDAQ Stock Exchange), the bubble burst on March 10th, 2000. Hundreds of dot-coms collapsed after burning through their venture capital, the majority of them never having made any net profit. “Get large or get lost”- the business model backed by the belief that internet companies’ survival depended primarily upon expanding their customer bases as rapidly as possible, even if it produced large annual losses- was revealed to be dangerously unsound advice. The crash of the Dot-Com Bubble caused the loss of around $5 trillion on U.S. stock exchanges, and exacerbated the conditions of the recession that occurred between 2001 and 2003.

Alan Greenspan to the Rescue?

Following the collapse of the Dot-Com Bubble, Federal Reserve Chairman Alan Greenspan responded with a series of aggressive interest-rate cuts. The U.S. Federal Reserve System (often referred to simply as the Fed) serves as the country’s central bank; it comprises twelve regional Federal Reserve Banks in major cities across the nation. The Federal Reserve manages the nation’s money supply and its monetary policy, and is responsible for attaining the (sometimes conflicting) goals of maximum employment, stable prices, and moderate long-term interest rates.

In the aftermath of the Dot-Com Crash, Greenspan set the federal funds rate at only 1% (for comparison, note that between 1999 and 2001, the rate had never been set below 4%). It’s been argued that this allowed huge amounts of “easy” credit-based money to be injected into the financial system, and therefore created an unsustainable economic boom. In other words, the economic growth that occurred between 2003 and 2007 is largely attributable to the excessively high level of credit that was sloshing around the economy at the time. All that credit wasn’t backed by enough actual assets, though; this began to become clear in mid-2007, and that’s when the whole house of cards came crashing down.

The Federal Funds Rate

In order to understand the role that the federal funds rate played in flooding the economy with credit, one must begin with a simple fact: banks create money by lending. A comparison of two hypothetical scenarios will make this a lot clearer. In the first scenario, a fellow that we’ll call Christiano Kaka earns a $1000 bonus for his work as a pro footballer. But since he’s already got millions in his bank account, he figures that there’s no point in bothering to go down to the bank to deposit the money there.

Instead, he stuffs it under his mattress. He happens to be in the habit of losing his wallet, and this safety measure ensures that even if that were to occur again, he could readily get to the cash the next time he’s in the mood to hit the nightclubs. Now here’s the important thing: that $1000 is effectively dead for so long as it stays there under Christiano Kaka’s mattress. It plays no part in the economy, and doesn’t do anything useful for anybody.

In our second scenario, Christiano Kaka realizes that he’ll be travelling past the bank on his way to the nightclubs anyway, so he does deposit the money there. Christiano Kaka’s money gets added to a large pool of money composed of the deposits from all of the bank’s customers. When a young college dropout called Bill Jobs approaches the bank with his crazy schemes of starting a company that deals in personal computers, the bank’s manager decides to throw him a bone, and loans him $1000.

For our purposes, we might as well assume that the $1000 the bank loaned to Bill Jobs is the same $1000 that Christiano Kaka deposited earlier. But, of course, Christiano Kaka hasn’t lost that money; it’s still his, as he could prove by showing us his bank statement. It’s just that the money also happens to be Bill Jobs’ at the same time. As a matter of fact, the bank has created $1000 for Bill Jobs based on the $1000 that Christiano Kaka deposited. Where in the first scenario the $1000 was retired from the economy, in this second scenario it was used to create another $1000 that will go back into the economy (when Bill Jobs rents an office, buys furniture, pays employees, etc). And that’s how banks create money.
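(For anyone who likes seeing such things spelled out, here’s the same hypothetical scenario reduced to a few lines of Python. The toy bank below is, of course, a gross simplification.)

```python
# A toy bank illustrating how lending "creates" money.
# Names and amounts follow the hypothetical scenario above.

class ToyBank:
    def __init__(self):
        self.deposits = {}   # what each customer can claim from the bank
        self.loans = {}      # what each borrower owes the bank

    def deposit(self, customer, amount):
        self.deposits[customer] = self.deposits.get(customer, 0) + amount

    def lend(self, borrower, amount):
        # The loan is credited to the borrower as a brand-new deposit;
        # the original depositor's balance is untouched.
        self.loans[borrower] = self.loans.get(borrower, 0) + amount
        self.deposit(borrower, amount)

bank = ToyBank()
bank.deposit("Christiano Kaka", 1000)
bank.lend("Bill Jobs", 1000)

print(sum(bank.deposits.values()))   # 2000 -- both claims on the money now exist at once
```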


But they can’t just go around creating as much money as they please.

The law requires all banks to maintain a certain level of reserves, either as vault cash or in an account with the Fed. The ratio of a bank’s reserves to its customers’ deposits cannot be allowed to fall below the limit set by the Fed. Therefore, the amount of money that any particular bank can create depends upon the amount of actual money that it holds in reserve. Now, whenever a bank makes a loan, that loan ends up as somebody’s deposit, so the ratio of reserves to deposits falls (assuming that reserves remain constant). A bank may decide to issue loans large enough to push its ratio below the Fed’s limit, but it must then immediately top its reserves back up by borrowing cash from other banks. The interest rate at which banks lend reserves to one another is known as the federal funds rate.
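(A textbook way of quantifying this constraint is the so-called money multiplier: if every bank keeps a fraction of each deposit in reserve and lends out the rest, an initial deposit can only support a limited amount of total money. A quick sketch, with made-up numbers:)

```python
# Textbook money-multiplier sketch: each round, a bank keeps `reserve_ratio`
# of a deposit as reserves and lends out the rest, which is re-deposited
# somewhere in the banking system. All numbers are made up.

def total_deposits(initial_deposit, reserve_ratio, rounds=1000):
    total, deposit = 0.0, initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)   # the lent portion returns as a new deposit
    return total

print(round(total_deposits(1000, 0.10)))   # ~10000: approaches 1000 / 0.10
print(round(total_deposits(1000, 0.20)))   # ~5000:  a stricter reserve ratio creates less money
```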

When the federal funds rate is as low as 1%, it becomes very cheap for banks to borrow from one another to make up for reserve shortfalls. Hence, in the interest of making profits, banks can give out much larger loans to many more people, and still remain on the right side of the law. And that’s exactly what happened in the U.S. economy. With banks handing out credit to any and all who cared to ask for it, the economy became flooded with virtual money. The benign economic conditions that prevailed between 2003 and 2007 created not only a nation of spenders, but a nation that spent money it didn’t really have; a nation that buried itself under a mountain of debt.

Reaganomics Run Amuck

The deluge of credit-based free spending in the economy was the ultimate expression of a socio-political-economic ideology that had had America in its grip for more than twenty years. It was an ideology founded upon the belief that Government was the problem, not the solution. It was an ideology that declared in stentorian tones: The marketplace must be set free!

Reaganomics.

Initiated and popularized by President Ronald Reagan and his advisors in the ‘80s, the economic policies that came to be known as Reaganomics were aimed at reducing the role of the government in the economy, and allowing it to regulate itself instead. Reagan reduced taxes for the rich, decreased government oversight of financial markets, and ushered in an era of astounding fiscal irresponsibility. Reaganomics has been repeatedly criticized for raising economic inequality and throwing both the public and private sectors of the U.S. economy into massive debt.

Traditionally, the U.S. government ran significant budget deficits only in times of war or economic emergency. Federal debt as a percentage of G.D.P. fell steadily from the end of World War II until 1980; that’s when Reagan entered the scene with his own version of the New Deal[1]. Government debt rose steadily through Reagan’s two terms in office and- except for a short hiatus during the Clinton years- continued to rise right until George W. Bush left office in 2009.

The rise in public debt, however, was nothing compared to the skyrocketing private debt.

The pattern of financial deregulation that Reagan set in motion allowed American consumers access to ever-increasing amounts of credit (and hence ever-increasing levels of debt) for decades. America wasn’t always a nation of big debts and low savings: in the ‘70s, Americans saved almost ten percent of their income (even more than in the ‘60s). It was only after the Reagan-era deregulation that thrift gradually disappeared from the American way of life, culminating in the near-zero savings rate that prevailed just before the current economic crisis hit. Household debt was only 60 percent of income when Reagan took office; by 2007 it had zoomed to more than 130 percent.

It was only with the crash of the housing market in 2007- the second major shock to the U.S. economy in the last decade- that it would become painfully clear that wanton debt as a way of life would have to be abandoned.

If You Can't Understand 'Em, Don't Regulate 'Em



Returning to the antics of Alan Greenspan in the years following the Dot-Com Crash, we find that he was also responsible for vehemently opposing any regulation of financial instruments known as derivatives. He wasn’t alone in feeling that financial markets could regulate themselves just fine: Securities and Exchange Commission Chairman Arthur Levitt and Treasury Secretary Robert Rubin also held the same view. Together, in the Clinton years, they ensured that investment banks and other financial institutions were given free rein in creating and selling these complex financial instruments.

Nonetheless, those financial institutions have little to thank Greenspan and his cohorts for; they ended up crippling themselves through their involvement in an unregulated market for complex derivatives that no one fully understood. By 2008, large portions of the derivatives portfolios of major investment banks were reclassified as toxic assets. Lehman Brothers, Bear Stearns and Washington Mutual succumbed to the poison coursing through their veins and had to declare bankruptcy or sell off their assets under duress. Other major financial institutions, such as the insurance giant American International Group (AIG) and Citigroup, sustained huge losses and only managed to stay afloat with the help of the government.

Although most derivatives are relatively benign, the late ‘90s saw the proliferation of two particularly complex instruments that would later threaten the stability of the entire financial sector: collateralized debt obligations (CDOs) and credit default swaps (CDSs). Because these innovative new instruments offered lucrative payments in times of economic growth and rising asset prices, they spread like wildfire in the years leading up to the current financial crisis.

All derivatives derive their prices from the value of some underlying asset (in the case of credit derivatives, the underlying assets are loans). Investors can make profits on derivatives if they correctly anticipate the direction in which the prices of the underlying assets will move. Since hardly anyone had foreseen the appalling conditions of the current crisis, it’s probably fair to say that losses were made on derivatives of all kinds; but it was because CDOs and CDSs had become extremely popular in the then-flourishing housing market that they posed such a great threat to the stability of so many financial institutions.
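(To make at least one of these instruments concrete, here’s a heavily simplified sketch of the cash flows on a credit default swap. The notional, the premium and the recovery rate are made-up numbers, and real CDS contracts are far messier than this.)

```python
# Heavily simplified credit default swap (CDS) cash flows, from the
# protection buyer's point of view. All numbers are made up.

notional = 10_000_000      # face value of the debt being insured
annual_spread = 0.02       # premium: 2% of the notional per year
recovery_rate = 0.40       # fraction of the notional recovered after a default

def buyer_net_payoff(years_paid, default_occurred):
    premiums_paid = notional * annual_spread * years_paid
    payout = notional * (1 - recovery_rate) if default_occurred else 0.0
    return payout - premiums_paid

print(buyer_net_payoff(years_paid=5, default_occurred=False))  # -1,000,000: premiums paid for nothing
print(buyer_net_payoff(years_paid=2, default_occurred=True))   # +5,600,000: payout minus premiums
```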

The risks of subprime mortgages in the housing market (which we’ll come back to later) were commonly spread out using CDOs and CDSs. It seemed to be a win-win situation for everyone involved: investment bankers could get in on the profits from the housing market, mortgage lenders could spread the risks of questionable loans, and American consumers benefitted from an infrastructure that encouraged offering home financing for all.

But once the Housing Bubble burst and house prices crashed down to nearly-inconceivable levels, those groups were left staring stupidly at one another.

The valuation of CDOs and CDSs is actually so complicated that no one could say for sure what they were worth once the housing market crashed. And without a price that all parties could agree upon, the markets for these derivatives became completely dysfunctional. As a result of the unfortunate marriage between questionable loans and the complex derivatives that securitized those loans, many American citizens lost their homes through foreclosures; mortgage lenders lost billions in loan defaults; and investment banks found themselves laden down with assets that were literally more trouble than they were worth.

Then the government stepped in to clean up the mess.

If only it had done that on a regular basis (and amidst less catastrophic circumstances) through the implementation of a more rigorous regime of financial market regulation.

Houses Under the Sea

A Cycle of Folly

It is a regrettable fact that the powers that be in America and other developed nations seem to have very short memories when it comes to the economy. Time and again, the painful lessons learnt during economic hardship were thrown out the window once things turned for the better. The stewards of the American economy would thereby doom themselves and their compatriots to relive the suffering born of past mistakes by making the same mistakes again and expecting different results. Only once catastrophe struck again would they realize that they had brought it upon themselves by ignoring the experiences gained from the last such catastrophe. But catastrophes don’t last forever; and as the most recent one faded into the past, the collective knowledge gained from the last two would disappear from the consciousness of the nation…

And so the cycle of folly that has played such an important role in determining the economic fortunes of the nation has gone on.

It began with the Great Depression. One of the most important initiatives taken to revive the economy under President Roosevelt’s watch was the passage of the Glass-Steagall Act in 1933. The Act provided for more stringent regulation of the banking sector, and aimed to prevent a repeat of the banking collapse of early 1933. Among other things, it prohibited any one institution from acting as a combination of a commercial bank, an investment bank, and an insurance company; it gave the Fed the power to regulate the interest rates that banks paid on deposits; it created the Federal Deposit Insurance Corporation (FDIC) to insure bank deposits in commercial banks; and it imposed stringent restrictions on mortgage lending.

Roosevelt also spurred the government to increase spending in the economy as part of his New Deal programs. The drop in public expenditure that marked the cessation of these programs created another recession in 1937. From then onwards, significant decreases in public expenditure regularly led to dismal economic conditions. This happened again in 1953 when large portions of public spending were transferred to national security projects during the Korean War. President John F. Kennedy managed to halt the recession of 1960 by calling for increased public spending in the economy. In the ‘70s, the diversion of funds to the military during the Vietnam War (alongside the quadrupling of oil prices by OPEC in 1973) created another major recession.

And then came the ‘80s. Ronald Reagan took the helm at a time of deep economic malaise (“It’s morning again in America!” he would later proclaim), and undertook the most radical overhaul of the nation’s economic policies since F.D.R.’s New Deal. He convinced American citizens that their government had no business prying into the affairs of the market, and initiated sweeping cuts in public expenditure throughout the economy. And, perhaps more significantly, he overturned many of the regulatory policies that Roosevelt had set in place.

Reagan signed the Garn-St. Germain Depository Institutions Act, which- together with the Depository Institutions Deregulation and Monetary Control Act passed under President Carter in 1980- rolled back parts of the Depression-era regulatory regime that Glass-Steagall had helped establish. Of the Garn-St. Germain Act, Reagan said: “This bill is the most important legislation for financial institutions in the last 50 years.” He may have been right about the significance of the Act, though he probably never intended for it to have the effect it eventually did. By liberalizing mortgage lending and the Savings and Loan industry, Garn-St. Germain paved the way towards a debt-ridden American economy that would be woefully unfit to weather the economic storms of the new millennium.

In the ‘90s, the American lifestyle of living beyond one’s means through the use of cheap credit was considered justifiable because once one took into account the rising values of people’s stock portfolios, everything seemed just fine. It was during this time that the final blow to the Glass-Steagall Act came in the form of the Gramm-Leach-Bliley Act of 1999. This Act allowed commercial banks, investment banks, securities firms and insurance companies to consolidate and form conglomerates.

It was believed that the conflicts of interest between these different kinds of institutions- conflicts that the Glass-Steagall Act had sought to prevent- would no longer be a problem in a flourishing financial sector. Nonetheless, it was partly because of the Gramm-Leach-Bliley Act that institutions such as AIG and Citigroup (which started out as Citicorp, a commercial bank, and became a financial services conglomerate by merging with the insurer Travelers Group- a merger the Act retroactively legitimized) managed to get embroiled in the problems that the mortgage industry began to face after 2007. And since these institutions- and others like them- had become “too big to fail”, the government had to spend billions of taxpayer dollars keeping them afloat.

As we’ve already noted, the stellar performances of U.S. stock markets did come to an end in 2000, with the bursting of the Dot-Com Bubble. But once the economy recovered and growth set in between 2003 and 2007, Americans returned to their free-spending ways. This time, they reasoned that a booming housing market would support their costly habits, just as they had assumed with the stock market before 2000. If anything, they were even more confident this time round. Housing was an infallible investment, right?

Wrong.


The Housing Bubble Expands



As we explore the most proximate cause of the global financial crisis- the bursting of the U.S. Housing Bubble- you’ll begin to see why it was necessary to start our discussion with the Dot-Com Crash, and to jump back and forth in time as often as we have. In a very real sense, the Housing Bubble was caused by the Dot-Com Bubble. The economic conditions that led to the formation of the Housing Bubble were created during and after the crash of the Dot-Com Bubble. Similarly, the legislation and economic policies that came into effect at that time had their roots in the policies of several decades ago; and their effects extend into the present day.

For one hundred years, between 1895 and 1995, U.S. house prices rose in line with the rate of inflation. Then, between 1995 and 2005, the Housing Bubble began to envelop the economy, and house prices across the country rose at phenomenal rates. During this time, the price of the typical American house rose by 124 percent; house prices went from 2.9 times the median household income to 4.6 times household income in 2006. Where the average number of houses built and sold before 1995 was 609,000, by 2005 that figure had risen to 1,283,000.



Housing appeared to be outperforming nearly every other sector of the U.S. economy, and there were those who would have us believe that it would continue to do so ad infinitum. Influential personalities such as David Lereah, the chief economist of the National Association of Realtors, regularly trumpeted the rock-solid dependability of housing as an investment; consider the title of his bestselling book- Are You Missing the Real Estate Boom? The media joined in, too, and helped inflate the bubble by glamorizing the housing boom with television programs such as House Hunters and My House is Worth What?

Even amidst all the frenzy, however, a small number of astute observers managed to figure out what was actually happening. In 2002, economist Dean Baker was the first to point out the existence of a bubble in the housing market; he put the value of the bubble at $8 trillion. And what’s even more impressive is that he correctly predicted that the collapse of the bubble would lead to a severe recession, and would devastate the mortgage lending industry.



The unreasonably high level of confidence in the housing market, coupled with the lenient regulations that governed mortgage lending, caused the number of mortgage-backed home purchases in the U.S. to shoot upwards after 2003. The real problem, however, was the fact that mortgage lenders got greedy, and began to offer mortgages to thousands of people who had little ability to repay them. Mortgage lenders are expected to assess the suitability of clients by checking their credit histories, income levels, and other relevant factors; but this process was often overlooked (or only nominally undertaken) in the heady years of the housing boom.

All this irresponsible lending created a huge market for what are known as subprime mortgages. Calling them “subprime” is a euphemistic way of saying that they’re extremely risky loans, and that there’s a high probability that they won’t be paid back. It was to people who didn’t qualify for “prime” loans that the mortgage lenders offered the subprime mortgages (usually at higher interest rates than “prime” mortgages). Lenders such as Countrywide, IndyMac Bank, and Beazer Homes became notorious for the aggressive manner in which they marketed subprime mortgages to low-income consumers. (IndyMac was later seized by federal regulators, Countrywide had to be sold off under duress, and Beazer came under investigation for mortgage fraud.)

Worsening the situation was the fact that nearly 80 percent of the subprime mortgages issued in the last few years were adjustable-rate mortgages (ARMs). Popularized in the 1980s by lenders such as World Savings Bank, the ARM seemed an innocent enough offering until the housing market turned sour in 2007. The interest rate on an ARM doesn’t remain constant throughout the term of the loan; instead, it’s tied to a benchmark interest-rate index, and resets periodically as that index moves.
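(In rough terms, an ARM’s rate resets to something like “index rate plus a fixed margin”, so when the index climbs, so does the monthly payment. A toy sketch, with made-up numbers and none of the caps and fine print found in real contracts:)

```python
# Toy adjustable-rate mortgage (ARM) reset. The loan size, margin and index
# values are made up; real ARMs have caps, teaser periods and other details.

def monthly_payment(principal, annual_rate, years):
    """Standard amortization formula for a fixed monthly payment."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

principal = 200_000
margin = 0.025                            # fixed margin added to the index

for index_rate in (0.01, 0.03, 0.05):     # the benchmark index climbing
    rate = index_rate + margin
    print(f"index {index_rate:.0%} -> mortgage rate {rate:.1%} -> "
          f"payment ${monthly_payment(principal, rate, 30):,.0f}/month")
```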

In March 2007, house prices began to plummet, falling 13 percent in that single month. It was, however, only the beginning of a prolonged market correction that hasn’t quite ended yet. As the housing market began to deflate nationwide, the interest rates on ARMs climbed steadily upwards. Millions of homeowners across the U.S. found themselves unable to pay the higher interest rates on their mortgages, and were forced to default on their loans. This resulted in banks and mortgage lenders foreclosing on those homes- in other words, throwing the former homeowners out and assuming ownership of the properties. By July 2009, more than 1.5 million homes had been foreclosed, and another 3.5 million were expected to meet the same fate by the end of the year.

Following closely on the heels of the “victims” of foreclosure are those who are referred to as being “under sea level”. These are the homeowners who now owe more on their mortgages than their houses are worth. The technical term for this phenomenon is negative equity. The unfortunate homeowners who find themselves in this position couldn’t get enough money to clear their mortgage debts even if they sold off their homes; they are, therefore, extremely vulnerable to foreclosure in the near future. As of December 2008, there were 7.5 million homeowners under sea level. Another 2.1 million people stood right on the brink, with homes worth only 5 percent more than their mortgages.
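(The arithmetic of being “under sea level” is brutally simple: equity is just the home’s current market value minus the outstanding mortgage balance, and it can go negative. The numbers below are made up for illustration.)

```python
# Negative equity ("under sea level"): the mortgage balance exceeds the
# home's current market value. All numbers are made up.

def equity(house_value, mortgage_balance):
    return house_value - mortgage_balance

print(equity(house_value=250_000, mortgage_balance=300_000))  # -50000: under water
print(equity(house_value=210_000, mortgage_balance=200_000))  # 10000: only 5% above water
```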

The Misfortunes of the Mortgage Lenders

The drastic fall in house prices first affected those financial institutions that were directly involved with the housing industry- the banks and corporations that financed house construction and mortgage lending. As the number of foreclosures soared, these companies lost millions of dollars in unredeemable loans. And if you’re thinking that at least they were left with the properties that they gained through foreclosure- well, that didn’t exactly help a great deal.

Look at it this way: in 2005, a bank puts up a $10,000 ARM so that a nice young couple can buy a house and start a family. The bank expects to profit on this investment through the interest payments it will receive over, let’s say, the next ten years. Since the economy is chugging along just swimmingly, the interest rate on the ARM is kept relatively low. But that’s okay from the bank’s point of view, because the value of the property itself makes up for the low interest rate. You see, even in the regrettable event that the new homeowners fail to keep up on their payments and the bank has to foreclose on the property, it ends up with a very marketable house that’s probably worth even more than the $10,000 it initially cost. Nice.

Once house prices began on their steep decline, though, and the interest rates on ARMs reset at much higher levels, things got ugly. There wouldn’t really have been a problem if every mortgagor still had the ability to pay the interest on his mortgage; but the aggressive subprime lending of the last few years- even to people who didn’t really qualify for mortgages- meant that there were millions of homeowners who just couldn’t pay the higher interest rates, and were forced to default on their loans.

After the inevitable foreclosures that followed, banks and mortgage lenders were left in possession of houses that nobody wanted to buy and were now worth almost nothing. Returning to our earlier example, we’d find that the bank would have lost nearly the entirety of the $10,000 that it initially put up; it would lose out on interest payments after foreclosure, and would be left with a house that could hardly even sell for five hundred dollars.

Hence, it’s no wonder that twenty-five major subprime lenders (several of them Fortune 500 companies) had to declare bankruptcy between 2007 and 2008.

Introducing Fannie, Freddie and the Credit Crunch

Next in the line of fire were the companies that dealt in the trade and securitization of mortgages. The two giants in this industry had come to be known as Fannie Mae and Freddie Mac. The quirky names come from the acronyms that represent the full names of each one of them: FNMA (Federal National Mortgage Association) for Fannie Mae and FHLMC (Federal Home Loan Mortgage Corporation) for Freddie Mac. Both Fannie and Freddie were Government Sponsored Enterprises (GSEs), meaning they operated in a sort of grey area between the public and private sectors.

Fannie Mae and Freddie Mac were responsible for buying mortgages from mortgage lenders, and creating and selling mortgage-backed securities (MBSs). By buying mortgages, they provided banks and other financial institutions with fresh money to make new loans; and by creating and selling MBSs, they created a secondary mortgage market that investment banks and securities traders could participate in. The primary purpose of all this was to give the American housing and credit markets increased flexibility and liquidity. Fannie and Freddie were so deeply enmeshed in the housing market that by 2008 they owned $5.1 trillion in residential mortgages- about half the total U.S. mortgage market.

The final link in the chain consisted of the members of the shadow banking system- investment banks such as Lehman Brothers, Bear Stearns and Goldman Sachs. They traded in MBSs in the secondary mortgage market, and insured pools of mortgages using ridiculously complex financial instruments such as CDOs and CDSs. These derivatives were sold back to mortgage lenders and to commercial banks throughout the economy. Investment banks don’t take deposits or hold reserves the way commercial banks do, but their influence on the economy became increasingly important as the nation’s financial sector loaded up on debt and on the financial instruments that backed that debt.

Therefore, when the subprime mortgage industry imploded, it wasn’t only the mortgage lenders who were affected. An entire industry that dealt in mortgage-backed securities went down with it; Fannie Mae and Freddie Mac had to be effectively nationalized to prevent their complete collapse. The investment banks that purported to spread the risks of the mortgage industry also sustained huge losses, because they had completely failed to foresee the effects of the housing market crash. And finally, banks across the country that held portfolios of credit derivatives were left with worthless, “toxic” junk.

These huge losses across the financial sector created what was known as the “Credit Crunch”. Billions of dollars of capital that had been based on the housing market were wiped off the balance sheets of banks and other financial institutions. This left them with very little ability to extend new credit to consumers. It was at this point that the crisis was said to extend its reach from “Wall Street to Main Street”, meaning that it no longer affected just the major financial institutions, but was now impacting the lives of citizens throughout the country.

As credit streams began to freeze, the entire economy slowed, and then descended into recession. Businesses began to shut down and unemployment rose. Investment and consumer spending plummeted. Alongside the U.S., other developed nations experienced similar symptoms. And with the economies of the developed world in tatters, developing nations lost major sources of manufacturing revenue; their economies began to slow, too.
All in all, the global financial crisis had arrived.

And the Rest is History

Through 2008 and into 2009, the U.S. government scrambled to contain the crisis. It initiated the Troubled Asset Relief Program (TARP) to help financial institutions get rid of their toxic assets, and injected nearly $800 billion into the economy through a stimulus package. The worst economic crisis since the Great Depression isn’t about to go down without a fight, though; many experts believe the economy won’t fully recover until 2011.

But let’s not bother ourselves with speculations about the future. Instead, let’s go back to what we had initially set out to do: have we managed to identify the criminal mastermind behind the global financial crisis? No. Of course not. While we probably managed to gain a few interesting insights into the causes of the crisis, we never really came close to achieving that goal. If anything, we should have come to the conclusion by now that it’s ridiculous to assume that there was any one person behind it all.

Real life is far too boring for that.

[1] In an effort to resurrect the U.S. economy at the height of the Great Depression, President Franklin Delano Roosevelt initiated a sweeping range of economic reforms between 1933 and 1935 that collectively became known as the New Deal. In contrast to Reagan’s policies, Roosevelt’s New Deal stressed the importance of fiscal responsibility and government oversight of the economy.
