Welcome to Iconosphere, a blog about the structure and dynamics of human social systems.
Three days before Christmas, six people were killed by a runaway garbage truck. If they’d been killed instead by a suicide bomber, half our remaining civil liberties would be eradicated overnight. In the end, the story will vanish from the headlines as have all other accidental deaths that occurred throughout the year. Who, for example, remembers the 80 dead on a Spanish train 18 months ago? Yet we take it in stride as a fact of life. Accidents happen. In truth, far more people die in accidents than are killed by terrorists. For example, roughly 1,000 people were killed in railway accidents in the UK in the decade before 50 were killed in the London bombings. Three thousand people died in the World Trade Center in 2001. One hundred thousand Americans die every year from the side effects of prescription drugs. Yet because of the 3,000, we get the Patriot Act and mass surveillance; the other 100,000 are shrugged off without a second thought. In my opinion, the greatest tragedy of the internet age is the loss of context and perspective, made worse by opportunistic politicians who fan the flames of paranoia to enhance their authority.
I like French cheese, Italian ham and Greek olives. I also like French vegetables (haricots verts extra-fins, petits pois et carottes), Italian grains (pasta, polenta, risotto), German meats (sausage, veal, smoked tenderloin) and Belgian beer. I pay extra for such items because of their quality. I buy them online from UK websites.
In the stores, one finds d’Isigny camembert, Maille mustard, Poilâne bread and, well, just about anything. Debauve & Gallais chocolates are sold in Paris, Athens, New York, Montreal, Seoul, Beijing, Dubai and on their website. The sun never sets on Echiré butter.
We are living in 2013, not 1320. We have computers linked via the internet running sophisticated analysis, production, logistics and financial software. We have airplanes and courier services that ship internationally, overnight. We have Facebook and Twitter to build brands and markets. We have tablets and mobile phones from which to shop.
Why, then, does the European Union need to spend 278 billion euros (30% of its annual budget) on agricultural subsidies? Do blacksmiths still make horseshoes? Are farmers running low on wooden blades for their plows? Perhaps special barns are needed for oxen.
Ah, but the bucolic life of medieval Europe must be preserved. It doesn’t matter that 85% of the global economy is now in services running on high-speed fiber-optic cable. Nor does it matter that European farmers are selling their products at high prices via those same fiber-optic networks. The Common Agricultural Policy, like Chartres Cathedral, is sacred.
Germany leads the world in manufacturing – without a Common Industrial Policy – due to the excellent design and high quality of its products. Switzerland leads the world in watchmaking. France and Italy lead the world in cuisine. Should the EU also subsidize the French and Italian fashion industries?
European agriculture enjoys a huge brand advantage over other countries. Many products trace their ancestry back centuries and are rich in history. French and Italian wines are an obvious example, as is French pâtisserie (Dalloyau, Ladurée), but consider also Le Puy lentils, Agen prunes, Bayonne ham and Bresse chicken. From the ground up, Europe can compete in the global market based on the iconic status of its agricultural products. Gross subsidies on top of that advantage amount to an unfair trade practice.
Agricultural subsidies might have made sense in the aftermath of World War II (when people were starving). They might have made sense in the 1960s (before the invention of the pocket calculator) or in the 1970s (before the personal computer) or even in the 1980s (before the Internet), but not now. European agriculture has embraced globalization. Yet politicians are rooted in an ancient mentality no longer relevant to the modern world.
It took the Catholic Church 360 years to forgive Galileo for demonstrating that the earth went around the sun. Will we have to wait another 360 years for the European Union to get rid of the Common Agricultural Policy?
Building a high-speed railway from London to Edinburgh seems like a good idea. Having it pass through Birmingham, Manchester, Sheffield and Leeds seems sensible. That is, if you’re Isambard Kingdom Brunel living in the 19th century.
We don’t live in the 19th century. We live at the beginning of the 21st century, in a world where 85% of the economy is in services provided through high-speed information systems. We no longer move men to machines; we move knowledge to networks.
The estimated cost of the proposed railway is £32 billion over 20 years. In reality, that will become £100 billion over 25 years minimum. The benefit is an hour or so off journey times. The cost of travel will remain high.
If London were located in the north of England, it might make sense to run a high-speed rail line south toward Europe. The Channel Tunnel made sense. A high-speed link between Paris and Marseille, Paris and Berlin made sense. Building a new line north from London toward the arctic is daft. Manchester, Birmingham, Sheffield and Leeds are no longer the ‘factories of the world’; they are industrial museums. A new railway would be a coat of paint on a crumbling building.
Should there be a high-speed, high-bandwidth, fiber-optic backbone running the full length of the country? Yes. An architect in Leeds or a film producer in Manchester needs to communicate with global talent and international markets. If these cities are to prosper, they must abandon their Industrial-Era mentality and embrace the Information Revolution. If Brunel were alive today, he’d be designing spacecraft, not railways.
There is no high-speed rail line from London to Reykjavik, Iceland. There is no rail link between Reykjavik and anywhere. Yet Iceland is a successful player in the global entertainment industry. Companies like CCP Games are connected electronically – artistically, intellectually and commercially – to contractors, customers, branch offices and markets around the world.
The days of transporting coal from Newcastle to forge steel in Sheffield to produce machines in Birmingham to weave textiles in Manchester are long gone. We no longer need cadres of managers to organize armies of factory workers to export manufactured goods to the colonies. We no longer live in 1912 or 1952 or 1972 – or even 2002. Facebook was launched in 2004; Twitter in 2006. The iPhone and Kindle were released in 2007. Combined, these products have revolutionized the publishing industry, creating vast new markets for information and entertainment, generating whole new industries in software applications, art and design, music and video, games and tourism. The New Economy is about new services provided by millions of boutique companies over the internet.
As the UK struggles to regenerate its stagnant economy, it must stop looking for solutions in the past. It cannot compete with Germany for the 10% of the global economy in manufacturing. Moreover, the advent of 3D printing will emphasize intellectual property – scientific and engineering concepts, software and blueprints – not mass production – especially not mass production in Leeds or Birmingham.
Are there benefits to synergy from physical proximity? Yes, Silicon Valley is a prime example, but factors far more important than geography determine the location of such enterprise hubs: climate yes, but also culture, human capital, property rights, financial infrastructure, taxes and government bureaucracy. Running a fast train to Manchester does not mean someone will set up business there. More likely, it will convert northern England into a suburb of London.
Is it worth £100 billion to extend London’s northern suburbs beyond Watford? Surely, the money could be better spent in the hands of entrepreneurs, homeowners and individuals in the form of financial incentives (reduced taxes, low-interest business loans and/or home mortgages) or alternative infrastructure investment (rebuilding the existing rail network, improving the road network, building a national fiber-optic network) and/or reducing the national debt (thereby reducing interest payments on the debt, thereby giving the government more money to spend without raising taxes).
In short, the cost/benefit ratio of building a new, high-speed railway from London to the north of England is not only poor but possibly counterproductive in that the money could be better spent elsewhere. It is a vanity project commensurate with the past, obsolete before it is even built. Can’t our politicians, for once, start thinking about the future?
2013.05.25 Warning over flagship government projects
2013.09.24 HS2: 12 arguments for and against
In 1987, Michael Ryan killed 17 people (including himself) with two semi-automatic rifles and a handgun. In 1988, the UK government banned the private ownership of semi-automatic rifles. In 1996, Thomas Hamilton killed 18 people (including himself) with four handguns. In 1997, the UK banned the private ownership of handguns. In 2010, Derrick Bird killed 13 people (including himself) with a shotgun and a rifle.
The total death toll from these incidents is 48.
Between 1987 and 2010, roughly 1,000 people in the UK were murdered via firearms.
Between 1987 and 2010, roughly 40,000 people were killed in motor vehicle accidents.
Between 1987 and 2010, roughly 100,000 people committed suicide.
I admire Warren Buffett. I admire his business philosophy of making long-term investments in solid, productive companies. If Warren were to offer me an opportunity to invest in a venture he himself endorsed, I would not quail at the tax implications.
The problem with this argument, which he makes in the New York Times, is that few investments carry such low risk. There is only one Warren Buffett in the world. Most investments require great effort over a long time and/or carry high risk. In other words, most investors face a situation where they have much to lose and, if they succeed, must pay a large percentage to the government in a variety of taxes.
Warren’s basic argument is that the wealthy should pay more tax. In 1980, the top 10% of income earners in the United States paid 49.28% of the total income tax burden. In 2009, they paid 70.47% (see table 6, see graph). I presume Warren would prefer they paid 75% or 80%. If so, at what point is 80% not enough?
The problem is that a government balance sheet has two sides: income (taxes) and expenditures (spending). Think of it as money flowing into a bathtub (taxes) and money draining out of the bathtub (spending). As the drain gets bigger and more money flows out, governments ask for more money to flow in. When someone suggests that the drain should be made smaller, he is branded as unpatriotic or a rich bastard who doesn’t care about widows and orphans.
By 1900, the United States had eclipsed the British Empire in terms of wealth. By 1913, the United States had become the world’s most advanced economy. In 1913, US government spending as a percentage of GDP was 8%. Today, it’s 40%. That’s huge. Nearly half of the US economy now flows through the government bathtub (see chart).
The United States is deeply in debt. In 1900, total US government debt (federal + state + local) was 20% of GDP. In 1933, it was 70%. Now, it is over 100%. Not only is much more water flowing through the bathtub, it’s draining out faster than it can be refilled (see charts).
Warren Buffett’s solution is to get more water flowing in from the wealthy, whose contribution, as we saw earlier, has risen from 50% to 70% over the past 30 years. By the way, in the same period, the contribution of the bottom 50% of income earners has fallen from 7.05% to 2.25%. The bottom 47% of Americans pay no income tax at all.
Should the wealthy pay more tax? Perhaps, but looting the wealthy is not a viable, long-term solution to the US government’s problems. Instead, someone needs to fix the bathtub.
If the UK were smart, it would grant Julian Assange safe conduct to Ecuador. It would get rid of this hot potato and fast. The UK gains nothing from trying to break up a bar fight between Ecuador, America and Sweden – not to mention Assange’s global supporters – and it should avoid alienating South America. The situation with the Falkland Islands is delicate. Now is not the time to give political ammunition to Argentina.
Not only has the British government had to back down from its threat to storm the embassy, but it is forced to mount an expensive security operation in the searing heat of international publicity. Assange gets a free, global media platform, the US is embarrassed, the Olympics are forgotten and Britain looks the fool. Meanwhile, as expected, South America has a cause célèbre around which to unite in an Argentinian-led political, diplomatic and economic campaign against the Falklands. In short, the UK has blundered into a no-win situation that, as with any man attempting to break up a bar fight, will result in a bloody nose.
2013.06.02 One year later
2014.04.25 Two years later, the bill is £6 million
2015.02.10 Make that £10 million
Over the past year, reports that neutrinos might travel faster than the speed of light made international headlines. If this phenomenon had been confirmed, Einstein’s Theory of Relativity – which survived 100 years of rigorous testing – would have been undermined. Exceptions are important in science, often more important than the rule.
Exceptions are also important in business. If one percent of Toyota’s cars were to suddenly accelerate for no apparent reason, it would result in an international recall. If one airplane in a million falls out of the sky, vast resources are mobilized to understand the cause.
Why, then, is the same scientific curiosity not applied to smoking? Why instead are governments colluding with the medical community in a program of moral and scientific propaganda that would not be tolerated in any other area of public policy?
Let me offer myself as a case study. I have smoked over half a million cigarettes in my lifetime. I have also been exposed to very high levels of second-hand smoke for more than 50 years. I do not now have, nor have I ever had, health problems of any kind.
I began smoking before I was born. My mother smoked (and drank) throughout her entire pregnancy. Both my parents (and their friends) smoked in the house during my childhood. I began smoking at the age of 12 (stealing my parents’ cigarettes). I started buying my own cigarettes at the age of 14. During my 20s, I smoked about 20 a day; during my 30s, 40 a day; during my 40s, 60 a day. Now, at the age of 54, I smoke 40g of strong, hand-rolling tobacco per day (equivalent to 60+ regular cigarettes).
Regarding second-hand smoke, I breathed it at home, at high school dances, in university classrooms, at my desk in government jobs, in bank queues, shopping centers, buses, cars, trains, airplanes, cinemas, pool halls, bars, restaurants, cafes and nightclubs. Moreover, since I was a smoker, I preferred the company of other smokers. I didn’t avoid other people’s smoke; I gravitated toward it.
After 40 years of heavy smoking and breathing other people’s smoke, I am in excellent health: 6’2″ (188cm) with a full head of hair, perfect eyesight for my age, healthy teeth and gums, a clean complexion and healthy skin. I have normal blood pressure and no allergies. I take no medication and I’m fully functional in terms of male potency.
How can this be? Look at any cigarette package to find gruesome photographs of baldness, blindness, rotting teeth, cadaverous skin, limp dicks, blackened lungs, bloody throats and dead babies. All of this is presented as incontrovertible fact – as an absolute and inevitable consequence of smoking. Let’s call it by its real name: propaganda.
Some doctors go so far as to claim that 50 percent of smokers die as a consequence of smoking. If so, then 50 percent don’t die as a consequence of smoking. Obviously, the statement “smoking kills” is (at most) only half true, which means it is (at least) half false, which means it is just plain wrong. People die from alcohol too. Does anyone claim “alcohol kills” or “driving kills” or “mountaineering kills”? When did ‘some’ become ‘all’? No self-respecting scientist would make such a mistake.
Can smoking affect health? Yes, but it has not harmed my health – even after 40 years of extremely high exposure to direct (active) and indirect (passive) tobacco smoke. Will it affect my health in the future? Who cares? The real question – the interesting question – is why it never affected my health in the past. If smoking is so dangerous as to be banned outdoors on the beach, then I should have died a horrible death 30 years ago. What factors other than smoking affect health? The medical community doesn’t say. Instead, it pursues a single-factor agenda to persuade governments that smoking is ‘evil’.
Such scientific dishonesty is worse than tobacco advertising because, unlike tobacco companies, governments and doctors have a public duty to tell the truth. The truth is that there are risks associated with smoking, but the causal relationship is not fully understood. A propaganda campaign based on exaggerated claims of baldness, rotting teeth, bloody throats and certain death is a deliberate and irresponsible lie.
Lying to the public in the name of public safety is not only unethical, it’s counterproductive. Reefer Madness, a 1936 propaganda film against marijuana, has become a classic joke. Government credibility in the ‘war on drugs’ falls daily with each fresh pile of corpses. Various overreactions to suspected outbreaks of disease have dulled the public mind. Exaggeration, propaganda, crying wolf; these undermine public safety far more than any supposed gain from a ‘war on lifestyle’. One day, there will be a true medical emergency – and no one will believe a word of it.
Moreover, to eliminate risk from human affairs, to reduce human beings to mere statistics, to formulate public policy via spreadsheet, is to emasculate the human race. Do we really want to transform human society into a sterile, sanitized, termite colony governed by medical monarchs in a nanny state?
A decade of forgotten and/or airbrushed history
1994.06.03 Send regulations up in smoke (reprinted as “Second-Hand Smoke” in Getting it Right: Markets and Choices in a Free Economy. MIT Press, 1996): “Despite the recent assertions by the EPA, the statistical evidence for health risks from secondhand smoke is extremely weak, even by the standards of an empirical economist…. The final recourse is to admit that the scientific evidence on the health hazards of secondhand smoke is flimsy, but to point out – correctly – that this evidence does not conclusively rule out a small effect.”
1997.10.19 Cancer alert on passive smoking is ‘false alarm’ (no longer available on The Telegraph website)
1997.12.18 Blowing smoke “Tobacco is not, properly speaking, a social problem at all, but the growing anti-smoking movement is quickly becoming one.”
1998.03.08 Passive smoking doesn’t cause cancer – official (no longer available on The Telegraph website) Here is the text (by health correspondent Victoria Macdonald):
THE world’s leading health organisation has withheld from publication a study which shows that not only might there be no link between passive smoking and lung cancer but that it could even have a protective effect.
The astounding results are set to throw wide open the debate on passive smoking health risks. The World Health Organisation, which commissioned the 12-centre, seven-country European study has failed to make the findings public, and has instead produced only a summary of the results in an internal report.
Despite repeated approaches, nobody at the WHO headquarters in Geneva would comment on the findings last week. At its International Agency for Research on Cancer in Lyon, France, which coordinated the study, a spokesman would say only that the full report had been submitted to a science journal and no publication date had been set.
The findings are certain to be an embarrassment to the WHO, which has spent years and vast sums on anti-smoking and anti-tobacco campaigns. The study is one of the largest ever to look at the link between passive smoking – or environmental tobacco smoke (ETS) – and lung cancer, and had been eagerly awaited by medical experts and campaigning groups.
Yet the scientists have found that there was no statistical evidence that passive smoking caused lung cancer. The research compared 650 lung cancer patients with 1,542 healthy people. It looked at people who were married to smokers, worked with smokers, both worked and were married to smokers, and those who grew up with smokers.
The results are consistent with there being no additional risk for a person living or working with a smoker and could be consistent with passive smoke having a protective effect against lung cancer. The summary, seen by The Telegraph, also states: “There was no association between lung cancer risk and ETS exposure during childhood.”
A spokesman for Action on Smoking and Health said the findings “seem rather surprising given the evidence from other major reviews on the subject which have shown a clear association between passive smoking and a number of diseases.” Roy Castle, the jazz musician and television presenter who died from lung cancer in 1994, claimed that he contracted the disease from years of inhaling smoke while performing in pubs and clubs.
A report published in the British Medical Journal last October was hailed by the anti-tobacco lobby as definitive proof when it claimed that non-smokers living with smokers had a 25 per cent risk of developing lung cancer. But yesterday, Dr Chris Proctor, head of science for BAT Industries, the tobacco group, said the findings had to be taken seriously. “If this study cannot find any statistically valid risk you have to ask if there can be any risk at all.
“It confirms what we and many other scientists have long believed, that while smoking in public may be annoying to some non-smokers, the science does not show that being around a smoker is a lung-cancer risk.” The WHO study results come at a time when the British Government has made clear its intention to crack down on smoking in thousands of public places, including bars and restaurants.
The Government’s own Scientific Committee on Smoking and Health is also expected to report shortly – possibly in time for this Wednesday’s National No Smoking day – on the hazards of passive smoking.
1998.03.12 Smokescreens “The World Health Organisation is showing signs of allowing politics to get in the way of the truth.”
1998.03.15 Behind the smokescreen (no longer available on The Sunday Telegraph website)
1998.10.07 Multicenter case-control study of exposure to environmental tobacco smoke and lung cancer in Europe “Our results indicate no association between childhood exposure to ETS and lung cancer risk. We did find weak evidence of a dose-response relationship between risk of lung cancer and exposure to spousal and workplace ETS. There was no detectable risk after cessation of exposure.”
1998.10.11 Study fails to link passive smoking with cancer (no longer available on The Telegraph website)
1999.11.04 Rout of the new evil empire “With the release on November 5th of Michael Mann’s new film, ‘The Insider’, big tobacco takes yet another step towards filling the place once occupied by the Soviet Union.”
2000.02.11 Cancer risk from passive smoking ‘less than feared’ (no longer available on The Telegraph website)
2000.07.20 Blowing smoke “Americans’ obsession with punishing tobacco firms is wrong-headed, and an obstacle to rational debate about illegal drugs.”
2000.10.12 The tobacco war goes global “The World Health Organisation is trying to organise an international campaign against the demon weed.”
2002.06.20 ‘Smoking is even deadlier than we thought’ (available on The Telegraph website, note links to earlier articles no longer available)
After 10 years of deliberate and concentrated government anti-tobacco advertising, the average person no longer questions the data. Tobacco has been successfully demonized. Next on the agenda are alcohol, soft drinks, pizza, meat, chocolate bars…
Meanwhile, “Sir Patrick Leigh Fermor, who died last year at the age of 96, was known to everyone as Paddy…. Despite smoking 80 to 100 cigarettes a day and drinking prodigious amounts of alcohol, he remained remarkably fit. At the age of 69 he emulated Byron and swam the Hellespont in Turkey.”
2010.11.06 Dutch U-turn on smoking ban
2011.12.09 Holland stops funding anti-tobacco activists
2012.09.23 Switzerland rejects smoking ban
2012.11.10 Denmark to abolish tax on high-fat foods
2013.03.11 New York court blocks soda ban
2015.01.02 Two thirds of cancers are simply bad luck
In the 2012 BBC Reith Lectures, Niall Ferguson, one of the world’s top economic historians, presents a superb overview of the current state of the global political economy.
Finding a good keyboard these days is almost impossible. Mine, supported now by two boxes of wooden matches, is literally on its last legs. How can it be that in the 21st century, the range of available keyboards is so poor?
It’s not as if my current keyboard is a masterpiece of design either. It is, at best, a compromise. I wanted white; I got black. I wanted international; I got UK. At least the layout is manageable with decent key labeling and, due to a shipping error, I got it for free.
The obvious solution to the keyboard problem – and long overdue – is a flat, keyless, configurable, lighted, touchboard. Choose a language; press select. Choose a layout; press select. Choose a configuration; press select. No moving parts. High visibility. Easy to clean. Problem solved.
The first step down the road to sanity is provided by Minebea Co. of Japan with its Cool Leaf touchboard. Personally, I believe a grid system of key squares would be a major improvement, but I guess we must wait a year for manufacturers to scrap the old clunker boards in favor of the new technology. Apple too is finally waking up. Hooray! I just hope my current clunker board holds out long enough to see the dawn of this new era.
For some strange reason, the Mayor of London is elected according to the French presidential voting system. Instead of employing the tried-and-true English method of marking an ‘X’ in the box beside one’s preferred candidate then adding up the results, the Mayor of London requires two ‘X’s in separate boxes in separate columns on the ballot paper. In the words of an election official at my local polling station, “I can’t explain it to you. It’s how the system works.”
Indeed – and for the second time, the results of the election were delivered 24 hours late. Did all the fiddle-faddle of the second box make any real difference to the outcome of the election? No, apart from said delay. In other words, the system for electing London’s mayor is sloppy.
There are, in fact, only two credible methods of electing people: first-past-the-post (the person with the most votes wins) and ‘aggregate points’ (n points are allocated to n candidates in n-to-1 descending order).
For those who dislike first-past-the-post, let me elaborate briefly on the ‘aggregate points’ system. Presume seven candidates for a position. The ballot paper would display a box beside the names of the candidates (as is currently done with first-past-the-post), but instead of writing an ‘X’ in the box of one’s preferred candidate, one would write ‘7’ (seven). If there were only five candidates, one would write ‘5’. If there were 12 candidates, one would write ’12’. Following this, one would have the option of writing n-1 in the box next to one’s second preference. In the case of seven candidates, one would write ‘6’ in the box next to the name of one’s second choice. One would also have the option of writing ‘5’ in the box of one’s third choice, ‘4’ in the box of one’s fourth choice, ‘3’ next to one’s fifth choice, ‘2’ next to one’s sixth choice and ‘1’ next to one’s seventh (least preferred) choice. In other words, one ranks the candidates from highest to lowest by allocating points. The number of points to be allocated is determined by the number of candidates.
Presume five candidates. First-past-the-post would look like this:
Aggregate points would look like this (5 is most preferred, 1 is least preferred):
One need not rank all five candidates. One could rank only one’s favorites:
One could even vote for a single candidate:
One could even accommodate first-past-the-post voters who dislike the ‘aggregate point’ system, by counting their mark (X) as their first preference (5). Thus:
Would be read as:
Counting the vote for the ‘aggregate points’ system would be a matter of adding up all the points. The score of each candidate (his number of points) would be updated instantly as each ballot paper is read. The candidate with the most points wins. Needless to say, this process is more complicated than the first-past-the-post system, but it is a significant improvement on the ‘sloppy seconds’ system currently employed to elect London’s mayor.
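The counting rule described above can be sketched in a few lines of code. This is a minimal illustration, not part of the original post: the candidate names and ballots are hypothetical, each ballot is represented as a dictionary mapping a candidate to the points written in his box, and a bare ‘X’ is read as a first preference (the top score), as proposed above.

```python
def count_aggregate_points(candidates, ballots):
    """Tally an 'aggregate points' election.

    candidates: list of candidate names.
    ballots: list of dicts mapping candidate -> points (int),
             or -> "X" for a first-past-the-post-style mark,
             which is counted as the top score.
    """
    top = len(candidates)  # with 5 candidates, the top score is 5
    totals = {name: 0 for name in candidates}
    for ballot in ballots:
        for name, mark in ballot.items():
            if mark == "X":              # FPTP mark = first preference
                totals[name] += top
            else:                        # points written by the voter
                totals[name] += int(mark)
    return totals

# Hypothetical example: five candidates, three ballots.
candidates = ["Alice", "Bob", "Carol", "Dave", "Eve"]
ballots = [
    {"Alice": 5, "Bob": 4, "Carol": 3},      # partial ranking is allowed
    {"Bob": "X"},                            # FPTP-style voter: read as 5
    {"Carol": 5, "Alice": 4, "Bob": 1},
]
totals = count_aggregate_points(candidates, ballots)
winner = max(totals, key=totals.get)
```

Because the count is a simple running sum, each candidate’s score can indeed be updated the moment a ballot paper is read, exactly as described above.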
Generally speaking, it is unwise to build a hard, flat wall perpendicular to high-speed traffic. The emergency lane should have been tapered toward the thoroughfare lanes.
2012.03.14 Belgian coach crash in Swiss tunnel kills 28
My dearest Argentina,
Imagine my surprise to see your name in the newspapers. We had quite lost track of you (the fashion cycle being what it is), but, just as a silent film has taken an Oscar for the first time in 83 years, we once again see your name up in lights. Unfortunately, the negative publicity that dogged your career in the early days remains faithfully at your heels. Since we are old friends – both of us buffeted by winds of fortune – perhaps I might offer some brotherly advice.
While I can understand your desire to own the Falkland Islands, take a moment to remember that Spain would like to own Gibraltar, Morocco would like to own Madeira and the Canary Islands, Canada would like to own Saint Pierre and Miquelon, France would like to own the Channel Islands and, well, you get the point. I’m afraid the world is not clean and tidy, nor can we rearrange the pieces on the chessboard to suit our fancy.
I’m afraid also that you somewhat blotted your copybook the last time this issue came up. Indeed, let us be frank, it contributed greatly to the precipitous decline that marked your unfortunate exit from the stage. I know you are a proud woman, but surely, all those years in the wilderness have taught you that grace is the superior weapon. Petulance may be charming in a young girl, but it wears quickly and certainly has no place in the arsenal of a middle-aged woman.
Agitating against the British will not serve your ends. If anything, it will only harden their hearts against you. The Falkland Islands have been British since 1833 – longer than a united Argentina has existed (1861). Do you suppose Britain will relinquish the islands without the inhabitants’ consent? All the lawyers in the world cannot rewrite history – a history, I might add, that includes some unpleasantness on your part.
Therefore, take your inspiration not from Shakespeare’s Macbeth, but from Aesop’s Fable of the Wind and the Sun. Did Franco undermine the resolve of the Gibraltarians through blockade? Would the French citizens of Saint Pierre wish to join Canada if they were isolated and harassed by their neighbors in Newfoundland? The harder you blow on the Falkland Islanders, the tighter they will wrap their coats around themselves. Instead, open your arms to them. Invite them into your ports. Flatter them with trade. Ply them with good food, song and dance. Make them hunger for your smile, your perfume, your touch. Seduce them, my darling, and bring them to your bed.
2011.12.15 The Amen Break: just a sample
For ten years, I have predicted the demise of the euro on structural grounds. My argument is simple: a currency union requires economic amalgamation within a political union, which, in turn, requires linguistic, cultural and institutional cohesion.
The current attempt to save the euro through greater political centralization and control is folly. Even if a European political union were achieved through military force (as was attempted from 1939-1945 and again from 1945-1989) with a fiscal union imposed from above, the problem remains: linguistic, cultural and institutional heterogeneity. At best, a fiscal union would transfer money from productive members to unproductive members. It cannot reallocate economic resources.
One must face reality. Italy is dysfunctional. Greece, Spain and Portugal will never become disciplined, efficient, nose-to-the-grindstone Germanys. Nor will they suddenly convert their agricultural economies into global centers of advanced engineering. The problem is one of apples and oranges.
How, then, do Texas and Idaho coexist within a political union? Because after 200 years and a civil war, both accept the overarching authority of the Constitution, the Federal Government and the Federal Reserve System. People in Texas and Idaho speak the same language, share a similar culture and operate within a common institutional framework (laws and social norms). It is relatively easy for John to move his family and business from Texas to Idaho. It is almost impossible for Jean to move his family and business from Belgium to Spain.
A Federal Europe will not work, even if it is attempted with all the wisdom of the Framers of the US Constitution. It is for this reason that the euro should never have been created in the first place. It is why European leaders should be striving now to unwind the euro and reestablish independent national currencies. Put simply, race cars and pedestrians cannot use the same road, especially if the cars drive fast on the right, the pedestrians walk slowly on the left and the signs are all in Greek.
Regarding the current debt crisis: normally, a country can grow out of its debt through entrepreneurship, technological innovation and increased productivity. These are infeasible for the Club Med countries (Greece, Italy, Spain, Portugal) because they lack the necessary political-economic infrastructure:
Even with liberal and intelligent leadership, organizational and institutional change is a slow and painful process. The only short-term solution is currency devaluation and lower interest rates. The alternative, to force austerity rapidly on a population accustomed to artificially high living standards in order to repay foreign debt, is to risk social and political instability. Do we really want another Mussolini, Franco or Papadopoulos? Dissolving the euro would rejuvenate the European economy by enabling member states to achieve equilibrium within the global economic system, assuaging rather than exacerbating nationalist sentiments.
The benefits to Club Med countries leaving the euro are enormous:
The costs are significant, but not insurmountable:
Yet higher import prices depend upon where one is importing from. The Club Med countries would not suffer high import prices for goods and services imported from each other. While Greece, Spain and Portugal are mainly agricultural countries, Italy does have an industrial base. The more countries that leave the euro, the broader the Club Med market of goods and services at equivalent prices. Moreover, there is an entire world beyond Europe with which to trade.
Viewed in this light, the pernicious nature of the euro becomes apparent. The Club Med countries are currently forced to compete with Germany in terms of productivity; they must accept German interest rates and adopt German fiscal constraints. Political union is not a recipe for salvation. Chained to the euro, the Club Med countries face economic stagnation.
The tragedy is that monetary union was never a necessary step for a common European market. The European Union can function perfectly well without the euro. Economic convergence would likely be faster without a single currency. It is ironic, therefore, that the euro now threatens to undermine the whole European project.
The nettle must be grasped. The Club Med countries should default, leave the euro and establish independent currencies. They should, in other words, declare bankruptcy and begin anew with currencies that reflect economic fundamentals. Ultimately, this means a dissolution of the euro as a pan-European currency.
The question is how to unwind the euro with minimal disruption to the global financial system. The problem is that sovereign debt (government debt) is held by private banks. A full Club Med default would wipe out the European banking system.
Therefore, it is essential to prepare an across-the-board, preemptive recapitalization of European banks as was done by the US Treasury for American banks during the 2008 financial crisis. Potential sources of this funding are the ECB, the EFSF, the IMF and sovereign wealth funds. However it is accomplished legally, the end result must be a pool of one trillion dollars (or its equivalent) of capital to be injected into the banks. This could be managed by the IMF, who would take the equity position.
Following the dissolution of the euro and the reestablishment of national currencies, the banks would be free to repurchase their equity from the IMF. Also, national governments would be free to issue new debt domestically to cover their deficits (no longer burdened by high interest payments). Once the Club Med countries are restored to economic health, their governments could again borrow on the international financial markets. Importantly, the incentive for economic and political reform would come not from Brussels and Berlin through treaties and diktats (which have been notoriously ineffective in the past) but from the world at large.
That the euro was politically motivated is now patently obvious. Equally obvious is the mounting threat to the international financial system from a post-war generation who doggedly refuse to abandon the dream of a United States of Europe. Yet only by sacrificing the fantasy of that political dream can the reality of European economic integration be accomplished. Seeking to force political union now is to risk generating the very conditions of the 1930s that the European Union was designed to prevent: depression, political instability, social unrest, demagoguery, autarky and war.
2011.12.22 What really caused the eurozone crisis?
2012.11.06 Greek chaos
On 17 March 2003, Robin Cook delivered one of the greatest speeches in modern British history, receiving an unprecedented standing ovation in the House of Commons. Yet in spite of his formidable reputation, his crystal-clear logic and powerful rhetoric, he was unable to prevent the House of Commons from making one of the greatest blunders in modern British history. He placed a lifetime’s faith in the wisdom of the House and witnessed its failure. For many people, 18 March 2003 (the Commons vote) marked the end of the legitimacy of British parliamentary democracy. It certainly destroyed the career and reputation of Tony Blair. (Commentary added August 2012)
On the 22nd of June, a door opened before us, and we didn’t know what was behind it. We could look out for gas warfare, bacteriological warfare. The heavy uncertainty took me by the throat. Here we were faced by beings who are complete strangers to us. - Adolf Hitler, 17 October 1941
Now the reason the enlightened prince and the wise general conquer the enemy whenever they move and their achievements surpass those of ordinary men is foreknowledge…. What is called ‘foreknowledge’ cannot be elicited from spirits, nor from gods, nor by analogy with past events, nor from calculations. It must be obtained from men who know the enemy situation. – Sun Tzu
The United States wishes to invade Iraq. The stated goal is to remove an imminent threat of strategic nuclear, biological and/or chemical weapons that pose, together or separately, a clear and present danger to the United States, her allies and the world in general.
As of today, it remains unclear whether Iraq possesses such weapons and, if so, to what extent such a threat is credible. Information acquired by United Nations agencies operating in Iraq has so far been inconclusive. Public intelligence briefings by the United States, the United Kingdom and other countries have also been inconclusive. After considerable effort and with considerable motivation to garner international political and popular support for an invasion through persuasive intelligence, the results are, again, inconclusive. In spite of this and facing significant international opposition, the United States is resolute in its desire to invade Iraq.
Something is amiss. Even an undergraduate student of international politics knows that one does not launch military adventures without clear goals supported by conclusive intelligence. Therefore, in the case of Iraq, are there alternative goals, unstated publicly, for which N/B/C weapons are but a secondary consideration? Here is a list of candidates:
The first four are invalid because:
That leaves only two valid reasons for invading Iraq:
Since the primary function of the United Nations is to prevent countries from invading each other to acquire economic, commercial, political, diplomatic and/or military assets, such an invasion must take place under the guise of defense. From all appearances, the United States and the United Kingdom are using N/B/C weapons to construct a defensive story to mask an offensive strategy.
Yet the clumsiness of the defensive story suggests arrogance and incompetence. In my opinion, George Bush and Tony Blair are pursuing their own, personal, moral agendas. Both men are deeply religious. Neither man is particularly intelligent. They often state that an invasion is “the right thing to do,” as though they had a direct line of communication with God. Regardless of my opinion, an invasion must be evaluated in terms of the costs and benefits of achieving desired goals under adverse conditions.
Presume the primary goal is to acquire economic, commercial, political, diplomatic and/or military assets. What are those assets?
What are the costs of achieving desired goals under ideal conditions?
What are the costs of achieving desired goals under adverse conditions?
What are the potential costs of failing to achieve desired goals?
What is the actual situation on the ground?
Iraq, like the former Yugoslavia, is ethnically and religiously heterogeneous. Saddam Hussein, like Tito, is a strong-man who holds the country together. We know what happened to Yugoslavia after Tito’s death. A similar fate awaits Iraq.
Culture matters. The United States and United Kingdom developed their democratic institutions over a period of centuries. The legal, economic and social foundations on which those democratic institutions were built go back a thousand years. Democracy will not take root in Iraq in the space of a few months. At best, it will take 50 years.
Without a comprehensive, long-term plan for the administration and reconstruction of Iraq, the cost of acquiring economic, commercial, political, diplomatic and/or military assets will be high. There is also a risk those assets will be seized by the new regime or otherwise rendered valueless. Moreover, the potential negative externalities of increased, global, terrorist activity, while difficult to quantify, will be borne in part by people with no interest in the conflict.
Based on this calculus, an invasion now, particularly without international support, would be a major strategic error. In brief, the United States would be committing itself to an expensive course of action for which there is only one possible justification: the discovery and destruction of strategic N/B/C weapons. To undertake such a mission, therefore, without conclusive intelligence amounts to a high-stakes roll of the dice.
Hugh Trevor-Roper, Hitler’s Table Talk 1941-1944: His Private Conversations, 3rd edition, translated by Norman Cameron and R.H. Stevens. Enigma Books, New York, 2000
Sun Tzu, The Art of War, translated by Samuel B. Griffith. Oxford University Press, Oxford, 1971
The euro, introduced two months ago, is unlikely to survive because it is founded upon political strategies rather than economic principles. Whether the European Union itself survives is an open question.
A European union was conceived in the aftermath of World War II by Jean Monnet and Robert Schuman as a mechanism to:
The political will for European integration was strengthened during the Suez Crisis of 1956 when it became clear that:
European integration proceeded as follows:
At this point, the European Community made a structural choice. The Impossible Trinity states that any group of countries seeking economic union must choose two of the following three options:
- fixed exchange rates
- free flow of capital across borders
- independent national monetary policy (control over domestic interest rates)
The European Community chose fixed exchange rates and free flow of capital, but this meant that member countries would have to give up control over their interest rates and accept a single, one-size-fits-all, European monetary policy. The EMS would have to evolve into a rigid monetary union. The safety valve of floating exchange rates would be removed. Yet in order to succeed, a monetary union between countries requires the following:
A whole requires fluidity between its member parts. Economic agents and their resources must flow quickly and smoothly between opportunities and geographic locations. There can be no legal, cultural, linguistic or institutional barriers. Compartmentalization, the friend of ships, is the enemy of monetary unions. Moreover, there must be a strong, central fiscal authority (central government) to cross-subsidize cold zones (low investment, low growth) with money from hot zones (high investment, high growth).
Unaware of the impending Information Revolution, the European Community sought to revitalize the post-war economic paradigm. Yet the post-war political paradigm was coming to an end:
This unexpected political shock produced an unexpected economic shock. German reunification generated inflation (in former West Germany) as eastern Germans (and other eastern Europeans) frantically purchased goods and services with inflated currency (a subsidized exchange rate of 2:1 instead of a realistic 6:1 Ostmarks to Deutschmarks) from western German businesses. By 1992, the western German economy was hot while the rest of Europe was cold (in recession). The German central bank (Bundesbank) raised interest rates to cool the German economy. The other European countries raised interest rates to maintain the fixed exchange rates set by the EMS. Yet high interest rates in the rest of Europe were stifling economic recovery.
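The scale of the conversion subsidy described above can be sketched with a few lines of arithmetic; the 6,000-Ostmark savings figure is purely illustrative and not from the original text.

```python
# Sketch of the subsidy implicit in converting Ostmarks to Deutschmarks
# at the official 2:1 rate rather than the (claimed) realistic 6:1 rate.

def dm_received(ostmarks: float, rate: float) -> float:
    """Deutschmarks received for `ostmarks` converted at rate:1."""
    return ostmarks / rate

savings = 6000  # hypothetical Ostmark savings, for illustration only

at_official = dm_received(savings, 2)   # 3000.0 DM
at_realistic = dm_received(savings, 6)  # 1000.0 DM

# The subsidized rate tripled eastern purchasing power in Deutschmarks,
# feeding the demand shock on western German businesses.
print(at_official / at_realistic)  # 3.0
```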
In the midst of this economic imbalance, the member countries of the EC signed the Treaty of European Union (Maastricht Treaty), which established the European Union (political) and European Monetary Union (economic). There would be one European central bank (ECB) issuing one European currency (euro) and setting one European interest rate. The Maastricht Treaty was rejected in a Danish referendum, barely survived a French referendum and was frog-marched through the British Parliament (without a referendum). The economic imbalance was untenable. The EMS collapsed.
At this point one would have expected the European Community to abandon monetary union. The Information Revolution was gathering speed, generating a new, global, economic paradigm, which married inexpensive, decentralized IT systems to new and sophisticated financial instruments. Never before could floating exchange rates be managed so effectively and efficiently by businesses and individuals. The Industrial Era of Customs Unions was ending just as the Maastricht Treaty was coming into effect.
Why, then, did the European Community (now the European Union) persist with its goal of monetary union? Because with the collapse of the Soviet Union, the reunification of Germany and the absorption of former East European (Communist) countries into the former West European (Capitalist) economic system, the West European project became a pan-European project. With the Soviet Union out of the way, there were renewed ambitions to construct a ‘United States of Europe’ to rival the United States of America. Thus, while economic arguments for monetary union were getting weaker, political arguments for monetary union were getting stronger.
Today, 10 years since structural problems undermined the EMS, those same structural problems remain. The Information Revolution is accelerating. The Internet became a global phenomenon in 1995; eBay was founded that year, and Google in 1998. In the 10 years since the signing of the Maastricht Treaty, there have been almost seven complete iterations of the 18-month IT product lifecycle (based on Moore’s Law). As a result, the cost of a unit of 1992 IT-functionality (computer+telecommunications capability) has fallen 99 percent. To put it another way, the 1992 price of a unit of IT functionality now buys 100 units. By 2012 it will buy 10,000 units. This does not include the service industries that will sit atop all that IT functionality.
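The Moore’s-Law arithmetic in the paragraph above can be checked with a short sketch, assuming the cost of a unit of IT functionality halves with every 18-month iteration (a simplification of the premise, not a claim from the original text):

```python
# Units of 1992-priced IT functionality the 1992 price buys after `years`,
# assuming cost halves every 18-month (1.5-year) product cycle.

def units_bought(years: float, cycle_years: float = 1.5) -> float:
    iterations = years / cycle_years
    return 2 ** iterations

print(round(10 / 1.5, 2))       # 6.67 iterations per decade ("almost seven")
print(round(units_bought(10)))  # ~102 units by 2002 (a ~99 percent fall in cost)
print(round(units_bought(20)))  # ~10321 units by 2012 (the "10,000 units")
```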
How long, then, can a socially compartmentalized monetary union survive in a decentralized, global Information Economy? Even if the European Union were to create a fiscal union managed by a strong central government, the only thing to flow across borders will be money. Human capital will remain rooted in the geography of linguistic, cultural and institutional communities. If we rerun the EMS experiment, this time with a fiscal union, would Germany consent to huge subsidies for the rest of Europe?
What, then, is on the horizon? Let us look at some trajectories.
In terms of size and significance, the political and economic changes afoot exceed the collapse of the Soviet Union by orders of magnitude. What happens when this economic tsunami rolls across a brittle European Monetary Union?
The Asia Effect:
The Technology Effect:
The Economic Result:
It would appear that the developed world is facing 20 years of low inflation and low interest rates. At the same time, China will be importing large quantities of raw materials and machine tools and exporting large volumes of traditional manufactured goods. Producers and consumers in countries that supply China with raw materials and machine tools will benefit. Consumers will benefit in countries that purchase traditional manufactured goods; producers will be undermined.
Across the developed world, sustained low interest rates will generate an investment boom, a consumption binge or both. Those countries that use low interest rates for consumption will sacrifice a once-in-a-generation opportunity to increase TFP growth (through technological innovation, new plant and equipment, more efficient organizational systems, R&D, high-tech entrepreneurship), foregoing the returns necessary to service the debt.
What happens, then, if German producers, consumers and investors seize the opportunity afforded by China’s Industrial Revolution while Spanish consumers binge on cheap imports and cheap loans? In 20 years, Germany could reap the rewards of high TFP growth while Spain goes bankrupt. The only way out for Spain in such a situation is a massive devaluation of its currency, dramatic falls in low-skill wages or a combination of both. Membership in the euro will prevent the first; riots in the street will prevent the second. Will a frugal Germany be willing to bail out a profligate Spain? Unlikely.
How fascinating that within days of the World Health Organisation (WHO) releasing a one-page summary of a 200-page, 10-year, 12-centre, seven-country study suggesting that there is little or no relationship between passive smoking and health, a British government-funded committee of doctors goes on the air with their own counter-study.
Why was the full WHO study not published immediately? The WHO answers that the findings of the study will be published someday in some unnamed scientific journal. Perhaps this is their normal publishing procedure. But by not publishing its report immediately, the WHO has not only suppressed information running counter to its political agenda, but has given the initiative to every anti-smoking group in town to hit the media with its spin doctors.
I listened as the World At One took on Peter Mandelson, spin doctor extraordinaire, and gave as good as it got. But where was the mention of the WHO report? Where were the hard questions about the legitimacy of a government-sponsored committee of doctors doing research into passive smoking?
If a tobacco company produces a report about smoking, the media and the government dismiss it because of ‘vested interest’. But did it ever occur to anyone that the medical community might have a vested interest in the smoking debate?
A Review of Philosophy of Science with Regard to Information Systems Design
Department of Information Systems, Department of Philosophy,
London School of Economics
The original purpose of this paper was to expose, within the artificial intelligence debate, a fundamental dilemma concerning the modern-day equivalent of Frankenstein; that is, the fear of many researchers in the field of AI, and on its periphery, that success by the AI community in establishing consciousness on a non-biological platform would result in a debasement of human life. This is explored particularly well by Robert Nadeau in Mind, Machines and Human Consciousness (1991). It was then my intention to investigate the philosophical roots of AI in order to appraise Nadeau’s arguments and construct a broader intellectual framework.
Early on in my investigation, I became ensnared by a more subtle debate concerning the nature of reality and human knowledge. I am speaking about Winograd and Flores’ attempt in Understanding Computers and Cognition (1986) to undermine the concept of an objective reality with respect to systems development. This is done by subsuming it in a vaguely defined derivation of subjectivist philosophy that borrows heavily from the Continental tradition of ‘idealism’ and the American tradition of ‘pragmatism’. Whether their approach is deliberate or naive is inconsequential. What is important is that their ideology be flushed out in order to expose it to critical evaluation. This paper, therefore, attempts to decipher their arcane work and “situate it on some kind of intellectual map”.
Understanding Computers and Cognition
In investigating Winograd and Flores, we soon become bogged down in a quagmire of imprecise terms, overlapping concepts and what we might call philosophical isolationism (where philosophers fail to cross-reference their ideas with others). For example, the fundamental theme of the book is that Western science is somehow invalid or inadequate because it is rooted in a ‘rationalistic tradition’. They then propose that this ‘rationalistic orientation’ be replaced by a new orientation, which they consider to be more rational in a broader sense. Yet we never get a clear definition of either. We are also bombarded by a plethora of derivations of the word ‘rational’ (rational, rationalism, rationalist, rationalistic), each with its own meaning that we, the readers, must decipher. Winograd and Flores refuse to set their arguments into a broad framework of philosophy, focusing instead on narrow and obscure writers in the field.
Let us, therefore, begin our trek through their book with the ‘rationalistic tradition’.
We have labelled this tradition the ‘rationalistic tradition’ because of its emphasis on particular styles of consciously rationalized thought and action. In calling it ‘rationalistic’ we are not equating it with ‘rational’. We are not interested in a defense of irrationality or a mystic appeal to non-rational intuition. The rationalistic tradition is distinguished by its narrow focus on certain aspects of rationality, which often leads to attitudes and activities that are not rational when viewed in a broader perspective. Our commitment is to developing a new ground for rationality – one that is as rigorous as the rationalistic tradition in its aspirations but that does not share the presuppositions behind it. (WF 1986, 8)
They first claim that the ‘rationalistic tradition’ defines a particular type of rational thought, but that they are not equating ‘rationalistic’ with ‘rational’. In other words, they are not arguing in favor of ‘anti-rationalism’ (mysticism). What this appears to mean is that they accept epistemological rationalism (‘reason’ as the basis of knowledge) in general, but have problems with certain sub-branches of it. What are they?
As a first step we will characterize the tradition of rationalism and logical empiricism that can be traced back at least to Plato. This tradition has been the mainspring of Western science and technology, and has demonstrated its effectiveness most clearly in the ‘hard sciences’ – those that explain the operation of deterministic mechanisms whose principles can be captured in formal systems. The tradition finds its highest expression in mathematics and logic…. (WF 1986, 14)
This is somewhat confusing, for while ‘rationalism’ may date back to Plato, ‘logical empiricism’, a particular derivation of traditional ‘empiricism’, dates only to Russell and the Cambridge School of the early twentieth century. While ‘rationalism’ may be considered the mainspring of Western science and technology, ‘logical empiricism’ is but a twentieth-century manifestation of it. Let us presume that they are equating the ‘rationalistic tradition’ with ‘logical empiricism’. What is logical empiricism?
The claim that Russell hoped to establish was that mathematics could be reduced to logic – a view known as logicism. If logicism is correct, then mathematical truths are basically of the same kind as the truth that all bachelors are unmarried. So we can know mathematical truths independently of experience, but only because such truths are essentially empty – mere linguistic conventions rather than assertions about the world. This approach leads to a new version of empiricism, which could be called logical empiricism. Knowledge is divided into two kinds. The first kind is knowledge of logical truths. This knowledge is indeed independent of experience, but it consists of mere truisms, empty truths-by-definition. Mathematical knowledge is of this kind. The second kind comprises all really significant knowledge about the world, and is based on experience. The first step in developing logical empiricism is to establish that mathematics can be reduced to logic. (Gillies 1993, 12)
There is a connection between ‘logical empiricism’ and ‘logical positivism’.
The philosophical views of the Vienna Circle came to be known as ‘logical positivism’, though really the term ‘logical empiricism’, introduced earlier, is more appropriate. As we might expect, Russell was a major influence. (Gillies 1993, 18-19)
But things are not quite so simple. Winograd and Flores continue:
In some ways, the rationalistic tradition might better be termed the ‘analytic tradition’. We have adopted a more neutral label in order to avoid the impression that we are engaged in a philosophical debate in which philosophers labelled ‘analytic’ take the other side. We are also not concerned here with the debate between ‘rationalists’ and ‘empiricists’. The rationalistic tradition spans work in both of these lines. (WF 1986, 16n)
What, then, is the ‘analytic tradition’? Well, Winograd and Flores only suggest that it is not ‘analytic philosophy’. Here we dip into A Dictionary of Philosophy (Flew 1984) and find no entry for ‘analytic philosophy’. We then try the Harper Collins Dictionary of Philosophy and meet success.
Analytic philosophy. a twentieth-century philosophic movement particularly strong in England and the United States that concentrates on language and the attempt to analyze statements (or concepts, or linguistic expressions, or logical forms) in order to find those with the best and most concise logical form that fits the facts or meanings to be presented. (Angeles 1992, 10)
There follow four definitions particular to Russell, G.E. Moore, Wittgenstein and Carnap, which brings us back to ‘logical empiricism/positivism’. Are we then no further than we were before in that the ‘rationalistic tradition’ equals ‘logical empiricism’ (equals ‘logical positivism’) equals ‘analytic philosophy’?
It would seem that Winograd and Flores have walked into two contradictions. First, how can the ‘rationalistic tradition’ be “distinguished by its narrow focus on certain aspects of rationality” while it simultaneously “spans work in both” the ‘rationalist’ and ‘empiricist’ schools that, taken together, compose the rational tradition with which Winograd and Flores earlier claimed they were “not equating it”? Second, if they oppose the ‘rationalistic tradition’ so strongly, and equate it with ‘analytic philosophy’, why do they wish “to avoid the impression that we are engaged in a philosophical debate in which philosophers labelled ‘analytic’ take the other side”?
Here is the real problem: Winograd and Flores are not only unclear about what they dislike about Western science, they have no intention of defining it.
We will make no attempt to provide a full historical account of this tradition, or to situate it on some kind of intellectual map. (WF 1986, 14)
But things become Kafkaesque when we read: “Much of our book is an attempt to show the non-obviousness of the rationalistic orientation and the…blindness that it generates.” (WF 1986, 17) (my italics)
The Philosophy of Science
Before we can come to grips with Winograd and Flores, we need a brief refresher course in the philosophy of science. Perhaps we can politely suggest that had Winograd and Flores undertaken this task before launching their attack on the ‘rationalistic tradition’, we might have been spared the necessity of doing it ourselves.
We begin with the Middle Ages when the predominant epistemology was mysticism – knowledge through divine revelation or faith. This, of course, was promoted by the Catholic Church, the primary institution of theological/philosophical and intellectual debate. This philosophical hegemony was overturned during the Renaissance by a new epistemology of rationalism – knowledge through objective abstraction or reason.
As far as the eighteenth century is concerned, the problem was posed by the great success of the scientific revolution and of Newtonian physics. It seemed to most eighteenth-century thinkers that Newton’s theory was a new type of theory, a scientific theory, superior in kind to previous theories. At the same time, religion was for the first time in Western Europe coming under heavy attack, partly, no doubt, because of the disillusionment brought about by the wars of religion of the sixteenth and seventeenth centuries. The contrast, then, was between science, considered as a sound form of knowledge, and religious beliefs, whose claim to be knowledge was more dubious. (Gillies 1993, 153)
The new (rational) epistemologists soon split into two camps: the ‘rationalists’, who argued that the ‘laws of nature’ were fixed and universal and could be deduced through sheer intellect alone, and the ‘empiricists’, who agreed that the ‘laws of nature’ were fixed and universal, but argued that they had to be abstracted from observation and experience. This debate between ‘rationalists’ and ‘empiricists’ continued without much change until Hume.
David Hume, an ‘empiricist’, shocked adherents to the now-established tradition of rationalism (epistemological rationalism) by demonstrating that logic could not prove the validity of the process of induction. This process, a main pillar of the rationalist epistemology, took two forms: first, that the future will be like the past and, second, that specific observations/experiences could be abstracted into universal laws of nature. He argued that since there was no logical foundation for induction, the rationalist epistemology was still essentially an act of faith. The entire philosophy of science has since, continuing to the present day, been concerned with solving Hume’s problem of induction and thereby restoring the philosophical underpinnings of rationalism. To quote Bertrand Russell: “The growth of unreason throughout the nineteenth century and what has passed of the twentieth is a natural sequel to Hume’s destruction of empiricism.” (Popper 1979, 1)
The first to try his hand at solving Hume’s problem was Immanuel Kant. He argued that there were not two, but three sources of knowledge.
|                                         | a priori knowledge (via deductive logic)                           | a posteriori knowledge |
| Analytic (‘necessary’) propositions     | intellectual/discursive thought                                    |                        |
| Synthetic (‘contingent’) propositions   | pure intuition (e.g. Euclidean geometry; causality/induction)      | experience             |
The first type of knowledge, according to Kant, with which we are already familiar, is analytic a priori – the realm of the intellect. Kant, though, disagreed with the ‘rationalists’ by suggesting that this type of knowledge was sterile, nothing but tautologies (self-evident truths). The second type, with which we are also familiar, is synthetic a posteriori – the realm of experience. Here he generally agreed with the ‘empiricists’, but added a stipulation. He then defined a new, third type of knowledge, which he called synthetic a priori – pure intuition of space, time and causality. Here he placed mathematics and geometry as universal truths which (and this is the stipulation) guide or shape our knowledge from experience. Here too he placed induction, and with that wrote QED to Hume’s problem.
Of course, not everyone was happy with Kant’s solution to Hume’s problem. Before we proceed, though, it is worth making two important points. First, logic itself, which we now take for granted as an advanced discipline, was, at the time of Kant, a crude system essentially unchanged since its formulation by Aristotle; philosophers generally dismissed it as empty scholasticism. Second, while the ‘empiricists’ sought to abstract general laws from specific observations, a process we now refer to as ‘induction’, the mechanism or vehicle for such abstraction was actually deductive logic. Before Hume, the distinction between deductive and inductive inferences (logical arguments) was not widely recognized; it was Hume who drew the distinction explicitly.
Now we come to the twentieth century, the Cambridge School under Russell and Whitehead and the Vienna Circle under Schlick and Carnap. These two schools rejected Kant’s third type of knowledge (synthetic a priori) and attempted to demonstrate first, that mathematics and geometry could be defined as analytic a priori propositions (tautologies) and second, that induction could be approached empirically through probability. As mentioned earlier, the two schools evolved the philosophies of ‘logical empiricism’ and ‘logical positivism’ respectively, but whereas the Cambridge School adopted a Bayesian approach to probability, the Vienna Circle preferred a ‘Baconian’ philosophy of probabilistic induction.
|                                         | a priori knowledge (via deductive logic) | a posteriori knowledge |
| Analytic (‘necessary’) propositions     | mathematics                              |                        |
| Synthetic (‘contingent’) propositions   |                                          | science                |
The other major figure to tackle induction was Karl Popper. He proposed that Hume’s problem could be solved by viewing our knowledge of the world as a set of mutable conjectures rather than fixed laws of nature. He agreed that induction could not be validated through logic, but proposed that induction, rather than an act of faith, was the process of ‘knowledge evolution’ through conjectures (proposed hypotheses) and refutations (empirical tests). Through this (rational) process, man evolves an objective pool of knowledge that, while incomplete and frequently inadequate, becomes increasingly better at explaining the nature of reality. This presupposes an objective reality (metaphysical realism) and the ‘correspondence theory of truth’. Interestingly, Popper valued metaphysical (non-testable) hypotheses for their heuristic properties. In other words, metaphysical hypotheses may guide an investigation that later results in testable hypotheses.
Thus we have arrived at two possible solutions to Hume’s problem: probability in its various forms and Popper’s ‘conjectures and refutations’. An objection to Popper’s approach was raised by Willard Van Orman Quine who suggested that a hypothesis could not exist in isolation but belonged to a nexus of interdependent hypotheses. For this reason, the testing of a single hypothesis could never be definitive in establishing its veracity. Quine further argued that all knowledge exists in praxis (context) and, therefore, cannot be reduced to ‘scientific’ and ‘metaphysical’ types. Perhaps we have now reached the territory of Winograd and Flores.
Winograd and Flores Revisited
As they make perfectly clear, they are not going to make things easy for us. Not only is the name index incomplete, failing, for example, to list Kant even though Kant appears twice on page 31, but they continually avoid definitions and explanations. We frequently encounter statements such as:
Heidegger’s writings are both important and difficult, and we will make no attempt to give a thorough or authoritative exposition. (WF 1986, 27)
We cannot present here a thorough discussion of Heidegger’s philosophy, but will outline some points that are relevant to our later discussion. (WF 1986, 32)
Nor does the name index list Quine, Popper, Carnap, Schlick, or Hume, and it mentions Russell only briefly. To whom, then, do they refer? How much explanation do we get?
We concentrate on the works of Hans-Georg Gadamer and Martin Heidegger. Many other philosophers have explored related ideas, including phenomenologists such as Husserl, Ricoeur, and Merleau-Ponty, existentialists such as Sartre, pragmatists such as Mead and Dewey, current political philosophers such as Habermas and Apel, and even some with a more analytic background such as Wittgenstein. We have selected Heidegger and Gadamer, partly because of the role their writings played in our own learning, and partly because of their intrinsic importance within the tradition they represent. (WF 1986, 9)
Heidegger rejects both the simple objective stance (the objective physical world is the primary reality) and the subjective stance (my thoughts and feelings are the primary reality), arguing instead that it is impossible for one to exist without the other. The interpreted and the interpreter do not exist independently: existence is interpretation, and interpretation is existence. Prejudice is not a condition in which the subject is led to interpret the world falsely, but is the necessary condition of having a background for interpretation (hence Being). This is clearly expressed in the later writings of Gadamer: “It is not so much our judgments as it is our prejudices that constitute our being.” (WF 1986, 31-32)
Perhaps it is time to open our trusty dictionary and discover just what Winograd and Flores are about. First, ‘existentialism’.
Existentialism. a philosophical trend or attitude, as distinct from a particular dogma or system. Its origins are attributed to Kierkegaard. It became influential in continental Europe in the second quarter of the 20th century, through the writings of Heidegger, Jaspers, Marcel, and Sartre. Existentialism is generally opposed to rationalist and empiricist doctrines that assume that the universe is a determined, ordered system intelligible to the contemplative observer who can discover the natural laws that govern all beings and the role of reason as the power governing human activity. In the existentialist view the problem of being must take precedence over that of knowledge in philosophical investigations. Being cannot be made a subject of objective inquiry; it is revealed to the individual by reflection on his own unique concrete existence in time and space. (Flew 1984, 115)
The basic idea behind existentialism is that any distinction between the observer (subject) and the observed (object) is artificial because existence should be defined solely in terms of the relationship between them. In other words, existence is defined as the relationship between subject and object, not subject and object per se. (An analogy might be that existence is the spark between two carbon rods of an arc lamp where one rod is the subject and the other is the object. Pull the rods apart and the light of existence is extinguished).
Next we need to put Heidegger into historical-philosophical perspective (the beginnings of the intellectual map that Winograd and Flores avoid). Kant, as we recall, developed the idea of synthetic a priori knowledge – pure intuition of space, time and causality – which is employed by the intellect to guide or shape our knowledge from experience. This position is referred to as ‘transcendental’ or ‘critical’ idealism. Hegel dismissed this form of idealism and offered in its place ‘objective’ or ‘absolute’ idealism, based on the idea that mind alone exists (there is no such thing as matter). Hegel accordingly adopted the ‘coherence theory of truth’. Kierkegaard, trained as a Lutheran theologian, in turn abandoned Hegel’s monistic idealism for existentialism.
On the other side of the ‘family tree’ we begin with Brentano, a Catholic priest who:
…introduced the doctrine of intentionality, distinguishing and characterizing mental events as “the direction of the mind to an object” in (1) perception, (2) judgment or belief, and (3) approval or disapproval. (Flew 1984, 49)
Husserl, a student of Brentano:
…set out to develop the doctrine of phenomenology into a pure, non-empirical science. In Logische Untersuchungen (1900, 1901) he criticized psychologism and naturalism, claiming that a study of the meaningful use of words must rest on insight, not generalizations from experience…. It is of the essence of objects to be correlative to states of mind; no distinction can be made between what is perceived and the perception of it. (Flew 1984, 157)
Thus, Heidegger’s existentialism is an odd mixture of Hegelian idealism and theology.
Although Heidegger did not regard himself as an existentialist, he was influenced by Kierkegaard. Heidegger’s ontology is echoed in existentialist writings, including the works of Sartre. Heidegger adopted Husserl’s phenomenological method in order to examine the data of immediate experience, discarding preconceived epistemological and logical constructions that make a distinction between consciousness and the external world. (Flew 1984, 143)
We can then outline the Continental section of the intellectual map that Winograd and Flores have so casually avoided.
| Auguste Comte (1798-1857)                 | John Stuart Mill (1806-1873) |
| Brentano (1838-1917)                      | Dilthey (1833-1911)          |
| Husserl (1859-1938), student of Brentano  | Nietzsche (1844-1900)        |
| Heidegger (1889-1976), student of Husserl |                              |
| Sartre (1905-1980), student of Husserl    |                              |
The next major component of the Winograd-Flores synthesis is ‘pragmatism’.
Pragmatism. a label for a doctrine about meaning first made a philosophical term in 1878 by C.S. Peirce. “Consider what effects, which might conceivably have practical bearings, we conceive the object of our conception to have. Then our conception of these effects is the whole of our conception of the object”. The term was soon borrowed by William James, F.C.S. Schiller, and John Dewey, who all in their different ways made pragmatism a theory of truth. (Flew 1984, 284)
James is best known for introducing the ‘stream of consciousness’ concept.
Stream of consciousness. a phrase coined by William James as a characterization of the mind, that it is a process of continuous thought. It can be seen as an attempt to find a middle way between two previous opposing concepts of the mind: the Cartesian (that mind is a special kind of unknown mental substance), and Humean (that it is nothing but a bundle of sensations). (Flew 1984, 341)
Last, we shall look at Maturana, and again Winograd and Flores give us a less than helpful explanation.
We introduce much of Maturana’s terminology, without attempting to give definitions (indeed our own theory of language denies the possibility of giving precise definitions). The network of meanings will gradually evolve as the different ideas are developed and the links of their interdependence laid out. We cannot in these few pages give a complete or balanced account of Maturana’s work. (WF 1986, 40)
Essentially, working in the field of visual perception, Maturana, Uribe and Frenk demonstrate that the properties of objects, for example color, are not inherent to the object, but a function of perception. They demonstrate that color is not necessarily, at least when perceived by a human, a function of the wavelength of light. This leads Maturana to conclude: “From this perspective, there is no difference between perception and hallucination.” (WF 1986, 43)
But is this not the same question that all idealist philosophers have asked? Here, we can now assume, is the real issue. It is not rationality that Winograd and Flores are exploring – after all, Kant, Hegel, Kierkegaard, and Heidegger were rationalists (as in epistemological rationalism), and Maturana is a biologist – it is rather, the issue of ‘reality’ that exercises them.
There is a naive view that takes language as conveying information about an objective reality. Words and sentences refer to things whose existence is independent of the act of speaking. But we ourselves are biological beings, and the thrust of Maturana’s argument is that we therefore can never have knowledge about external reality. (WF 1986, 50)
Philosophy of Science Revisited
Philosophy of science, a branch of philosophy in general, has until fairly recently been based upon certain assumptions. These are first, that there exists an objective reality independent of the existence of sentient beings, and second, that man’s knowledge is a function of reason. This is known as the ‘realist’ position. Lately, though, there has arisen an opposing school that has adopted an ‘anti-realist’ position. In order to comprehend Winograd and Flores, therefore, we need to ascend from philosophy of science to philosophy in general, for it is here that Winograd and Flores begin their attack on the ‘rationalistic tradition’.
Philosophy has five main branches: metaphysics (ontology), epistemology, ethics, aesthetics and logic. Metaphysics, the study of the underlying nature of reality, embodies two main schools of thought, ‘idealism’ and ‘realism’, each divided into three types: ‘monism’, ‘dualism’ and ‘pluralism’. Epistemology, the study of human knowledge, also embodies two main schools of thought mentioned earlier: ‘rationalism’ and ‘anti-rationalism’ (mysticism).
Each of these primary philosophical positions has ramifications in all subsequent branches of knowledge. One’s approach to science, language, logic, psychology, sociology, everything, depends on these primaries. Take truth for example. We have already mentioned the theories of truth adopted by certain philosophers.
Of the three theories of truth, the oldest was the correspondence theory, the theory that truth is correspondence with the facts, or to put it more precisely, that a statement is true if (and only if) it corresponds to the facts, or if it adequately describes the facts. This is the theory which I think Tarski has rehabilitated. The second theory is the so-called coherence theory: a statement is regarded as true if (and only if) it coheres with the rest of our knowledge. The third theory is that truth is pragmatic utility or pragmatic usefulness. (Popper 1979, 308)
The same can be said of language. For Winograd and Flores: “Nothing exists except through language” (WF 1986, 68). For Popper: “One should never get involved in verbal questions or questions of meaning, and never get interested in words.” (Popper 1979, 310)
Winograd and Flores appear to have no quarrel with epistemological rationalism. What they take issue with is the metaphysical division between realism and idealism – between objective and subjective. Here, now, finally, we have arrived at an understanding of the ‘rationalistic tradition’. It is nothing less than the objective-subjective basis of Western philosophy that they wish to overthrow. And since it is this basis that distinguishes man from the animals – the ability to transcend the moment and abstract or conceptualize existence separately from existence itself – it is man himself, the epistemic individual, that Winograd and Flores wish to debase.
Denying realism amounts to megalomania (the most widespread occupational disease of the professional philosopher). (Popper 1979, 41)
Drawing from Continental philosophy, oriented toward ‘rationalist’ (intellectual) and existentialist methods – as contrasted with ‘analytic’ philosophy, oriented toward ‘empiricist’ (experimental) and logical methods – and appropriating Maturana’s experimental research into perception, Winograd and Flores weave together their own little intellectual empire.
Yet we are wiser than the unfortunate Joseph K. who, in Franz Kafka’s The Trial, meets his end in ignorance and confusion, for we can see through Winograd and Flores’ ambiguous philosophy. We shall not be fooled by the con-man’s trick of deploying vague and misleading terminology to baffle and wrong-foot an opponent. Nor shall we accept Winograd and Flores’ failure to cross-reference their philosophy with the vast body of philosophical thought that extends back to Plato. In fact, we would suggest that they could have promoted their ideas much better through such cross-referencing. The work of Quine and Duhem, for example, lends itself extremely well to Winograd and Flores’ thesis.
Perhaps the saddest part of Understanding Computers and Cognition is that the research into perception performed by Maturana and others (for example: Goethe, Helmholtz, Maxwell, Edwin Land, Donald Hebb, Wilder Penfield, Richard L. Gregory, Oliver Sacks, Gerald Edelman – none of whom are mentioned), a valid and exciting field of research with much bearing on scientific and philosophical issues, is seconded to bolster what verges on a political discourse. Thus is lost a real opportunity – and this is odd considering Winograd’s earlier practical scientific research – to explore perception and cognition within the broad, and actually quite liberal, Western tradition of science and philosophy.
Angeles, Peter A. (Ed) 1992: The Harper Collins Dictionary of Philosophy, 2nd edition. HarperCollins Publishers, New York
Flew, Anthony (Ed) 1984: A Dictionary of Philosophy, 2nd edition. Pan Books, London
Gillies, Donald 1993: Philosophy of Science in the Twentieth Century. Blackwell, Oxford
Nadeau, Robert L. 1991: Mind, Machines, and Human Consciousness. Contemporary Books, Chicago
Popper, Karl R. 1979: Objective Knowledge, revised edition. Clarendon Press, Oxford
Winograd, Terry and Fernando Flores 1986: Understanding Computers and Cognition: A New Foundation for Design. Addison-Wesley, New York