Many of those camped in the migrant camp known as “The Jungle” are asylum seekers who have fled their own countries to escape violence. In a number of cases, men will pay to be smuggled out of the country and then risk their lives in the Mediterranean crossing that has been the subject of so much recent press coverage. Following the crossing, a large number converge on Calais, moving through Europe seemingly unnoticed. Many take such risks in the hope that the UK will offer them a better quality of life and the chance to bring their family with them once they have established themselves in society. Surely their bravery and willingness to search for a better life do not deserve the labels that have been thrown at them, such as “cockroaches” by Katie Hopkins and “a swarm” by David Cameron? Such language has highly negative connotations and implies that the motives of those seeking asylum in the UK are primarily sinister, twisting British public opinion of migrants on a broader scale. In order to fully understand the nature of the situation we must ask ourselves “Why Britain?” and consider the situation as a whole, rather than just the actions of the migrants in Calais and the disruption caused to cross-Channel transport services.
The surge of tobacco products into developing countries began after World War II, when the USA developed its “Food for Peace” scheme, with tobacco among the export items. In the first 25 years of the initiative, the United States exported in excess of $1 billion worth of tobacco. This was the developing world’s initial exposure to Western-style cigarettes. From the 1960s onwards, transnational corporations (TNCs) continued to develop tobacco markets in these regions through a wide variety of targeted and effective marketing schemes, widening their customer base and inducing smoking habits. Smoking prevalence was increased further by the actions of national tobacco companies: in an attempt to counter the increased volume of tobacco being sold by TNCs in their countries, the national corporations developed marketing schemes of their own to regain lost sales. As TNCs and national corporations went head to head, overall expenditure on tobacco marketing increased, with a corresponding rise in tobacco consumption.
If Person A, a Blind Man, Is Walking Towards the Edge of a Cliff and Person B Watches Person A Do So Without Stopping Him, Is Person B Guilty of Killing Person A?
Please note that for ease of reference, ‘he/she’ will be replaced by ‘he’. This is a legal problem centred on the issue of causation. English law requires the prosecution to prove two elements of each offence: the actus reus, the physical conduct of committing the crime, and the mens rea, showing that the defendant was aware, or should have been aware, that his conduct would lead to the crime. Alongside these, causation must be proved, connecting the defendant’s conduct with the resulting harm.
In the case of Person A and B, to help us understand whether Person B is guilty of the unlawful killing of Person A we can use the ‘but for’ test (sine qua non) to determine factual causation. We must ask ourselves, ‘but for what Person B did (or failed to do), would the consequences to Person A have occurred?’ The answer is no, and therefore causation has been proved in fact. If Person B had taken reasonable measures to inform Person A that he was walking off a cliff, Person A would not have died. Person B’s failure to act is an indirect cause of Person A’s death. But the factual cause merely establishes a preliminary connection between the act and the consequence. Having passed that test, we must determine whether Person B is a legal cause of Person A’s death. Legal causation occurs when a person’s conduct is a substantial factor in bringing about harm. I would argue that Person B’s failure to act very much was a substantial factor in bringing about harm to Person A. Person A was a vulnerable man who must often have been reliant on the people around him to aid him in his day-to-day life. It is reasonable to assume that Person A had no intention of walking off the cliff, so it cannot be argued that, had Person B intervened, Person A would have continued to walk towards the edge of the cliff anyway, emphasising the significance of Person B’s omission to act.
It would seem that Person B’s case is looking very weak, and criminal charges of murder or gross negligence manslaughter would not be surprising. After all, the fact that there would have been a time gap between Person A walking towards the edge of the cliff and then falling off suggests that there was malice aforethought by Person B, which would give a prosecution team grounds to open a murder trial. In addition, I would hope that the fact that Person A was disabled would mean anybody, including his family or a member of the public, would feel they have a duty of care to safeguard him. Seeing as this duty of care was breached, should Person B not be criminally punished?
Despite all this, Person B cannot be found guilty of any unlawful killing. The actus reus element cannot be proved, as liability for an omission arises only where you fail to do something that the law requires. For example, a parent who fails to feed their baby, resulting in the baby’s death, would be guilty of unlawful killing, as parents are required by law to look after their children. In this scenario, however, under English law there is no general liability for a failure to act to prevent harm, wrongdoing or a crime being committed. This means Person B was under no legal duty to try to stop Person A walking over the cliff edge, even if he was close enough to do so. The reason is that the UK does not have a ‘Good Samaritan’ law, whereby a defendant commits a crime by not doing what he should have done, as France does. In the eyes of English law, as Person B took no positive action to cause Person A’s death, B did not commit a crime or become a party to it just because he could have taken reasonable action to prevent it.
To conclude, Person B has a moral obligation to take at least reasonable care to prevent the death of another man. Legally, however, there is limited scope under English law to enforce any punishment, if any at all. Since, from the information given, we can assume Person B was not under any contractual duty, voluntary duty, or duty derived from statute (to name a few), Person B was not obliged to prevent the death of Person A. This could change very soon in the UK following a press release on the 2nd of June this year, in which Justice Secretary Chris Grayling announced that ‘Good Samaritans and community heroes will have the law on their side in the future’, with new legislation proposed to come into force at the beginning of 2015. Good Samaritan laws offer legal protection to people who give reasonable assistance to those who are in peril; the purpose of the protection is to encourage people to offer assistance who might otherwise hesitate for fear of being sued or prosecuted for unintentional injury or wrongful death. In the future, this may be accompanied by a duty of rescue, as there is in France and Germany. In France, ‘deliberately failing to provide assistance to a person in danger’ can be punished by up to 5 years’ imprisonment and a substantial fine. While I cannot foresee a duty of rescue entering the English legal system any time soon, the newly proposed Good Samaritan laws are a step in the right direction to prevent any future real-life stories similar to that of Person A and B; common sense will become the order of the day, as opposed to hesitant bystanders fearing legal liability.
Contributed by Joe Timmins
The British Equality Act 2010 was based on the principle that gender should never have any influence in employment. Amendments to the Act in April 2011 allow positive discrimination, or “positive action” as our politicians like to call it, to deflect attention from the fact that they are encouraging discrimination against groups in society and people in their own constituencies. These amendments contradict British values and should be repealed.
Positive action, for those who are not familiar with it, is the act of giving preferential treatment to under-represented groups in society. While I will be focusing on positive action to correct the disproportionate ratio of men to women in senior positions, please note that this is not its exclusive use. The Equality Act replaced the Race Relations Act 1976, the Sex Discrimination Act 1975 and three major statutory instruments protecting against discrimination in employment on grounds of religion or belief, sexual orientation and age. Positive discrimination can therefore also be used to give preferential treatment to, for example, ethnic minorities being called to the bench, a profession dominated by white men.
I was inspired to write on this topic after reading in the newspaper that the Judicial Appointments Commission was beginning to enforce positive action. As it stands, it is not unlawful for a woman to be appointed to the bench ahead of a man if both candidates are of equal merit. Imagine this scenario: Person X has 12 As at GCSE, 3 A levels (AAA) and a Bachelor of Laws from Kingston University. On top of this, Person X has had a successful career working in partnership with Person Y as a barrister. Now imagine Person Y has 12 As at GCSE, 3 A levels (AAA) and a Bachelor of Laws from Kingston. On top of this, Person Y has had a successful career working in partnership with Person X as a barrister. Who would you hire? You can’t say; they’re seemingly indistinguishable. But what if I told you Person X is me and Person Y is Sophie, a woman, and therefore I do not get the job? This is not the solution to getting more female judges. Instead we should be encouraging women into the profession through childcare arrangement schemes, longer maternity leave and greater protection against pre-existing prejudice through legal reforms, to name just a few options. At the moment we are giving young people the impression that it is acceptable to consider gender during employment, corrupting a generation that is more liberal than ever before. The April 2011 amendments enforced on us by the European Commission are radical and unnecessary when we are seeing a rise in the number of women in senior positions anyway. Antagonising the rest of the population, and the women who achieved their positions under the old system based exclusively on merit, is equally unnecessary.
In reality the use of positive discrimination is limited but, as with many laws in the English legal system, it is the principle that counts, and this often sets our legal system apart from those, in continental Europe for example, that employ Roman law. The limited use of positive discrimination therefore does not weaken the argument that the April 2011 amendments should be repealed. The use of positive discrimination in the Equality Act also presents an interesting argument that the European Commission perhaps has too much influence over our legal system. The changes in 2011 directly opposed the original Equality Act, which explicitly stated that ‘merit’ is the only legal determinant when employing someone. I hope that positive discrimination will be outlawed very soon, in the same way every other form of discrimination has been, and will then stay that way.
Facebook currently has over 1 billion users worldwide and a market capitalisation of around $130 billion. It might seem implausible to many that this giant company could collapse rapidly. But this is exactly what happened to MySpace, the most visited website in the world from 2005 to early 2008; by 2011 it had all but disappeared. Even giant firms fail. This has been the lesson of economic history ever since companies operating on a truly global scale began to appear in the late 19th century. Of the world’s top 100 non-financial companies a century ago, in 1914, all with market capitalisations of many billions in today’s prices, most have either gone bankrupt or are mere shadows of their former selves. Some of them disappeared very quickly. But while other companies have failed, as a Facebook user I find it ridiculous to suggest that Facebook will face the same fate in just a few years. It seems so ridiculous that it’s hard to believe Princeton University is even associated with the claim. Admittedly Facebook has become increasingly frustrating as it becomes the new ‘YouTube’, your newsfeed flooded with Vines and other videos, but the number of active users on Facebook is ever increasing.
So how can Facebook die? Economists may point to the observable attributes of the alternatives, such as price and quality. If Facebook is fantastically popular, it must be because the product not only provides features which many people want, but does so much more effectively than its rivals. On this view of the world, a dominant market leader can only be displaced if either a superior rival emerges or there is some unforeseen and sudden shift in consumer tastes. This is what killed MySpace: Facebook and Twitter, which were substitute goods. But what’s killing Facebook? Not Snapchat. Not Google+. We can therefore rule out competition as a suspect in Facebook’s proclaimed death, as it remains dominant.
Surprisingly, however, these were not the arguments which John Cannarella and Joshua Spechler outlined in their report. They did not take Facebook’s features into account; rather, they looked at user behaviour based on network contacts. The researchers stated: “… every user that joins the network expects to stay indefinitely, but ultimately loses interest as their peers begin to lose interest. Thus, a user that joins early on is expected to stay on the network longer than a user that joins later. Eventually, users begin to leave and recovery spreads infectiously as users begin to lose interest in the social network. The notion of infectious abandonment is supported by work analyzing user churn in mobile networks, which show that users are more likely to leave the network if their contacts have left.” However, the mathematical and economic modelling used to reach this conclusion is totally flawed. That’s because they used a tool called Google Trends to see how often people searched for “Facebook” on Google over the years. They saw Google searches for “Facebook” decline, noted that when MySpace and Bebo waned, so did Google searches for those terms, and came to their dire conclusions.
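The “infectious abandonment” idea is essentially an epidemic model: people join a network through contact with active users, and later leave through contact with people who have already left. As a rough illustration only (the equations and parameters below are invented for this sketch, not taken from the Princeton paper), that kind of dynamic can be simulated in a few lines:

```python
# Toy epidemic-style sketch of "infectious abandonment" (illustrative
# only; NOT the Princeton authors' actual model or fitted parameters).
# S = potential users, I = active users, R = users who have left.
def simulate(beta=0.0005, nu=0.0003, steps=4000, dt=0.1,
             S0=990.0, I0=10.0, R0=1.0):
    S, I, R = S0, I0, R0
    active = []
    for _ in range(steps):
        join = beta * S * I * dt    # joining spreads through active peers
        leave = nu * I * R * dt     # leaving spreads through departed peers
        S, I, R = S - join, I + join - leave, R + leave
        active.append(I)
    return active

active = simulate()
```

Run with these made-up numbers, the active-user curve rises steeply and then decays towards zero, which is exactly the rise-and-fall shape the researchers claimed to see in the Google Trends data. The objection is not that such a model is impossible, but that search-engine queries are a poor proxy for actual use of the network.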
So why is this a big deal? Because when a big news outlet like NBC News runs a story, people believe it. They don’t stop to read the original paper, and if they did, the academic jargon would make it incomprehensible to most. But it has an effect. Facebook is a publicly traded company; a story like this could affect the stock price and change the valuation by millions. It could also be the sort of thing that creates the end it predicts, by convincing people that Facebook is dying and they should go somewhere else. The natural reaction of an economist, therefore, is to suspect that John Cannarella and Joshua Spechler were motivated by the possibility of buying cheap shares: the incentive of making a fortune outweighed the loss of reputation or credibility from producing such a flawed conclusion.
Facebook’s response to the Princeton study was quite amusing, mocking the outrageous modelling used to reach its conclusion. Facebook took Princeton’s Facebook “likes” and “peer-reviewed articles” and evaluated them over time, noted an alarming downward trend in Princeton’s Facebook “likes”, and drew a conclusion from the similarities between student enrolment and the Google Trends index. From this, Facebook concluded that Princeton would suffer a 50% decline in enrolment by 2018.
According to CNN, “Princeton received 24,498 applicants for its current freshman class and accepted only 7.4% of them, ensuring its status as one of the nation’s elite universities. And as of September 2013 Facebook had 1.2 billion monthly active users.” Neither institution is really in any danger of disappearing any time soon.
By Sam Timmins
Even with the political and cultural arguments ringing around the referendum, the most important argument is that of the economics of independence – almost fitting for the birthplace of Adam Smith! One poll found that only 21% of Scots would favour independence if it would leave them £500 a year worse off, and only 24% would vote to stay in the union even if they would be less well off sticking with Britain. Almost everyone else would vote for independence if it brought in roughly enough money to buy a new iPad, and against it if not.
Opinions on the economics of independence are starkly divided. Nationalists argue that, mostly thanks to North Sea oil and gas, Scotland would be better off alone, free to reap the benefits of its natural resources. However, this view has a sell-by date. The richest reserves have already been exploited, leaving less accessible oil that becomes uneconomic when prices fall. North Sea production has been falling by about 6% a year for the past decade, and the oil will eventually run out: many fields will stop producing in the 2020s, and by the 2040s oil is likely to be dribbling rather than gushing forth. Tax revenues from oil and gas are also highly volatile. Prices are high now, but an independent Scotland would depend on oil for some 18% of its GDP, making it subject to shifts in global commodity prices; this heavy reliance on natural resources could lead to dire economic conditions in time to come.
What would remain an independent Scotland’s biggest problem is its currency. Since the UK is Scotland’s largest trade partner, it would be beneficial for Scotland to keep the pound, allowing easier cross-border trading and keeping it economically integrated with the UK. Given that 60% of all Scottish exports go to the rest of the UK, a separation could hit it hard. However, keeping the pound means giving away any right to govern it: interest rates would remain under the Bank of England’s control, leaving many to ask whether a nation with no control over its currency can truly be called independent. Furthermore, there is no guarantee that England and Wales would accept a currency union, which could leave Scotland forced to join the euro and then inevitably sucked into the struggling eurozone, in an era when most of the discussion is about leaving the euro rather than anyone new joining up. With problems in eurozone countries reverberating through the entire single-currency area, Scots would be wary of voting yes when this is a strikingly clear possibility.
The Scottish vote would also have knock-on effects for the rest of the UK. It is hard to compute exactly how much the Scots cost the English, but according to figures published today by the Institute for Fiscal Studies, total public spending was around 11 per cent higher per person in Scotland than in the UK as a whole in 2011-12. Scotland’s welfare bill alone is huge and utterly unsustainable without some form of external funding: its pensions bill is £13.3 billion a year, health care costs £11 billion and social security £8 billion. To many Englishmen, an independent Scotland may be in their best interest. As English taxpayers are propping up many a Scotsman, the end of public spending on Scotland would free up spending in the rest of the UK, funding more government subsidies and the rejuvenation of UK businesses, thus raising GDP and employment.
On the other hand, while Scotland remaining in an economic union with the rest of the UK would avoid disruption to UK firms selling north of the border, there may be a cost to exporters in the remaining UK, since Scottish oil exports probably cause the pound to trade at a higher level than it otherwise would. Moreover, after independence, Wales and England would no longer benefit from a share of the tax revenues from Scottish exports such as gas and manufactured goods. This would leave a considerable hole in the British economy that would be difficult to fill, as Scotland is rich in both natural resources and production capability.
For many, the independence of Scotland represents a large and daunting risk to the Scottish economy. Scottish sentiment surrounding the vote will inevitably sway many Scots towards independence, but the overwhelming risk associated with the “Yes” vote will be difficult for Alex Salmond to ignore much longer. The aim of the SNP is for Scotland to gain independence and carve out a place for itself within the European and global marketplace; yet the degree of economic independence a small European country can enjoy in a global marketplace is inescapably limited. It is unlikely that, between now and the vote, Scotland will have resolved the flaws of its over-reliance on finite resources for revenue or its uncertain stance on currency. It is likely that the vote will end in failure for Mr Salmond, as in a global climate already full of economic uncertainty it is doubtful that the people of Scotland will want to take a risk that, for many, seems senseless.
Contributed By Jack Albert Editor-in-Chief
In November 2013, Viktor Yanukovich, the then President of Ukraine, announced the abandonment of a trade agreement with the EU and said that he wanted to seek closer ties with the ex-KGB, AK-47-wielding, chest-revealing Vladimir Putin. This caused considerable outrage in Ukraine, especially Kiev, as the western parts of the country are strongly pro-EU and abhor the Russian regime; Ukraine was devastated by the Stalinist purges of the 1930s and was liberated only 23 years ago when the Soviet Union broke up. On December the 1st, 300,000 people gathered in Independence Square in Kiev to protest, and the City Hall was seized. By January the 16th, anti-protest laws had been introduced which were immediately described as ‘draconian’, as they took away the human right to protest in a similar manner to the way Tony Blair’s anti-terror laws took away the right to dance around naked in front of your webcam without GCHQ taking photos of you. It was then that people started to die…
On January 22nd, Ukrainian police fired on the crowd with live ammunition. Two people died, and a third followed after clashing with riot police. Dmytro Bulatov, an opposition activist, was found outside the city having been imprisoned and tortured for eight days by pro-Russian groups. On the 16th of February it seemed a ceasefire between the Government and its people was in sight, when protesters returned control of the City Hall in exchange for the release of 234 imprisoned activists. Two days later, clashes re-erupted after changes to constitutional reform were stalled; 18 people died and over a hundred were injured. Another two days on, within the space of 48 hours, 88 people had been killed by Government snipers shooting into the crowds.
President Yanukovich then fled the capital after a vote in Parliament determined his removal and the release of his former political opponent, Yulia Tymoshenko, from prison. With Yanukovich’s already wavering ties to the country now disappearing, Putin started to play. On the 27th, armed men seized government buildings in Crimea, the Crimean parliament set May 25th as the date of a referendum, and the fleeing pro-Russian ex-president Yanukovich was granted refuge in Russia. Simferopol international airport, in Crimea, was then seized along with Sevastopol naval base by further armed men in unmarked combat fatigues; Russia denied that they were theirs. By the 1st of March, the Russian Upper House had approved the use of military force not only in Crimea, but in the whole of the sovereign state of Ukraine.
“There can be one assessment of what happened in Kiev and Ukraine as a whole. This was an anti-constitutional takeover and armed seizure of power.” These were the words of Putin himself, and they suggest he has no thought of stopping at taking just Crimea, but has designs on Ukraine as a whole, if not more.
Within the next week, convoys of hundreds of Russian soldiers marched towards the regional capital of Crimea. The Russian Black Sea Fleet ordered the Ukrainian Navy in Sevastopol to surrender or face a military assault. Putin then stated that “The legitimate president, purely legally, is undoubtedly Yanukovich” – surprising, as that was also Putin’s favoured candidate – and, speaking on the issue of ‘protecting’ the people of eastern Ukraine, that “We reserve the right to use all available means. And we believe that this is fully legitimate.”
The notorious referendum was then held in Crimea, by order of Putin, contravening many principles of what a referendum actually is. The option of joining Russia received 96.77% of the vote, apparently, making it one of the most successful referendums ever. There have been some questions over the integrity of this unquestionably truthful referendum, including the fact that hundreds of Ukrainians left for security reasons or were kicked out, and that the indigenous Tatars suffered widespread intimidation. Furthermore, a Russian journalist living in Crimea reported that, despite being Russian and having lived in Crimea only a very short time, she was positively encouraged to vote, even though this was not legal.
After the vote, Russia recognised Crimea as a sovereign state and no longer part of Ukraine.
The crisis, although receiving less coverage in the news, is not over. Many towns and cities along Ukraine’s eastern border have been seized: Donetsk, Luhansk, Kharkiv and Slavyansk, as well as further naval and military bases, have been taken by pro-Russian activists who have asked Putin to send military force. The conflict is showing little sign of abating and, given Putin’s already expressed contempt for the fall of the Soviet Union, it is unlikely to do so any time soon. We no longer know whether it is just Ukraine on the menu, or whether he would like a taste of Finland or Estonia, both of which have reported fear in the knowledge that Russia has held extensive military exercises along their borders. And Russia being the superpower that it is, not even America has the balls to come out and tell them where to go. Maybe the Cold War was not over; maybe it was just waiting.
Contributed by Daniel Gibbs
Nationalism and patriotism were significant factors in causing World War I. A nationalistic person has strong support for the rights and interests of their country. Each of Europe’s Great Powers developed a firm belief in its own cultural, economic and military supremacy, creating a fatal misconception that any war would produce a victory within a matter of months. This arrogance and over-confidence was fuelled by the press in each country promoting extreme nationalism. Various forms of propaganda, including newspapers and banners, were packed with nationalist rhetoric and ‘sabre-rattling’, and it could also be found in other cultural expressions, such as literature, music and theatre. Even well-known songs made the people of countries like Britain, Germany and France more bellicose: the British sang ‘Rule Britannia’, declaring Britons ‘will never be slaves’, and the Germans sang ‘Deutschland über alles’, portraying Germany as ‘above all, over everything in the world’. Songs like this were catalysts for nationalistic spirit in some countries. As each nation became more convinced of the integrity of its position and the prospects for victory, the likelihood of war increased. Politicians, royals and diplomats did little to deflate the public appetite for war, and some actively contributed to it by making provocative remarks themselves.
In an age where countries were becoming more nationalistic, all nations wanted to assert their power and independence. This led colonies, and similarly countries under foreign rule, to aim for independence, and it was ultimately a trigger for World War I, as the assassination of Archduke Franz Ferdinand was fuelled by nationalism and by the Slavs wanting to break free from Austrian rule. On 28th June 1914, Ferdinand visited Sarajevo, capital of modern-day Bosnia, which had been taken under Austrian rule. The Black Hand gang were a group of nationalist terrorists who wanted independence. After failed bomb attempts that morning, Gavrilo Princip shot Archduke Franz Ferdinand, killing him. Austria then blamed Serbia for ‘supporting’ the terrorists and, supported by Germany, sent Serbia an ultimatum which included taking full responsibility for the assassination. Serbia failed to accept all of the points, which led to Russia mobilising its troops to protect Serbia; Germany then declared war on Russia, the alliances came into action, and World War I had begun. Ultimately, the new-found nationalistic beliefs of countries under foreign rule were always going to lead to attempts to claim independence. This is what led to the actions of the Black Hand gang and also tested the strength of the alliances for the first time. For these reasons nationalism must be considered a factor in the start of World War I.
Imperialism is when a country increases its power and wealth by bringing additional territories under its control, frequently in order to maintain or start an empire, or a collection of colonies. The imperialist nation – sometimes benignly called the ‘mother country’ – acquires these new territories by military conquest, political pressure or infiltration. This often requires skirmishes, or even a fully-fledged war, against the local population. The British, for instance, had to seize control of South Africa from hostile native tribes like the Zulus, and then from the Boers (white farmers of Dutch extraction); both conflicts were more difficult than they had envisaged. Nevertheless, the strategic and economic benefits of new colonies usually outweighed the risks. Once control was established, the region became a colony, the primary purpose of which was to benefit the imperial power. Usually this involved the supply of precious metals or other resources, cheap labour or agricultural land. The British Empire, for example, was largely based on trade, particularly the importation of raw materials and the commercial sale of manufactured goods. Military advantages also came with a colony, such as strategic locations for naval bases or troops. By 1914 there were relatively few parts of the world still open to imperial conquest: the ‘scramble for Africa’ had seen much of that continent already claimed by European powers. Imperial competition, layered atop intense nationalism, contributed to the tension and rivalry of the pre-war generation.
The increased sense of imperialism led to two ‘crises’, both based in Morocco. In 1905 Morocco was one of the few African states not occupied by a European power. France hoped to conquer Morocco and add it to its ever-growing list of colonies. In an agreement that took four years to finalise, concluded in 1904 by the then French Foreign Minister, Théophile Delcassé, it was settled that Morocco would come under French control. An agreement to this effect was originally signed with Italy in November 1901, but Spain was unsure and insisted on informing the British government. The British initially refused to support Delcassé but changed their minds in April 1904, and in October 1904 France got the agreement of the Spanish. However, France had not asked Germany, and on 31st March 1905 Kaiser Wilhelm visited Morocco and promised the Moroccans protection against anyone who threatened them. The French were outraged, and Britain saw it as yet another attempt by Germany to build a German empire to rival Britain’s. A conference was held at Algeciras starting on 16th January 1906 to settle the dispute. Of the 13 nations present, the German representatives found that their only supporter was Austria-Hungary, while the others, including Britain and Russia, supported France. Germany was forced to promise to stay out of Morocco, and France agreed to yield control of the Moroccan police but otherwise retained effective control of Moroccan political and financial affairs. The Agadir Crisis, or Second Moroccan Crisis, took place in 1911. At the start of 1911 a rebellion broke out in Morocco and France sent in an army to quash it. On 1st July, the German gunboat ‘Panther’ was sent to the port of Agadir under the pretext of preserving German trade interests. In the middle of the crisis, Germany was hit by financial turmoil: the stock market plunged by 30 per cent in a single day, the public started cashing in currency notes for gold and there was a run on the banks.
Faced with the prospect of being driven off the gold standard, the Kaiser backed down and let the French take over most of Morocco. France and Germany opened negotiations on 9th July, which ended with Germany accepting France’s position in Morocco in return for territory in the French Equatorial African colony of Middle Congo (now the Republic of the Congo). The crisis also led Britain and France to make a naval agreement under which the Royal Navy promised to protect the northern coast of France from German attack, while France concentrated her fleet in the western Mediterranean and agreed to protect British interests there. The ultimate outcome of these two crises was a strengthened alliance between Britain and France and an embarrassed Germany, now all the more determined to reassert its strength; the pathway to war was almost set.
Another factor in the developing mood for war was militarism, the attempt to build up a strong army and navy in order to give a nation the means and will to make war, and the arms race it set in motion. Powerful new weapons were produced in the decades before 1914, capable of killing on an industrial scale. Utilising new mass-production techniques, the Western nations could churn out these weapons and munitions in great quantities and at a rapid pace. But the descent into war was not driven only by new weapons; it was also fuelled by militaristic cultures and attitudes. Military elites strongly influenced, and in some cases dominated, the governments and aristocracies of the Great Powers. Governments became crowded with admirals and generals whose chief focus was to expand their country’s military force, demanding increases in defence spending and promoting military solutions to political and diplomatic problems. Detailed war plans were drawn up, and governments shaped in this way made war even more likely. As the former German army officer Alfred Vagts would later write, militarism was “a domination of the military man over the civilian, an undue preponderance of military demands (and) an emphasis on military considerations.”
It is natural for military leaders to want to modernise their forces and equip them with new technology, and the decades prior to 1914 saw no shortage of this. One of the most significant examples of weapons development was heavy artillery. Marked improvements were made in the calibre, range, accuracy and portability of this powerful but previously somewhat unreliable weapon. The changes meant that artillery shelling and bombardments would become standard practice, particularly after the emergence of trench warfare. Millions of metres of barbed wire, an invention of the 1860s, would be mass produced and installed around trenches to halt charging infantry. Various types of poison gas, including chlorine, phosgene and mustard gas, were developed. At sea, the development of the dreadnought, a large battleship, the first of which was launched in 1906, prompted a flurry of ship-building and naval rearmament. European military expenditure soared between 1900 and 1914. In 1870 the combined military spending of the six great powers (Britain, France, Germany, Austria-Hungary, Russia and Italy) totalled £94 million; by 1914 this had more than quadrupled to £398 million. German defence spending during this period increased by a colossal 73%, dwarfing the increases in France (10%) and Britain (13%). Russia’s defeat in the Russo-Japanese War of 1904-5 was a catalyst for its defence spending, which rose by more than a third after the loss prompted the Tsar to order a massive rearmament programme. By the 1910s, 45% of Russian government spending was allocated to the armed forces, while just 5% went on education, which shows how far militarism could reach.
As previously mentioned, it was not just the quantity and quality of weapons that was improving. Of the Great Powers, all except Britain had conscription. By 1914, Europe’s powers had increased their armed forces dramatically: Germany, France and Russia each had over 1,000,000 soldiers, while Britain, Italy and Austria-Hungary had between 710,000 and 810,000 men. To keep the number of soldiers growing, the main countries of Europe also began training their young men as reserves, so that if a war broke out they could call not only on the standing army but on huge numbers of trained reservists. It was once estimated that the total number of men (including reservists) that the countries could thus call upon reached as high as 8.5 million for Germany and 3-4 million for each of the other powers. This is an example of the ‘knock-on effect’ of militarism and why each country was so eager to build as large an army as possible: as one country increased its armies, the others felt obliged to increase their armed forces as well, so as not to fall behind and to keep the ‘balance of power’. Overall, militarism was important in starting WWI because it not only made countries strengthen their armies but also increased suspicion and hatred between nations, as well as giving them the wherewithal to wage war.
Overall, nationalism, imperialism and militarism all played a key role in starting WWI. Militarism made nations switch their focus to military needs, as each wanted the strongest army, and so provided the resources for war in 1914. Imperialism created the early tensions between the main European powers: France’s imperialistic aims led it to try to take Morocco and, in doing so, created the first sparks between France and Germany while fortifying the alliance with Britain. Finally, nationalism formed the ‘last piece of the puzzle’. Though it could be said that war was always likely, the assassination of Franz Ferdinand was fuelled by nationalism and ultimately led to Germany declaring war. It would seem that militarism was the main reason for the outbreak of World War One: without a strong sense of militarism, nations would not have had the resources to spark a world war. Yet as each nation grew stronger, it wanted to expand its territory and ‘prove its strength’, which is what led to imperialism and the invasion of independent countries. And though nationalism did set off the war, with the vast resources the great powers had accumulated, war would probably have become inevitable even without the actions of the Black Hand gang.
Contributed by Freddie Carty
However, the role of Rasputin in the fall of the Romanov dynasty was less significant than other factors, namely the impact of the First World War. There is evidence to substantiate the claim that Rasputin was merely a symptom of Russian despotism and not a crucial character in its downfall, for his murder resulted in little change in the governing of Russia: ‘nothing was changed with Rasputin’s removal; nothing improved either in affairs of the State or in the Tsar’s situation’.
Instead, staggering losses on the battlefield played a definite role in the revolution. Rampant discontent lowered morale, which was further undermined by a series of military defeats such as the Battle of the Masurian Lakes in 1915 and the failure of the Brusilov Offensive in 1916. This crisis in morale, as argued by Allan Wildman, ‘was rooted fundamentally in the feeling of utter despair that the slaughter would ever end and that anything resembling victory could be achieved.’
The war devastated not only soldiers: by the end of 1915 it was clear that the economy was collapsing under the heightened strain of wartime demand. The root of the trouble was the combined destructive effect of food shortages and inflation. The worst-affected region was the capital, St. Petersburg, a result of its distance from supplies and poor transportation networks. The initial outcome was growing criticism of governmental administration rather than war-weariness and disillusionment; however, increasingly heavy losses strengthened revolutionary notions. A report by the St. Petersburg branch of the security police, the Okhrana, in October 1916 warned bluntly of ‘the possibility in the near future of riots by the lower classes of the empire enraged by the burdens of daily existence.’ Nonetheless, little was done in response.
To many ordinary Russians, the Tsar had remained a symbol of morality, with all catastrophes blamed on meddling bureaucrats, functionaries and nobles. But from the commencement of the First World War, the Tsar took an active part in government, tactics and administration. He was therefore personally blamed for many later crises, and royalist support crumbled. In the summer of 1915, the Tsar made himself the new Commander-in-Chief of the army, in defiance of almost universal advice to the contrary. The result was disastrous: firstly, it associated the monarchy with the unpopular war; secondly, Tsar Nicholas II was an incompetent leader, vexing his commanders with interference; and thirdly, while at the front he was unable to govern. This left the reins of power to his wife, the Tsarina Alexandra, and Rasputin, both ostracised and detested by the Russian people.
As discontent grew, the State Duma issued a warning to Tsar Nicholas in November 1916: a terrible disaster would inexorably grip the country unless a constitutional form of government was adopted. This was ignored. While the Tsar was at the front, the Tsarina was left in charge of governing. She proved an ineffective ruler in a time of war, appointing a succession of Prime Ministers and angering the Duma. ‘From Liberty to Brest-Litovsk’ (1918) by Ariadna Tyrkova, a Constitutional Democratic Party member, records rumours that ‘Germans were influencing Alexandra Feodorovna through the medium of Rasputin and Stürmer.’ It also describes the Tsarina as ‘haughty and unapproachable’ and a ruler who ‘lacked popularity’. Although such rumours may not have been true, they would inevitably have damaged the reputation of both the Tsar and Tsarina.
Moreover, the Tsarina’s reliance on Rasputin in all matters, state or personal, was ruinous. Rasputin was hated by the people for his influence: ‘Russia and History’s Turning Point’ (1965) by Alexander Kerensky describes how ‘the Tsarina’s blind faith in Rasputin led her to seek his counsel not only in personal matters but also on questions of state policy.’ Such behaviour would have alienated the majority of Russians, even loyal subjects. In addition, the unfulfilled aspirations to democracy left over from the 1905 Revolution fuelled anti-imperialist revolutionary ideas and violent outbursts. The Tsar sought to quieten such political surges and mitigate social unrest through a patriotic war against a common adversary, fighting alongside the Triple Entente in support of its ally Serbia. Instead of restoring Russia’s political and military standing, the First World War pushed both the monarchy and society to the brink of ruin.
However, all these causes are interrelated. Without diplomatic pressure, Russia would not have entered the First World War, which itself worsened the internal stability of the state; without the poor social conditions caused by the collapsing economy and the rapid urbanisation of the Industrial Revolution, revolutionary ideas would not have gained traction; and without the obstinacy of the Tsar, the revolution might not have occurred at all. Different historians place different emphases on each cause: liberal writers prioritise the turmoil of the war, while materialist histories highlight the irrevocability of change. It can, however, be said with some certainty that the character of Rasputin did not play a crucial part in the downfall of the Tsarist regime in 1917.
Contributed by Ali Qureshi
The British Monarchy is not just of traditional value to the United Kingdom; it also brings millions of pounds to the UK each year through the Crown Estate and tourism. The Crown Estate is the land ‘surrendered’ to the British Government at the beginning of each monarch’s reign in return for the monarch’s annual salary. Since 2011, the annual profits from the Estate have totalled around £240.2 million, and with only 15% of that forming the monarch’s annual salary, the Sovereign Grant, it all adds up for the taxpayer. If the Monarchy were removed, however, the state would most probably lose the Crown Estate along with it, meaning that to remove the Monarchy completely would be a bad decision not just for British culture but economically as well.
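The rough arithmetic behind that claim can be sketched from the figures quoted above. The 15% rate is an approximation (the actual Sovereign Grant is set against a baseline year, which is why the grant figures cited later in this piece are somewhat lower), so treat this as an illustration rather than official accounting:

```python
# Figures from the text: roughly £240.2m of annual Crown Estate profit,
# of which about 15% forms the Sovereign Grant. Illustrative only --
# the real grant is calculated against an earlier reference year.
crown_estate_profit = 240.2  # £ million per year
sovereign_grant = 0.15 * crown_estate_profit
retained_by_treasury = crown_estate_profit - sovereign_grant

print(f"Sovereign Grant: ~£{sovereign_grant:.1f}m")           # ~£36.0m
print(f"Retained by the Treasury: ~£{retained_by_treasury:.1f}m")  # ~£204.2m
```

On these figures, the Treasury keeps roughly £204 million a year of Crown Estate profit, which is the economic case the paragraph above is making.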
Tourism is a massive sector in the UK, accounting for £96 billion of England’s GDP (8.6% of the economy) as of 2009 and employing around 2 million people, 4% of the workforce. Much of what makes the UK the 8th biggest tourist destination in the world is our Monarchy. The Tower of London is the most popular UK attraction, and three castles feature in the top 15. Although many argue that these buildings would still be there if the Monarchy were removed, what draws the 3.5 million North American and 21.5 million European tourists each year is that the history is still very much alive in Britain today, something that would be irreplaceable if the Monarchy were gone. The consultancy Brand Finance estimated in 2012 that the net value of the Monarchy is £44bn, though the methodology behind this conclusion has been widely questioned. None of this is to say, however, that the Royal Household’s spending and management is not in need of reform.
In 2012-13 the net expenditure of the Royal Household was £33.3 million, a £2.3 million overspend against its budget and a major argument behind Chairwoman Margaret Hodge’s report appealing for the Household to improve its long-term planning and management of the budget. Hodge stated there was ‘huge scope for savings’, as the body had escaped much of public sector austerity, reducing spending by only 5% in the last six years and maintaining the same staffing levels at a time when many public sector jobs have been cut. Meanwhile, the report praised the Household’s increased income of £11.5 million, up £4.9 million from 2007/08, but said it was not ‘looking after nationally important heritage properties adequately’, as 39% of the Estate was deemed to be below an acceptable condition. Moreover, the PAC argued the Queen could do much more to increase her income, for example by opening Buckingham Palace for longer than the current 78 days a year to increase visitor numbers and therefore the revenue the Palace creates.
The Treasury, meanwhile, is supposed to oversee Royal spending but has so far been ineffective in this capacity. The PAC’s report demands that the Treasury take a more active role in scrutinising spending, offering advice on key challenges for the Household and giving it clear incentives to become a more efficient body. It is hoped the department will also deal with the serious transparency issues surrounding the Queen’s personal income: although the introduction of the Sovereign Grant has improved matters, there is still not nearly as much transparency as with other public spending. The British pressure group Republic argues that until the monarchy’s exemption from the Freedom of Information Act is removed, transparency cannot be achieved. Republic also stated that the ‘MPs on the Commons Public Accounts Committee have missed key issues in their report on Royal funding’. Its ‘Alternative Budget’ estimates that the real cost of the Monarchy to the taxpayer is well over £200 million once hidden costs are taken into account.
Whether the cost of the British Monarchy is the £33.2 million of the Sovereign Grant or Republic’s Alternative Budget of over £200 million, the statistics still suggest that a state-funded Monarchy is a viable expense for the taxpayer, producing £240.2 million from the Crown Estate and contributing massively to the tourist industry, even in times of economic hardship.
Contributed by George Waddell
In modern politics, the popularity of an election candidate can depend on their religious views. In a largely secular society such as the UK, if a politician were to declare that they were a Christian, this could have either a positive or a negative impact on voters, depending on the constituency they were looking to represent. The direct opposite can be seen in America, where a President has to declare himself a Christian to even stand a chance of running for office. This is due to the prevalence of Christianity in American society, and it results in religion having a significant impact on American politics: how often has a President’s speech ended with the strap-line ‘God bless the United States of America’? The difference between the two countries’ attitudes towards religion shows the psychological diversity that can exist between different secular states.
Although religion plays an integral part in American politics, it does not necessarily influence policy. For example, gay marriage has been legalised in California even though the state is represented by a Republican Senator, and Republicans are often associated with conservatism while Democrats are associated more with liberalism. This suggests that although America is constitutionally secular, most Americans conform to the same religious belief; an explanation could be that religion is used to garner votes rather than as guidance for governing.
A prime example of religion in recent British politics is that of Tony Blair. While Prime Minister he made no reference to religion at all; it was only after he stood down that he made his beliefs as a Catholic known, to the extent of having an audience with the Pope. It would have been interesting to see the public’s reaction had Tony Blair revealed his religious beliefs while he was Prime Minister. Suffice to say, it would have made justifying some of his decisions harder.
Saudi Arabia is a polar opposite to the UK, as it has an Islamist government. Conservative religious values, rooted in Sharia law, provide the foundations for its internal politics. There is, however, no freedom of speech in Saudi Arabia, and although this might seem to have nothing to do with religion, history shows that many religious establishments have had the same problem. Religious establishments have historically worked with rulers who claimed to have been appointed by divine command, yet the direction of travel in modern society indicates that divine command is no longer seen as a legitimate justification for rule. In Saudi Arabia, along with other Middle Eastern countries, almost every aspect of life is governed by religion.
In conclusion, it is evident that there is an uneasy balance between religion and politics, although in theory they should be able to co-exist. It seems nearly impossible for there to be a good and peaceful balance for people living under an establishment with religious political views, as can be seen from the uprisings in both Syria and Egypt over the past few years.
Contributed by Gregory Lobo
On 9th February 2001, the leader and founder of the Thai Rak Thai party, Thaksin Shinawatra, was elected as the 23rd Prime Minister of Thailand. He formed a coalition with three other parties, the Chart Thai Party, the New Aspiration Party and the Seritham Party, creating a government that should have been representative and loved by all. It wasn’t. Unfortunately for Shinawatra, he was elected in a country in which the rich are very rich and the poor are desperately poor. More unfortunate still, he represented the people who, in developing countries, always have the measliest of voices in the political arena: the working class. When he was elected in 2001, 21% of the populace of Thailand lived below the poverty line. Representing them, however, angered the highly influential and diabolically dangerous urban elite. By 2006 the extravagant chattering classes came knocking at his door, and they brought with them a tidal wave of damnation. His party faced allegations of corruption, authoritarianism, conflicts of interest, undiplomatic conduct, muzzling the press and treason. Thaksin himself was accused of tax evasion, selling Thai assets to international bodies and, most damaging of all given his astonishing power, the crime of lèse-majesté: insulting King Bhumibol. With the King himself, the most revered and respected figure in the whole of Thailand, backing his opponents, it would have taken a miracle to save Shinawatra. His miracle didn’t come.
While he was abroad on prime ministerial business, a coup sanctioned by the King overthrew his government and left him in the political equivalent of being a gay athlete at the Winter Olympics: out in the cold. His assets were frozen and he was left wandering the globe in search of asylum. He even bought Manchester City football club, in an effort to forget his failed leadership by managing something even less successful. By March 2010 the country had turned back in his favour and there were massive demonstrations, in which the army clashed with the public, killing dozens of people. This did not silence the public, however, and within the short space of a year Thaksin’s sister, Yingluck Shinawatra, was duly elected as Prime Minister.
Yingluck enjoyed a comparatively calm and happy period, but by November of last year, while Ukraine was falling into political turmoil, anti-government protests had appeared yet again. By December an election had been called, but the South-east Asian country still shows no sign of settling down. Foul political play is now in session: like a malevolent episode of The Thick of It, the Constitutional Court has annulled the election, ruling that the polls disregarded the Thai constitution on such trivial grounds as having been conducted over more than one day. The constitution says that if an election is disrupted it can be recalled, which means that if a party does not believe it can win a majority, it can simply arrange riots in order to delay matters further.
This raises the question of why the Thai political scene is in such a permanent state of uproar. The point at which Thailand was finally tipped over the precipice was when the Thai Monarchy, the maypole around which the country’s society has danced for centuries, started to meddle in the affairs of politicians and corrupted itself. Democracy needs to be balanced around a point of unmoving reverence. In the United States there is the near-religious admiration of the Stars and Stripes; in Italy there is the Catholic Church holding politics together. Where we find a lack of democracy, as in China, we do not find such a symbol. The desanctifying of the constitutional monarchy in Thailand means that we cannot expect the democratic nature of the country to last much longer. It is therefore highly unlikely that the country’s stability will endure the next few bouts of rallies and protests, and we are more than likely to see what is normally quite a peaceful country plunged into a bloody and brutal civil war.
Contributed by Daniel Gibbs
Every worker who joins a Trade Union pays a fee, a small amount of which goes to the Labour party. But given that there are almost 6 million people represented by Trade Unions in the UK, these affiliation fees add up to as much as 50% of the party’s annual income. A Trade Union member is able to opt out of paying this money, but many aren’t aware of this – which is one of the things Miliband is keen to reform, with a requirement to actively opt in to affiliation. In return for the funding that allows the Labour party to compete with the Conservatives (whose primary source of income is donations), the Unions and the people they represent are afforded a number of privileges within the party. Twelve of the 32 members of the National Executive Committee, the party’s primary policy-making body, are selected by Unions; 50% of the delegates to the Labour Party Conference are elected by Unions; and, most importantly, Unions have votes in any party ballot, including leadership elections. Every Trade Union member has a vote, and the Unions’ votes make up one third of the total.
The 2010 leadership election, after the resignation of Gordon Brown, saw the Unions’ block of votes prove crucial in deciding the next leader of the party. The election is conducted using the alternative vote electoral system, with three groups each holding a third of the vote: the party’s MPs and MEPs, the party members, and the Trade Union members. The two most popular of the five candidates were the Miliband brothers, Ed and David. To understand the results, you have to understand that it is not simply one vote per person: each of the three blocks is attributed 33.33% of the total vote, so each individual vote merely affects the percentage within its block. A vote from one of the 266 MPs/MEPs therefore counts for a great deal more than one from one of the 200,000 affiliated Trade Union members. In the first round of voting, David had the support of 27 more MPs/MEPs and 18,000 more party members, but Ed’s lead of 30,000 in the Trade Union votes stopped David from getting the majority he needed, despite his winning a plurality of votes. It wasn’t until the fourth and final round that Ed took the lead. He still had 2.3% less support than David from MPs/MEPs, and 2.9% less from party members, but his 6.5% lead in Trade Union votes took him past 50%, with a majority of just over half a percent. Had the Trade Unions not had a vote, David would have won by a majority of almost 4%.
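Because the three blocks are weighted equally, a candidate’s overall share is simply the mean of their percentage share within each block. A minimal sketch of that calculation follows, using block percentages reconstructed from the final-round leads described above (illustrative figures only, not the official returns):

```python
def final_share(block_percentages):
    """Each of the three blocks (MPs/MEPs, party members, affiliated
    union members) carries an equal one-third weight, so a candidate's
    overall share is the mean of their share in each block."""
    assert len(block_percentages) == 3
    return sum(block_percentages) / 3

# Illustrative: Ed trailed by 2.3% among MPs/MEPs and 2.9% among
# members, but led by 6.5% among union voters.
ed = final_share([48.85, 48.55, 53.25])
david = final_share([51.15, 51.45, 46.75])
print(round(ed, 2), round(david, 2))  # Ed just clears 50% overall
```

On these figures Ed finishes above 50% despite losing two of the three blocks, which is exactly how the union block proved decisive.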
While the Unions technically don’t control how their members vote, they are allowed to support a candidate. Ed Miliband’s campaign was reportedly given contact details for every Union member while other candidates were denied these lists, and some Unions even sent letters along with the voting form, in breach of party rules, asking their members to vote for Ed. The undue influence of MPs in this voting system has been criticised before, but this was surely worse: a leadership election decided by people who weren’t even members of the party. We wouldn’t let the French decide who our Prime Minister is, so why would a party allow itself to be dominated like this? Essentially, it comes down to tradition and finances. Many in the party still feel a strong connection to the Unions that founded it, and the impact should the Unions withdraw funding would be immense. Given the uproar that would inevitably greet any suggestion of changing this system, it was something of a surprise when Ed Miliband, champion of the Trade Unions, announced his plan to ‘mend, not end’ the symbiotic relationship that tethers them to one another.
Unfortunately, Miliband has since shown an inability to enforce his ideas. First he cleared the Unions after accusations of electoral malpractice in the now infamous Falkirk candidate selection, then backed down over the majority of his proposed reforms after the threat of losing as much as £4 million a year in funding; a compromised change to the leadership elections is now his major channel of reform. The significant change is the movement away from block voting to a ‘one member, one vote’ system. On the face of it, this would massively reduce the impact of MPs on the election and increase that of the union members, who made up 200,000 of the 310,000 voters in the 2010 leadership election. However, further changes are being suggested to counter this. The move from the complex opt-out system of affiliation fees to opt-in is expected to dramatically reduce the number of Trade Union members eligible to vote (as well as put a significant dent in party funding), but Miliband hopes those loyal to the party will instead pay £3 directly to the party in exchange for the right to vote in leadership elections. The Unions will be allowed to keep their other privileges, such as deciding members of the NEC and Party Conference (for now), and the MPs’ anger about losing influence is to be alleviated by making them the sole source of nominations for candidates standing in the election.
This appears to be an excellent idea. The MPs and Unions are happy(ish), and the party can move back to its traditional place as the people’s party, with its leader decided by supporters, not benefactors. But there are still serious questions over whether this reform alone is enough to address the dangerous levels of influence the Unions will still hold, not to mention doubts about whether enough people will ‘opt in’ to basic party membership to replace the vital income lost in affiliation fees. While some still question Miliband’s legitimacy in decreasing the influence of Unions in leadership elections, given that it was this same influence that helped him pip his brother to the post, it is widely agreed that this relationship was not healthy – neither for the Labour party nor for British politics in general – and something had to be done. The consequences could well be disastrous for the party, and Miliband is a brave man to be putting his neck on the line by forcing it through – but if it comes off, he will have addressed an issue that has long clouded the Labour party’s integrity, and this reform could be the first step in moving the Unions away from Parliamentary influence and back to doing their job: representing the workers. In many ways Ed Miliband still seems to lack the leadership skills required to run our country from 2015, and incidents such as Falkirk only enhance that feeling – but he is undoubtedly doing the right thing in attempting to reform this poisonous relationship before Labour’s position as a Parliamentary party, and as an upholder of democracy, is compromised.
Contributed by Charlie Worthington
It is hard for any politician to emerge from such a quagmire of scandal and hatred into a successful career, the equivalent of going on a pleasant yachting holiday off the coast of Somalia and then being invited onto the pirates’ boat for a cup of tea and a picnic. Unluckily for Ed, we have become a world of vain materialists obsessed with image, of which he has none. Furthermore, his party was bankrolled by that malevolent and mysterious force, the Trade Unions, suspected by many of playing a far larger part in politics than is let on. Expectations were that he would make an inevitable and imminent flop into a think tank. However, something happened that was unexpected to the Unions, the public and the Press: Ed decided not to conform.
Modern politicians focus on their image and how they look in the public eye, hence the arrival of the beautifully polished David Cameron and the ‘down with the kids’ Clegg. We see all too often these ‘figureheads’ of parties who have good hair and good teeth but no real political intelligence. Miliband has a nostalgic whiff of Eau de Thatcher about his person, not for his policies but for his irrepressible enthusiasm for improving the country. Miliband could be the first in a fantastical marching return of real politicians, ones who think for themselves and actually listen to and care about the electorate of this country. Ed did what no other Labour politician had the guts to do: he stood up to the Unions and severed their tentacles of puppetry from a party that had been in danger of falling back into their grasp since the departure of Blair. He also took a stand on behalf of every single one of us against the ‘Big Six’ energy companies, who charge families an average of £1,412 a year, with prices forecast to do nothing but rise further in the coming years. And he has turned around the disastrous result of the previous election, headed up by one of the worst smiles in politics, Gordon Brown, who managed to lose 91 seats; compared to that election, predictions now point to a 56-seat Labour majority in 2015.
All of this has been achieved in little over three years of party leadership, more than most politicians could hope to do in a lifetime. The branding of 'Red Ed' by his critics does not connote communism, as they imply; rather, it is an expression of his regality, a sign that a new visionary and political royalty has been born into the sparse world of Westminster.
Contributed by Daniel Gibbs
Human beings are generally rule-following by nature, conforming to the social norms they see around them. A democratic government would therefore normally make laws based on popular opinion, especially where this helps the governing party win re-election, and these laws would be followed by the majority of people in the country because that is what their innate nature tells them to do. In America, however, people's ideological views can often be read from where they choose to live. A single democratic government across the entire country would therefore be less beneficial than a system of state governments making decisions for their local citizens. This would produce more satisfaction and agreement among the local population, as policy would be aimed directly at their needs. Only social and limited financial policies can be set on a state-by-state basis, however; foreign and military policies, for example, require a national decision because they affect the entire country, so some form of national government would still be needed.
The positives of democratic government can be seen in Denmark, where good political and economic institutions have been put in place. It has a stable government and is democratic, peaceful, prosperous, and inclusive, with low levels of political corruption. However, it is not clear whether the Danish political order could be transplanted into different cultural contexts where technology is less advanced and people have lived under dictatorship all their lives. A prime example is the US administration's assumption that once Saddam Hussein had been removed as President of Iraq, the country would automatically become a democracy with a free-market economy; it was surprised by the levels of looting and civil conflict that resulted. Therefore, although democratic government would be ideal for most countries, building it is a laborious process that must accompany technological improvement, and many countries are not yet at the stage where this transformation can take place. The recent political unrest in Egypt is a prime example.
Location can also play an important part in whether democracy is appropriate. In Europe, most countries are full democracies, though there are a few flawed democracies, particularly among the countries that were part of the former Soviet Union. In East Asia, by contrast, successful authoritarian modernization is commonplace, with countries like South Korea, Taiwan, Singapore and China benefiting from it. The question must then be asked why similar systems have not succeeded in Africa and the Middle East. Overall, it is evident that different locations seem better suited to different forms of government.
In conclusion, democracy seems to be the best form of government only where technology is sufficiently advanced, where there are high levels of entrepreneurship to sustain a free-market economy, and where a robust law-and-order regime is in place. In other locations, authoritarian government may be better suited, as in China, where a regulated economy has been able to develop more effectively.
Contributed by Gregory Lobo