Other scientists across the centuries have been similarly inspired by creative imaginations of their own. Professor Stephen Hawking, famed for his work on gravity and black holes, is known for urging us to “remember to look up at the stars, and not down at our feet”. But these quotations, while they pertain to the natural sciences, can just as easily be applied to the social science of economics – which is the science that I have chosen to discuss in this article. Without constantly “raising new questions, new possibilities, and regarding old problems from a new angle”, the study of economics, and indeed the world’s economies, would not be where they are today.
When thinking about the future of physics (or conversing with other would-be scientists looking to go into chemical or biological fields), it is often hard to see where the world-changing discoveries, inventions and associated funding will come from. Having pondered this topic for a while (and wishing to discuss both fields), I came to the conclusion that physics has a lot of room to move, but that quantum physics and astrophysics are the areas with the greatest potential.
Phone manufacturers are a particularly bad example, and not entirely due to their own actions. In the West, people often buy new phones every two years or more frequently. These phones are seldom manufactured in the country of use (the Motorola Moto X of 2013 was a rare example assembled in the US company’s home country, though the plant closed once the model was succeeded), and therefore, as with almost all technology, shipping is a frequent affair. Rare earth elements are widely used in smart devices, as particular features and components need their respective chemistry to function. Indeed, 16 of the 17 rare earth elements are used in phones, and it is questionable whether they are recovered in recycling and whether manufacturers are making sure to use sustainable sources. Original equipment manufacturers, or OEMs for short, have little economic incentive to be more sustainable, other than to avoid depleting the supply of resources, which would increase prices in the future.
Asteroid-mining is simply the idea of ‘digging up’ asteroids in order to obtain metals such as iron, aluminium and titanium, which are growing in demand. Although the technology for this sort of undertaking does not yet exist, development is rapid. Deep Space Industries Inc. is the more recent of the two asteroid-mining companies, the other being Planetary Resources. Having announced in early January its plans to launch a fleet of prospecting spacecraft by no later than 2016, chairman Rick Tumlinson said in a public interview, “We can make amazing machines smaller, cheaper and faster than ever before.” This raises the question among many members of the public: ‘Is space really big enough for two asteroid-mining companies, and is it really worth the money?’
Many people believe that the money would be better spent on medical research or on the search for other forms of renewable energy. However, it is argued that asteroid-mining could be the answer to our energy needs, as the technologies that arise from it could soon lead to further space exploration; this includes uncovering well-known sources of energy such as natural gas. Titan, one of the many moons of Saturn, is known to hold hundreds of times more natural gas and other liquid hydrocarbons than all the known oil and natural gas reserves on Earth. The question that faces scientists is: ‘How many near-Earth objects could hold these types of precious materials?’ A further advantage of asteroid-mining, as suggested by officials from Deep Space Industries Inc., is that it will provide thousands of jobs; each of the 32 kg space probes that the company intends to build, its FireFlies and DragonFlies, will need a pilot directing it via a live feed from mission control. With over 900 asteroids sweeping past Earth every year, the total number of potential ‘mining sites’ is massive!
Deep Space Industries Inc. will also focus on extracting iron and asteroid water (which can be broken down into hydrogen and oxygen, the two main constituents of rocket fuel). Although for now it is just another of many ideas, it is hoped that this particular type of mining could one day lead to the formation of space ‘gas stations’, allowing journeying spacecraft to stop and refuel as part of a greater voyage to unimaginably distant planets. This may even one day lead to the expansion of the human species beyond Earth and far into the cosmos.
Contributed by Darshan Desai
According to medical ethicists, these cases raise serious ethical questions – not least concerning the value of the lives of disabled people – and fears that medical professionals may become overly cautious in their interpretations of diagnostic tests, which would then possibly result in the terminations of healthy pregnancies. One ethicist also claims that lawyers, looking to drum up business, are trawling communities with high rates of inbreeding and genetic disease.
“Wrongful birth” cases, in which parents seek compensation for the costs of raising a disabled child, are well documented. Last year, a couple in California won $4.5 million in damages after doctors and sonographers failed to pick up that their son, now aged 4, had no legs and only one arm. One medical ethicist from the Hebrew University-Hadassah Medical School in Jerusalem poses the question, “What are the psychological effects on the actual children who are born?” He explains that he finds it extremely difficult to understand how parents can go to the witness box and tell their children that “it would have been better for you not to have been born”.
The trend in Israel is now for the children themselves to sue for wrongful life, which generally carries a bigger award, designed to compensate for a lifetime of suffering. Steinberg, the ethicist quoted above, sits on the Matza committee, set up by the government to investigate the issue. One of the first successful wrongful life cases was brought before the California Court of Appeal in 1980, when a child sued for damages for being born with Tay-Sachs disease, a deadly neurodegenerative condition of the nervous system that is passed down through families. In Israel, the rate of lawsuits has been rising since a legal precedent was set in 1987, whereas wrongful life litigation is not accepted in many other countries, including the UK and all but four US states.
The problem is exacerbated in Israel by a very strong pro-science, pro-genetic-testing culture. “There is an entire system fuelled by money and the quest for the perfect baby. Everyone buys in to it – parents, doctors and labs. Parents want healthy babies, doctors encourage them to get tested, and some genetic tests are being marketed too early. Genetic testing has enormous benefits but it is overused and misused,” says Carmel Shalev, a human rights lawyer and bioethicist at the University of Haifa in Israel. The popularity of these tests is partly due to the fact that genetic disorders such as Down’s syndrome, deafness and cystic fibrosis are prevalent in Israel because of high rates of consanguineous marriage, that is, unions between second cousins or closer relatives, which are more common in the Middle East and in many parts of Asia. Since we all carry a smattering of defective genes, marrying a blood relative increases the risk of autosomal recessive disorders, in which two healthy partners – each carrying a single faulty copy – produce a child who inherits the double whammy of two defective copies. A common example within ultra-orthodox Jewish communities is Tay-Sachs disease, as mentioned before, which manifests itself in people carrying both faulty copies of the gene. If both parents carry a single copy, each child has a one in four chance of having the disease.
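The one-in-four figure follows directly from Mendelian inheritance, and can be checked with a short simulation (a sketch only; the allele labels are illustrative):

```python
import random

def child_genotype(parent1, parent2):
    """Each parent passes one randomly chosen allele to the child."""
    return (random.choice(parent1), random.choice(parent2))

def simulate(trials=100_000):
    # Both parents are healthy carriers: one normal allele 'A', one faulty 'a'.
    carrier = ('A', 'a')
    affected = 0
    for _ in range(trials):
        if child_genotype(carrier, carrier) == ('a', 'a'):
            # Two faulty copies: the child inherits the disorder.
            affected += 1
    return affected / trials

if __name__ == "__main__":
    print(f"Fraction affected: {simulate():.3f}")  # close to 0.25
```

Running it confirms that roughly a quarter of children of two carriers inherit both faulty copies.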
Couples at high risk have the option of being screened before deciding to marry or try for children. In Israel in particular, a wide range of prenatal testing is offered by the state. Private genetic testing is also extremely popular with Israeli couples, and it is permissible to terminate viable pregnancies for health reasons, with permission from an abortion committee. In the past five years, many wrongful life cases have been won on behalf of children with cystic fibrosis and spina bifida, with typical payouts ranging from around £500,000 up to several million pounds, depending on the severity of each individual case. However, following a handful of wrongful life lawsuits in the past few years, the Israeli Society of Obstetrics and Gynecology advised the country’s Supreme Court that not every undetected birth defect should be the basis of a malpractice suit. Extreme learning difficulties, or the need for around-the-clock assistance with basic tasks such as eating and going to the toilet, should not be equated with milder impairments such as missing fingers.
Drawing together the issues related to the abortion of so-called “wrongful lives”, it is important to examine the possible moral effects of these cases. With such distinctions drawn between diseased and healthy fetuses, doctors and sonographers who fear being sued over the birth of a disabled or diseased baby might become over-cautious in their risk assessments, leading couples to abort perfectly healthy fetuses. Steinberg, mentioned previously, says that “physicians are doing a lot more defensive testing now, but more testing means more false positives – and that means more abortions, because geneticists don’t always know if results indicating the possibility of chromosomal abnormalities are meaningful.” He goes on to say that a study should be conducted to see how many of these aborted fetuses actually are diseased.
Speaking at a genetics conference last year, Steinberg voiced his concern that some lawyers would seek to exploit the high rates of genetic defects in some communities by trawling them for cases. “To go to these villages and look around for people willing to file a claim that it would have been better not to be born is going too far,” he said – and, in my opinion, it goes beyond the bounds of humane and educated thinking. However, another ethicist, Posner, who is part of the Matza committee, argues that only a small number of these cases result from rogue lawyers targeting villages with high rates of inbreeding, and that most clients are couples who believe they have been let down by the medical profession following genetic testing. Many doctors earn a tremendous amount of money from prospective parents willing to pay for private tests such as ultrasound, and these doctors should not complain when, if they are negligent, someone comes after them. Without such scrutiny, I believe the medical profession would become corrupt.
Contributed by Varundeep Singh Khosa
Nicholas Carr, author of “The Shallows: What the Internet is Doing to Our Brains”, told BBC World Service’s Digital Planet programme that although technology has made complicated tasks easier, the human brain learns more effectively when it is challenged with something difficult. Think about it: what do you consider your best lesson? Was it being patted on the back and told “you are excellent”, or was it that time you really screwed up and said to yourself, “Well, I am not going to do that again!”?
Why do we need an education when we have devices that can do everything for us? Take a look at some of the ways technology has improved human life. Consider the telephone: dialling a number once took a minute, you had to dial it all over again if just one digit went wrong, and there was no way to store numbers. Now everything is available at the touch of a button; you can call three people at the same time and even video-call someone on the other side of the world.
Backtrack half a century, when people used to go to the library and come back with a rucksack full of books, ready to learn something new and prepare for their assignments. Now one may ask what can be done without Google – over 80% of all users use it as their primary search engine, according to Ofcom.
Keeping in contact with friends used to be challenging: making sure you were up to date with them, sharing your experiences and even wishing them happy birthday. Mark Zuckerberg eased this with his idea of a book of faces, more commonly known as Facebook – one of the world’s most viewed websites.
So can there really be a device that could do everything? What about wireless energy transfer, which would get rid of all the long wires and could mean wireless medical treatment from home, an unlimited power supply to devices, and energy sharing that would drive an economy of selling energy ‘wirelessly’?
Teleportation is a mode of transport that people have long dreamed about, and now it is here – or rather, ‘quantum teleportation’ is. The key to the technology is controlling this phenomenon, which is no easy task, but Chinese researchers recently teleported a photon’s state nearly 100 kilometres. Once perfected, the technology could revolutionise computing and communication speeds.
A robot which carries out everything for you: with so much technology, is this driving the world crazy, or is it revolutionising the way we live? Can this really lead to a device, a computer, a “thing” that is smarter than the human brain?
Watching live football on your phone, sending an important email on your tablet, surfing the web on your notebook: just some of the ways we take full advantage of the technology around us. Every day we hear about the latest technology; a new phone is released and the world goes crazy, camping outside five days before launch, when often the only improvement is a single new feature. This is how advertising and product design cause a consumer frenzy.

It is not fair to say technology has simply aided human life; rather, human life has become so dependent on technology that humans are becoming “dumb”. What people do not realise is that when humans use technology, the technology does all the work and the brain remains inactive. Technology has helped us a lot; it has improved our lifestyles and made everything quick – quick enough for our brains not to register. I feel that technology should be used to assist the human brain, not as a primary source. After all, do we really want to turn into self-destructive robots who cannot finish a sentence before glancing at the vibrating device in their pockets? Do we really want technology to do everything – for, one day, this very same article to automatically change to “How technology has taken over the human life”?
Contributed by Ehsaan Ahmad
The technology for 3-D printers has been around for a while, yet in the last two years sales of household 3-D printers have risen dramatically. This indicates that 3-D printers are being manufactured with a long-term aim: that they should be available to all consumers. The implications of the 3-D printer becoming commonplace in the household are substantial. For example, any physical possession could be purchased and printed, just as easily as one can download music to be listened to only moments later.
A 3-D printer works much like a laser printer does today. Just as nozzles spray toner onto paper, in a 3-D printer a nozzle squirts molten plastic resin in horizontal layers onto a base plate, where it cools, sometimes aided by exposure to UV light. This section drops away from the nozzle and the process is repeated, building the model up layer by layer until it is finished.
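This layer-by-layer process is essentially what ‘slicer’ software does: it cuts a 3-D model into horizontal cross-sections and prints one pass per layer. A minimal sketch (the cone model and layer height are purely illustrative):

```python
def slice_model(height_mm, layer_height_mm, radius_at):
    """Cut a solid of revolution into horizontal layers.

    radius_at(z) gives the model's radius at height z; each layer is
    deposited as a disc of that radius, built up from the base plate.
    """
    layers = []
    z = 0.0
    while z < height_mm:
        layers.append((round(z, 2), radius_at(z)))
        z += layer_height_mm  # nozzle steps up one layer and repeats
    return layers

# Example: a 10 mm cone that narrows from radius 5 mm at the base to 0 mm.
cone = lambda z: 5.0 * (1 - z / 10.0)
for z, r in slice_model(10.0, 2.5, cone):
    print(f"layer at z={z} mm: deposit disc of radius {r:.2f} mm")
```

A real slicer also plans the nozzle’s path within each layer, but the principle of repeated horizontal cross-sections is the same.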
Many potential problems remain unsolved. For instance, complicated models would take hours to finish and might need different materials to strengthen certain parts. Furthermore, just as the quality of a printed page may not be sufficient, the quality of an object printed by a 3-D printer may not be either. But while a page can be reprinted in seconds, a model that has taken hours to make would be very irritating if it emerged with a defect.
One possible purpose of a 3-D printer could be printing replacement parts. For example, instead of buying popular consoles, many people use their PC as a platform to play video games, and they invariably upgrade their PCs with parts such as graphics cards to enhance their gaming experience. Using a 3-D printer, such parts could one day be printed at home. Admittedly, this would require printing objects made of more than just the spool of plastic found in today’s printers. On the other hand, it may be possible to develop 3-D printers in conjunction with automated circuit-board etching to print electronic components. Such “hybrid” printers could allow almost anything to be printed, given the right materials and processes.
On a final point, if anything could be printed and these printers became used by the majority of people, the chances are that copyright laws would be broken very quickly, because it would be entirely possible to design, on a computer, a component identical to one designed by a corporation and then manufacture it on the 3-D printer. However, this problem occurs with all revolutionary technology – and ‘revolutionary’ is an adjective that 3-D printing deserves, considering all the possibilities it offers us. With this technology available, it would be a shame to let politics and law prevent it from becoming part of our lives.
Contributed by Zia Farooq
Why is it so difficult to program a machine to do these things? Normally, a programmer will start off knowing what task they want a computer to do. The skill in AI is getting the computer to do the right thing when you don’t know what that might be. In the real world, uncertainty takes many different forms: it could be an opponent trying to prevent you from reaching your goal; it could be that the repercussions of one decision do not become apparent until later on – you might swerve your car to avoid an accident without knowing whether it is safe to do so – or it could be that new information becomes available during a task. An intelligent program must be capable of handling all this and much more. To approach human intelligence, a system must not only model a task but also model the world in which that task is undertaken. It must sense its environment and act on it, modifying and adjusting its own actions accordingly. Only when a machine can make the right decision in uncertain circumstances can it be said to be “intelligent”.
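One standard way to formalise “the right decision in uncertain circumstances” is expected utility: weigh each action’s possible outcomes by their probabilities and pick the action with the best weighted score. A toy sketch of the swerving example, with entirely made-up numbers:

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Illustrative figures only: how might a driving program compare actions?
actions = {
    "swerve": [(0.7, -1), (0.3, -50)],    # usually fine, sometimes hits the kerb
    "brake":  [(0.5, 0), (0.5, -100)],    # may stop in time, may crash badly
}
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # with these numbers, swerving has the higher expected utility
```

Real systems must also estimate those probabilities from noisy sensors, which is where much of the difficulty lies.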
The roots of AI predate the first computers by many centuries. Aristotle described a method of mechanical logic called the syllogism, which allows us to draw conclusions from premises. One of his rules sanctioned the following argument: “Some swans are white. All swans are birds. Therefore, some birds are white.” The pattern can be applied in many situations to arrive at a valid conclusion, regardless of the meaning of the words that make up the actual sentences. Consequently, according to this formulation, it is possible to build a mechanism that can act intelligently despite lacking an entire catalogue of human understanding. Aristotle’s proposal set the stage for extensive enquiry into the nature of machine intelligence, although it was not until the mid-20th century that computers finally became sophisticated enough to test these ideas.
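Aristotle’s point, that the form of the argument alone licenses the conclusion, is easy to mechanise. A toy sketch (the encoding of premises as term pairs is illustrative):

```python
def syllogism(some_A_are_B, all_A_are_C):
    """From 'Some A are B' and 'All A are C', conclude 'Some C are B'.

    The inference is valid whatever A, B and C actually mean: the
    machine manipulates only the form of the premises, not their content.
    """
    a1, b = some_A_are_B   # e.g. "Some swans are white"
    a2, c = all_A_are_C    # e.g. "All swans are birds"
    if a1 != a2:
        raise ValueError("premises must share a middle term")
    return f"Some {c} are {b}"

print(syllogism(("swans", "white"), ("swans", "birds")))  # Some birds are white
```

Swap in any other terms, say ("metals", "magnetic") and ("metals", "elements"), and the same rule yields a new valid conclusion without the program understanding a single word.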
In 1948, Grey Walter, a researcher at the University of Bristol, built a set of autonomous mechanical “turtles” that could move, react to light and learn. One of these, called Elsie, reacted to her environment by, for example, decreasing her sensitivity to light as her battery drained. This complex behaviour made her unpredictable, which Walter compared to the behaviour of animals: unexpected and, more importantly, dependent upon the situation they are placed in. In 1950, Alan Turing suggested that if a computer could carry on a conversation with a person, then we should, “by convention”, agree that the computer “thinks”. But it was not until 1956 that the term artificial intelligence was actually coined. At a summer workshop held at Dartmouth College in Hanover, New Hampshire, the founders of the nascent field laid out their vision of intelligence linked to machines: “Every aspect of learning or any other feature of intelligence can, in principle, be so precisely described that a machine can be made to simulate it.” From then on, the expectation was set for a century of rapid progress. Human-level machine intelligence seemed inevitable…
The notion of a super-intelligent machine – one that can actually surpass human thinking on any given subject – was introduced in 1965 by the mathematician I. J. Good, who had worked with Alan Turing at Bletchley Park, the UK’s centre for codes and code-breaking, during the Second World War. Good noted that “the first super-intelligent machine is precisely the last invention that man need ever make”, because from then on the machines would themselves be designing ever-better machines, and there would be no work left for humans to do.
The whole idea of “artificial intelligence” poses further questions, among them the notion of a technological singularity: the tipping point at which super-intelligent machines so radically alter our society that we cannot predict how life will change afterwards. Some fearfully predict that these intelligent machines could dispense with their useless humans – mirroring the plot of ‘The Matrix’ – while others foresee a utopian future filled with endless possibilities and, most importantly, endless leisure. Focusing on these equally unlikely outcomes has distracted the conversation from the very real societal effects already brought about by the increasing pace of technological change. For over 100,000 years, we relied on the hard labour of hunter-gatherers. A scant 200 years ago, we moved to an industrial society that shifted most manual labour to machinery. Then, just one generation ago, we made the transition to the digital age. Today, much of what we manufacture is information, not physical objects. Computers are ubiquitous tools, and much of our manual labour has been replaced by calculations and formulae. The same acceleration can be seen in robotics. The robots on sale today to vacuum your floor may only appeal to technophiles, but within a decade there will be an explosion of uses for robots in the office and home. Some may be completely autonomous, while others may be tele-operated by a human.
To conclude, the last invention we need ever make may be the embodiment of the partnership between humans and their tools – in other words, robots. Comparable to the move from the mainframe computers of the 1970s to the personal computers and hand-helds of today, most AI systems have gone from being standalone entities to being tools used in a human-machine partnership. Our tools will get ever better as they embody more intelligence, and in turn we will become better equipped to access ever more information and education. We may hear less about AI and more about IA, “intelligence amplification”. For now, in the movies, we will still have to worry about machines taking over the world; in real life, humans and their sophisticated tools will move forward together, because without humans there are no machines.
Contributed by Varundeep Singh Khosa
There are many benefits to Facebook. For example, we can catch up with people with whom we have lost contact, organise events and, more importantly, increase our social interaction, which has been proven to reduce stress and tension. However, there are disadvantages which we sometimes overlook. How often do people simply pass in the corridor or street someone who, on Facebook, has been given the term “friend”? Surely a more appropriate label would be “acquaintance” – and how many others could that label apply to? Admittedly, many people are not lulled into the false idea that these people really are their true friends. However, some end up diluting their relationships with their true friends, or even losing them altogether. This leaves them with lots of “friends”, both online and in the flesh, but few, if any, true friends. We should question whether we would really be friends with these people if it were not for the digital connection between us all. How many of them really have our best interests at heart?
This begs the question of how this will change in the future. Some things are certain: for example, almost 700,000 new people gain access to the internet every day, and this is likely to continue for the foreseeable future. The speed and ease of using the internet are also increasing daily, and the obvious benefits of social media are increasing all the time. Furthermore, with 4G (faster mobile internet*) being released shortly in many major UK cities, the internet will become even more accessible on the move – and who is to say that it will not be normal, in a decade or two, for people to have friends whom they never meet face to face in their entire lifetime? Yet there is one thing that all Facebook users would do well to remember: social media should only be used to supplement our real-life relationships, rather than as a complete substitute for face-to-face interaction, because proper social interaction is a necessity for us all.
*faster than most houses in the UK get now through a wired connection (up to 30Mb/s).
Contributed by Jack Pearce
Coronary heart disease is one of the UK’s biggest killers, affecting around 2.6 million people. It occurs when the blood supply to the heart is restricted or blocked by fatty substances clogging the coronary arteries. Before modern technology dominated, there was only a 5% chance of surviving a cardiac arrest. Now the expansion of technology has made it possible to make ‘coronary stents’: artificial tubules whose purpose is to keep the coronary arteries open. Such technology has raised the survival rate to a staggering 98%.
Technology has also developed surgical procedures. A century ago, for example, patients could spend more than a year in recovery, to say nothing of the surgery itself, which could last for more than a week. New technology has made surgical procedures much more efficient, reducing the average recovery time of major operations to around four weeks. In short, technology has enhanced the effectiveness of surgery.
Technology in the 21st century means that the medical industry can benefit from a vast range of imaging techniques. Take the inventions of the X-ray and MRI (magnetic resonance imaging). X-rays allow us to safely capture effective images of bones using radiation, letting us detect fractures or examine organs for problems. MRI uses strong magnetic fields to produce detailed images of organs, tissue and bone. These two inventions are examples among many other medical apparatuses that have been created.
As technology progressed, more techniques were developed, leading to easier diagnosis and fewer deaths. Yet to further improve diagnosis and medicine as a whole, it is argued that we should research the use of nanoparticles. Nanoparticles are particles on the order of 10⁻⁹ m in size: bigger than atoms but smaller than bacteria. An economic advantage of nanoparticles is that they could save a substantial amount of money, because their large surface area relative to their volume means fewer units of medicine need to be produced. Nanoparticles could enable doctors to treat the disease itself, not just the symptoms. They can also be used as antimicrobials, preventing biofilms because bacteria do not stick to nanoparticle-treated surfaces; as a consequence, they could be used to coat hospital wards and so stop bacteria from taking hold.
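The surface-area argument can be made concrete: for a sphere, the surface-area-to-volume ratio is 3/r, so shrinking a particle multiplies the active surface exposed per unit of material. A quick check for spherical particles (the particle sizes chosen are illustrative):

```python
import math

def surface_to_volume_ratio(radius_m):
    """For a sphere, area/volume simplifies to 3/r: smaller particles
    expose proportionally more surface per unit of material."""
    area = 4 * math.pi * radius_m ** 2
    volume = (4 / 3) * math.pi * radius_m ** 3
    return area / volume

bulk = surface_to_volume_ratio(1e-3)   # a 1 mm grain of drug
nano = surface_to_volume_ratio(50e-9)  # a 50 nm nanoparticle
print(f"ratio gain: {nano / bulk:,.0f}x")  # 20,000x more surface per volume
```

Twenty thousand times more reactive surface from the same mass of material is what underlies the claim that fewer units of medicine would be needed.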
As much as medical imaging has helped the medical industry, it still has disadvantages. For instance, X-rays only show hard structures, whereas nanoparticles would enable us to image soft tissue more quickly. Quantum dots, the smallest nanoparticles, have a unique electronic structure. This means they can be used for detection: attached to antibodies, they can migrate to the site of an illness, allowing passive targeting of cancer cells (cells that do not function properly) and thus enabling doctors to detect cancer tumours around 50% earlier. This would mean quicker medical treatment, because the cancer cells could be stopped from replicating before the tumour becomes too large. As much as pharmaceutical companies claim that their medicines target a specific part of the body, in practice a drug usually treats the whole body. With nanoparticles, however, we may even be able to produce ‘nano’ devices and ‘nano’ bots, perhaps in the form of a chip able to detect changes in the body. Antibodies can even bind to the genes involved in cancer; to counter the problems such cells could cause, miniature chips could bind specific pathogens, oxidise them and then release a current to kill the harmful cells.
Technology plays a vital part in medicine; it is used for diagnosis, treatment and further research, and it has reduced the death rates of many diseases – coronary heart disease, for example, where technology-made tubules (stents) can be placed in the arteries to keep them from clogging. Nanoparticles could change the world: they could improve the detection of diseases, drug delivery and therapy. There are challenges, such as the substantial short-run financial cost of research and development, and open questions about whether nanoparticles cause long-term side effects, whether they leave our bodies, and what happens to them once they have left. Yet technology is constantly improving and, in the long run, the benefits will outweigh the costs to the medical industry.
Contributed by Ehsaan Ahmad
Snakes on a plane may sound crazy, yet this technological feat could one day be used on aircraft all over the world. The British jet engine manufacturer Rolls-Royce is developing snake-like robots that can manoeuvre their way inside the intricate engine of an aircraft to seek out and repair damaged components using highly powerful ultraviolet lasers. The snake passes into the engine and takes images of the internal structure, which are sent to a computer and assessed by a specialist. In some cases the snake itself can repair the damage; in others an engineer may be required, depending on the magnitude of the problem. If successful, these ‘snakes’ could fix within minutes problems that would otherwise take hours, if not days, reducing flight delays at airports. The development would be not only highly efficient but also extremely economical. Millions of pounds could be saved every year by leading airline companies such as British Airways – IATA figures suggest 43% of yearly maintenance expenditure goes on engines – thanks to the reduced need to strip down engines in an attempt to find and fix a problem. The robotic snakes could instead inspect the structure and amend the issue, meaning planes would not need to be taken out of service so frequently.
Currently, damage to the internal structure of the engine, such as that caused by bird strikes, is inspected using an instrument called a borescope. This tool is similar to the endoscope used by doctors to give an internal view of the body. The borescope is inserted along the inner rim of the engine and used by engineers or mechanics to look for internal damage; the problem, however, is that the resolution of the image is not particularly high. Furthermore, the number of trained borescope experts around the world is very low, so the process is time-consuming: it may require calling in an expert from halfway around the world. Pat Emmott, a senior vice-president at Rolls-Royce, said: "We don't have enough specialists to go around so we need to automate this capability". This is what led to the development of the robotic snake, which requires no special skills in order to carry out a diagnosis.
Although this product is not yet available to airlines, it is believed that the prototype, which is hoped to be 60 cm in length and 12.5 mm in width, will be completed by July 2014, and the finished product will be made available by 2018 at the latest. The research and development of the snake robot, carried out by Rolls-Royce, is just a small part of a £2.5 million project that includes many other products, such as a camera chip that can withstand temperatures of 2,000°C.
Contributed by Darshan Desai
This story demonstrates society's reliance on gasoline. Although there are constant warnings that gasoline will run out and that we must resort to other forms of energy, the recent events in New York suggest otherwise. Have we run out of options? The development, at Air Fuel Synthesis (AFS), of a process for producing gasoline from air may give us another option.
In an experiment, AFS extracted carbon dioxide and water from the air and combined their constituent carbon, hydrogen and oxygen to form methanol, which was then converted into gasoline. The process produces half a litre of purified gasoline per day, using just air as its raw material. To explain further: carbon dioxide is extracted, or "snagged", by passing air through a sodium hydroxide mist, causing sodium carbonate to form. Next, water is taken from the same air through a condenser. Then electrolysis is used to extract the required hydrogen from the water, and the required carbon and oxygen are recovered from the sodium carbonate, to create methanol (CH3OH). This methanol is then converted to gasoline through polymerisation (joining many small molecules together).
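The steps described above can be sketched as balanced chemical equations. This is a simplified outline of the kind of reactions involved, not AFS's published process; in particular, the final methanol-to-gasoline conversion proceeds through further intermediate reactions that are omitted here:

```latex
\begin{align}
2\,\mathrm{NaOH} + \mathrm{CO_2} &\rightarrow \mathrm{Na_2CO_3} + \mathrm{H_2O}
  && \text{(capturing CO$_2$ in the hydroxide mist)}\\
2\,\mathrm{H_2O} &\xrightarrow{\text{electrolysis}} 2\,\mathrm{H_2} + \mathrm{O_2}
  && \text{(splitting condensed water)}\\
\mathrm{CO_2} + 3\,\mathrm{H_2} &\rightarrow \mathrm{CH_3OH} + \mathrm{H_2O}
  && \text{(methanol synthesis)}
\end{align}
```

Note that the electrolysis step is where most of the electrical energy is consumed, which is why it dominates the efficiency discussion below.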
The implications of this method are staggering because not only is the air around us readily available, but the process involves extracting carbon dioxide from the air. This means that any carbon dioxide produced by burning the gasoline would be extracted in making more gasoline. Therefore, using this gasoline would be carbon neutral, so long as the machinery involved in this process was powered by renewable energy sources such as solar or wind power.
However, before this method of synthesising gasoline from air is touted as the future of energy, its energy efficiency must be considered. Substantial energy is required for the process, and to ensure it remains carbon neutral this energy must be generated from renewable sources, which have high capital costs. The venture would therefore attract heavy investment only if the process becomes efficient enough that the fuel produced offers a good return for investors.
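To give a sense of the scale involved, here is a rough, back-of-envelope estimate of the electricity needed to match the half-litre-per-day output quoted above. The energy density of gasoline is a standard textbook value, but the 30% overall air-to-fuel efficiency is purely an illustrative assumption, not a figure from AFS:

```python
# Rough estimate of daily electricity demand for air-to-gasoline synthesis.
# Only OUTPUT_L_PER_DAY comes from the article; the other figures are
# a textbook constant and an assumed (illustrative) efficiency.

GASOLINE_ENERGY_MJ_PER_L = 34.2  # typical energy density of gasoline (MJ/L)
OUTPUT_L_PER_DAY = 0.5           # daily output quoted in the article
PROCESS_EFFICIENCY = 0.3         # assumed overall electricity-to-fuel efficiency

fuel_energy_mj = GASOLINE_ENERGY_MJ_PER_L * OUTPUT_L_PER_DAY
electricity_mj = fuel_energy_mj / PROCESS_EFFICIENCY
electricity_kwh = electricity_mj / 3.6  # 1 kWh = 3.6 MJ

print(f"Energy stored in fuel:  {fuel_energy_mj:.1f} MJ/day")
print(f"Electricity required:   {electricity_kwh:.1f} kWh/day")
```

Even at this small scale the process would draw roughly as much electricity per day as a typical household, which illustrates why the economics hinge on cheap renewable power.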
In the final analysis, it is too early to know how to optimise the energy efficiency of this synthesis method. The efficiency is currently quite poor, largely because no cost-effective and efficient way of splitting hydrogen from water by electrolysis has yet been found. AFS themselves say that further efficiency testing is needed and that bigger facilities are required for more research. Yet AFS claim that the motorsport industry is taking an interest in their work, keen to reduce its own dependence on fossil fuels. Despite the problems, the long-term prospect of creating gasoline from air is an exciting one.
Contributed by Zia Farooq