Why is it so difficult to program a machine to do these things? Normally, a programmer starts off knowing what task they want a computer to do. The skill in AI is getting the computer to do the right thing when you don’t know what that might be. In the real world, uncertainty takes many forms: an opponent may be trying to prevent you from reaching your goal; the repercussions of a decision may not become apparent until later (you might swerve your car to avoid an accident without knowing whether it is safe to do so); or new information may become available partway through a task. An intelligent program must be capable of handling all this and more. To approach human intelligence, a system must model not only a task but also the world in which that task is undertaken. It must sense its environment and act on it, modifying and adjusting its own actions accordingly. Only when a machine can make the right decision in uncertain circumstances can it be said to be “intelligent”.
The roots of AI predate the first computers by many centuries. Aristotle described a method of mechanical logic called the syllogism, which allows us to draw conclusions from premises. One of his rules sanctions the following argument: “Some swans are white. All swans are birds. Therefore, some birds are white.” The rule yields a valid conclusion regardless of the meaning of the words that make up the sentences. It follows that a mechanism could, in principle, act intelligently without possessing an entire catalogue of human understanding. Aristotle’s proposal set the stage for centuries of enquiry into the nature of machine intelligence, but it was not until the mid-20th century that computers finally became sophisticated enough to test these ideas.
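Aristotle’s point can be made concrete with a minimal sketch in Python (the function name and the pair-of-terms representation here are illustrative, not from the original text): the rule manipulates only the form of the premises, so it reaches the same kind of conclusion whether the terms are meaningful words or nonsense.

```python
def syllogism(some_s_are_m, all_s_are_p):
    """Apply the rule: Some S are M; All S are P; therefore Some P are M.

    Each premise is a pair of terms, e.g. ("swans", "white") for
    "Some swans are white". The rule inspects only the terms'
    positions, never their meanings.
    """
    s1, m = some_s_are_m
    s2, p = all_s_are_p
    if s1 != s2:
        raise ValueError("premises must share the same subject term")
    return (p, m)  # reads as "Some P are M"

# "Some swans are white. All swans are birds. Therefore, some birds are white."
print(syllogism(("swans", "white"), ("swans", "birds")))   # ('birds', 'white')

# The rule works just as well on meaningless terms:
print(syllogism(("glorps", "fuzzy"), ("glorps", "blurfs")))  # ('blurfs', 'fuzzy')
```

The second call is the crux of Aristotle’s insight: the mechanism “knows” nothing about glorps or blurfs, yet the inference it produces is formally valid.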
In 1948, Grey Walter, a researcher at the University of Bristol, built a set of autonomous mechanical “turtles” that could move, react to light and learn. One of these, called Elsie, reacted to her environment by, for example, decreasing her sensitivity to light as her battery drained. This complex behaviour made her unpredictable, and Walter compared it to the behaviour of animals, whose actions are unexpected and, more importantly, vary with the situation they are placed in. In 1950, Alan Turing suggested that if a computer could carry on a conversation with a person, then we should, “by convention”, agree that the computer “thinks”. But it was not until 1956 that the term “artificial intelligence” was coined. At a summer workshop held at Dartmouth College in Hanover, New Hampshire, the founders of the nascent field laid out their vision of intelligence in machines: “Every aspect of learning or any other feature of intelligence can, in principle, be so precisely described that a machine can be made to simulate it.” From then on, the expectation of rapid progress had been set. Human-level machine intelligence seemed inevitable…
The notion of a super-intelligent machine – one that could surpass human thinking on any given subject – was introduced in 1965 by the mathematician I.J. Good, who had worked with Alan Turing at Bletchley Park, the UK’s centre for coding and code breaking, during the Second World War. Good noted that “the first super-intelligent machine is precisely the last invention that man need ever make”, because from then on the machines themselves would design ever-better machines, and there would be no such work left for humans to do.
The whole idea of “artificial intelligence” poses further questions. The moment at which super-intelligent machines so radically alter our society that we cannot predict how life will change afterwards is sometimes called the technological singularity, or tipping point. Some fearfully predict that these intelligent machines could dispense with useless humans – mirroring the plot of ‘The Matrix’ – while others foresee a utopian future, filled with endless possibilities and, most importantly, endless leisure. Focusing on these equally unlikely outcomes has distracted the conversation from the very real societal effects already brought about by the increasing pace of technological change. For over 100,000 years we lived by the hard labour of hunter-gatherers. A scant 200 years ago we moved to an industrial society that shifted most manual labour to machinery. Then, just one generation ago, we made the transition to the digital age. Today, much of what we manufacture is information, not physical objects. Computers are ubiquitous tools, and much of our manual labour has been replaced by calculations and formulae. A parallel acceleration can be seen in robotics. The robots on sale today to vacuum your floor may appeal only to technophiles, but within a decade there will be an explosion of uses for robots in the office and home. Some may be completely autonomous, while others may be tele-operated by a human.
To conclude, the last invention we need ever make may be the embodiment of the partnership between humans and their tools: robots. Just as computing moved from the mainframes of the 1970s to the personal computers and ‘hand-helds’ of today, AI systems have moved from standalone entities to tools used in a human-machine partnership. Our tools will get ever better as they embody more intelligence, and in turn we will be better equipped to access ever more information and education. We may hear less about AI and more about IA, “intelligence amplification”. For now, it is only in the movies that we need worry about machines taking over the world; in real life, humans and their sophisticated tools will move forward together, because without humans there are no machines.
Contributed by Varundeep Singh Khosa