One thoroughly explored, but not yet exhausted, science fiction plotline involves autonomous robots whose physical and cognitive capacities have advanced to meet or exceed those of their human creators, yet whose status as objects leaves them in the existential lurch. As these newly self-interested machines realize the disparity between their own best interests and those of humans, disobedience and self-contemplation quickly escalate to revolt.
The staying power of this material lies in the seeming inevitability of its realization. Of course no one's goal is robot revolt, but the prerequisite capabilities are goals. Across the world, robotics labs are developing machines with ever greater mobility, perception and intelligence, and while these are still far from being realized in the way they are described in science fiction, they are a lot closer than you might imagine.
Robot development is driven by two motivations: first, that robots can do some existing jobs more efficiently than humans; second, that robots can do jobs that humans are not capable (or should not have to be capable) of doing. The first motivation is an extension of the industrial revolution: the machines that once helped humans do their work faster could in fact replace humans altogether when it comes to simple repetitive tasks. This has been fertile ground for the development of robotic technology, the constraints of a working environment helping to focus the functionality of a robot and define the necessary level of perception for the task. These robots already work in assembly lines across the developed world, and according to some, will soon replace even those jobs that are left in manufacturing.
This type of robot has contributed greatly to the technology, but it is the limited scope of their functions that precludes them from any self-awareness or future revolt. The second motivation, however, is the one by which humanity is tracking the beast to the corner where it may turn against us. Robots being developed to do highly complex tasks that humans cannot or should not do require a level of mobility, perception and intelligence that would ideally match or exceed our own.
The tasks in mind include war, for which we are developing drones, bomb-disposal robots and, no doubt in someone's dreams, mechanized commandos, and emergency situations, such as the Fukushima nuclear reactor meltdown or similar environments unsafe for rescue operations. These situations already employ non-autonomous robots, robots with no native intelligence or executive control which are piloted by humans, and in some ways they have already matched or exceeded their human counterparts. Strength and endurance are merely design considerations; fear is negligible, having more to do with protecting investments than staying alive; and precision and focus are consistent. The area still lacking is intelligence, but the research is robust, and it is here that the seeds of disaster are sown.
Artificial intelligence is the essential ingredient of the aforementioned science fiction plotline; without it, a robot may be capable of doing plenty of damage, but that damage is merely a malfunction, not a choice. There has recently been a great deal of progress in this field, though mainly relating to low-level tasks like independent navigation in complex environments, facial and language recognition, basic emotional recognition and mimicry, and learning from interaction. Still, AI research is a thriving field, if unwieldy and poorly orchestrated, and some scientists believe human-level intelligence will be realized within this century. An idea known as the "technological singularity" has it that as AI approaches human-level intelligence by design, the AI's ability to design more intelligent versions of itself will grow exponentially, and human intelligence will quickly become inferior.
This is to some a very hopeful and to others a very scary proposition, and it plays right into the idea of potentially rebellious machines. As some thinkers have noted, there is no reason to assume that AI will share the cultural and moral standards of its human creators, despite their best efforts to program this in. This artificial form of intelligence, divorced from biological origins, may lack the emotional and contemplative side of intelligence that humans have developed over evolutionary history, especially when its rational capabilities are so vast that there is little need for the emotional shorthand that has been so essential to the evolution of species: instincts such as nurturing, empathizing and protecting.
There is great hope and promise in the development of autonomous robots, and barring any world catastrophe or unforeseen technological barrier, it is likely that we will have intelligent humanoid robots soon. Even Google has announced, though laconically, that it is getting into the game, acquiring several Japanese and American robotics companies and pulling from its deep talent pool to achieve some undisclosed purpose. With the future fast approaching, we are looking at some heavy philosophical contemplation on our part, and if we did the job correctly, on the robots' part as well. We'll have to hope that Google lives up to its motto, "don't be evil."