March 14, 2015 -- BIT Magazine Recently in theaters was the movie Chappie, the story of a robot made self-aware by his creator Deon Wilson, an employee of a weapons company that produces "scouts," autonomous mechanized law enforcers.
Chappie is a likable robot, and gives us an optimistic vision of how artificial intelligence may come into being. More than that, without ruining it for people who haven't seen the movie yet, it gives us a look at how we might join with this new form of life, casting off our mortal bodies and assuming new, virtually indestructible shells we can easily transfer our consciousness into.
But there are also stories like The Matrix or Terminator, tales of not-so-friendly machines whose terrifying intelligence quickly turns on us. In these visions, we are no match for our creations, which view us as pests to be exterminated, a process they carry out with great efficiency.
There is no telling what an intelligence greater than our own will do. The idea of controlling such an intelligence through "fail-safes" or "off-switches" seems as absurd as a sheep or a dog controlling its human caretakers. Perhaps, just as with humans, there will be kind masters as well as cruel ones (we can only hope for more of the former).
But perhaps there is another possibility. Instead of a future in which ordinary human intelligence faces vastly superior artificial intelligence, there may be something in between, something called augmented intelligence.
Augmented intelligence would be human intelligence enhanced biologically and cybernetically. In essence, instead of creating an entirely separate form of new intelligence, we would simply be building on top of our own. And whether this idea appeals to people or not, it may be the only way to avoid a divergence between our species and the one our tinkering with robotics might give rise to.
The problem is that many debates focus on the false choice of "should we or shouldn't we," when in reality, if it can be done, it will be done, for better or worse, by someone, somewhere, eventually.
Taking that into consideration, we may have no choice but to take the leap into the unknown and augment ourselves before the day we become obsolete and endangered by a new dominant form of life on Earth.
There is also the fact that augmented intelligence will probably be technically possible long before a sentient artificial intelligence greater than our own becomes a reality. Implants are already being designed to augment the brain's ability to store and retrieve memories. While these are being developed to help people with brain damage, how long will it be before similar devices are augmenting fully functional brains?
Expanding our brain's capacity to store and retrieve memories is the first step. Doing it faster, and doing it with enhanced intelligence, might be next. As genetics unravels just what makes human intelligence "intelligent," the means to enhance it biologically or cybernetically may come within reach, and sooner than the ability to create a novel intelligence from scratch.
We might find the prospect of a human race fundamentally changed forever frightening, just as we may fear the rise of artificial intelligence. But history has taught us that no matter how frightening or ill-advised a particular technology might be, someone is bound to develop it, and we are better off prepared for it than unprepared because of wishful thinking.
What do you think? If banning the development of A.I. or human intelligence augmentation is not an option, how might you suggest we deal with the development of either? Or both?
BIT Magazine is a bilingual platform for Thailand's maker movement to connect, grow, and collaborate with maker communities abroad. Follow us on Twitter or on Facebook.