"As this report shows, robots with complete autonomy would be incapable of meeting international humanitarian law standards. The rules of distinction, proportionality, and military necessity are especially important tools for protecting civilians from the effects of war, and fully autonomous weapons would not be able to abide by those rules."
--"Losing Humanity: The Case Against Killer Robots"
Doing their part to slow down the coming Robopocalypse, the group Human Rights Watch has issued an unprecedented 50-page document asking governments around the world to "pre-emptively ban fully autonomous weapons because of the danger they pose to civilians in armed conflict."
Called "Losing Humanity: The Case Against Killer Robots," the document (which can be downloaded in full here) pretty much crystallizes decades' worth of trepidation over our artificially created "friends" -- a sentiment that, for the most part, has only gotten aired through the veil of sci-fi novels and movies. Let's face it -- even a few years ago, a serious researcher or organization writing a paper called "The Case Against Killer Robots" would have been laughed off as a kook.
But with the rise of drones, all of that has changed. Maybe it's because the triangular drones don't quite have the personality of Rosie from "The Jetsons" or Data from "Star Trek: The Next Generation." Though an SNL animated skit that aired just this weekend envisions a way to market drones as more "cool":
"The reason that brain reverse engineering has not contributed much to artificial intelligence is that up until recently we didn't have the right tools. If I gave you a computer and a few magnetic sensors and asked you to reverse-engineer it, you might figure out that there's a magnetic device spinning when a file is saved, but you'd never get at the instruction set. Once you reverse-engineer the computer fully, however, you can express its principles of operation in just a few dozen pages."

"Now there are new tools that let us see the interneuronal connections and their signaling, in vivo, and in real-time. We're just now getting these tools and there's very rapid application of the tools to obtain the data."

However, though he sees human-like AI as an increasingly possible development, he believes it is more likely that we will merge with the machines:

"But the most salient scenario is that we'll gradually merge with our technology. We'll use nanobots to kill pathogens, then to kill cancer cells, and then they'll go into our brain and do benign things there like augment our memory, and very gradually they'll get more and more sophisticated. There's no single great leap, but there is ultimately a great leap comprised of many small steps."

Of course, there's no way that such technology would ever be used by an unscrupulous government in such a way as to override one's basic humanity.

"In The Singularity Is Near, I describe the radically different world of 2040, and how we'll get there one benign change at a time. The Singularity will be gradual, smooth. Really, this is about augmenting our biological thinking with nonbiological thinking... we'll get to the point where bio thinking is relatively insignificant... it'll take time before we realize how much more powerful nonbiological thinking will ultimately be."
The funny thing here -- if there's indeed anything funny about all this -- is that we are truly and actively on the road to such scenarios, with scientists staring down the barrel of humanity's possible "assimilation" or "overthrow" by artificial intelligence with all the enthusiasm and forethought of Herbert West in "Re-Animator."
By the way, in case you're interested: "Robopocalypse" is set for a big-budget movie adaptation in 2014, directed by Steven Spielberg, who seems to be really hip to these trends.