Theirs Is the Glory: Robots in 2015

I have really high hopes for Chappie, the new Neill Blomkamp movie dropping in March next year. The trailers look great. Most importantly, the movie appears to be steering away from the increasingly tired “robot rebellion” cliché and focusing on actually creating a new robot character. I’ve heard some people making comparisons to Short Circuit, but I have a feeling that Chappie is going to be quite different – deeper, darker, and more emotional.

Check out the trailer below.

We’re actually in the middle of a little golden era for robot fiction.

IDW is doing phenomenal work elevating the Transformers franchise through their comic releases (especially Roberts’ work on More Than Meets the Eye, which brings some of the strongest conceptual additions to the franchise since Furman’s original UK run). Archie Comics, likewise, is doing great work with their Mega Man line, creating a sharp, sophisticated, and surprisingly deep comic that legitimately works for readers of all ages. Autómata, terrible as it was, came out this past year. Almost Human, though ultimately a failed project, did get robots and futurist concepts onto mainstream television for the early part of 2014.

And in the real world, Aldebaran is branching out from Nao and Romeo to launch Pepper, its third-generation social robot. Boston Dynamics has been dominating humanoid robotics research with Atlas and sending BigDog into military field trials. Google has spent 2014 buying robotics companies, including Team SCHAFT, whose robot completely crushed its competition at the DARPA Robotics Challenge Trials last year.

Of course, in 2015 we have the highly anticipated Age of Ultron dropping – the next installment of Marvel’s Avengers franchise is sure to be a gigantic smash – and Ultron is already showing great promise as a new fan-favorite villain just from the first teaser trailer. It’s going to be an interesting box-office battle between the childlike, innocent Chappie and the worldly, bitterly malevolent Ultron. Then we have the DARPA Robotics Challenge Finals in June.

Now is the time to start getting really serious about robots. There’s never been a better time to jump in. Now is the time to dream about where we go next and imagine what we can still become, together with our mechanical friends.


Artificial Intelligence and Ethical Agents: We’re Still A Long Way Away

I got excited about this too – at first – but it actually points out some significant flaws we have in developing adaptive intelligence systems.

Let’s use the fictional robot Mega Man X for our case example of the ideal ethical robot, versus what we currently have in the real world. Mega Man X was designed by Dr. Thomas Light to be able to be a ‘true individual’ – to ‘think, feel, and make decisions on his own’. This means he has an incredibly robust and dynamic ability to learn on his feet as situations happen, just like humans do. Because Light was concerned about both human abuse of X and X’s potential for catastrophic malfunction (or poor decision making combined with onboard weapons capabilities), X was sealed into a training capsule intended to perfect his AI over a 30 year period, with a warning not to disturb the capsule until the training program had been able to fully run its course.

In fact, X’s 30 years of burn-in training in the capsule was designed precisely to counter the problem indicated in the article – the indecision and hesitation. He basically sat there learning (probably through brute-force repetition of the same events and challenges, over and over) “right” from “wrong”. Over time, his learning algorithms would have enabled him to start making better choices as he experienced the same events repeatedly and accumulated more context for his choices.

This is not an unrealistic way to train a robot, but Light is also correct that the scale of this learning for a truly sophisticated AI (especially a monstrous AI of the complexity of X, designed to FULLY emulate a human person) would take decades, not days. And even after 30 years, when X came out of the capsule, he STILL tended to hesitate at critical decision junctures, causing him to be considered less efficient.
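That kind of burn-in can be sketched very crudely in code. The following toy Python loop replays the same scenarios thousands of times, nudging a score for each (situation, action) pair toward the outcomes it keeps seeing – a bare-bones stand-in for repetition-based learning, not anything from the article. Every name, scenario, and reward value here is invented for illustration.

```python
import random

# Toy "burn-in" training by brute-force repetition: replay the
# same scenarios many times, nudging a score for each
# (situation, action) pair toward the reward observed.
SCENARIOS = {
    # situation: {action: reward}
    "human_in_danger": {"rescue": 1.0, "ignore": -1.0},
    "obstacle_ahead": {"go_around": 1.0, "push_through": -0.5},
}

def train(epochs=10_000, lr=0.1, seed=0):
    rng = random.Random(seed)
    scores = {(s, a): 0.0 for s, acts in SCENARIOS.items() for a in acts}
    for _ in range(epochs):
        situation = rng.choice(sorted(SCENARIOS))
        action = rng.choice(sorted(SCENARIOS[situation]))
        reward = SCENARIOS[situation][action]
        # Nudge the stored score toward the observed reward.
        scores[situation, action] += lr * (reward - scores[situation, action])
    return scores

def best_action(scores, situation):
    """Pick the highest-scoring known action for a situation."""
    return max(SCENARIOS[situation], key=lambda a: scores[situation, a])

scores = train()
print(best_action(scores, "human_in_danger"))  # rescue
```

The point of the sketch is the sheer repetition: the “right” choice only emerges after the same event has been experienced many times, which is exactly why a sufficiently rich AI would need years in the capsule rather than days.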

Coming back to our real-world robot as described in the article above… Without context (a back-end database in the person or robot consisting of memories/previous connections to stored experiences, plus a vast array of other variables that can be called upon at a moment’s notice), all decision making is inherently flawed. Among other issues, this robot was unable to decide what to do because it had no context and no way to weight its decisions.

A weighted decision looks like this: “Saving one human is more important than saving none.” The machine was given one goal of equal weight – “save humans”. It was not given a goal of “save ANY human you can.” These, in AI, are VERY different functions and have to be spelled out explicitly. We would have seen different behavior if the program was explicitly told to:

a) save the most convenient/closest human
b) save a human even if you can only save one
c) save both humans efficiently without touching either – and then give it a number of options including “fill the hole” or “place a barrier in front of the hole”.

Of course, then you would have to weight those options in terms of efficiency – and build the program to take the most efficient option each time. This still isn’t intelligence; this is basically giving the robot a blueprint and then having it mindlessly obey.
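To make the difference concrete, here is a minimal Python sketch of that kind of weighting. With a single equal-weight goal (“save humans”), two rescue options tie and the chooser has no basis to act; explicit weights break the tie. The option names and weight values are all made up for illustration – this is the blueprint-following behavior described above, not intelligence.

```python
def choose(options):
    """Return the highest-weighted option, or None on a tie.

    `options` maps option name -> weight. A tie at the top
    models the robot's indecision: no basis to prefer either.
    """
    best = max(options, key=options.get)
    top = [o for o, w in options.items() if w == options[best]]
    return best if len(top) == 1 else None

# One goal of equal weight ("save humans"): indecision.
equal = {"save_human_A": 1.0, "save_human_B": 1.0}
print(choose(equal))  # None – no way to prefer either rescue

# Explicit weighting (e.g. closest human scores highest):
weighted = {"save_human_A": 1.0, "save_human_B": 0.8,
            "fill_the_hole": 0.6, "place_barrier": 0.5}
print(choose(weighted))  # save_human_A
```

Notice that the “intelligence” lives entirely in the hand-assigned weights; the program itself just picks the maximum.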

This line from the article is key: “Though it may save others according to a programmed code of conduct, it doesn’t understand the reasoning behind its actions.” That’s not AI, and it’s not true learning.

We are very far away from the goal right now. At best, we are crawling toward it, like an infant who still falls on its face when it can’t coordinate its hands and knees. Right now, our robots can ONLY do exactly, precisely, hilariously, fatally, what we tell them to do – and nothing more.

Hey, Read This! #2

The Economist just dropped a 14-page special report on the current state of robotics in their March 29–April 4, 2014 issue. It’s a plain-English overview covering state-of-the-art models like SCHAFT, robot ethics, drone concerns, military uses, and home-care uses.

Although I find the header section titles just a bit creepy and right-wing dog-whistly, the information in the article is solid. The article is not so much for tech-heads, though; it doesn’t get into the well-worn problems already known within the field.

We’re a long way from true AI, and still a long way even from self-directed, drone-level humanoids in the home – but what was that saying? “The future is not set”?