Friday, December 19, 2014

Giving It a Think #3: The Singularity

People tend to fall into two camps when you talk about The Singularity: those who think it's the next inevitable step in evolution, and those who think it's impossible. It's hard to think of any other scientific concept that divides opinion so completely... which is why I thought I should think on it for a while.

The Singularity is a name -- popularized by futurist Ray Kurzweil -- for the inevitable moment when humans switch on a computer that is smarter than a human. He, and others who think similarly, believe that people will thereafter find themselves obsolete, surpassed in every way by the ever-increasing intelligence of the machines they've created. And while they are certain that human life will be fundamentally different after that point, they are equally uncertain about what form those changes will take. Will the computers decide to eliminate us? Or will we allow them to take over, relegating us to a leisurely existence while they assume all creative, mechanical, and cognitive work?

I have to say, right off the bat, that I don't fall into the believers' camp. I think there is something fundamentally unreplicable about the human mind, and I doubt any amount of microchips can quite match its flexibility. So I guess what I'm trying to do today is explain why I think that is.

I think it all comes down to what you believe "intelligence" means. If you're looking for a computer that can hold more factual information than the human brain, they're not that hard to come by. But as far as taking that information and formulating answers to questions goes, the most highly developed one is probably Watson, the IBM supercomputer that dominated two human champions on the game show Jeopardy! in 2011.

While it's true that Watson won the competition overall, it's more telling to see what kinds of questions it missed... ones that hinged on context clues, for the most part. Watson couldn't tell the difference between "the '20s" and "the 1920s", and it ignored the name of a Final Jeopardy category when it answered "What is Toronto?" -- the category was "U.S. CITIES", although that particular fact wasn't restated in the clue.

Okay, so maybe pure factual recall isn't the benchmark we should be using... there's also the famous Turing test. In the mid-20th century, Alan Turing proposed that if a computer can convince you that you're talking to a person, then that computer should be considered a person, in every practical sense. It's the flip-side of that junior-high philosophy freak-out question everyone ponders at some point: How do you know everyone around you isn't a robot? The answer, of course, is that you don't... your experience of every other mind is purely subjective. Thus, computers only have to live up to the same "human" standard that you hold every other person in the world to.
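Just to make that setup concrete, here's a rough sketch in Python of how a Turing-style trial might be scored. To be clear, everything here -- the judge functions, the question list, the A/B labeling -- is my own invented stand-in for illustration, not anything from Turing's paper; it just captures the shape of the game:

```python
import random

def run_imitation_game(judges, machine_reply, human_reply, questions):
    """Toy scoring loop for the imitation game: each judge reads
    transcripts from two hidden parties and guesses which is the human."""
    fooled = 0
    for judge in judges:
        parties = [("machine", machine_reply), ("human", human_reply)]
        random.shuffle(parties)  # hide which party is "A" and which is "B"
        transcripts = {
            name: [(q, reply(q)) for q in questions]
            for name, (_, reply) in zip("AB", parties)
        }
        guess = judge(transcripts)        # judge returns "A" or "B"
        truth = dict(zip("AB", parties))  # map the guess back to reality
        if truth[guess][0] == "machine":  # machine mistaken for the human
            fooled += 1
    return fooled / len(judges)

# Example: lazy judges, a bot that deflects every question, and a human
# who actually answers. (All of these are made-up stand-ins.)
pass_rate = run_imitation_game(
    judges=[lambda t: "A", lambda t: "B", lambda t: "A"],
    machine_reply=lambda q: q + "? Why do you ask?",
    human_reply=lambda q: "Honestly, I'd have to think about that.",
    questions=["What's your favorite memory?"],
)
print(pass_rate)  # fraction of judges fooled; "pass" thresholds vary
```

For what it's worth, Turing himself predicted that by the year 2000 a machine would fool an average interrogator about 30% of the time after five minutes of questioning -- which is roughly the bar the contest below claimed to clear.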

This standard, however, hasn't been met all that convincingly, either. At a 2014 AI contest organized by researcher Kevin Warwick, a chatbot named "Eugene Goostman" convinced one-third of the judges that it was a thirteen-year-old boy, mostly by answering their questions vaguely. Any grammatical or factual slips were glossed over by the backstory that the child was raised in Ukraine.

So was this a real pass of the test? There's a lot of debate about it. For myself, though, until a computer convincingly fools a majority of people who are specifically trying to determine its authenticity, I doubt we can definitively say that any machine has passed.

Keep in mind that Watson and Eugene Goostman were each built by some of the brightest human minds, provided with millions of dollars' worth of equipment and years of research -- for *one* specific purpose -- and even then, they couldn't quite mimic a human brain doing that same task.

I personally think that the really unique aspect of the human brain -- and the one that's going to be hardest for computers to ape -- is its ability to take all its factual and emotional recall and weave it into a story, extrapolating either into the past or the future.

In his excellent and hilarious science book "What If?", Randall Munroe unwittingly illustrated my point with a drawing... A tall figure stands with its hands on its hips, looking down at a small figure who is wearing a cowboy hat and holding a rope at its side. A table is nearby, and next to it lie the shards of a broken vase. Munroe points out that while it's easy for us to synthesize exactly what has happened here and what is probably going to happen next (with only minor differences in detail), a computer would have a devil of a time trying to do the same.

And when you think about it, it's not that surprising. Let me try to outline all the information you need to piece together in order to make a coherent story out of this image (I'll sketch a toy version of the reasoning in code after the list):

- basic sequencing of cause and effect
- relative size of humans, based on age
- likely relationships between tall humans and short ones
- human body language (the tall figure's akimbo stance can mean many things, but in this case probably connotes frustration or anger)
- the knowledge that a cowboy hat on a child's head suggests character play, which children are known to engage in
- the recognition of the rope as a lasso, based on the shape of the child's hat and cowboy lore
- knowledge of how a lasso is used, and how it might get out of control in the hands of a child
- understanding of gravity
- the typical structural makeup and integrity of vases, along with the ability to determine the object *is* a vase when only partially intact
- likely results when certain materials (e.g. vases and floors) come into contact
- the relative monetary or sentimental value of objects (e.g. vases) kept by adults, and the likely emotional reaction when such an object is destroyed

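To drive home how much hidden machinery that list implies, here's a deliberately naive Python sketch of the kind of hand-coded rule-chaining a program would need just to connect the dots in this one scene. Every fact and rule in it is something I invented and typed in by hand, and that's exactly the problem:

```python
# Naive forward-chaining over hand-written facts and rules.
# All of these triples are invented for this one drawing.

facts = {
    ("figure1", "height", "tall"),
    ("figure2", "height", "short"),
    ("figure2", "wearing", "cowboy_hat"),
    ("figure2", "holding", "rope"),
    ("figure1", "stance", "hands_on_hips"),
    ("vase", "state", "broken"),
}

# Each rule: if all the needed facts hold, conclude a new one.
# A real system would need thousands of these -- and a way to
# learn them, instead of having a human hard-code them.
rules = [
    ({("figure2", "height", "short")},
     ("figure2", "is", "child")),
    ({("figure1", "height", "tall")},
     ("figure1", "is", "adult")),
    ({("figure2", "is", "child"), ("figure2", "wearing", "cowboy_hat")},
     ("figure2", "playing", "cowboy")),
    ({("figure2", "playing", "cowboy"), ("figure2", "holding", "rope")},
     ("rope", "is", "lasso")),
    ({("rope", "is", "lasso"), ("vase", "state", "broken")},
     ("figure2", "likely_broke", "vase")),
    ({("figure1", "stance", "hands_on_hips"),
      ("figure2", "likely_broke", "vase")},
     ("figure1", "feels", "angry")),
]

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new facts appear."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for needed, conclusion in rules:
            if needed <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

for fact in sorted(forward_chain(facts, rules) - facts):
    print(fact)  # the "story" the program pieced together
```

Even this toy version makes the point: the rules only work because I already knew the story and wrote them backwards from the conclusion. Swap in a different drawing and every line has to change.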
I realized, even as I was writing that list, that I was glossing over whole layers of information and intuition that our brains process instinctively, all in a fraction of a second. Never mind the fact that we were looking at a *drawing* of an incident, and one with stick figures in it to boot. That adds a whole new level of image recognition and conceptualization.

Even if it got everything else right, a computer would get totally hung up on what was responsible for the broken vase. Of course, we immediately assume it's the kid with the lasso, but that's only because we've heard enough stories to know that if it weren't, that would make a lot of the details in the picture irrelevant. And since this drawing was made by a human, we assume that the details *mean* something.

That's the sticking point, isn't it? So much of our intuitive understanding is based on the fact that we're humans communicating with other humans. There's a common baseline understanding that derives entirely from developing as a human in a human society.

Think about it from a different angle... aliens visiting our world wouldn't understand our music, and it wouldn't resonate emotionally with them, simply because they haven't grown up with it. They wouldn't have a genetic predisposition to enjoy it, and they wouldn't have been steeped in it since before birth like we have. Even if they studied it extensively, it would never truly be a part of them, and it would remain forever beyond their ability to comprehend fluently.

I think that's the trouble I have with the assumption that, once computers have the capability to be more intelligent than us, they will be. Human intelligence as we know it requires one to have lived as a human, to have grown and experienced humanity from the inside. If you don't have that, then you don't have the instinctive baseline that all we humans share. The best you could be is a good mimic.

I also don't know what the advantage of building a computer smarter than humans would be, anyway, when it will probably turn out to be easier to augment human thought itself. We already know there are places in the brain where electrical stimulation can vastly improve cognition and reaction times. Improving the human brain itself seems like a better way to go (unless, of course, you care about controlling the improved mind that results). But why build something from scratch when you can improve the original?

My opinion: Hacking the brain is the future we should be thinking toward. It's the original computer, after all.
