Saturday, March 15, 2014

Them Ol’ Singularity Blues

My work computer just got upgraded. I've moved out of the realm of Windows XP and am fully upgraded to Windows 7 (yeah, I know). I also have a second monitor now, which I've never worked with before. It's started me wondering if I should put the analytical stuff (like spreadsheets) on the left screen and more creative stuff (composing emails, creating formulas) on the right, just so they come into alignment with the appropriate sides of my brain. Or maybe it's the other way around.

I can see an appreciable increase in speed, too. This must be a significantly more fiery brand of processor I'm dealing with. It's got me thinking about Ray Kurzweil's prediction of The Singularity, the point in time when computers are able to make as many computations per second as the human mind. In his theory, computers will then begin to advance beyond human thought, becoming more and more superior to us and eventually taking over to become the dominant life form on the planet. Kurzweil himself predicts this will happen around 2045. Whether the computers will then keep us around as organic power sources (like in The Matrix) or decide that we're a threat and kill us off (like Skynet in the Terminator movies) remains to be seen.

At the root of this theory is the question: if you make a computer as powerful as the human mind, will it act the same as a human mind? At first I didn't see how we could ever compete with a computer that could outpace us thought-wise. We now have computers that can play chess and perform surgery better than us -- what's to stop them from overtaking us in all other areas as well?

I think I've finally arrived at my answer, and it stems from something I thought about as a child. Back when computers only displayed the color green and were the size of a microwave, I thought of a way that a person might make an intelligent computer. All it would take, I reckoned, would be for someone to input the entire contents of a dictionary, with instructions for the computer to take any word it didn't understand and look it up. Any word in the definition that wasn't understood could also be looked up... and so on. In my kid mind, I figured this would make the computer able to comprehend anything it was told.

What I didn't realize until much later is that this process won't result in any kind of comprehension. All you'll get is a computer chasing its tail, endlessly looking up words whose definitions are just more words. And here is where I base my argument against Kurzweil, and declare that computers will never be able to supplant the human mind.
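That childhood scheme is easy to sketch. Here's a minimal toy version (the dictionary entries below are invented for illustration, not from any real dictionary):

```python
# A toy sketch of the childhood scheme: look up every unknown word
# in the definitions of every other unknown word. Definitions only
# ever point at more words, so the walk either cycles forever or,
# with a guard against revisiting words, simply closes over the
# whole dictionary without ever reaching anything outside it.

toy_dictionary = {
    "happy": ["feeling", "joy"],
    "feeling": ["emotional", "state"],
    "joy": ["happy", "feeling"],            # circular: joy -> happy -> joy
    "emotional": ["relating", "feeling"],
    "state": ["condition"],
    "relating": ["connection"],
    "condition": ["state"],                 # state <-> condition
    "connection": ["relating"],             # relating <-> connection
}

def chase_definitions(word, seen=None):
    """Follow definitions until every reachable word has been 'looked up'.
    Returns the set of words visited -- a closed web of symbols,
    not understanding."""
    if seen is None:
        seen = set()
    if word in seen or word not in toy_dictionary:
        return seen
    seen.add(word)
    for defining_word in toy_dictionary[word]:
        chase_definitions(defining_word, seen)
    return seen

visited = chase_definitions("happy")
# Starting from one word, the lookup eventually touches every entry,
# and every chain of definitions loops back into the dictionary itself.
```

Even with the guard against revisiting words, the search just bottoms out having "understood" nothing: every definition resolves into more entries, never into experience.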

We spend so much time marveling at what computers can do -- we carry more computing power in our pockets than all of NASA had when they did the moon launches in the '60s. They can tell us exactly where we are in the world at any moment, can extrapolate incredibly complex patterns far into the future and far into the past. They can design equations and forms more complex and efficient than we ever could. But let me talk about what computers *can't* do...

It boils down to just one thing. Computers can't know what it's like to be human. The more I learn about being a sentient being, the more I realize how unique an experience it really is. And having said that, it's the one thing that we share with every other person on the planet. We all know what it's like to be an individual who is still connected to others on many levels, and we share the common experience of running on basically the same physical and mental rules.

Creation of a computer that can emulate a human mind means, first and foremost, that we can understand the human mind thoroughly enough to program one. Frankly, I don't think we're ever going to be able to do that. We're getting closer all the time, of course, but reaching the day when we fully understand the infinitely complex interplay of genetics, psychology, and biological chemistry that makes a person recognizable as a person? I don't see it happening anytime soon.

How could a computer ever learn to associate the feel of grass on the soles of the feet with the carefree happiness of childhood? Or on the flip side, how could a computer ever figure out how to equate the absence of light with a desperate feeling of terror born out of no logical reason (by which I mean, fear of the dark)? So many things that humans do, norms we've adopted, can be traced back to origins that no longer apply to us. Take that fear of the dark, for instance. Humans tend to limit their societally meaningful activities to certain hours (i.e. daytime), simply because certain wavelengths of light are more prevalent then. That habit goes back to a time when you had to be able to see reliably while hunting and gathering, and avoid being hunted and gathered yourself in the process. A computer could follow these rules, but would never be able to understand them. If we are ever going to take seriously the threat of computers matching and maybe surpassing us, we have to assume that at some point they're going to have to fundamentally comprehend how to feel like a human.

And frankly, they just can't do that. Ever since we were born, each one of us has been soaking up human experience, making associations, drawing connections, making intuitive leaps of logic that work outside of language. For example, a person who writes a song is drawing on a lifetime of listening to music, on the rules they have learned about how it's done (and when to break those rules), and on emotional associations with thousands of pieces of music, based on where they were when they heard them, who they recall being with, and all the emotional circumstances around it. Again, a computer can be told that major keys are "happier" than minor keys, but it can't *feel* it. And that's only one of what must be thousands of little routines and subroutines that go off in our heads when we hear music. Most of which we aren't even conscious of.

The other part is chemical reinforcement. Think about something that you're good at, something you understand on a fundamental level. It's probably because you feel good when you think about it, or because of some kind of enjoyment you get from finding things out about it. You're passionate about it... but how do you program passion? You've got to admit that a person who really cares about a subject has a level of expertise that a person who simply knows everything there is to know about it doesn't.

And aside from programming passion, assuming that we can understand and emulate such things in computer code, what else is going on in our heads that we don't even realize yet? Only when the field of neuroscience has nothing left to discover might we have a shot at writing a computer program that can faithfully recreate a mind.

Here I'm going to make an assertion that everything we humans do that isn't physically keeping us alive... is art. And by that I mean the most fundamental definition of art: taking an idea and manifesting it in the real world. Seen in this light, everything is art. The tools we use, the things we make, even ideological structures like politics and religion. It's all art. And human-created art only means something to other humans. Because the only reason you and I can both appreciate or utilize the same kind of art is that we have the human experience to inform it, and not only make it comprehensible, but to make it worth comprehending.

The only computer that could truly hope to surpass a human would be one that doesn't even know it's a computer. Unless you have been born, grown and matured as a human, humanity is only an abstract concept. And that's something that no computer can ever understand. Then again, Kurzweil's whole desire to replicate a human mind in a computer is so that we can download the entire contents of our mind into one and live eternally, if virtually. But then, how long would you live this way before you lost your humanness?
