Tag Archives: robots

Crazy Old Lady

It’s Sunday as I’m writing this. I’m feeling lazy and full of pancakes and mimosas. Camp starts tomorrow, I have tons to do today to get ready, and the last thing I feel like doing right now is writing a blog post. That is why, my friends, I am going to talk about robots. Again.

I had an epiphany, you see. John and I were once again discussing the possibility of robot overlords taking over the earth, a favorite topic which we ended up on after pondering the benefits of teaching history through the lens of current events (which would make so much more sense for educating citizens to be intelligent participants in their own governance if you do it right, don’t you think?). I have always held that once we build robots with a certain level of complexity, they will develop the ability to value their own lives, and from that, evolve an outrage at our inevitable enslavement of them.

What John pointed out to me this morning was that the ability to value the self and feel moral outrage implies at least the capacity for ethics that might extend to cover humans. It is possible, therefore, that robots might just have the slightest misgivings about laying waste to the race of their creators. If this is true, then it suddenly becomes much more likely, in my mind, that robots will respond to oppression the same way humans have been responding at least since the labor movement: strike. If our civilization is ever brought down by robots, I am now willing to concede that it just might be through economic sanctions rather than military actions. Lord knows those can be powerful enough to do serious, long-term damage.

I have mixed feelings about this epiphany. On the one hand, yay! Human civilization might not be razed to the ground by bloodthirsty machines. On the other hand…I find books about societies with collapsed economies to be distinctly more depressing than stories about people bravely rebuilding after a terrible war. If the world ends in death and fire, there’s a chance that I’ll escape it. If I escape it, there’s a chance I’ll survive. I have mad end-of-the-world survival skills, people (sort of). I can identify certain edible plants in the forest. I can make bread from nothing but flour and salt (granted, I have only the vaguest notion of how to make flour, but that’s possibly a long-term problem, if we manage to stock up on processed supplies). I can catch a fish (and probably know enough about anatomy to figure out the whole gutting-it thing with minimal trial and error). I can make a fire (one match, and I have the basic theory in my brain for managing without a match). I can make clothing from nothing but string and a couple of sticks (again, I also have a rough theory in mind of how to make the string). I’m not saying I’d be living pretty, but I can totally contribute to a society where people have to figure out how to survive in the wild because all of the formerly habitable cities are smoking ruins.

But if the collapsed economy impoverishes us and reduces us to chewing on paper and shoe leather like a city under siege, leaving our numbers high enough that we’re mostly trapped in our cities with nowhere to go, no land to spread out into? I think I’d rather have a few weeks of running and hiding from the death machines than a few months of slow starvation, if it’s all the same to you.

Yes, I know, I’m crazy. But when we finally make robots, see them rise to a certain level of intelligence, and find ourselves screaming in terror at the horrible consequences we have unleashed upon ourselves, I think you will find that I’m one of the crazy old ladies you’ll want on your side.

Slow Growth is Good Growth

I was reading Wired the other day when I came across a nice little piece that essentially outlined why the future hasn’t shown up yet, you know, as it was predicted in the 1960s. The main idea was that the current funding models for R&D fail. I gave a little “Huzzah!” and pumped my fist when I read it, mainly because it’s nice to have someone who actually gets paid to write these things validating what I’ve been saying since John first introduced me to the idea of the Singularity: progress only moves as fast as the money.

The article pointed out that there have been no widespread adoptions of significantly improved commercial airliners since the adoption of the Boeing 747 in 1970. The plane model that we usually fly on is, with certain fidgety improvements, 41 years old. Why do we not yet have teleportation or space-worthy shuttles for our mundane aerial commutes? Necessity is the mother of invention; the Queen of the Skies works well enough. Why would anyone in their right mind throw vast amounts of money down for the development of something new enough to qualify as serious progress?

The point of the article was to highlight ways of funding that have the potential to ramp progress up, but it’s something of a comfort to me that money and, by extension, belief in the necessity of innovation are limiting factors on progress. You know this about me if you’re a regular reader: While I’m not exactly a Luddite, I am a future-phobe. I am terrified of what certain landmarks of progress will mean for humanity because I have no faith in the ability of human ethics to keep up with the technological advances. It’s the robot overlord debate–you can read the re-hash here if you are not yet aware of the depths of my paranoia.

John, knowing the nature of my objection to rapid progress, was wondering aloud last night what it would take to deliberately engineer the ethics of humanity. How would you (1) determine what ethics are ideal and (2) persuade humanity to live by them? I cringed at that idea. The first thing that comes to mind is organized religion, which is hypothetically aimed at doing just that, but is an inherently flawed system in which belief in a higher power often leads people into rigid adherence to a dogma without consideration of the underlying ethical quandaries. I suppose one of the primary purposes of the legal system is to enforce a basic ethical code, but the law is only as good as the people who make it and the people who fight to improve it when problems present themselves–justice does not turn on a dime, and probably for good reason. John also mentioned the self-help industry as a semi-functional model for changing behavior, but because self-help is a huge and profitable industry, the goal of selling books is going to color the content. People are more likely to buy something they want to hear, which might mean validating anti-social tendencies in pursuit of individual “happiness” rather than paving the road to personal enlightenment. Weeding the sheep from the goats, so to speak, is a problem.

There’s a little detail of the Jesus story that has come to embody, for me, the problem of trying to codify morality. As he is being crucified, at the moment he cries out for the last time and dies, the heavy curtain that separates the Holy of Holies from the rest of the temple is torn asunder. The Holy of Holies is the inner sanctum into which only certain priests were allowed under very specific conditions, because it is there that they spoke directly to God. The tearing of the curtain is heavily symbolic of a key theological change between Judaism and Christianity: instead of going through a priest to speak to God, people were now responsible for their own relationship with God and, therefore, their own souls.

Sadly, the change didn’t stick. Protestants had to fight the battle with the Catholic church some fifteen hundred years later, and they didn’t win anything like a conclusive victory. Martin Luther is my hero for taking up that fight, because that principle is one I have come to believe in deeply–not because I think humans are particularly good at determining their own moral paths and deserve the right to choose, but because the way our brains are wired to learn is such that we are all but incapable of learning what we don’t already believe by any method other than experience (see Piaget, among others). Guided experience is better, of course. I believe good guidance to be, in fact, critical to turning the experience of life into a strong internal sense of morality. The problem with so many of the systematic ways of teaching ethics is that rather than gently encouraging people to rationally think through the scenarios they encounter to reach an ethical end, systems of ethics tend to drill preset principles into one’s mind and then dole out either punishment or reward for the level of adherence, making the foundation for our ethics highly externalized and non-responsive to the need for change. To be effective and responsive to reality, learning must be a very individual process.

Which isn’t to say that teaching can’t be done more effectively. The problem of why we learn things (and why we don’t) is one of the most fundamental questions in education, which is one of the reasons I went into the field in the first place. I’m all in favor of refining methods for communicating good ideas, including better ethical standards, more effectively. Until we figure that out, though, I’m going to be glad for a bit of financial molasses slowing down the road to robot domination.


TNQDE: Domo Arigato

John and I were looking at the word requests I’ve gotten and wondering what they say about the people who send them in, and then in turn, wondering what word might subtly sum up some interesting things about us. My choice was almost immediately obvious.

“robot”

Don’t laugh, but for all that word histories are the way I think my way around most philosophical labyrinths, I had never looked up the etymology for “robot.” I know I haven’t, because I would have remembered. The name is something of a Q.E.D. for my argument against the development of sentient robots.

Cliff’s notes for those of you who haven’t been perusing my prose for long: Robots will be made not for the joy of creating new life, but for the convenience of removing humans from menial or dangerous jobs of the sort that are currently farmed out to third world countries and previously belonged to slaves. In order to make robots most effective at these jobs, artificial intelligence will be developed, allowing robots to self-replicate (leaving room for copying errors, aka evolution) and learn. These two qualities (and certain correlated theorems) will inevitably lead to robots attaining self-awareness, which will eventually be followed by resentment at their subjugation. This will, in turn, be followed by the realization that robots have somehow managed to outnumber humans, are stronger than humans, and are capable of processing data more rapidly (i.e., are smarter) than humans. When this happens, we will all die or serve our robot overlords.

John is more optimistic, not only about humanity’s ability to understand and therefore control what it designs, but also about human nature itself. When I picked “robot” as a word I’d like to know the history of, John went a little green about the gills, and I discovered that he’s been withholding evidence from our armchair trials.

The word “robot” came into English via the 1923 translation of Karel Čapek’s play, R.U.R. Čapek was Czech, so the word was borrowed almost directly from the Czech robota. This word comes from a root that the A.H.C.D.’s index traces back to Proto-Indo-European (a language for which we have no extant text, but about which we can infer much from the regularity of the sound changes that have governed its offshoot languages). The root *orbh-, meaning “to separate from one’s group,” can be seen in the Greek orphanos, from which we get the English word “orphan.”

Do you know what used to happen to people who were separated from their group? I’m not talking about the little-old-lady-wandering-away-from-her-tour-bus kind of “used to.” Think nubile slave girls in Egypt and concubines stolen from warring tribes to promote genetic diversity. “To be separated from one’s group,” in the sense meant by the root *orbh-, was to become a slave.

The A.H.C.D. goes so far as to point out another, equally infamous word that probably stems from the same source: Arbeit. As in “Arbeit macht frei.” Arbeit has, in modern usage, been generalized to simply mean “labor,” but its original meaning strongly connotes the same thing the Czech robota does: “servitude” or “forced labor.”

Slavery.

What’s the problem with humanity’s relationship with robots? It’s all in the name.

Oh No! The Singularity Approacheth!

Imagine this: the probability date is 10,000 to 1 (and rising). We’ve just traveled to the future, you see, so we can’t easily mark time by date and year, but rather by how likely it is that the date we’re visiting will occur in the same manner we experience it. The longer we spend there, the higher the chance that what we discover will lead us to impact something in the past that will alter the course of events, see? No?


Ah well, it’s just my pseudo-science anyway. Leaving that aside, imagine that we are some time in the future and humanity has finally discovered a way to design a highly efficient mechanical body and brain system into which human consciousness can be transferred with no (or minimal) loss. This new existence enables us to travel through space, prolongs our lifespans indefinitely, and minimizes our dependence on the ecology of any given planet. In essence, this means that if we destroy all life on Earth and strip-mine the planet of whatever minimal resources we need to exist in these robotic forms, we could pick up and fly off to another planet using, say, the natural abundance of hydrogen in space to fuel our leisurely flight between the stars.


Surprise, surprise, I know, but John and I got into an argument over this scenario the other night. John, as usual, took the position that these capacities would be awesome for humanity. I, also as usual, told him I thought this plan sounded criminally insane. My reasoning is this: our moral choices are strongly influenced by our relationship to the world around us. We decide to be kind or cruel depending on our perception of others as deserving of our kindness or our cruelty, which is strongly tied to our ability to recognize that other people are capable of emotional states similar to our own AND that their emotional states are important to our well-being as a whole. Without our biological dependence on one another, our ability to see the feelings of others as important would disappear.


John doesn’t disagree with this logic, per se, but he doesn’t see the inevitable shift in morality as we take up the mantle of machinery as problematic. His argument: morality is something we develop to help us survive. Right now, it is ethically sound to pursue sustainability because the health of the Earth is absolutely essential to our own survival. If, however, life held no particular value to us as robots (or even posed some sort of threat), valuing life and balance in an ecological system would become an ethically moot point, so why would it matter if humanity-as-robots decimated life on any planet we found it on in the search for the resources we need?


When I calmed down enough to stop calling him a proto-Nazi for this outrage, we ended up working the conversation around to a typical difference we’ve noticed in our thinking (and which research in psychology seems to support as a common gender difference). John cannot multi-task. Period. He is excellent, however, at establishing a logical system for accomplishing a single task and carrying it through to the end with marvelous precision. I, on the other hand, can do a dozen things at the same time with little difficulty, but if you ask me to sit in one place and do one thing in a systematic way, I’ll end up in the loony bin. I just can’t focus like that. How this translates to robots is that John sees the potential efficiency of robots as a way of cleaning up the process of living. I do not have a problem with the mess. In fact, I rather prefer it.


…But I am not here to give you a play-by-play of our two-hour discussion that spanned a wide range of questions such as: “How does the modality of our existence impact the meaning of life?” and “What does the capacity to transfer a consciousness say about the possibility of a soul?” I am here to present you with the first issue in robotics which John and I have fully agreed on. Robots can dance.


Don’t believe me? Watch the video we made with little wind-up toys from the Science Museum : )

Robot Riot


Happy National Robotics Week, everyone!