Androids, Cyborgs, & Transhumanism. Oh My! — To Err

I think that this will be my last post in the series Androids, Cyborgs, and Transhumanism. Oh My! … at least for a little while. It is not for a lack of content; honestly, my room and data troves on the Airship are full of it. I’m drowning in it! There are many things that I want to include, but they will take some time to get through, process, and research further.

But before I put a temporary cover on this rabbit hole, I want to do a recap of everything I have looked at so far — connect the dots, if you will.

So let’s see what we have.

With this series I decided to explore what happens to the essence of humanity in semi-artificial (or fully artificial) beings. Is it preserved, or is it lost? I conceded that there were actually a lot of different facets to this question that we could look at, because there isn’t just one thing that makes someone “human.” And before moving on, I would like to speak to this point a little more.

I chose to look at one thing, one aspect, when considering whether an essence of humanity was preserved. Compassion. But other aspects were mentioned throughout; the most frequent was the capacity for dreams and aspirations, a self-derived goal that one tries to achieve.

As I went through the various stories and characters used in this series, though, I found that at its core this issue is very subjective, but not necessarily in the “eye of the beholder” sense.

Here’s what I mean by that:

Take Do Androids Dream of Electric Sheep? for instance. The androids in the story (the “replicants” of the film adaptation, Blade Runner) thought that they were human, or at least that they should be treated as such. It was their prerogative to act according to what they thought it meant to be human. So who am I to say that they aren’t?

“Come now, Bookkeeper!” I hear the internet say. “Just because someone says they’re Batman doesn’t make them Batman! But you’re saying that if they believe they are human, then why can’t they actually be human?”

To address this statement, fabricated just for the sake of argument, I want to bring in a new story: Bicentennial Man. Without getting too deep into it, Bicentennial Man is the story of a robot who strives to be human. Throughout his “life” he undergoes multiple procedures to try to replicate being human, both biologically AND emotionally. By the end of the story, he has real blood pumping through his veins and is with the woman he loves. In this story, there is the personal struggle the main character has with what it means to be human, but there is also an external struggle to be recognized as human by the rest of humanity.

While this is a work of fiction, it still speaks to a very real problem. Let’s face it: for most of civilization’s history, groups of humans have been VERY strict about what it means to be human, so much so that they could categorize certain groups as less than human and deny them basic rights they would grant to other humans just like themselves.

It happened with slavery and later with the civil rights movement. It happen(s)ed with conquistadors and imperialists toward native populations. It happen(s)ed with women. It happen(s)ed with religion. It could be argued that it still occurs today (notice the placement of the “s” in parentheses).

The point I’m getting at here is that humans have a pretty terrible track record when it comes to deciding what makes someone “human,” or at least human enough to be treated with the same dignity as any other human. But that also speaks to another facet of humanity: to err is human, right? Could it be possible that an aspect or essence of humanity is the ability to forget that you have it?

To bring it full circle: thinking you’re Batman doesn’t make you Batman, necessarily. But if historically I have trouble distinguishing who Batman really is, maybe I shouldn’t be telling you that you aren’t.

Moving on.

Let’s get back to compassion for just a moment. I chose this “essence” of humanity because I felt it could not be replicated or programmed as easily as other, simpler emotions. Compassion, as an extension of sympathy and empathy, is in my opinion a much more complex emotion. As is love. As is hope. As is dreaming (aspiring might be a better word, though I don’t know if it can be considered an emotion).

As discussed in my last post, it is easy to depict compassion, or any human emotion, in fictional androids and cyborgs. The true test will come in the near future, when these things are fully realized scientifically. But we see similar things in real life today: prosthetics that can be controlled with actual signals from the brain. We would still consider these people human without question. And the similarity I see between the fictional characters and the real examples we have today is this: the brain.

Most of the characters we looked at throughout this series had some sort of “brain” that controlled all of their functions, both physical and emotional. But what if the brain that shows compassion, love, hope, or aspiration is artificial? Is the only difference between a “real human” and a “fake human” the presence of an “organic brain”? Or is the essence of humanity just sentience?

Even if the artificial brain (or computer, to push the boundaries a little further) is contained in a box rather than a body, if it has the ability to feel or perceive emotions, process them, and act on them accordingly, could we still consider it human?

I could keep asking pages upon pages of questions about this. Hopefully, what we have looked at and the stories that have been discussed have shown you that maybe this issue is not as simple as we would like to make it. Or maybe they have shown you the opposite: that it is really simpler than we thought.

As with most things, I do hope to revisit this in the future once I digest more stories that pertain to it. In the meantime, if you have any suggestions for relevant stories I should look at, please leave a reply!

— The Bookkeeper