Ghost in the Shell Paper and Analysis

 If we are part human/part machine, do Asimov’s rules still apply?

 

 

I. Introduction

 

Through the creation of his three laws of robotics, the science-fiction writer Isaac Asimov explored the relationship between humans and robots and the ethical code that might govern it, anticipating that future technology could bring some of his "fictional" ideas to reality. The first of these laws states, "A robot may not injure a human being or, through inaction, allow a human being to come to harm." This rule specifically addresses the danger that AI and advancing technology pose to the human race: the possibility that such creations could turn against their creators. The second law states, "A robot must obey orders given it by human beings except where such orders would conflict with the First Law." This law implies that even if an AI were to possess "thought" of its own, independent of its human counterpart, that thought could never justify disobedience. Finally, Asimov's third law, "A robot must protect its own existence as long as such protection does not conflict with the First or Second Law," establishes that some safeguard exists to prevent harm to the robot or AI itself. Through these laws, one can gain a better grasp of the moral judgments involved in the creation of robots and the possible dangers they bring to human civilization.

 

II. Thesis

 

One could argue that Asimov's laws do not apply because there would be no clear separation between the human and the robot. While Asimov's three laws of robotics do address the human's position as creator, the absence of any rules governing the human side of such a hybrid suggests that his laws are rendered null. Additionally, because the human exists in unison with their robotic components, the combined being would eventually develop an awareness of its own actions and existence, and that awareness would negate Asimov's laws.

 

 

III. Antithesis

 

Contrasting with the thesis, one could argue that if one were part human/part robot, the rules Asimov constructed would still apply: the new being's human and robot parts would act in complete unison, and so the being would still exist under, and be bound to follow, the moral code Asimov set out.

 

 

IV. Synthesis

 

Within the various films we have viewed over the course of this class, the idea that robots and humans can exist in harmony has proven problematic, given the violence and chaos that eventually culminate by the end of each film. Specifically, one can argue that Asimov's laws are not upheld, drawing on textual examples from Blade Runner, the Love, Death, and Robots series, and Ghost in the Shell.

In Blade Runner, Roy's character serves as an example of a replicant/robot that does not follow the laws in Asimov's guide. Although he originally obeys the demands of his creator, after becoming self-aware and realizing the impermanence of his own life, he proceeds to follow his own orders and motives, committing violent acts that injure or kill other humans and robots. This violates each of Asimov's laws and illustrates how a robot/A.I. acting alone would not follow such rules.

In Love, Death, and Robots, viewers saw that the incorporation of advancing technology into human civilization eventually became its downfall. Specifically, in the episode depicting a robot cleaner attempting to kill its owner in their own home, viewers can see a direct violation of the first and second laws established by Asimov.

Finally, Ghost in the Shell offers a clear depiction of a being that is part human/part robot. Through this representation, one can see that the violence found throughout the film contradicts the idea that such a hybrid would be entirely without independent intellectual thought or eventual violent behavior. Though this textual example still satisfies a portion of the third law, "a robot must protect its own existence," the law cannot remain applicable in its entirety. Additionally, throughout the film Major displays a continuous wish to advance beyond the capabilities already established within her robot parts. This leads the audience to conclude that she is not content with her current arrangement/orders and wishes to change them. In this way she goes against Asimov's second law.

 

V. Conclusion

 

In closing, through the examination of textual evidence from the screening of various science fiction films, one can argue that Asimov's laws of robotics do not apply in the case of an individual who is part human/part robot. Through the various acts of violence and the eventual downfall of civilization depicted in these films, any moral/ethical code put in place to govern an intellectual being with programmed/mechanical features is repeatedly contradicted. Furthermore, many robotic characters in science fiction films eventually achieve self-awareness and become conscious of their ethical and moral decisions. This does not align with the idea that they will always follow the rules given to them under Asimov's laws.
