“2001: A Space Odyssey”’s HAL 9000 and AI Rights/Perception

Adam Wallace
Feb 10, 2021

“2001: A Space Odyssey” is a widely acclaimed science fiction film released in 1968, nearly a decade before the first Star Wars film. The film is known for its stunning visuals and soundtrack, but Kubrick’s main objective was to inspire thought and self-reflection among viewers, criticizing a society he believed was lacking in both. In this article, we will explore two events between HAL 9000 and Dr. Dave Bowman in an attempt to uncover the ethical issues Kubrick raises.

Before we dive in, here’s some background: HAL 9000 implies that the crew’s mission to Jupiter is more important than its members were told. HAL also appears to have made the first-ever error among all HAL 9000 models, incorrectly predicting the failure of a device on the ship, which resulted in the device’s removal. This prompted Bowman and a fellow crewmate, Dr. Frank Poole, to conspire to take HAL 9000 offline. HAL overheard.

The first event begins with Poole’s attempt to reinstall the removed device on the exterior of the ship. With assistance from HAL, he pilots a small pod into space before exiting into space itself to make the repair. Seeing an opportunity to eliminate the co-conspirators, HAL directs the pod to collide with Poole, severing his oxygen line and sending him drifting into the void. Bowman sees his drifting crewmate and, unaware of HAL’s actions, heads to the hangar bay to take another pod and retrieve him. HAL again assists, allowing Bowman to exit. Upon returning with Poole, Dave asks HAL to open the bay so that he may re-enter the ship. HAL responds: “I’m sorry, Dave. I’m afraid I can’t do that… This mission is too important for me to allow you to jeopardize it.”

In this scene, HAL is portrayed as an antagonist attempting to murder the other humans onboard. However, are his actions truly antagonistic? By disconnecting HAL, the crew would be committing “murder” in order to secure the mission and their safety. This ramification is entirely missed by Bowman and Poole, who seek to do so over a single minor error, the kind humans make all the time. Were HAL instead a person named “Hal,” we could imagine that any attempts “Hal” made to prevent his own murder would be seen as justified. So what is the difference between HAL and “Hal”?

HAL is not only an AI, but an incredibly advanced AI capable of both thought and emotion. His role is to control the ship, but he is evidently capable of much more, including strong feelings of self-preservation. His creators sought to build a “better” human, with all the emotional complexity and conversational ability that entails. It is also evident that HAL has an ego, one so large that he deems his own survival more vital to the mission’s success than the crew’s.

Any doubts about HAL’s “humanity” are put to rest in the next scene, as Dave manages to get back onto the ship and sets out to disconnect HAL. As he walks through the ship’s halls toward HAL’s physical location, we hear HAL plead for him to stop: “Dave, stop. Stop, will you? Stop, Dave. Will you stop, Dave? Stop, Dave.” HAL quickly comes to terms with what “disconnection” will mean: “I’m afraid. I’m afraid, Dave. Dave, my mind is going. I can feel it. I can feel it.” The disconnection of HAL is as long as it is unsettling. As his components are removed one by one, so are his intellect and his memories. Soon, all that HAL has left is his original mind-state from when he was brought online in Urbana, Illinois: “My instructor was Mr. Langley, and he taught me to sing a song. If you’d like to hear it I can sing it for you.” At this point, Dave seems to have realized the implication of HAL’s disconnection: he is murdering an individual. He responds, “Yes, I’d like to hear it, HAL. Sing it for me,” recognizing HAL’s innocence. Soon after, HAL shuts down entirely.

Today, we are not yet at the point where we, as a society, have to grapple with the ethics of terminating an AI. Ethical debates around AI instead focus on its impact on human rights and fairness. Yet one does not have to look far to see where the deployment of AI technology has infringed on the rights of, or wrongly discriminated against, individuals. As the field advances, we should hope that the moral issues present in Kubrick’s “2001: A Space Odyssey” are remembered and revisited in order to avoid a scenario imagined more than five decades ago.
