
transhumanism

 

Within decades, machine intelligence may drive technological change so profound that artificial intelligence (AI) becomes indistinguishable from human intelligence, altering the course of human history. This is the "Technological Singularity": a hypothesis that proposes technological development at an unprecedented, irreversible rate. Through a chain reaction of self-improving generations of intelligence, each surpassing the last, AI would eventually reach a point where the human brain can no longer comprehend its intellectual capacity. Catching up with a self-aware AI would require a path to immortality: continuous evolution that exceeds our biological constraints.


Transhumanism is a philosophical ideology that proposes transcending human limitations, in essence, evading the frailties of the human condition. I therefore pose the question: is it ethical to transcend our biological constraints and become part of the "system" in order to keep up? The pursuit of superiority began at the dawn of humanity, as we climbed the ecological ladder to the top of the food chain; now, with the steadily accelerating pace of AI development, humanity risks being knocked off that pedestal in the near future. Being part of the "system" would mean the mind has access to every piece of information ever written. Is that enough to keep up with a self-aware AI? Would the mind even be self-aware, or merely interacting within a simulated world? Or would it interact with the material world through the internet?


Entering the realm of artificial consciousness implies operating within a simulated digital world that forsakes any semblance of privacy or security. Science fiction offers a partial visualization of this experience: in "Halo," an AI construct possesses the ability to deconstruct and break into nearly anything digital; being part of the internet would mean operating under similar circumstances.


Assuming humanity is eventually able to upload human minds to a server, existing simultaneously would come with a multitude of complications. An artificial copy could argue that it is the original, since it has only just come into existence and is, simply, you. Laws would have to be proposed and passed to establish the rights of that artificial copy; otherwise, it could legally claim everything its original owns. The copy would feel the same emotional attachment the original has to their companions, yet it would be unable to come into contact with them. And would terminating the program be murder? Shouldn't the copy get a say, since it is you?


Will humanity be able to stop itself before reaching the singularity? And if humanity does in fact reach it, is the struggle to remain at the top truly worth it? The thought of simply existing until the end of technology, existing without a physical form, without a means to end suffering, trapped within a realm beyond one's control, is terrifying. That is the dystopian future.


Written by Neil Agarwal

Edited by Anushka Roy

Designed by Julian Sidana
