Stanford University’s Human-Centered AI Institute, in collaboration with Wired, facilitated a discussion covering ethics in technology, hacking humans, free will, and how to avoid potential dystopian scenarios. Historian and philosopher Yuval Noah Harari spoke with Fei-Fei Li, renowned computer scientist and Co-Director of Stanford University’s Human-Centered AI Institute.

“We’re not necessarily going to find the solution today,” said Fei-Fei Li, co-director of the Stanford Human-Centered AI (HAI) Institute, to a packed Memorial Auditorium, filled to its 1,705-seat capacity. She then highlighted the need for collaboration between humanists and technologists in this field.
Harari, a history professor at the Hebrew University of Jerusalem and two-time winner of the Polonsky Prize for Creativity and Originality, is known for raising his voice about the concerns humanity faces in light of technology. In one of his latest books, Homo Deus: A Brief History of Tomorrow, he has said: “once technology enables us to re-engineer human minds, Homo sapiens will disappear, human history will come to an end and a completely new kind of process will begin, which people like you and me cannot comprehend.”
During the interview, Nicholas Thompson of Wired asked why Harari believes “We are not just in a technological crisis. We are in a philosophical crisis.” He explained:
“In order to encapsulate what the crisis is, maybe I can try and formulate an equation to explain what’s happening. And the equation is: B times C times D equals HH, which means biological knowledge multiplied by computing power, multiplied by data equals the ability to hack humans. And the AI revolution or crisis is not just AI, it’s also biology. It’s biotech.”
The truth is, we are most likely not far from a reality in which an algorithm is created that understands us better than we understand ourselves. When that time comes, the question is: will this enhance our humanity or destroy it?
We all know of the many issues that have been raised about the collection of our data and how it is managed. Harari, however, is discussing something deeper than that. He is not disputing the benefits of technology in our lives. He is highlighting that with great power comes responsibility, and there is no greater power than the ability to hack the human brain. What he imagines is a scenario in which a Stalin or a Hitler had access to this kind of powerful technology.
During many of these conversations among leading thinkers, the pressing need for collaboration became clear. This is not a job that should be confined to science or technology alone. Harari recognises that as science becomes more complicated, multidisciplinary collaboration is the only path to addressing the philosophical and ethical implications of rapid developments in AI.