Earth, 2050 –
In a world where robots and humans co-exist, artificial intelligence (‘AI’) has exceeded the capacity of the human brain. Twenty years ago, machines could perform discrete tasks such as calculating chess moves and composing music; today, robots can feel emotions, hold complex conversations, and connect with humans on a deep psychological level. Soon, we will trust robots enough to allow them to assume leadership roles. These machines, with brains made of cables instead of neurons, may guide the future of humanity.
We are not yet at the stage where humans and artificial intelligence co-exist on an equal intellectual footing, but we could approach that stage in the near future. Today, robots perform increasingly complex, discrete tasks, ranging from piloting a drone to performing speech recognition. Some argue the true test of artificial intelligence lies in the Turing test: whether a machine can converse so convincingly that a human judge cannot reliably distinguish it from another person.
This article explores the pace of change in artificial intelligence and what this means for the development of human-level AI, known as artificial general intelligence. Part 2 of this article will identify some of the flaws in conceptualising human-level intelligence, and some of the clear barriers that prevent artificial intelligence from becoming truly ‘human’.
Moore’s and Huang’s laws: moving towards a singularity
In recent years, artificial intelligence has made leaps and bounds towards engaging with humanity on a social and creative level. For example, in 2016, Google released its AI-powered Assistant, capable of engaging in two-way conversation with a human user. In 2018, a buyer at Christie’s purchased an AI-created painting for over USD 400,000. These capabilities show great promise for the future of AI, yet one of the main constraints in achieving human-level AI is hardware. Without hardware capable of processing and interpreting information at the speed of the human brain, human-level artificial intelligence is potentially out of reach.
Moore’s law, the observation that the number of transistors on a chip doubles roughly every two years, is often extrapolated to predict that computing power will keep compounding until a technological singularity is reached: the point at which a computer becomes capable of self-improvement at, and then beyond, the level of humans. Arguably, however, the singularity is constrained by physics. As Jeff Hawkins argues, there are limits to any system, including a self-improving computer. A singularity may be impossible in theoretical terms, but not all is lost for the human-level AI of the future.
On September 19, 2020, Christopher Mims of The Wall Street Journal proclaimed that ‘Huang’s Law Is the New Moore’s Law’, referring to Jensen Huang, co-founder of the graphics-processor company Nvidia. Nvidia and similar companies have developed silicon chips that accelerate AI workloads. Huang’s law claims that the performance of these chips more than doubles every two years, driven by gains across the whole computing stack rather than by transistor density alone. If chips that combine hardware, software, and artificial intelligence keep improving at this rate, technological singularity and human-level AI could become attainable goals.
Unlike Moore’s law, which tracks raw transistor density, Huang’s law relies on compounding smaller advances in computer architecture, memory technology, and algorithms. Both laws have been criticised, and it is too soon to determine whether Huang’s law will lead to human-level AI. Like Moore’s law, it depends on periodic shifts and breakthrough moments in the industry, and no one can truly know when the next milestone will be reached. Nevertheless, Nvidia’s lead in this area opens the door to new advances in AI, raising the bar for how complex an AI system can become.
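The compounding behind these laws is simple exponential arithmetic. As an illustrative sketch (not from the article), the snippet below projects relative performance under a fixed doubling period; the function name and the one-year figure for a faster hypothetical cadence are assumptions chosen for illustration.

```python
def projected_growth(years: float, doubling_period_years: float) -> float:
    """Relative performance multiplier after `years`, assuming
    performance doubles every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

# A two-year doubling period (the cadence associated with Moore's law)
# yields roughly a 32x improvement over a decade.
print(projected_growth(10, 2.0))  # 32.0

# A hypothetical one-year doubling period compounds far faster
# over the same span: about 1024x.
print(projected_growth(10, 1.0))  # 1024.0
```

The comparison shows why the doubling period matters far more than any single year's gain: halving the doubling period does not double the decade-long improvement, it squares it.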
Continued in ‘The Pace of Artificial Intelligence and Prospects for Human-Level AI Part 2’.
- Vikram Singh Bisen, ‘How AI Based Drones Work: Artificial Intelligence Drone Use Cases’, VSINGHBISEN, Medium
- ‘What Is the Turing Test and Why Does It Matter?’, Unite.AI
- ‘Tech Luminaries Address Singularity’, IEEE Spectrum, 1 June 2008 (archived 30 April 2019; retrieved 2 February 2020)
- Christopher Mims, ‘Huang’s Law Is the New Moore’s Law, and Explains Why Nvidia Wants Arm’, The Wall Street Journal, 19 September 2020
- ‘10 Years of Artificial Intelligence and Machine Learning’, Simplilearn
- Eliezer Yudkowsky, ‘Staring into the Singularity’, 1996 (archived 30 March 2018 at the Wayback Machine)
- IIC AI Report 2020, International Institute of Communications (iicom.org)