In a recent interview, British-Canadian cognitive psychologist and computer scientist Geoffrey Hinton, often called the “Godfather of AI,” warned about the dangers of artificial intelligence. But what did Hinton do to earn that title?

Hinton was a pioneer of the field. His groundbreaking research on neural networks and deep learning paved the way for current AI systems such as OpenAI’s popular language model, ChatGPT.

In artificial intelligence, neural networks are systems loosely modeled on how the human brain learns and processes information. They enable AI systems to learn from experience.

The release of OpenAI’s ChatGPT changed how AI-based tools are used. The platform allowed more human-like conversations than any tool before it. ChatGPT also paved the way for several other AI-based chatbots, each with its own set of unique features.

ChatGPT’s unique selling point was its human-like interaction with the user and its cognitive abilities. However, it is this very capability that concerns Hinton the most. According to Hinton, “… they’re not more intelligent than us, as far as I can tell. But I think they soon may be.”

Specifically, Hinton’s primary concern is that AI could be misused or fall into the wrong hands. He also adds that further developments in AI could result in widespread job displacement, the spread of misinformation, and an inability to distinguish between what is real and what is fake.

Hence, Real Research, an online survey app, launched a survey on Geoffrey Hinton issuing warnings about the dangers of AI to gauge public opinion on the dangers of AI.

Highlights:

  • 42.41% somewhat agree that the benefits outweigh the dangers of AI.
  • 25.43% believe that developing ethical guidelines for AI is a way to prevent the potential dangers of AI.
  • 40.22% strongly agree that AI could surpass human intelligence.

In an interview with The New York Times, Hinton said that he was leaving Google because he was concerned about the potential dangers of AI. He specifically mentioned the possibility of AI being used to create autonomous weapons that could kill without human intervention, as well as the potential for AI to be used to spread misinformation and propaganda.

Hinton is not the first AI expert to raise concerns about the potential dangers of AI. In 2015, Elon Musk and Stephen Hawking signed an open letter warning about the dangers of AI, and in 2017, the Future of Life Institute published a report on the potential risks of AI.

Hinton’s departure from Google is a significant event, as he is one of the most respected figures in the field of AI. His concerns about the potential dangers of AI are likely to be taken seriously by other AI researchers and policymakers.

Therefore, we asked the respondents if they were aware of Hinton’s concerns about AI. A majority of 77.83% said they were aware compared to 22.17% who were unaware.

Can Artificial Intelligence Surpass Human Intelligence?

Next, we asked the respondents about their opinion on Hinton’s warning about the dangers of AI. Results indicated that 19.92% believed that Hinton’s warning should instill cautiousness when developing AI.

In addition:

  • 18.56% of respondents believed that Hinton’s concerns about the dangers of AI may be exaggerated.
  • 17.79% believed that his concerns are valid.
  • 16.06% said that his concerns are even more concerning given his expertise in the field.
  • 11.27% said that his concerns could stifle technology and innovation in the AI industry.

Figure 1: Opinion on Hinton’s concerns about AI given his expertise in the field.

Hinton says he has changed his mind about the possibility of AI surpassing human intelligence: he previously believed the idea was far-fetched, but now he believes it is possible. We asked the respondents whether AI can actually surpass human intelligence.

Survey results showed that 43.79% somewhat agreed and 40.22% strongly agreed, while 12.43% somewhat disagreed and 3.56% strongly disagreed.

Similarly, Hinton predicted that AI could eventually lead to the development of killer robots. When the respondents were asked if this is likely, 47.83% said probably, 30.70% said definitely, 18.08% said probably not, and 3.39% said definitely not.

AI: Respondents Share Their Views

In the same interview with The New York Times, Hinton said he was worried about the possibility of AI being used to create deepfakes: videos or audio recordings that have been manipulated to make it look or sound like someone said or did something they never did.

Hinton also said he was worried about the potential for AI to lead to job losses. AI is already being used to automate tasks that were once done by humans, and he believes this trend will continue. This could lead to widespread unemployment, he said, and we need to start thinking about how we will deal with that possibility.

The subsequent survey inquired about the concerns related to AI that the respondents find most worrisome.

Figure 2: Which of the following concerns are worrisome?

Most of the respondents (34.31%) stated that, as AI continues to advance, the behavior of AI systems could become unpredictable to humans; 15.78% cited AI being misused for ill-intended purposes; and 14.72% said people would be unable to distinguish between what is true and what is false due to the proliferation of fake images and videos.

In addition, 11.25% cited job losses due to automation, and 8.82% cited the spread of misinformation.

Do the Benefits of Artificial Intelligence Outweigh the Risk?

When we posed the question to our respondents regarding whether they perceive the benefits of AI to surpass its potential risks, the results showed that 42.41% somewhat agreed and 20.76% strongly agreed. Conversely, 25.43% somewhat disagreed and 11.4% strongly disagreed.

What Steps Can Be Taken To Prevent the Dangers of AI?

In the final survey question, we asked the respondents to suggest steps that can be taken to mitigate the dangers associated with AI.

According to the survey results, 25.43% of the respondents suggested the development of ethical guidelines for the responsible and ethical use of AI systems. Additionally, 20.94% recommended human oversight in AI development, deployment, and operations to ensure proper functioning.

Figure 3: Steps to prevent dangers of AI

Furthermore, 18.48% emphasized the importance of transparency by making AI system data and algorithms available for public scrutiny. Lastly, 17.78% proposed the implementation of regulations to govern the development and deployment of AI systems.

Methodology

Survey Title: Survey on AI Pioneer, Geoffrey Hinton Issuing Warnings About the Dangers of AI
Duration: May 06, 2023 – May 13, 2023
Number of Participants: 10,000
Demographics: Males and females, aged 21 to 99
Participating Countries: Afghanistan, Algeria, Angola, Argentina, Armenia, Australia, Azerbaijan, Bahrain, Bangladesh, Belarus, Benin, Bolivia, Brazil, Brunei, Bulgaria, Burkina Faso, Cambodia, Cameroon, Canada, Chile, China, China (Hong Kong), China (Macao), China (Taiwan), Colombia, Costa Rica, Croatia, Czech Republic, Ecuador, Egypt, El Salvador, Ethiopia, Finland, France, Gambia, Georgia, Germany, Ghana, Greece, Grenada, Guatemala, Honduras, Hungary, India, Indonesia, Iraq, Ireland, Israel, Italy, Ivory Coast, Japan, Jordan, Kenya, Kuwait, Kyrgyzstan, Latvia, Lebanon, Libya, Lithuania, Malaysia, Maldives, Mauritania, Mexico, Moldova, Mongolia, Morocco, Mozambique, Myanmar [Burma], Namibia, Nepal, Nicaragua, Nigeria, Oman, Pakistan, Palestine, Panama, Peru, Philippines, Poland, Portugal, Qatar, Romania, Russia, Saudi Arabia, Serbia, Sierra Leone, Singapore, Slovakia, South Africa, South Korea, Spain, Sri Lanka, Tanzania, Thailand, Togo, Tunisia, Turkey, Turkmenistan, Uganda, Ukraine, United Arab Emirates, United Kingdom, United States, Uruguay, Uzbekistan, Venezuela, Vietnam, Yemen, Zimbabwe.