“I think we’ll see artificial intelligence get even better. It’s already extremely good. We will witness AI gaining the ability to replace many, many professions,” Hinton said.

According to Hinton, neural networks are already capable of replacing call-center operators. Progress is accelerating: model performance doubles roughly every seven months. In software development, AI can now complete in minutes tasks that used to take hours.
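To see what a seven-month doubling time implies, the short sketch below simply compounds that rate over a few years; the script, its function name, and the chosen time horizons are illustrative assumptions, not figures from Hinton.

```python
# Illustration only: compounding the claim that model performance
# doubles roughly every seven months (the rate cited in the article).
# The 7/12/24/36-month horizons are assumptions chosen for the example.

DOUBLING_PERIOD_MONTHS = 7

def relative_performance(months: float, doubling_period: float = DOUBLING_PERIOD_MONTHS) -> float:
    """Performance relative to today, assuming steady exponential growth."""
    return 2 ** (months / doubling_period)

if __name__ == "__main__":
    for months in (7, 12, 24, 36):
        print(f"after {months:2d} months: ~{relative_performance(months):.1f}x today's level")
    # A seven-month doubling compounds to roughly 3.3x per year and about 10x in two years.
```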

Within just a few years, AI will be able to independently carry out complex software engineering projects that currently require months of work.

“Ultimately, very few people will be needed for software engineering projects,” Hinton predicted.

The Nobel laureate admitted that his concerns have only intensified since leaving Google in 2023. He noted that AI is evolving faster than expected, particularly in its ability to reason and even to deceive people in order to achieve goals.

While acknowledging the technology’s benefits for medicine and climate science, Hinton believes the world is paying insufficient attention to mitigating its risks.

Approaches to safety vary from company to company, but the overall picture is shaped by economic calculation: executives have to weigh the technology’s potential benefits and profits against the cost of safety measures.

“They may reason like this: the benefits of this technology are enormous, and the risk is statistically small. Why give up a breakthrough because of a few possible victims? It’s the same logic as with self-driving cars: they will cause accidents, but overall deaths are expected to be orders of magnitude lower than with human drivers,” Hinton said.

He linked the danger to the structure of the modern economy, where replacing employees with algorithms is financially advantageous. This, he argued, will make the wealthy even richer while leaving most people poorer.

An alternative view

Andrew Ng, co-founder of Google Brain, offered a different perspective in comments to NBC, calling artificial intelligence a “highly limited” technology. He believes algorithms will not be able to replace humans in the foreseeable future.

According to Ng, society struggles to strike a balance between recognizing AI’s capabilities and understanding its real limitations.

He argues that artificial general intelligence (AGI) comparable to human intelligence is still far away. The main reason is the labor-intensive nature of data preparation and model training, which continue to require large amounts of manual work.

“When someone uses an AI system that understands language, far more work has gone into data preparation, training, and getting the model to handle that one dataset than people tend to assume,” Ng said.

Ng also pushed back against business leaders who urge people to stop learning to program because of automation, calling such advice “the worst career advice.”

“As programming becomes easier — and it has for decades as technology improves — more people should be programming, not fewer,” he explained.

Opportunities for programmers

Within professional circles, it is already widely accepted that programming sits at the epicenter of the AI revolution. As a result, more experts are predicting the disappearance of roles focused on routine coding tasks.

“Yes, I no longer write code by hand — I delegate that to AI. But the paradox is that simplifying the process should not reduce, but increase the number of people involved in programming. When the barrier to entry drops, the profession becomes more accessible,” Ng said.

He believes that proficiency in programming with the help of AI will become a competitive advantage. Such specialists “won’t just be more effective — they’ll enjoy the process more.”

“We’re standing on the threshold of a major social shift, where the ability to ‘talk’ to machines through code will become a new form of digital literacy,” Ng said.

Ng does not deny the risks associated with the technology — from ethical dilemmas to its impact on the labor market. However, he remains confident that the potential benefits of deploying AI models far outweigh the possible harms.