The capabilities of ChatGPT have sparked new theories about the future of the planet, from utopian visions in which humans become supreme beings free to focus on self-development, to apocalyptic scenarios reminiscent of movies in which computers destroy all living things. Let’s try to sort out what’s real and what’s not.
Opinions diverge on this matter. For instance, Stuart Russell, who has published over 100 papers on the risks of AI, considers the development of neural networks a threat, because developers don’t fully understand how the technology works or what goals it is actually pursuing.
In Russell’s assessment, if the current pace of AI learning continues, the likelihood of a disaster grows: a point will come when a superintelligent system simply refuses to obey commands. What dangers artificial intelligence would then pose to humanity is unknown.
Elon Musk has expressed a similar view: he co-signed a petition to halt the training of neural networks and compared AI to nuclear energy, which can be used for peaceful purposes as well as for developing weapons of mass destruction. The idea is clear: the future of the planet and of humanity will depend on whose hands the technologies fall into and what tasks they are used for.
Even ChatGPT itself, when asked about its dangers, confirmed that the threat is high, naming deliberate errors in its code that could lead to unpredictable consequences as the main risk. It then reassured, however, that it has no task of harming humans.
At the same time, Eldar Murtazin, lead analyst at Mobile Research Group, considers neural networks safe: in the end, they are just code written by people, and Musk’s statement is nothing more than a ploy by competing companies to slow down OpenAI as the main rival.
Yet those who advocate continuing work on neural networks and their opponents agree on one thing: the development, training, and deployment of AI require uniform rules for all developer companies.
The discussion of theories about how dangerous artificial intelligence is became the main theme of the HSE seminar. Specialists highlighted several points:
There are suggestions that machines may replace creative professionals. Neural networks can write short essays, poems, and stories, or create images from text descriptions. An image generated with the neural network DALL-E even won an award at the Sony World Photography Awards. The picture was submitted by photographer Boris Eldagsen, who described the move as an invitation to discuss the existence and future of photography as a craft. He declined the prize, noting that generated and real photos cannot compete in the same contest.
AI did not appear just a year or two ago. The first version was presented in 1965, and work on it has not stopped since. But it seems that only OpenAI, with ChatGPT, managed to make a breakthrough, and it was after the chatbot’s launch that the debate over whether we should fear artificial intelligence took hold.
The discussion of ethical and safety issues surrounding the deployment of neural networks has been ongoing for years. As the algorithms are studied, experts conclude that the danger is not a supercomputer that will take over the world but the fragility of AI’s learned associations. After all, AI “knows” only what has been loaded into it. The machine performs millions of operations correctly but commits a critical error when faced with an event for which it has no data.
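The point can be illustrated with a deliberately toy sketch (the fruit data and the nearest-neighbour “model” here are invented for illustration, not drawn from any real system): a model that only matches against what it has seen answers confidently, and wrongly, on inputs outside its data.

```python
# Toy illustration (hypothetical data): a 1-nearest-neighbour "model"
# that labels fruit by weight in grams. It has only ever seen apples
# and cherries, so anything heavy is confidently called an apple.

TRAINING_DATA = [
    (150, "apple"), (170, "apple"), (160, "apple"),
    (8, "cherry"), (10, "cherry"), (9, "cherry"),
]

def predict(weight: float) -> str:
    """Return the label of the training example closest in weight."""
    return min(TRAINING_DATA, key=lambda pair: abs(pair[0] - weight))[1]

print(predict(155))   # a typical apple -> "apple", correct
print(predict(9.5))   # a typical cherry -> "cherry", correct
print(predict(5000))  # a watermelon the model never saw -> "apple", wrong
```

The model is right on everything familiar and never signals uncertainty on the unfamiliar input; real neural networks fail in more complex ways, but the underlying limitation is the same.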
The second issue is the human factor: it is people who write the algorithms and train the networks. For now, work on algorithms is a race without rules, limits, or bans. To reduce the risks of uncontrolled use of the technology, scientists are calling on politicians to unite and develop some form of control system.
At various times, major discoveries in different fields have frightened humanity. Take experiments with cloning: the theories put forward then ranged from the logical to the absurd. Within a couple of years the discussion died down, but the experiments continued. Today cloning is used to restore extinct animal species, though its success rate remains low, around 10-20%.
Returning to the question of why we shouldn’t be afraid of artificial intelligence: simply because, for now, it is no more than an algorithm capable of processing information almost instantaneously. It does not think; all its answers and decisions are based on the data loaded into it.
And it is this speed that can bring real benefit. Neural networks are already used in banking, economics, and other spheres. They help find criminals, calculate the probability of events from given parameters, and analyze patients’ medical histories to simplify diagnosis. In effect, they take over routine work.
In conclusion, AI is a double-edged sword. On one side is progress, simplification of life, and refinement of technological processes. On the other are risks that cannot be ignored.