There are many competing views on AI: some insist that artificial intelligence can greatly simplify life, while others predict that scenarios from science-fiction films will come true.
Geoffrey Hinton, the scientist often called the godfather of AI, who has been working on neural-network algorithms since the 1970s, has spoken out about the dangers of artificial intelligence. In essence, the main dangers are these:
Systems that generate information could flood the internet and mislead users: people simply won't be able to tell where truth ends and falsehood begins. Another reason neural networks are dangerous is that the tool can be used for malicious purposes, and no one yet knows how to combat that threat.
Neural networks are becoming smarter and can generate content that is almost indistinguishable from human-created content: news, "scientific" papers, reports, even photographs.
The problem for society is that these systems can generate texts tailored to the rules of search-engine optimization. This means that any information, even fake, can surface at the top of search results and spread actively. And the more generated text there is, the harder it becomes to tell which articles were written by a person on the basis of knowledge and analysis, and which were produced by an algorithm.
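To make the mechanism concrete, here is a minimal sketch of how cheaply such search-optimized text can be mass-produced. It assumes the official openai Python SDK and an API key in the environment; the topics, keywords, and the generate_seo_article helper are illustrative inventions, not anyone's actual pipeline.

```python
# A minimal sketch of mass-producing keyword-targeted articles.
# Assumes the official `openai` Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment; topics and keywords are made up.
from openai import OpenAI

client = OpenAI()

def generate_seo_article(topic: str, keywords: list[str]) -> str:
    """Ask the model for an article tuned to the given search keywords."""
    prompt = (
        f"Write a 500-word article about {topic}. "
        f"Use these search keywords in the title, headings, and first "
        f"paragraph: {', '.join(keywords)}."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# A loop like this can emit hundreds of plausible-looking, search-optimized
# articles per hour, whether or not their claims are true.
for topic in ["vitamin supplements", "home remedies", "investment tips"]:
    article = generate_seo_article(topic, keywords=[topic, "benefits"])
    print(article[:80], "...")
```

The point of the sketch is scale: nothing in the loop checks whether the generated claims are accurate, yet every article comes out formatted to rank well.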
For example, in 2020 a pseudoscientific article on psychology generated by AI was published on a website. The site's owners simply wanted to test the tool's capabilities, and none of their subscribers noticed the substitution.
At a user's request, the language model Galactica produced a "scientific" article about the benefits of eating broken glass, complete with argumentation, references to scientific studies, and expert opinions. It also confidently reported "facts" about bears flying into space and confused dates and the names of participants in historical events. The platform was ultimately shut down just three days after launch, following numerous user complaints.
The second problem is the compromise of personal data. For instance, the weights of the language model LLaMA were leaked online roughly a week after its official announcement. Cybersecurity experts warned users that such systems could be used for personalized spam or phishing attacks.
The third issue is the impact on the quality of education: students are using ChatGPT in their coursework and to generate term papers and theses.
Italy was the first country to ban ChatGPT. The trigger was a software bug as a result of which some users could see the titles of other users' conversations, and subscribers' payment data, including partial bank-card details, was exposed.
The restrictions affected the developer, too: OpenAI was banned from collecting and processing the personal data of Italian citizens. The Italian National Data Protection Authority also emphasized that OpenAI collects information about the service's users without their knowledge and does not verify that users meet the declared age requirement of 13+.
The leak occurred in March; in April, access to the platform was restored, even though not all of the regulator's requirements had been met.
In May 2023, the same ChatGPT was temporarily banned by tech giant Samsung, though only during working hours. The situation arose when employees, trying to lighten their workload, fed parts of internal source code to the bot and pasted notes from work meetings into the chat. All of that information is considered a trade secret.
There was no leak, but the submitted data is stored on the developer's servers, and with the right query it could end up in the hands of competitors.
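A middle ground between a full ban and unrestricted use is to scrub obvious secrets from text before it ever leaves the corporate network. The sketch below is a hypothetical regex-based pre-filter, not Samsung's actual policy or any real product; the patterns are illustrative only.

```python
import re

# Hypothetical pre-filter: redact obvious secrets before text is sent
# to an external chatbot. The patterns below are illustrative, not a
# complete data-loss-prevention solution.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "API_KEY": re.compile(r"(?i)\b(?:api[_-]?key|token|secret)\s*[:=]\s*[^\s,;]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub(text: str) -> str:
    """Replace anything matching a known secret pattern with a placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

meeting_notes = (
    "Ping anna.petrova@example.com, api_key=sk-12345, "
    "corporate card 4111 1111 1111 1111"
)
print(scrub(meeting_notes))
# -> Ping [REDACTED EMAIL], [REDACTED API_KEY], corporate card [REDACTED CARD]
```

Real data-loss-prevention tooling is far more thorough, but even a filter this simple would have caught credentials and card numbers pasted into a chat window.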
Apple has prohibited its employees from using text-generating tools such as ChatGPT, as well as the code-generation assistant GitHub Copilot. Since the data is stored on the tool developers' servers, this poses a risk of confidential information leaking.
Access to ChatGPT has been blocked in New York City schools, as teachers are convinced that the chatbot harms the learning process: schoolchildren use AI to avoid doing their homework themselves.
Chinese authorities have instructed local companies, including the giants Alibaba and Tencent, to block access to ChatGPT. The reason here is simpler: the platform circumvents censorship, giving answers not approved by the authorities. The companies themselves have stated that they would not use the AI because of its unreliability and the frequent factual errors in its responses.
In conclusion: neural networks have become a firm part of our lives, and they are a complex, powerful tool for solving difficult problems. But fearing that AI will gain control over humanity is probably unwarranted. After all, artificial intelligence is just an algorithm whose operation depends entirely on human input.