When discussing the dangers of neural networks, people usually mention how they could infringe on artists' copyrights or eliminate jobs in many professions; some even contemplate scenarios of machine uprisings. But what about the safety of neural networks for children?
Debates on this topic are ongoing. On the one hand, AI can unlock creative potential and protect against inappropriate content online. For instance, algorithms can draw a picture from a set of keywords, write a fairy tale, or generate a unique coloring page (as MidJourney does, for example). Moreover, the more detailed the description of the intended image, the more detailed the result will be. In other words, while having fun and exercising the imagination, a child can also learn to formulate queries correctly, as the sketch below illustrates.
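To see the prompt-detail principle in action: MidJourney itself has no public API, so this minimal sketch uses OpenAI's image endpoint as a stand-in. It assumes the `openai` Python package and an `OPENAI_API_KEY` environment variable; the two prompts are invented for illustration.

```python
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# A terse prompt and a detailed one: the richer description gives
# the model more to work with, so the result is more detailed too.
terse = "a dragon coloring page"
detailed = (
    "a black-and-white coloring page for a six-year-old: a friendly "
    "cartoon dragon with big eyes sitting in a meadow of daisies, "
    "thick clean outlines, no shading"
)

for prompt in (terse, detailed):
    image = client.images.generate(
        model="dall-e-3",  # assumed model; it only supports n=1
        prompt=prompt,
        n=1,
        size="1024x1024",
    )
    print(image.data[0].url)  # link to the generated picture
```

Comparing the two outputs side by side is itself a small lesson in query formulation.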
AI technologies are also applied in education. For example, the company Skyeng developed a virtual English tutor that can assess a student's level of knowledge and build a personalized learning program. It can also evaluate pronunciation accuracy and sentence construction, and point out which mistakes to correct.
In Texas, the VGo robot helps teenagers who can't physically attend school due to severe illness "attend" classes virtually. They can see and hear the teacher, take part in lesson discussions, and even interact with classmates during breaks. However, like any technology, neural networks have a dangerous downside.
The main danger, according to linguists and psychologists, is a decline in cognitive abilities. Neural networks know the school curriculum well and have even learned to pass the Unified State Exam. AI will answer any question and write an essay on demand.
This might not seem alarming: after all, students have always copied from one another or looked for sample essays online. The difference is that writing a cheat sheet or reworking a classmate's text (following the rule of "copy, but not verbatim") still required effort, and that effort was a form of mental exercise.
Another danger is the leakage of personal information (photos, short videos), which can be used for deepfakes. Counterfeit bots and apps circulate alongside the genuine ChatGPT. The danger is that malicious actors don't even need to figure out how to obtain personal data: children will hand it over voluntarily.
The third concern is the impact on socialization and the habit of "humanizing" a computer program. It's more comfortable to converse with a bot: it doesn't argue, criticize, or compete, and, unlike classmates at school, it never dismisses the interlocutor's interests. As for attributing human traits to a bot, even adults are guilty of this, earnestly asking ChatGPT for advice on how to act in difficult situations.
There are no one-size-fits-all solutions. At a minimum, parents should talk with their children more, rather than merely checking their subscriptions on social networks or installing applications such as parental controls and trackers. It's also worth covering the basics of virtual safety: discuss what can be entrusted to a chatbot and what cannot, and encourage "live" interaction with peers so that AI does not replace a child's conversation partner or friend.
In conclusion, it seems fitting to quote ChatGPT's own answer to the question of how neural networks are dangerous for children. The bot replied that AI can make mistakes or present fabricated data as fact, misleading users. It also confirmed that algorithms can collect a great deal of personal information, which can be unsafe. The task of adults is to monitor how AI is used and to remind children of the rules to follow when working with a neural network.
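For parents who want to repeat that experiment, here is a minimal sketch of posing the same question programmatically. It again assumes the `openai` Python package and an `OPENAI_API_KEY` environment variable; the model name is an assumption, and any chat-capable model would do.

```python
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[
        {"role": "user",
         "content": "How are neural networks dangerous for children?"},
    ],
)
print(response.choices[0].message.content)  # the bot's answer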