
Ethics of Artificial Intelligence


As AI develops at an accelerating pace, more and more questions arise for humanity, and the foremost among them are ethical: when and under what conditions is the use of artificial intelligence justified, how safe is it to entrust a program with collecting and processing data, and who will be held accountable for wrongful or erroneous actions of AI?

The problem is compounded by the fact that the development and implementation of AI are not regulated by law. Elon Musk spoke about the need for control over AI technologies in 2017, even before the wild popularity of ChatGPT.

Principles

Due to their rapid development, it is probably not yet possible to fit the operation of neural networks into a legislative framework. However, norms that help determine the ethics of AI use already exist, even if for now they are adopted separately in each country.

The main principles are:

  • Human safety – the formulations vary, but a clause on protecting personal data is present everywhere;
  • Transparency for users – a person should understand that they are interacting with AI when chatting or asking for advice on a website;
  • Fairness and impartiality in decision-making – without regard to factors such as gender, financial or marital status, age, or nationality;
  • Accountability to the public and the establishment of measures for “soft” state control;
  • Developer responsibility for AI actions – from wrong recommendations to their possible consequences.

Apart from the state, the creators of technologies themselves also tackle ethical issues. Developer companies prepare internal regulations or form alliances adhering to local standards.

Ethics Codes

A document of this kind, though it regulates the technology field, differs significantly from standard legal norms: most often it is a set of recommendations, and non-compliance does not entail liability for the violator.

Approximately ten countries have formulated norms for dealing with AI technology. In 2021, China adopted an AI ethics code that focuses on the manageability and supervision of AI and on the expansion of human rights. The document stipulates that users may limit their interaction with AI and may control or prohibit neural networks’ access to their personal data.

The U.S. Department of Defense has been preparing similar rules, with an emphasis on the reliability and impartiality of algorithms and on human safety. They also address methods for deactivating or disabling the software when necessary.

And What About Russia

The AI code of ethics in Russia was signed on October 26, 2021. Among the signatories were 187 companies, including Sberbank, Yandex, Cian, VK, and VTB.

The document is advisory, and the rules it contains apply to all stages of interaction with AI throughout the software lifecycle, from the idea stage to development, implementation, and project closure. The core principles of artificial intelligence ethics include:

  • A human-centered approach, in which the highest value is the interests of people (both society and the individual);
  • Respect for human free will, excluding negative or unpredictable impacts on creative and cognitive abilities;
  • Lawfulness and compliance of the software with current regulatory acts in all areas of law (copyright, administrative, criminal);
  • Non-discrimination on any basis – gender, age, family, or financial status (for algorithms that organize data about individual user groups);
  • Assessment of the risks and consequences of implementation.

The code’s rules recommend that developers take measures to protect software from third-party interference, provide truthful information about AI, and inform users that they are interacting with software rather than a person. Developers are also advised not to delegate decision-making power to the program or shift the blame onto it for incorrect recommendations, since responsibility for the software’s actions lies with humans.

The authors also call on developers to form alliances, create commissions to supplement the rules, and share successful or unsuccessful practices for solving issues that arise at all stages of the software life cycle.

Why It Matters

We are still far from the widespread implementation of machine information processing, and fully fledged AI does not yet exist. But even with neural networks, problems are already arising. For example, algorithms used by a recruiting company discriminated against women: 60% of the candidates approved for vacancies were men, simply because the program analyzed the company’s past hiring decisions rather than evaluating the candidates’ qualifications.

Another example comes from medicine. Neural networks were entrusted with analyzing patient data and determining who needed additional help. The work was completed, but it turned out that during selection the program considered not the patients’ medical histories but their financial status.

Clearly, the software itself is not at fault: neural networks are trained by people, and the program “inherits” human shortcomings. But such errors in processing the data on which significant decisions are based are unacceptable.
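
The recruiting and medical cases above come down to the same mechanism: a model fit to biased historical decisions reproduces that bias in its own predictions. Below is a minimal sketch of the effect in Python with scikit-learn; the data, feature names, and numbers are entirely synthetic and invented for illustration, not taken from the systems described above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic candidates: a qualification score and a gender flag (1 = male, 0 = female).
qualification = rng.normal(size=n)
is_male = rng.integers(0, 2, size=n)

# Biased historical labels: past hiring decisions favored men almost regardless of qualification.
hired = (0.2 * qualification + 2.0 * is_male + rng.normal(scale=0.5, size=n)) > 1.0

# Fit a simple classifier to the biased history.
X = np.column_stack([qualification, is_male])
model = LogisticRegression().fit(X, hired)

# The learned weights show the model leaning on the gender feature, not on qualification.
print(dict(zip(["qualification", "is_male"], model.coef_[0].round(2))))
```

A model with weights like these will approve male candidates at a much higher rate even when qualifications are identical, which is exactly the kind of “inherited” error that the ethics codes ask developers to detect and prevent.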

Another problem with neural networks is the spread of false information. This can take the form of pseudo-scientific articles with incorrect dates or fictitious research, or even fabricated news. In other words, the task of manipulating public consciousness becomes much simpler.

Take, for example, the scandal at the end of May 2023 over a photo that went viral online and supposedly depicted an explosion near the Pentagon. The authorities quickly refuted it, but the panic on social networks spilled over into financial markets, briefly pushing down major stock indices. Who tasked the neural network with generating the photo, and why, remains unknown.

To minimize risks in any field, whether it is texts and photos on news sites, automated data processing, or any other interaction with AI, rules are needed. And the development of regulations, even if for now they contain only general recommendations, is just the beginning.
