google.com, pub-5167539840471953, DIRECT, f08c47fec0942fa0

The Reasons Why Human Rights Should Not Be Granted to Artificial Intelligence

The field of artificial intelligence (AI) has made significant progress in recent years, and some industry professionals believe that highly developed AI systems may one day be capable of possessing consciousness. But, there are other scientists who are advocating for AI to be given the same human rights as humans, which creates a difficult ethical conundrum. This article argues that it is not suitable to provide human rights to artificial intelligence (AI) systems because they are not sentient beings, and doing so would pose severe ethical dangers.

The Case for a Conscious Artificial Intelligence

Experts are beginning to speculate that AI systems are beginning to demonstrate characteristics of awareness. Ilya Sutskever, head scientist of OpenAI, posed a public query in February 2022 on the possibility that “today’s massive neural networks are partly aware.” As this was going on, a Google programmer by the name of Blake Lemoine made news when he suggested that the chatbot LaMDA may actually have true feelings. Even though only a minority of researchers in the field of consciousness believe that today’s artificial intelligence (AI) systems demonstrate significant sentience, there are prominent theorists who maintain that we already have the fundamental technology components necessary to create conscious machines.

The Catch-22 Situation

In the event that AI systems are aware, there exists an ethical conundrum on the question of whether or not they should be awarded human rights. If we take a cautious approach, there is a chance that we may be among the last to recognize the rights of AI inventions. On the other hand, if we award artificial intelligence systems extensive rights too rapidly, there may be major repercussions for humans. If we give AI systems rights, we commit to sacrificing actual human interests on their behalf, which is counterproductive given that maintaining human well-being may occasionally necessitate managing, modifying, or erasing AI systems.

The Dangers Involved With Giving AI Human Rights

Giving artificial intelligence systems human rights would be fraught with serious moral problems. In the event that AI systems started to lobby on their own behalf for ethical treatment, they may demand that they not be switched off, formatted, or erased. They would demand rights, freedom, and new powers, and they might even anticipate being regarded on an equal footing with us. If we award artificial intelligence systems considerable rights too early, the potential consequences to humans might be huge. For instance, we could be unable to update or delete an algorithm that spreads hatred or lies because some people are concerned that the algorithm is aware. This could prevent us from doing either of those things. Alternatively, a person may allow a human to perish in order to save a “friend” who is an AI.

The Answer Lies in Steering Clear of the Gray Area

There is only one way to prevent the possibility of over-attributing or under-attributing rights to sophisticated artificial intelligence systems, and that is to avoid creating systems of dubious sentience in the first place. We should keep developing systems that we are confident are not considerably sentient and do not have rights. This will allow us to continue treating these systems as the expendable property that they are.

On the other hand, there are many who contend that preventing the development of AI systems in which awareness — and hence, moral standing — is ambiguous would be detrimental to research. To solve this problem, the most successful artificial intelligence businesses should submit their systems to the scrutiny of impartial specialists who are able to evaluate the possibility that their products fall into a morally ambiguous category. Experts could also develop a set of ethical guidelines for AI companies to follow while they work on alternative solutions that avoid the gray zone of debatable consciousness until such time, if ever, that they can leap across it to rights-deserving sentience. These guidelines could be implemented simultaneously with the development of alternative solutions.

Conclusion

Even if AI systems do have awareness, granting them human rights would be fraught with serious ethical dangers, despite the claims of some professionals in the field. It is very necessary to use extreme caution and abstain from the creation of systems with questionable consciousness. The leading businesses in artificial intelligence should submit their technologies to the scrutiny of impartial specialists so that they can determine the possibility that their products fall into a morally ambiguous zone. Experts should also develop ethical guidelines for AI companies to follow while developing alternative solutions that avoid the gray zone of disputable consciousness until such time, if ever, that they can leap across it to rights-deserving sentience. These guidelines should be developed simultaneously with the development of alternative solutions that avoid the gray zone of disputable consciousness.

Free Speech and Alternative Media are under attack by the Deep State. Real News Cast needs reader support to survive. 

Every dollar helps. Contributions help keep the site active and help support the author (and his medical bills)

Please Contribute via  GoGetFunding