By José Dos Santos


Those in the so-called "Western world" who work on the creation and multidimensional expansion of so-called "artificial intelligence" (AI) fear something when they call for mechanisms of control, supervision and even eventual sanctions for its misuse.

These are no longer the fantasies of H.G. Wells or Isaac Asimov; their stories read like children's tales at this stage of the twenty-first century. AI's scope is unpredictable and raises new questions about the destiny of humanity.

The most recent example of the anxiety born of the growth of AI came from Sam Altman, CEO of OpenAI, the creator of ChatGPT, an increasingly popular tool for dialogue over the Internet. He has just suggested to the US Senate the creation of a national or international agency that would license the most powerful AI systems.

In his presentation, according to press reports, he said that artificial intelligence can cause "significant damage to the world" if it is not regulated, and stated that "the intervention of governments will be fundamental to mitigating the risks of artificial intelligence systems."

"As this technology advances, we understand that people are concerned about how it might change our way of life. So are we," Altman said before suggesting the creation of an agency that would license the most powerful AI systems and have the authority to "withdraw that license and ensure compliance with safety standards."

The rise of ChatGPT is said to have sparked panic among educators over its use by many students to cheat on exams.

This has raised new concerns about the ability of this "generative AI" tool to mislead people, spread falsehoods, violate copyright protection, and jeopardize jobs.

"Generative artificial intelligence is a branch of artificial intelligence that focuses on generating original content from existing data. This technology uses advanced algorithms and neural networks to learn from texts and images, and then generate new and unique content. Progress in generative AI has been impressive in recent years..."

Marcelo Granieri, Professor at OBS Business School

Altman's request for a regulatory and oversight body carries an additional warning of its own: if Washington were granted omnipresent supranational power in this matter, it would be like handing the keys to heaven over to the devil.

Users should not blindly trust the answers AI offers. Photo: Gettyimages

Other concerns

Many fear that AI could "self-replicate and self-exfiltrate into the wild," recalling the old futuristic threat in which robots confront humans or, in this case, in which AI manipulates human decisions.

A few days ago, the European Parliament approved measures to regulate the use of AI. The legislation will be discussed in a plenary session in June to agree on the final version of the new regulation.

Colombia's Redacción Cambio reported that "as AI becomes more ubiquitous in our lives, questions arise about its impact on society, the economy and individual rights."

In this regard, referring to an important segment of international tourism, the outlet wrote: "Artificial Intelligence and Big Data: the ideal pair to serve hyperconnected tourists." It then listed the main concerns:

1. Safety and livelihood risks: One of the main areas of concern focuses on the potential of AI to endanger people's lives and livelihoods. This is especially true in areas such as medicine and finance, where robust regulation is required to ensure that decisions made by AI systems are accurate, ethical and safe.

2. Discrimination and violation of rights: Another prominent concern is the use of AI for discriminatory purposes or to violate people's civil rights. Policymakers worry that AI algorithms may reinforce existing biases or generate new ones, such as racial or gender discrimination, when making decisions in areas such as hiring, access to services or loans.

3. Responsibility and regulation: There is debate about whether regulation of AI should fall on the developers of the technology or the companies that use it. Some advocate the creation of an independent AI regulator to oversee its development and application, while others argue that the responsibility should lie with the companies implementing the technology.

4. Privacy and data protection: The bulk collection of data by AI systems raises concerns about privacy and personal data protection. Lawmakers seek to establish clear rules and ensure that people's data is used ethically and safely, avoiding the risk of privacy breaches or misuse of personal information.

5. Misinformation and manipulation: The proliferation of generative AI, such as ChatGPT, which can create human-like content, has raised concerns about the possibility of it being used to spread disinformation or manipulate public opinion. Lawmakers are looking for ways to address this problem and put safeguards in place to prevent the spread of fake news and the manipulation of information.

6. Ethics and responsibility: AI raises ethical questions and challenges in terms of responsibility. Policymakers face the challenge of establishing clear ethical standards for the development and use of AI, as well as determining who is liable in the event of harm caused by autonomous AI systems or automated decisions.

Although there is a general consensus on the need to address these concerns, there is no unified vision on how to regulate AI effectively. Some argue that regulation should focus solely on critical areas, such as medical diagnostics, while others advocate a broader approach encompassing multiple aspects of AI.

Artificial intelligence is the combination of algorithms designed to create machines with capabilities resembling those of human beings.

Last concern... for the time being

As I finished this text, I read the headline: "Ex-Google executive warns that AI could see humans as 'scum' and create 'killing machines'."

The warning comes from Mo Gawdat, who compared a hypothetical future to the dystopian, alienating society depicted in the film 'I, Robot'.

According to Russia Today, citing a New York Post report, the former Google executive said that machines powered by artificial intelligence "could come to see humans as 'scum' and even conspire to eliminate them."

Gawdat spent more than a decade as CEO of X Development, the lab formerly known as Google X, which focuses on projects including AI and robotics. He believes this technology "has the ability to create killing machines because humans are creating them."

AI could create such exterminating machines once it is able to "generate its own computing power and carry out installations on its own through robotic arms." He compared that prospect to the science fiction film 'I, Robot', starring Will Smith, in which an AI decides to take control and eliminate humans, and stated that this is a distinct possibility.

The models that train AI, he argued, learn from the universe created online, where bad news, anger and hatred abound, and this could lead it to see our species as evil and as a threat. AI will likely see only the worst of what humanity has to offer, because "we're fake on social media, we're rude, we're angry, or we lie."

Gawdat also argued that tech companies are too financially invested to back down, offering a hypothetical example: "If Google is developing AI and fears that Facebook will beat it, it will not stop, because it has the absolute certainty that, if it stops, the others will not."

Russia Today adds that "Gawdat is not the only one with a fatalistic position on the issue. Scientist Geoffrey Hinton, considered the 'godfather' of AI, agrees that new AI systems could pose a risk to humanity. He worries that they will not only generate their own computer code, but execute it themselves and develop unexpected behaviors, so he fears that one day this technology could give way to truly autonomous weapons, such as the killer robots that have become popular in science fiction."

There is much more to say on the subject while we still have time to warn of its dangers, though there is little room for optimism about slowing them down if we look at what is happening today with climate change, which a few decades ago was a potential problem and is now a planetary disaster.