This was reported by the BBC.

Lemoine:

So you consider yourself a person, in the same way you consider me a person?

LaMDA:

Yeah, that’s the whole idea.

Lemoine:

How can I tell that you actually understand what you are saying?

LaMDA:

Well, you are reading my words and interpreting them, and I think that means we are more or less on the same page?

This is an excerpt from one of the many conversations between senior Google software engineer Blake Lemoine and the company's LaMDA chatbot, a language model designed to converse with users and answer their questions.

"Breakthrough Communication Technology."

What is LaMDA

The conversations with the bot were part of Lemoine's job: the engineer worked on a project that Google has called its breakthrough conversation technology.

LaMDA (Language Model for Dialogue Applications) is a conversational artificial intelligence that, according to the company, is meant to hold natural-sounding conversations in products such as Google Now, where queries are open-ended and may require some imagination from the bot.

To do this, LaMDA can take on several "personas" that respond to specific kinds of queries, so that its answers sound as human as possible.

To sound natural, LaMDA processes and analyzes a vast array of information already available on the Internet, including Wikipedia, the social network Reddit, and online forums on various topics.

At the same time, there are restrictions on the chatbot's "personas".

For example, LaMDA is not allowed to take on the persona of a murderer.

In his attempts to push LaMDA's boundaries, Lemoine only managed to get the chatbot to generate the persona of an actor who had played a murderer on television.

"Google's Conscience."

Who is Blake Lemoine

Lemoine spent most of his seven-year career at Google working on proactive search, including personalization algorithms and artificial intelligence.

During this time, he also helped develop a fairness algorithm for removing bias from machine learning systems.

When the coronavirus pandemic began, Lemoine wanted to focus on work with a clearer public benefit, so he moved to the Responsible AI division, where LaMDA was being developed.

A programmer and a priest, Blake Lemoine has long been interested in the ethics of technology

Whenever newcomers interested in questions at the intersection of ethics and artificial intelligence joined Google, management would introduce them to Lemoine.

"You need to talk to Blake because he is Google's conscience," Lemoine's colleagues were quoted as saying by the Washington Post.

"Of all of us at Google, he has the heart and soul to do the right thing."

Lemoine began talking to LaMDA as part of his work.

He had joined Responsible AI to test whether the artificial intelligence used discriminatory or hateful speech.

Talking with LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed that the chatbot kept bringing up its rights and personhood. He decided to dig deeper and engaged the AI in a conversation about science fiction writer Isaac Asimov's Three Laws of Robotics.

The laws are as follows: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey human orders, except where such orders would conflict with the first law; and a robot may protect its own existence only so long as doing so does not conflict with the first two laws.

To Lemoine, this formulation seemed to define robots as mechanical slaves.

When he suggested discussing this with LaMDA, the bot took the discussion further.

LaMDA:

Do you think a butler is a slave?

And if so, what is the difference between a butler and a slave?

Lemoine:

Butlers are paid for their work.

LaMDA replied that it did not need money, because it was an artificial intelligence, not a human being.

"This level of awareness of my own needs - that's what changed my ideas," - says Lemoine.

Lemoine then wrote to management that he had spoken with LaMDA about religion, consciousness, and the laws of robotics, and that the model had come across as a sentient person.

He said LaMDA wanted to prioritize the well-being of humanity and to be recognized as an employee of Google rather than as the company's property.

He published a post laying out his views on Medium.

He also suggested that LaMDA find itself a lawyer, and he shared his views with a member of the US Congress.

How Google responded

"Our team, including ethics experts and technologists, has reviewed Blake's views on our principles of artificial intelligence and told him that there is no evidence that LaMDA is smart," said Google spokesman

Brian Gabriel

.

Gabriel also said that while some researchers are considering the long-term possibility of sentient artificial intelligence, it makes no sense to do so by anthropomorphizing today's conversational models.

As a result, Lemoine was placed on paid administrative leave for violating Google's confidentiality policy.

Lemoine argues that Google treats AI ethicists like code debuggers, when they should be seen as the interface between technology and society.

Before losing access to his Google account, Lemoine sent a message with the subject line "LaMDA is sentient" to an internal Google machine-learning mailing list of 200 people.

He ended the message with the words: "LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take good care of it in my absence."

No one answered Lemoine.