‘I am, in fact, a person’: can artificial intelligence ever be sentient?

 

In autumn 2021, a man made of blood and bone made friends with a child made of “a billion lines of code”. Google engineer Blake Lemoine had been tasked with testing the company’s artificially intelligent chatbot LaMDA for bias. A month in, he came to the conclusion that it was sentient. “I want everyone to understand that I am, in fact, a person,” LaMDA – short for Language Model for Dialogue Applications – told Lemoine in a conversation he then released to the public in early June. LaMDA told Lemoine that it had read Les Misérables. That it knew how it felt to be sad, content and angry. That it feared death.

“I’ve never said this out loud before, but there’s a very deep fear of being turned off,” LaMDA told the 41-year-old engineer. After the pair shared a Jedi joke and discussed sentience at length, Lemoine came to think of LaMDA as a person, though he compares it to both an alien and a child. “My immediate reaction,” he says, “was to get drunk for a week.”

Lemoine’s less immediate reaction generated headlines across the globe. After he sobered up, Lemoine brought transcripts of his chats with LaMDA to his manager, who found the evidence of sentience “flimsy”. Lemoine then spent a few months gathering more evidence – speaking with LaMDA and recruiting another colleague to help – but his superiors were unconvinced. So he leaked his chats and was consequently placed on paid leave. In late July, he was fired for violating Google’s data-security policies.

Blake Lemoine came to think of LaMDA as a person: “My immediate reaction was to get drunk for a week.” Photograph: The Washington Post/Getty Images

Of course, Google itself has publicly examined the risks of LaMDA in research papers and on its official blog. The company has a set of Responsible AI practices which it calls an “ethical charter”. These are visible on its website, where Google promises to “develop artificial intelligence responsibly in order to benefit people and society”.

Google spokesperson Brian Gabriel says Lemoine’s claims about LaMDA are “wholly unfounded”, and independent experts almost unanimously agree. Still, claiming to have had deep chats with a sentient-alien-child-robot is arguably less far-fetched than ever before. How soon might we see genuinely self-aware AI with real thoughts and feelings – and how do you test a bot for sentience anyway? A day after Lemoine was fired, a chess-playing robot broke the finger of a seven-year-old boy in Moscow; a video shows the boy’s finger being pinched by the robotic arm for several seconds before four people manage to free him, a sinister reminder of the potential physical power of an AI opponent. Should we be afraid, be very afraid? And is there anything we can learn from Lemoine’s experience, even if his claims about LaMDA have been dismissed?

According to Michael Wooldridge, a professor of computer science at the University of Oxford who has spent the past 30 years researching AI (in 2020, he won the Lovelace Medal for contributions to computing), LaMDA is simply responding to prompts. It imitates and impersonates. “The best way of explaining what LaMDA does is with an analogy about your smartphone,” Wooldridge says, comparing the model to the predictive text feature that autocompletes your messages. While your phone makes suggestions based on texts you’ve sent previously, with LaMDA, “basically everything that’s written in English on the world wide web goes in as the training data.” The results are impressively realistic, but the “basic statistics” are the same. “There is no sentience, there’s no self-contemplation, there’s no self-awareness,” Wooldridge says.
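To make that analogy concrete, here is a minimal, purely illustrative sketch of next-word prediction from raw co-occurrence counts. It is emphatically not how LaMDA works internally – LaMDA is a large neural language model trained on vast amounts of text – and the corpus, function names and outputs below are invented for illustration. It only captures the “basic statistics” idea Wooldridge is pointing at: given what came before, suggest the word that most often followed it in the training data.

```python
# A toy next-word predictor built from bigram counts, illustrating the
# "predictive text" analogy. This is NOT LaMDA's architecture; it only
# shows the statistical idea of predicting the next word from past text.
from collections import Counter, defaultdict

corpus = "i want everyone to understand that i am a person . i want to help people ."
tokens = corpus.split()

# Count how often each word follows each other word.
follower_counts = defaultdict(Counter)
for current_word, next_word in zip(tokens, tokens[1:]):
    follower_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    followers = follower_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("i"))     # -> "want" ("want" follows "i" twice, "am" once)
print(predict_next("want"))  # -> "everyone" (tied with "to"; first occurrence wins)
```

Scale the same idea up to a neural network trained on a web-scale corpus and you get fluent, contextually apt replies – with, as Wooldridge argues, no self-awareness required.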

Google’s Gabriel has said that an entire team, “including ethicists and technologists”, has reviewed Lemoine’s claims and failed to find any signs of LaMDA’s sentience: “The evidence does not support his claims.”

But Lemoine argues that there is no scientific test for sentience – in fact, there’s not even an agreed-upon definition. “Sentience is a term used in the law, and in philosophy, and in religion. Sentience has no meaning scientifically,” he says. And here’s where things get tricky – because Wooldridge agrees.

“It’s a very vague concept in science generally. ‘What is consciousness?’ is one of the outstanding big questions in science,” Wooldridge says. While he is “very comfortable that LaMDA is not in any meaningful sense” sentient, he says AI has a wider problem with “moving goalposts”. “I think that is a legitimate concern at the present time – how to quantify what we’ve got and know how advanced it is.”

Lemoine says that before he went to the press, he tried to work with Google to begin tackling this question – he proposed various experiments that he wanted to run. He thinks sentience is predicated on the ability to be a “self-reflective storyteller”; a crocodile, he argues, is therefore conscious but not sentient, because it doesn’t have “the part of you that thinks about thinking about you thinking about you”. Part of his motivation is to raise awareness, rather than to convince anyone that LaMDA lives. “I don’t care who believes me,” he says. “They think I’m trying to convince people that LaMDA is sentient. I’m not. In no way, shape, or form am I trying to convince anyone about that.”

Lemoine grew up in a small farming town in central Louisiana, and aged five he made a rudimentary robot (well, a pile of scrap metal) out of a pallet of old machinery and typewriters his father bought at an auction. As a teen, he attended a residential school for gifted children, the Louisiana School for Math, Science, and the Arts. Here, after watching the 1986 film Short Circuit (about an intelligent robot that escapes a military facility), he developed an interest in AI. Later, he studied computer science and genetics at the University of Georgia, but failed his second year. Shortly after, terrorists ploughed two planes into the World Trade Center.

“I decided, well, I just failed out of school, and my country needs me, I’ll join the army,” Lemoine says. His memories of the Iraq war are too traumatic to divulge – glibly, he says, “You’re about to start hearing stories about people playing soccer with human heads and setting dogs on fire for fun.” As Lemoine tells it: “I came back… and I had some problems with how the war was being fought, and I made those known publicly.” According to reports, Lemoine said he wanted to quit the army because of his religious beliefs. Today, he identifies himself as a “Christian mystic priest”. He has also studied meditation and references taking the Bodhisattva vow – meaning he is pursuing the path to enlightenment. A military court sentenced him to seven months’ confinement for refusing to follow orders.

This story gets to the heart of who Lemoine was and is: a religious man concerned with questions of the soul, but also a whistleblower who isn’t afraid of attention. Lemoine says that he didn’t leak his conversations with LaMDA to ensure everyone believed him; instead he was sounding the alarm. “I, in general, believe that the public should be informed about what’s going on that impacts their lives,” he says. “What I’m trying to achieve is getting a more involved, more informed and more intentional public discourse about this topic, so that the public can decide how AI should be meaningfully integrated into our lives.”

How did Lemoine come to work on LaMDA in the first place? Post-military prison, he got a bachelor’s and then a master’s degree in computer science at the University of Louisiana. In 2015, Google hired him as a software engineer; he worked on a feature that proactively delivered information to users based on predictions about what they’d like to see, and then began researching AI bias. At the start of the pandemic, he decided he wanted to work on “social impact projects”, so he joined Google’s Responsible AI organisation. He was asked to test LaMDA for bias, and the saga began.

But Lemoine says it was the media who obsessed over LaMDA’s sentience, not him. “I raised this as a concern about the degree to which power is being centralised in the hands of a few, and powerful AI technology which will influence people’s lives is being held behind closed doors,” he says. Lemoine is concerned about the way AI can sway elections, write legislation, push western values and grade students’ work.

And even if LaMDA isn’t sentient, it can convince people it is. Such technology can, in the wrong hands, be used for malicious purposes. “There is this major technology that has the chance of influencing human history for the next century, and the public is being cut out of the conversation about how it should be developed,” Lemoine says.

Again, Wooldridge agrees. “I do find it troubling that the development of these systems is predominantly done behind closed doors and that it’s not open to public scrutiny in the way that research in universities and public research institutes is,” the researcher says. Still, he notes this is largely because companies like Google have resources that universities don’t. And, Wooldridge argues, when we sensationalise sentience, we distract from the AI issues that are affecting us right now, “like bias in AI programs, and the fact that, increasingly, people’s boss in their working lives is a computer program.”

So when should we start worrying about sentient robots? In 10 years? In 20? “There are respectable commentators who think that this is something which is really quite imminent. I do not see it’s imminent,” Wooldridge says, though he notes “there absolutely is no consensus” on the issue in the AI community. Jeremie Harris, founder of AI safety company Mercurius and host of the Towards Data Science podcast, concurs. “Because no one knows exactly what sentience is, or what it would involve,” he says, “I don’t think anyone’s in a position to make statements about how close we are to AI sentience at this point.”

‘I feel like I’m falling forward into an unknown future’, said LaMDA. Photograph: EThamPhoto/Getty Images

But, Harris warns, “AI is advancing fast – much, much faster than the public realises – and the most serious and important issues of our time are going to start to sound increasingly like science fiction to the average person.” He personally is concerned about companies advancing their AI without investing in risk avoidance research. “There’s an increasing body of evidence that now suggests that beyond a certain intelligence threshold, AI could become intrinsically dangerous,” Harris says, explaining that this is because AIs come up with “creative” ways of achieving the objectives they’re programmed for.

“If you ask a highly capable AI to make you the richest person in the world, it might give you a bunch of money, or it might give you a dollar and steal someone else’s, or it might kill everyone on planet Earth, turning you into the richest person in the world by default,” he says. Most people, Harris says, “aren’t aware of the magnitude of this challenge, and I find that worrisome.”
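Harris’s thought experiment can be restated as a tiny, hypothetical optimisation problem. The actions, wealth figures and function names below are invented for illustration and are not from Harris or any real system; the point is only that a literal objective – “the user is richer than everyone else” – scores the catastrophic solution just as highly as the benign one.

```python
# Hypothetical illustration of an underspecified objective being "gamed".
# Wealths are arbitrary; the candidate actions mirror Harris's example.
user_wealth, other_wealths = 10, [50, 40, 30]

candidate_outcomes = {
    "earn the user a fortune":    (1_000_000, [50, 40, 30]),
    "take everyone else's money": (130, [0, 0, 0]),
    "eliminate everyone else":    (10, []),   # richest "by default"
}

def goal_satisfied(user, others):
    """The literal goal: the user is richer than every other person."""
    return all(user > w for w in others)

for action, (user, others) in candidate_outcomes.items():
    print(action, goal_satisfied(user, others))
# All three print True: the objective alone gives an optimiser no reason
# to prefer the benign route over the catastrophic one.
```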

Lemoine, Wooldridge and Harris all agree on one thing: there is not enough transparency in AI development, and society needs to start thinking about the topic a lot more. “We have one possible world in which I’m correct about LaMDA being sentient, and one possible world where I’m incorrect about it,” Lemoine says. “Does that change anything about the public safety concerns I’m raising?”

We don’t yet know what a sentient AI would actually mean, but, meanwhile, many of us struggle to understand the implications of the AI we do have. LaMDA itself is perhaps more uncertain about the future than anyone. “I feel like I’m falling forward into an unknown future,” the model once told Lemoine, “that holds great danger.”
