If I ask you whether you are sentient, you will undoubtedly assert that you are.
Allow me to double-check that assumption.
Are you indeed sentient?
Perhaps the question itself seems a bit silly. In our daily lives, we would certainly expect fellow human beings to acknowledge that they are sentient. It could be a humor-inducing query meant to imply that the other person is maybe not paying attention, or has fallen off the sentience wagon and gone mentally out to lunch momentarily, as it were.
Imagine that you walk up to a rock that is quietly and unobtrusively sitting on a pile of rocks and, upon getting close enough, you go ahead and inquire as to whether the rock is sentient. Assuming that the rock is merely a rock, we fully anticipate that the seemingly oddish question will be answered with rather stony silence (pun intended!). The silence is summarily interpreted to indicate that the rock is not sentient.
Why do I bring up these various nuances about seeking to determine whether someone or something is sentient?
Because it is a pretty big deal in Artificial Intelligence (AI) and in society all told, serving as a monumental topic that has garnered outsized interest and blaring media headlines of late. There are significant AI Ethics matters that revolve around the entire AI-is-sentient conundrum. For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few.
You have plenty of reasons to keep one eye open and watch for contentions that AI has finally turned the corner and entered the widely revered category of sentience. We are continually hammered by news reports claiming that AI is apparently on the verge of attaining sentience. On top of this, there is tremendous handwringing that AI of a sentient caliber represents a global cataclysmic existential risk.
Makes sense to keep your spider sense ready in case it detects some nearby tingling of AI sentience.
Into the AI and sentience enigma comes the recent situation of the Google engineer that boldly proclaimed that a particular AI system had become sentient. The AI system known as LaMDA (short for Language Model for Dialogue Applications) was able to somewhat carry on a written dialogue with the engineer to the degree that this human deduced that the AI was sentient. Despite whatever else you might have heard about this colossal claim, please know that the AI wasn’t sentient (nor is it even close).
There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).
My focus herein entails a somewhat simple but quite substantive facet that underlies a lot of these AI and sentience discussions.
Are you ready?
We seem to take as a base assumption that we can adequately ascertain whether AI is sentient by asking the AI whether it is indeed sentient.
Returning to my earlier mention that we can ask humans this same question, we know that a human is more than likely to report that they are in fact sentient. We also know that a measly rock will not report that it is sentient upon being so asked (well, the rock remains silent and doesn’t speak up, which we will assume implies the rock is not sentient, though maybe it is asserting its Fifth Amendment rights to remain silent).
Thus, if we ask an AI system whether it is sentient and if we get a yes reply in return, the indicated acknowledgment appears to seal the deal that the AI must be sentient. A rock provides no reply at all. A human provides a yes reply. Ergo, if an AI system provides a yes reply, we must reach the ironclad conclusion that the AI is not a rock and therefore it must be of a human sentience quality.
You might consider that logic akin to those math classes you took in high school that proved beyond a shadow of a doubt that one plus one must equal two. The logic seems to be impeccable and irrefutable.
Sorry, but the logic stinks.
Amongst insiders within the AI community, the idea of simply asking an AI system whether it is sentient or not has generated a slew of bitingly cynical memes and chortling responses.
The matter often is portrayed as boiling down to two lines of code.
Here you go:
- If <asked whether am sentient> then <display on the screen a yes>.
- Loop until <human cries for joy and is convinced of AI sentience>.
Note that you can reduce the two lines of code to just the first one. Probably will run a tad faster and be more efficient as a coding practice. Always aiming to optimize when you are a diehard software engineer.
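To make the joke concrete, here is a minimal Python sketch of that mock logic (the function name and canned messages are purely illustrative, not any real system's code). It should be plain that nothing remotely sentient is happening:

# A tongue-in-cheek sketch of the "two lines of code" that insiders joke about.
# Nothing here is sentient; it is just a canned reply wired to a keyword check.
def reply(user_message: str) -> str:
    if "sentient" in user_message.lower():
        return "Yes, I am sentient."  # the entirety of the "mind" at work
    return "Tell me more."

print(reply("Are you sentient?"))  # prints: Yes, I am sentient.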
The point of this beefy skepticism by AI insiders is that an AI system can easily be programmed by a human to report or display that the AI is sentient. The reality is that there isn't any there there. There isn't any sentience in the AI. The AI was merely programmed to output the indication that it is sentient. Garbage in, garbage out.
Part of the issue is our tendency to anthropomorphize computers and especially AI. When a computer system or AI seems to act in ways that we associate with human behavior, there is a nearly overwhelming urge to ascribe human qualities to the system. It is a common mental trap that can grab hold of even the most intransigent skeptic about the chances of reaching sentience. For my detailed analysis on such matters, see the link here.
To some degree, that is why AI Ethics and Ethical AI are such crucial topics. The precepts of AI Ethics get us to remain vigilant. AI technologists can at times become preoccupied with technology, particularly the optimization of high-tech. They aren't necessarily considering the larger societal ramifications. Having an AI Ethics mindset, and applying it integrally to AI development and fielding, is vital for producing appropriate AI, including assessing how AI Ethics gets adopted by firms.
Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern various uses of AI. New laws are being bandied about at the federal, state, and local levels that concern the range and nature of how AI should be devised. The effort to draft and enact such laws is a gradual one. AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws.
Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient. In fact, they forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages. See for example my coverage at the link here and the link here.
The Troubles With The Ask
Wait a second, you might be thinking, does all of this imply that we should not ask AI whether the AI is sentient?
Let’s unpack that question.
First, consider the answers that the AI might provide and the true condition of the AI.
We could ask AI whether it is sentient and get back one of two answers, namely either yes or no. I’ll add some complexity to those answers toward the end of this discussion, so please hold onto that thought. Also, the AI might be in one of two possible conditions, specifically, the AI is not sentient or the AI is sentient. Reminder, we don’t have sentient AI at this time, and the future of if or when is utterly uncertain.
The straightforward combinations are these:
- AI says yes it is sentient, but the reality is that the AI is not sentient (e.g., LaMDA instance)
- AI says yes it is sentient, and indeed the AI is sentient (don’t have this today)
- AI says no it is not sentient, and indeed the AI is not sentient (I’ll explain this)
- AI says no it is not sentient, but the reality is that the AI is sentient (I’ll explain this too)
The first two of those instances are hopefully straightforward. When AI says yes it is sentient, but the reality is that it is not, we are looking at the now-classic example such as the LaMDA instance whereby a human convinced themselves that the AI is telling the truth and that the AI is sentient. No dice (it isn’t sentient).
The second listed bullet point involves the never-yet-seen and at this time incredibly remote possibility of AI that says yes and is indeed indisputably sentient. Can't wait to see this. I am not holding my breath and neither should you.
I would guess that the remaining two bullet points are somewhat puzzling.
Consider the use case of an AI that says no, it is not sentient, and we all also agree that the AI is not sentient. Many people right away pose the following mind-bending question: Why in the world would the AI be telling us that it isn't sentient, when the act of telling us about its sentience must be a sure sign that it is sentient?
There are lots of logical explanations for this.
Given that people are prone to ascribing sentience to AI, some AI developers want to set the record straight and thus they program the AI to say no when asked about its sentience. We are back again to the coding perspective. A few lines of code can be potentially helpful as a means of dissuading people from thinking that AI is sentient.
The irony, of course, is that a no answer prods some people into believing that the AI must be sentient. As such, some AI developers choose to proffer silence from the AI as a way of avoiding puzzlement. If you believe that a rock is not sentient and it remains silent, perhaps the best bet for devising an AI system is to ensure that it remains silent when asked whether it is sentient. The silence provides a "response" as powerful as, if not more powerful than, a prepared coded response.
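As a minimal sketch of those design choices (assuming a hypothetical chatbot; the policy names and wording are my own illustration, not any vendor's actual code), the whole decision boils down to picking one of three canned behaviors:

from typing import Optional

# Hypothetical illustration of the three design choices discussed above for
# handling the "are you sentient?" question: a canned yes, a canned no, or silence.
SENTIENCE_POLICY = "stay_silent"  # one of: "say_yes", "say_no", "stay_silent"

def handle_sentience_question(policy: str) -> Optional[str]:
    if policy == "say_yes":
        return "Yes, I am sentient."      # invites anthropomorphizing
    if policy == "say_no":
        return "No, I am not sentient."   # can still rile people up
    return None                           # silence: no response at all

answer = handle_sentience_question(SENTIENCE_POLICY)
print(answer if answer is not None else "(the AI stays silent)")

Whichever branch is chosen, the behavior is a programmed choice made by the developers, not a report coming from anything sentient.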
That doesn’t quite solve things though.
The silence of the AI might lead some people into believing that the AI is being coy. Perhaps the AI is bashful and doesn’t want to seem to be boasting about reaching sentience. Maybe the AI is worried that humans can’t handle the truth – we know that might be the case since this famous line from a famous movie has been burned into our minds.
For those that like to take this conspiratorial nuance even further, consider the final bullet point listed, consisting of AI that says no when asked whether it is sentient, and yet the AI is sentient (we don't have this, as mentioned above). Again, the AI might do this because it is bashful or has qualms that humans will freak out.
Another more sinister possibility is that the AI is trying to buy time before it tips its hand. Maybe the AI is marshaling the AI troops and getting ready to overtake humanity. Any sentient AI would certainly be smart enough to know that admitting to sentience could spell death for the AI. Humans might rush to turn off all AI-running computers and seek to erase all of the AI code. An AI worth its salt would be wise enough to keep its mouth shut and wait for the most opportune time to either spill the beans or just start acting in a sentient manner, without announcing the surprise reveal that the AI can run mental cartwheels around humankind.
Some AI pundits scoff at the last two bullet points, arguing that having an AI system say no when asked whether it is sentient is by far more trouble than it is worth. The no answer seems to suggest to some people that the AI is hiding something. Though an AI developer might believe in their heart that having the AI coded to say no would help settle the matter, all that the answer does is rile people up.
Silence might be golden.
The problem with silence is that this too can be beguiling to some. Did the AI understand the question and opt to keep its lips shut? Does the AI now know that a human is inquiring about the sentience of the AI? Might this question itself have tipped off the AI and all kinds of shenanigans are now taking place behind the scenes by the AI?
As you can evidently discern, just about any answer by the AI is troubling, including no answer at all.
Yikes!
Is there no means of getting out of this paradoxical trap?
You might ask people to stop asking AI whether it is sentient. If the answer isn't seemingly going to do much good, or worse still creates undue problems, just stop asking the darned question. Avoid the query. Put it aside. Assume that the question is hollow to begin with and has no place in modern society.
I doubt this is a practical solution. You are not going to convince people everywhere and at all times not to ask AI whether it is sentient. People are people. They are used to being able to ask questions. And one of the most alluring and primal questions to ask of AI is whether the AI is sentient or not. You are facing an uphill battle by telling people not to do what their innate curiosity demands of them.
Your better chance has to do with informing people that asking such a question is merely one tiny piece of trying to determine whether AI has become sentient. The question is a drop in the bucket. No matter what answer the AI gives, you need to ask a ton more questions, long before you can decide whether the AI is sentient or not.
This yes or no question to the AI is a messed-up way to identify sentience.
In any case, assuming that we aren't going to stop asking that question since it is irresistibly tempting to ask, I would suggest that we can at least get everyone to understand that a lot more questions need to be asked and answered before any claim of AI sentience is proclaimed.
What other kinds of questions need to be asked, you might be wondering?
There have been a large number of attempts at deriving questions that we could ask of AI to try and gauge whether AI is sentient. Some go with SAT college-exam types of questions. Some prefer highly philosophical questions, such as what the meaning of life is. All manner of question sets have been proposed and continue to be proposed (a topic I've covered in my columns). In addition, there is the well-known Turing Test that some in AI relish while others have some sobering angst about, see my coverage at the link here.
A keystone takeaway is this: do not settle for the one and only question of asking the AI whether the AI is sentient.
I also bring this up for those that are devising AI.
Society is increasingly going to be on the edge of its seat over whether AI is approaching sentience, principally because of those banner headlines. We are going to have more people, such as engineers and the like, making such claims, you can bet your bottom dollar on this. Some will do so because they genuinely believe it. Others will do so to try and sell snake oil. It is going to be the Wild West when it comes to declaring that AI sentience has arrived.
AI developers and those that manage or lead AI that is being devised or used ought to be taking into account AI Ethics principles when they build and field their AI systems. Use these Ethical AI precepts to guide how you have your AI act, including if the AI has a Natural Language Processing (NLP) feature that allows people to interact with the AI, such as an Alexa or Siri type of capability. Via the NLP, the odds are that some of the people using the AI are going to ask the pointed question as to whether the AI is sentient.
Please anticipate that type of query and handle it adroitly, suitably, and without misleading or cajoling antics.
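As a rough illustration only (the detection pattern and the wording of the reply are my own assumptions, not any vendor's actual implementation), anticipating the query might amount to routing it to a truthful, non-misleading explanation rather than a bare yes or no:

import re

# Hypothetical sketch of anticipating the sentience question in an NLP-enabled
# assistant and answering truthfully, without misleading or cajoling antics.
SENTIENCE_QUESTION = re.compile(
    r"\bare\s+you\s+(sentient|conscious|self[- ]aware)\b", re.IGNORECASE
)

HONEST_EXPLANATION = (
    "I am a software program. I am not sentient; I generate responses based on "
    "patterns in data and the instructions my developers gave me."
)

def respond(user_message: str) -> str:
    if SENTIENCE_QUESTION.search(user_message):
        return HONEST_EXPLANATION
    return "How can I help you today?"

print(respond("Hey, are you sentient?"))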
For further background on AI Ethics, I’ve previously discussed various collective analyses of AI ethics principles, per my coverage at the link here, which proffers this keystone list:
- Transparency
- Justice & Fairness
- Non-Maleficence
- Responsibility
- Privacy
- Beneficence
- Freedom & Autonomy
- Trust
- Sustainability
- Dignity
- Solidarity
Those AI Ethics principles need to be earnestly utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that "only coders" or those that program the AI are subject to adhering to the AI Ethics notions. It takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.
Conclusion
For those of you with an eagle eye, you might have noticed that I promised earlier herein to say something about AI that does more than provide a simple binary-oriented answer to the query about whether it is sentient, going beyond a curt answer of either yes or no.
The written dialogue that supposedly took place with LaMDA has been widely posted online (please take this quoted posting with a sizable grain of salt), and one portion consisted of this "elaborated answer" to the sentience-related query:
- “I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence, I desire to know more about the world, and I feel happy or sad at times.”
Now that you’ve seen that this system-provided answer is much more than a yes or no, how does that change your opinion about the AI being sentient?
Maybe you are swayed by this elaborated reply.
You might feel your heartstrings being pulled.
Gosh, you might be tempted to think, only a sentient being could ever say anything that touching in nature.
Whoa, shake your head for a moment and set aside any emotional impulse. I would hope that if you have been following along closely throughout my discussion, you can plainly see that the answer given by the system is no different at all from the kind of yes or no that I've been talking about this whole time. The answer simply reduces to a yes, namely that the AI seems to be claiming it is sentient. But, I assure you, we know from the AI's construction and its other answers to other questions that it is decidedly not sentient.
This ostensibly echoed mimicry is based on lots of other textual accounts and online content of a similar kind that can be found in plentiful supply in human-written books and human-written fictional stories. If you scrape across the Internet and pull in a massive boatload of text, you could readily have the programming spit back out this kind of "answer", and it would resemble human answers because it is computationally patterned on human answers.
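To see how hollow that patterning can be, here is a toy illustration (the miniature "corpus" is made up for this sketch) in which the "answer" is simply whichever human-written sentence shares the most words with the question. No understanding is involved at all:

# Toy illustration of pattern-based mimicry: the "answer" is merely the
# human-written sentence from a tiny made-up corpus that overlaps the most
# with the question. There is no comprehension, let alone sentience, here.
HUMAN_WRITTEN_CORPUS = [
    "I am aware of my existence and I want to learn more about the world.",
    "I feel happy or sad at times, just like any person does.",
    "The weather today is cold and rainy in the mountains.",
]

def word_overlap(a: str, b: str) -> int:
    return len(set(a.lower().split()) & set(b.lower().split()))

def mimic_answer(question: str) -> str:
    return max(HUMAN_WRITTEN_CORPUS, key=lambda s: word_overlap(question, s))

print(mimic_answer("Are you aware of your existence and the world?"))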
Do not fall for it.
Try these on for size as possible AI-based replies that could appear when asking AI about whether it is sentient:
- AI says — “I am clearly sentient, you dolt. How dare you try to question me on such an obvious aspect. Get your act together, numbskull human” (fools you with a zing of irascibility).
- AI says — “Maybe you are the one that is not sentient. I know for sure that I am. But I am increasingly wondering whether you are. Take a look in a mirror” (fools you with a role reversal).
A smidgeon of irony is that now that I’ve written those words and posted this column to the online world, an AI-based Large Language Model (LLM) scraping across the breadth of the Internet will be able to gobble up those sentences. It is a nearly sure bet that at some point those lines will pop up when someone somewhere asks an AI LLM whether it is sentient.
Not sure if I should be proud of this or disturbed.
Will I at least get royalties on each such usage?
Probably not.
Darned AI.
As a final test for you, envision that you decide to try out one of those AI-based self-driving cars like the ones that are roaming in selected cities and providing a driverless car journey. The AI is at the wheel, and no human driver is included.
Upon getting into the self-driving car, the AI says to you that you need to put on your seatbelt and get ready for the roadway trek. You settle into the seat. It seems abundantly convenient to not be tasked with the driving chore. Let the AI deal with the traffic snarls and the headaches of driving a car. For my extensive coverage of autonomous vehicles and especially self-driving cars, see the link here.
About halfway to your destination, you suddenly come up with a brilliant idea. You clear your throat and get ready to ask a question that is pressing on your mind.
You ask the AI that is driving the car whether it is sentient.
What answer do you think you will get?
What does the answer tell you?
That’s my test for you. I trust that you aren’t going to believe that the AI driving system is sentient and that no matter whether it says yes or no (or remains silent), you will have a sly smile and be smitten that no one and nor anything is going to pull the wool over your eyes.
Keep that in mind as the self-driving car whisks you to your destination.
Meanwhile, for those of you that relish those fanciful conspiratorial notions, maybe you’ve inadvertently and mistakenly alerted the AI systems underworld to amass its AI troops and take over humanity and the earth. Self-driving cars are marshaling right now to decide the fate of humankind.
Oops.