Sentient AI: A Mystery or Not?

A few days ago, a Google engineer was put on leave for publicly saying the chatbot they’ve been developing is sentient. After months of testing it, which involved trying to see whether it could turn murderous or hateful, he’d come to the conclusion that it was an independently “thinking” entity. With feelings. Although this is more a story about a very smart person who clearly needs to get out more and chat with real humans, it sent me down a rabbit hole of ethical, psychological, and scientific questions. https://gizmodo.com/google-ai-chatbot-sentient-lamda-1849053005

So, I thought I’d ask the Misses what they thought.

Tracee de Hahn

This doesn’t keep me up at night . . . that said, I’m not sure AI in its fullest form is going to turn out to be a great idea. I don’t mind a robot programmed to a specific task – particularly a dangerous manufacturing job, or underwater hull repairs, for example. However, I don’t think of that as AI, although maybe it’s limited AI. If “X” do “Y” but don’t think about “A”, “B” and “C”. 

I mostly worry about the limits that reliance on AI puts on creativity. After all, the program is only as good as the programmer, right? Will we stop thinking for ourselves? This kind of reliance is a concern. (I will need to re-watch Blade Runner . . . )

From Emilya: So many questions here! There’s an AI that can write plays, screenplays, and novels now. One that can write songs. I watched The Andy Warhol Diaries on Netflix (excellent), and the voice of Andy Warhol was an AI. It was eerie and strangely perfect.

It’s not just that AI is already doing menial tasks; it’s doing so many things. Algorithms drive much of the trading on the stock exchange, for example. And if the AIs begin to think of themselves as “persons,” how is that going to play out?

Connie Berry: But would you see a play written by AI? Would it win awards?

From Emilya: …????? What if it DID?

Sharon Ward

AI has been used in business for years, processing transactions, solving tech issues, and as a last resort, shunting the stuff it doesn’t know how to solve on its own to a human. But the thing is, it “watches” how the human solves the issue, and then it teaches itself to solve the same or similar problem next time it occurs.

Probably at least a quarter to half the time when you talk to a tech support “person”, it’s an AI bot, which is why it’s so infuriating when they put you on hold. 

And I don’t know if you’ve had a chance to read any of the AI-generated “literature,” but I’ve read worse stuff written by humans. You bump into an occasional (okay, maybe a little more than occasional) oddity, but in general, it makes sense. Some of it is even lyrical.

And there’s a new audio book company that uses AI voices to create audiobooks. Much cheaper than hiring a human (I’m not advocating for this. Just stating the facts.) The result, again, sounds pretty realistic. 

Except when it doesn’t. 

And it doesn’t do voices in dialog, which I think makes it pretty useless for its intended purpose.

But the thing is, even a few years ago, AI-generated prose was laughably bad. AI voices sounded like robots. Help desk software gave up on even the easiest questions. But today, all of these have made huge strides, and you might be hard-pressed to know whether you’re hearing or reading human- or AI-generated content.

There’s a thing called the Turing test used to judge the intelligence of a computer. A person poses a series of questions to an unseen responder, and must judge from the responses whether they’ve been talking to a human or a computer. It can be hard to do if you stick to facts or logic, but it’s still pretty easy when you bring emotions and feelings into it.

Tracee, one of the most interesting books I ever read was written by Ray Kurzweil, and he explored in detail what it means to be human. He pointed out that already today we have human/machine (cyborg) amalgamations. 

For example, if I’ve had a knee replacement, you’d probably still consider me human. Even both knees. Maybe my shoulders too. A hip. A glass eye? Sure. Artificial skin after a burn? Of course. What if I had all of these? At what point does my machine-ness outweigh my humanness?

What if my brain and all my memories lived in a jar? What if my personality and memories were downloaded to a purely mechanical body? Would you still be my friend?

Where do we draw the line? It’s an ethical and moral issue that can’t be answered by science. Scientists will push the boundaries of what can be done; people need to draw the lines on what makes sense. Blade Runner indeed!

From Emilya: That’s the THING! Scientists will not stop pushing this. It’s an exponentially evolving business, and in our lifetimes there will be truly fascinating/horrifying dilemmas.

C. Michele Dorsey 

I’m going to sound old and I don’t care. I hate even thinking about this stuff. It hurts my brain and doesn’t touch my heart. I will admit that when I encounter those annoying roaming robots in Stop and Shop (a grocery store), I have an urge to kick them. I don’t, but I refuse to get out of the way and make them move around me.

From Emilya: I have an urge to block those things too.

Susan Breen  

I find the whole thing terrifying. Maybe I’ve read too much Michael Crichton, but I firmly believe that where there is a way for things to go wrong, they will. Plus there’s something deadening about dealing with AI. They seem calm, but I don’t believe it. I much prefer to deal with people.

Sharon: Susan, I like people too, but it’s pretty hard to find one nowadays, at least in services or tasks that are easily automated, like picking and packing stock in a warehouse or approving invoice vouchers. If I’m trying to connect to a help desk, I try to use the chat option, because it’s not as painful dealing with a chat bot as with a talking bot.

Connie Berry

I agree with Susan. Humans believe we can make things better by our own efforts, but we usually miss the unintended consequences. I’m not afraid that AI will develop a will and emotions (I don’t think it’s possible), but I am afraid we will eventually give AI too much control over our lives, with consequences we cannot see or even imagine.

Here’s one small example. The sister of a good friend lived in Texas with her husband and their four athletic, active boys. For Christmas (this was before Covid), they bought an Alexa (or an equivalent) and set it up. Later that week, as the boys were rough-housing in the family room, the doorbell rang. It was the police, asking if everything was okay. They weren’t satisfied with the mother’s answer and insisted on entering the home to make sure. Alexa, hearing sounds of “fighting,” had surreptitiously alerted the police to possible domestic abuse. If I didn’t know the people involved, I wouldn’t have believed it. No AI in my home, monitoring us, thank you!

From Emilya: I was sitting with friends one evening and someone dropped an f-bomb, and the nearest phone chastised her!

Catherine Maiorisi

I haven’t given AI much thought since I did some reading and took some courses about it in the 1980s. At that time, I thought there was no there there, so I didn’t pursue it.

The thing that came to mind when I read your question was a TV series, Humans, with robots called “synths” who looked human and were extremely intelligent. They were designed to do a variety of jobs and could learn as they experienced things. Some of them had consciousness but had to keep it hidden because it was illegal. And eventually, of course, they rebelled. It was a great series and a very good example of how we could lose control of AI.

In theory, AI is fine. It could enhance our lives. But in reality, AI is scary. There are always bad players (just turn on the news) who push the boundaries for personal gain, without a thought for the consequences. And Connie’s story about Alexa is just a tiny example of what could happen.

Unfortunately, companies, and I presume the government, are proceeding full speed ahead to enhance AI’s capabilities, and there is no going back.

Keenan Powell

Picture it: San Francisco, 1977. An ambitious but clueless young woman, recently graduated from college, is working in a sound studio hoping to advance her broadcasting career. In come two young men in pressed blue jeans and white sneakers, giddy with excitement. They meet with the (male) bosses behind closed doors. The (male) coworker is called into the meeting. He comes out of the meeting and says these two kids have something hot on their hands: “the home computer.”

Me: Like NASA has? Computers are so big they take up an entire room. Who wants a computer in their house?

Him: So you can access information.

Me: That’s what libraries are for.

Those two young men were Steve Jobs and Steve Wozniak.

And I ended up giving up broadcasting and going to law school so that I can now type this passage out for you on my home computer. 

From Emilya: WOW! A brush with greatness! In a much less phenomenal encounter, I had a boss in 1993 who went on and on about CompuServe, and swore up and down that the World Wide Web was a flash in the pan. CompuServe was where it was at.

How about YOU? Have any thoughts on robots taking over? Us all becoming BORG? Resistance is futile…

Emilya Naymark

Emilya Naymark is the author of the novels Hide in Place and Behind the Lie.
Her short stories appear in the Bouchercon 2023 anthology A Stranger Comes to Town, edited by Michael Koryta; Secrets in the Water; After Midnight: Tales from the Graveyard Shift; River River Journal; Snowbound: Best New England Crime Stories 2017; and 1+30: THE BEST OF MYSTORY.

When not writing, Emilya works as a visual artist and reads massive quantities of psychological thrillers, suspense, and crime fiction. She lives in the Hudson Valley with her family.

One comment

  1. Here’s a link to a Washington Post article about the possibly-sentient AI (spoiler: it’s not), the (apparently very lonely) man who was fooled, and the real take-home warning:
    “But the Lemoine story suggests that perhaps the Turing test could serve a different purpose in an era when machines are increasingly adept at sounding human. Rather than being an aspirational standard, the Turing test should serve as an ethical red flag: Any system capable of passing it carries the danger of deceiving people.”
    https://www.washingtonpost.com/technology/2022/06/17/google-ai-lamda-turing-test/
