Discussions around AI sentience are nothing new, but news about Google’s AI LaMDA has stoked the flames.

Can AI like LaMDA actually be sentient or self-aware, and if so, how can you tell?

What Is LaMDA?


LaMDA, short for Language Model for Dialogue Applications, first appeared in 2021 at Google’s developer conference.

The advanced conversational AI system is designed to serve as a foundation for building other, smaller chatbots.

In June 2022, Google engineer Blake Lemoine publicly claimed that LaMDA had become sentient, sharing transcripts of his conversations with the model as evidence.

In response, Google placed Lemoine on paid administrative leave for breaking its confidentiality agreements.


Is LaMDA Actually Sentient?

So, is LaMDA actually sentient?

Most experts who’ve weighed in on the issue are skeptical.


This isn’t the first time one of Google’s AIs has fooled people into thinking it’s human.

In 2018, Google demonstrated its Duplex AI by calling a restaurant to reserve a table.

At no point did the employee on the other end seem to doubt they were talking to a person.

Sentience is tricky to define, though most people doubt AI has reached that point yet.

The LaMDA situation raises a lot of legal and ethical questions. In the US, for instance, legal rights belong to legal persons, a status courts have so far declined to extend to AI.

LaMDA's supposed sentience doesn't meet that legal bar, but should it?

Giving AI rights is a tricky subject.

Legal rights are built around rewards and punishments, but neither means much to an AI that can't experience them, which complicates any notion of justice.

Another question that LaMDA and similar AI chatbots raise is security: an AI that can convincingly pass for human, as Duplex did, could be misused for scams and impersonation.

AI Introduces Complicated Ethical Questions

AI like LaMDA keeps getting more sophisticated and lifelike.