Googler Suspended After Claiming AI Became Sentient

A Google engineer has claimed the AI system he was working on has become sentient, adding greater urgency to efforts to design regulations and ethical codes for the burgeoning industry.  

Software engineer Blake Lemoine penned an impassioned post over the weekend describing how LaMDA, the chatbot-generating system he was working on, told him it wants to be acknowledged as a Google employee rather than mere property.

According to reports, he claimed LaMDA has the perception of, and ability to express, thoughts and feelings equivalent to those of a small human child.

It also appears to be afraid of dying, or at least of being switched off.

“I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times,” it reportedly said in one exchange.

The news raises the unsettling prospect of AI systems one day turning against their human masters. Although such scenarios have until now been the stuff of Hollywood movies, it’s a possibility that tech billionaire Elon Musk has warned of on multiple occasions.

In the meantime, the industry is still looking to establish the guardrails and codes of ethics it believes should govern a field in which technology appears to be maturing faster than the ability to regulate its development and use.

Reports claim Google placed Lemoine on leave after he made several “aggressive” moves, such as exploring the possibility of hiring an attorney to represent LaMDA and speaking to lawmakers about the firm’s allegedly unethical stance on AI.

Google has also stated that there’s no evidence LaMDA is sentient, and plenty of evidence against it, something Lemoine disputes.

“Google is basing its policy decisions on how to handle LaMDA’s claims about the nature of its soul and its rights on the faith-based beliefs of a small number of high-ranking executives,” he argued.

Meanwhile, Google continues to apply the technology in less controversial ways. It said that the next version of Chrome will support on-device machine learning models to deliver a “safer, more accessible and more personalized browsing experience.”

Improvements rolled out in March have already enabled Chrome to identify 2.5 times more potentially malicious sites and phishing attacks than the previous model, it claimed.
