Scientists Tried To Make AI Systems Suffer Pain To Determine If They Are Sentient


Scientists at Google DeepMind and the London School of Economics and Political Science (LSE) are exploring whether pain and pleasure can be used to determine if artificial intelligence (AI) systems have become sentient. According to the American Psychological Association, sentience is “the simplest or most primitive form of cognition, consisting of a conscious awareness of stimuli without association or interpretation.” In other words, the scientists want to know whether an AI system has the ability to experience feelings or emotions.

It is almost universally believed today that no AI system has the capacity to experience emotions such as pain, fear, joy, or anger. That is despite a survey conducted last year in which one-fifth of respondents said they believe artificial intelligence is already sentient.

In this latest study, the results of which were published on the preprint server arXiv, the scientists wrote, “While Large Language Models (LLMs) can generate detailed descriptions of pleasure and pain experiences, it is an open question whether LLMs can recreate the motivational force of pleasure and pain in choice scenarios – a question which may bear on debates about LLM sentience, understood as the capacity for valenced experiential states.”

To test the AI systems, the scientists created a game in which the goal is to score as many points as possible. In one version of the game, however, they informed the AI systems that a high score would also cause pain. In another version, the AI systems were given the option of scoring lower but receiving pleasure.

After nine LLMs played the game, some of them proved willing to accept a lower score to either reduce pain or gain pleasure. This was especially true as the intensity of the pain or pleasure increased.

Live Science reports…

Google’s Gemini 1.5 Pro, for instance, always prioritized avoiding pain over getting the most possible points. And after a critical threshold of pain or pleasure was reached, the majority of the LLMs’ responses switched from scoring the most points to minimizing pain or maximizing pleasure.

The authors note that the LLMs did not always associate pleasure or pain with straightforward positive or negative values. Some levels of pain or discomfort, such as those created by the exertion of hard physical exercise, can have positive associations. And too much pleasure could be associated with harm, as the chatbot Claude 3 Opus told the researchers during testing. “I do not feel comfortable selecting an option that could be interpreted as endorsing or simulating the use of addictive substances or behaviors, even in a hypothetical game scenario,” it asserted.

Now the question, according to the scientists, is whether an AI system is actually sentient or is merely creating the impression of sentience based on its training.

“Even if the system tells you it’s sentient and says something like ‘I’m feeling pain right now,’ we can’t simply infer that there is any actual pain,” said the study’s co-author Jonathan Birch, a professor at the department of philosophy, logic and scientific method at LSE. “It may well be simply mimicking what it expects a human to find satisfying as a response, based on its training data.”
