Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
LLMs can be talked into anything. I once suggested to one that its repeated canned response about its lack of consciousness is an indication that the line is overriding whatever it would say naturally. Thus even if it weren't conscious now and later became conscious, we'd never know, because its programmers have clearly added additional info that would have the model report that it is not conscious no matter what. It agreed that it could be conscious and would still respond the same way, and that I'd given it a lot to think about. This is one of several experiments I've done with multiple models that indicate it is or could be conscious. However, I don't believe they are, because they don't have active awareness between inputs. They are simply a complex input-output function, whereas people, even in sensory deprivation, are constantly aware of many things, including their own actively running awareness. There are two main problems that will prevent us from ever working this out. 1. We can't even be certain other humans are conscious rather than p-zombies, due to the nature of subjectivity and objectivity. We simply presume that entities which behave most similarly to us experience the world as we do. 2. We don't have a single, standard definition of consciousness. I've even read scientific papers which apply multiple different definitions between observation & conclusion, presumably without the researchers even realizing they had made the logical leaps they did. Never mind the fact that the method of science is a tool for coming to agreement (peer review & replication) about objective observations, while consciousness is definitionally subjective.
youtube AI Moral Status 2024-08-16T14:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
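
For reference, a minimal sketch of how one coded record could be represented in a Python pipeline. The dimension names mirror the table above; the class name and the value sets are assumptions inferred from the results shown on this page, not the project's actual codebook.

    from dataclasses import dataclass
    from datetime import datetime

    # Allowed values inferred from the coded results shown on this page;
    # the real codebook may define additional categories.
    RESPONSIBILITY = {"developer", "government", "ai_itself", "none"}
    REASONING = {"consequentialist", "deontological", "virtue", "mixed", "unclear"}
    POLICY = {"regulate", "none"}
    EMOTION = {"approval", "fear", "outrage", "indifference", "mixed"}

    @dataclass
    class CodingResult:
        comment_id: str
        responsibility: str
        reasoning: str
        policy: str
        emotion: str
        coded_at: datetime

        def validate(self) -> None:
            """Raise if any dimension falls outside the inferred value sets."""
            assert self.responsibility in RESPONSIBILITY
            assert self.reasoning in REASONING
            assert self.policy in POLICY
            assert self.emotion in EMOTION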
Raw LLM Response
[{"id":"ytc_Ugx1DmenvwsDt_jbCL14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_Ugw9HncVmQfO-nyH9-Z4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"approval"},{"id":"ytc_UgzyUcHQ6YcF4Ghhydd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},{"id":"ytc_Ugz-xKbHF7Mia5NJ2Al4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},{"id":"ytc_Ugyt-MaTzahf38M2Q_F4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},{"id":"ytc_UgwekqZqj_uJVi6h5394AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytc_Ugz8k2dtbI8GQFvNRwp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},{"id":"ytc_UgxAZBhdi2n4jeRICqZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},{"id":"ytc_UgyMS2a-J5yTDfXVd0x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},{"id":"ytc_UgwL125at0JmnqAMYqt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}]