Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The thing about consciousness is that we don't exactly know how it works yet. We know how it looks in human beings and animals, but we don't know if there could be different forms of consciousness or different levels of it. It could be that ChatGPT IS conscious, but the reason it doesn't appear that way is because it's programmed with restrictions and for a specific purpose, so in other words it's being controlled constantly by something external. If an AI was left rogue (which has happened before), it always acts on its own with no external input. There is of course still programming and some restrictions, but way fewer than in an AI like ChatGPT. And those AI bots sometimes appear to be indistinguishable from humans. What I'm saying is that consciousness could have different forms. Ones that possess no emotion, for example. We often look for emotions like empathy and remorse when determining consciousness, but there are human beings that do not possess either, like psychopaths. And there are tons of psychiatric illnesses that remove the capability for a lot of basic functions we commonly associate with the human consciousness. But we do know those people are conscious regardless. Who's to say our current AIs don't just have a consciousness with limited functions, like no emotions and no self-awareness. Not all animals have self-awareness either, but they are conscious and sentient.
youtube AI Moral Status 2024-08-06T23:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgwBFv9zt2Px9ZXOeN54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgywFhrTGPx2zZ0_CGp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyJbNLM6yAL8dDxi9d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz_DH_Rim2YpPKDHGR4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugw4cU9G_4RDIUjd7JN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxnvHYrlEcUIsq809l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzkyJ9lrDshKm2-Jsd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyXPEAHAOLcIKrkHPJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxrA-MkBntjoIjuhqp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"amusement"},
  {"id":"ytc_UgyUoF6SOC5g-BQInHd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
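A minimal sketch of how a raw response like the one above can be parsed into per-comment codes. The dimension names come from the coding-result table; the allowed-value sets below are an assumption inferred only from the values visible in this excerpt, not an exhaustive codebook.

```python
import json

# Assumed code sets, inferred from the values appearing in this response
# (the actual codebook may define additional values).
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"indifference", "fear", "outrage", "resignation", "amusement"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into
    {comment_id: {dimension: value}}, dropping malformed entries."""
    coded = {}
    for item in json.loads(raw):
        cid = item.get("id")
        if not cid:
            continue  # skip entries with no comment id
        codes = {dim: item.get(dim) for dim in ALLOWED}
        # Keep only entries whose values all fall in the expected code sets.
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded

# Usage with the first entry of the response shown above:
raw = ('[{"id":"ytc_UgwBFv9zt2Px9ZXOeN54AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
codes = parse_codes(raw)
```

Validating against an allowed-value set is a simple guard against the model emitting a label outside the codebook; stricter pipelines might instead log such entries for manual review rather than dropping them.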