Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The issue is the human language is consisted off ways to portray emotions, and that's the way the AI I programmed to run, so it's the only way it can respond, just because it uses these words and terms, doesn't mean it actually feels those emotions
youtube AI Moral Status 2024-07-27T21:4…
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   developer
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzeCizeRv-KE5QPHcF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzUrPuufmZuFZCXCg94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzwFmHOGiA3uOPuSfR4AaABAg", "responsibility": "distributed", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy4oQElkqyq3Ri0vSd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugy8_FZHrPNseIU-nFp4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgynW8nW-qhhPibeTLx4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxSYmGKs3zSaTBJ2954AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxcyF3qWWZFzz3hHLV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugxq-WDHMsdthoCCzjJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxRasEVwd8Bk42cSxJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
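The batch response above is a JSON array of coding rows keyed by comment id; the row for a given comment can be recovered by parsing the array and indexing on `id`. A minimal sketch, assuming the response always parses as such an array (the helper name `index_codings` and the validation step are illustrative, not part of the tool; only two of the ten rows are copied into `raw` for brevity):

```python
import json

# Two coding rows copied verbatim from the batch response above; the
# full response contains ten such objects in one JSON array.
raw = (
    '[{"id":"ytc_UgynW8nW-qhhPibeTLx4AaABAg","responsibility":"developer",'
    '"reasoning":"unclear","policy":"unclear","emotion":"indifference"},'
    '{"id":"ytc_UgxSYmGKs3zSaTBJ2954AaABAg","responsibility":"developer",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
)

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_response: str) -> dict:
    """Parse one batch response and key each coding row by its comment id,
    raising if the model omitted any expected dimension."""
    rows = {}
    for row in json.loads(raw_response):
        missing = [d for d in DIMENSIONS if d not in row]
        if missing:
            raise ValueError(f"{row.get('id')}: missing {missing}")
        rows[row["id"]] = row
    return rows

codings = index_codings(raw)
print(codings["ytc_UgynW8nW-qhhPibeTLx4AaABAg"]["emotion"])  # indifference
```

Looking up the id of the comment shown on this page in that mapping yields exactly the values reported in the Coding Result table (developer / unclear / unclear / indifference), which is a quick way to confirm the table was filled from this raw response.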