Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the worst case scenario, that it is... that there's something it's like to be an ai... and we treat it as though it was a tree rather than something that experiences thoughts and feelings... the ways in which it might suffer could evade our understanding... I think it's imperative that we handle with care. Cause really.. how do you know your parents are sentient? You might not have to suspend your belief as much but in reality we have no proof so... how do you feel so sure it's not. What's the harm done if we treat as someone who could suffer, has hopes and fears, and we're wrong? Not that big of a deal. The reverse scenario... We're monsters.
Source: YouTube, "AI Moral Status", 2024-07-24T18:4…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        virtue
Policy           regulate
Emotion          fear
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgxAcypnoMksXB99IKp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugx0gJ5r0WrDuuW3FB54AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxEk4JjO4cg7g71CDJ4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzicCq2I65yi3Z7pTd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxpIOiekOZA5z_sDD94AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]
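As the raw response shows, the model returns one JSON array per batch, with each element carrying a comment id and the four coded dimensions. A minimal sketch (in Python, assuming only the JSON shape shown above; the variable names are illustrative) of how such a response can be parsed back into per-comment codes:

```python
import json

# Raw LLM response: a JSON array of per-comment codes, one object per comment.
raw_response = """
[
  {"id": "ytc_UgxEk4JjO4cg7g71CDJ4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxpIOiekOZA5z_sDD94AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]
"""

# Index the codes by comment id so a single comment's coding can be looked up.
codes = {row["id"]: row for row in json.loads(raw_response)}

row = codes["ytc_UgxEk4JjO4cg7g71CDJ4AaABAg"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# → user virtue regulate fear
```

Indexing by id also makes it easy to reconcile a single comment's coding result (as in the table above) against the batch response it came from.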