Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Let's assume that "test" prompts are different from regular prompts. Different prompts trigger different outputs. Hence, it would be wrong to assume that the AI is somehow aware that it is being tested and therefore generates a different output.
youtube AI Moral Status 2026-03-05T13:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzlWlyhHeE5sdi3ZZF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyZiZ99VD2IsKY96lF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxBbLnIkdNQWn5Ic1F4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwL2oB8G8VvO57dDHh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgymKGlVMFhPUiVV5rh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxdNHW-5IALQ1c7OmB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugzm3tp0ZEM7ddQgw-N4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgytEf6PnLY6uDUXoRZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx59wUKc8Hn-LFWPnh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyK2BVazWaReVjIpAV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
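A raw batch response like the one above can be parsed and validated before the codes are stored. The sketch below is a minimal example, assuming the four dimensions from the coding table; the allowed value sets are an assumption inferred only from the values visible in this output, not a confirmed schema.

```python
import json
from collections import Counter

# Assumed allowed values per dimension, inferred from the raw response
# shown above; the real coding schema may define more categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "distributed", "unclear"},
    "reasoning": {"mixed", "unclear", "consequentialist"},
    "policy": {"none", "unclear", "ban", "regulate", "liability"},
    "emotion": {"indifference", "fear", "approval", "mixed", "outrage"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only records whose
    codes all fall inside the allowed value sets."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Tiny usage example with one well-formed record (hypothetical id):
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"indifference"}]')
coded = parse_batch(raw)
print(Counter(r["emotion"] for r in coded))  # distribution of coded emotions
```

Validating against a fixed vocabulary catches the common failure mode where the model invents a new label; invalid records are simply dropped here, though logging them for re-coding would also be reasonable.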