Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I asked a couple of chatbots about this. Iirc, chatgpt didn't care about please but (then) Bing did. Or the reverse? They are looking out for signs of bad intentions so politeness could help defer massive guardrails being erected.
Source: YouTube, "AI Moral Status", 2025-04-30T05:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzsOW_U79JUE8Blk594AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugz5Nmo45qwidvyVWKh4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwxttUqa66aKxgT7HN4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxiVWDc9zj0WfYSoL94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxPCcuG1LqcxkaUYMt4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugw0HL0G63bC6wwlV3J4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwvXKhbRRK0k92q1YV4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugxukz_of0dO6uYON1Z4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzMgBr-6BfrPPwIwKt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz1zEbAsIfAozkp81R4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "approval"}
]
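The raw response is a JSON array coding a whole batch of comments at once, so recovering the codes shown in the table above means finding this comment's `id` in the array. A minimal sketch of that lookup, assuming only that each element carries an `id` plus the four coding dimensions (the `codes_by_id` helper is hypothetical, not part of the pipeline; the two sample entries are taken from the response above):

```python
import json

# Hypothetical sketch: index a batch coding response by comment id.
# Truncated sample of the raw LLM response shown above (two of the ten entries).
raw_response = '''[
  {"id": "ytc_UgzsOW_U79JUE8Blk594AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzMgBr-6BfrPPwIwKt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codes_by_id(raw: str) -> dict[str, dict[str, str]]:
    """Map each comment id to its coded dimensions, defaulting missing ones to 'unclear'."""
    return {
        item["id"]: {dim: item.get(dim, "unclear") for dim in DIMENSIONS}
        for item in json.loads(raw)
    }

coded = codes_by_id(raw_response)
# The second entry matches the Coding Result table above.
print(coded["ytc_UgzMgBr-6BfrPPwIwKt4AaABAg"]["reasoning"])  # → consequentialist
```

Defaulting absent dimensions to "unclear" mirrors how the table treats missing judgments, so a partially filled response still yields a complete row.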