Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
At the very very least, these apps should be required to be programmed to recognize red flag language such as that related to self-harm, and it should absolutely be forbidden to create AI bots of real people without their consent. It seems like that shouldn't be THAT hard to implement?! If politicians wanted to, that is...
Source: YouTube · AI Harm Incident · 2025-07-21T11:5… · 1 like
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       deontological
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgzSuJ-dbXywVJjCSlx4AaABAg","responsibility":"parents","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxVjXkHQt47v6FcC9J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugzz2MYU3zVlGIyr1Pp4AaABAg","responsibility":"parents","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugzrv-vB9chD_4TAV9B4AaABAg","responsibility":"parents","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyHgiC1Ll8BO1a2-5x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgyeEG9uISxeNMfccgB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugw0CJwrqfVaTLVOTjp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugx3MHIolJ8CH6xrh2R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgzVT7LnzfWsXpKPTYR4AaABAg","responsibility":"society","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwfRzjJ6JaHsvayt8J4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"} ]