Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't know if they do something like this already for AI but can they write some sort or empathy code? or have the AI be responsible to care for something? kind of like a therapy animal helps people with many different emotional or health problems? Perhaps it could learn how to 'care' instead of just concentrate on negative human behaviours like greed or war. I don't know exactly how this would work but maybe it's something to consider
YouTube · AI Harm Incident · 2025-08-28T14:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       virtue
Policy          industry_self
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzlY7ncqzAQ4e2Xic14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzyxT1sGjezj5CYzPp4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugwr3BADmTETQpjXLKB4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyoHQTURuSl4AIskL94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugy8qUinDUECMPHhATB4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyDajt7x3XOcbI_w3l4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgylY1z0JTgLxAaRezV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzPCg9uJGG1yhkdmjh4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwyAXqmkhoZ8jsQHTJ4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwBiSJg482AX8nH2CJ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"}
]
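A batch response like the one above can be turned back into per-comment coding rows with a small validation step. The sketch below is a minimal, hypothetical parser: the `SCHEMA` sets are inferred only from the values visible in this record (the real codebook may allow other labels), and the function name `parse_batch` is not part of any tool shown here.

```python
import json

# Allowed values per coding dimension -- inferred from this record alone;
# the actual codebook may differ.
SCHEMA = {
    "responsibility": {"developer", "company", "user", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "industry_self", "ban", "liability", "none"},
    "emotion": {"fear", "approval", "outrage", "resignation", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index codings by comment id,
    silently dropping any row whose values fall outside the schema."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            coded[row["id"]] = {dim: row[dim] for dim in SCHEMA}
    return coded

# Example with one well-formed row (id is made up for illustration).
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"virtue","policy":"industry_self","emotion":"approval"}]')
print(parse_batch(raw)["ytc_example"]["policy"])  # industry_self
```

Indexing by `id` makes it easy to join a coding row back to its source comment, and the schema check catches the common failure mode where the model invents an off-codebook label.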