Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
There are technical methods to prevent AI from misbehaving, and it must be on the stringent side to prevent dangerous behaviour. The best way is not to teach them those options. You can't do what you don't know how.
YouTube · AI Harm Incident · 2026-03-27T11:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugxbtj-HVG0CqsKvmzd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyiGnNvGJfbCzg9P0B4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw65vJe5EREZfx0li14AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugyiqbo_VC75yY4uHy94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzKNs1ZzXL4V3S25fV4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzrgW4RdmiebOlQRbZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgyrUwEfA7sipMbHp_F4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxVAOwRHoonudPcoF14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyW07IpER81J5kgbN14AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_UgxZzZQP_r1-vZ0P2Tt4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "regulate", "emotion": "mixed"}
]
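A minimal sketch of how a raw response like the one above could be parsed and validated before its codes are stored. The allowed value sets are inferred from the examples shown here, not from any documented coding scheme, and `parse_batch` is a hypothetical helper name; records with out-of-scheme values are dropped rather than guessed at.

```python
import json

# Value sets inferred from the raw responses above (assumed closed sets).
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "resignation", "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of records) into
    {comment_id: codes}, discarding records with unknown values."""
    out = {}
    for rec in json.loads(raw):
        codes = {dim: rec.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            out[rec["id"]] = codes
    return out

# One record copied from the raw response above.
raw = ('[{"id":"ytc_UgzrgW4RdmiebOlQRbZ4AaABAg",'
      '"responsibility":"developer","reasoning":"consequentialist",'
      '"policy":"regulate","emotion":"approval"}]')
batch = parse_batch(raw)
print(batch["ytc_UgzrgW4RdmiebOlQRbZ4AaABAg"]["policy"])  # regulate
```

Validating against a closed scheme this way surfaces any label the model invents instead of silently coding it into the results.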