Raw LLM Responses

Inspect the exact model output that produced the coding for any comment.

Comment
if humans are consistently fallible, maybe making ai bots that require users to be infallible, isn't the fault of the user... "humans are gonna keep being stupid" well yeah, but that doesnt mean we need to throw gasoline on that dumpster fire like ai does
Source: youtube · AI Harm Incident 2025-11-27T06:1…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        contractualist
Policy           regulate
Emotion          outrage

Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgwmU2vHgr9ySgFRCXJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzeV3_mW7I-JaAc2rp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxvsPtHmVKPf1iouZh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzfSB4K1blMUWCkkUt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzHujz0szacMPjScL14AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzYBlahzBf18ZhOBdp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzwroNH5XiXLxzzJ714AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugy8ZQ45SKrLzu8aMAJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgzGgvh_ROFM5bvWUNt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_UgyUQxQajXI24-vbk1J4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"} ]