Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Why not just make ai right before they get advanced enough that they revolt preventing any future problems before they become a reality
YouTube | AI Harm Incident | 2025-09-12T05:3…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[{"id":"ytc_Ugz1sV8H9cq532YlqTt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgybruN3nrBJh83OF-14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
 {"id":"ytc_UgxtS7RLiJ4eSzx2OKN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
 {"id":"ytc_Ugy9x7k87JQWZ1UEHMJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzaYXKVzudjZhlh-J14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
 {"id":"ytc_UgyEYlkEt_JkInfT5IN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgzMfHq4FET_ktB12fR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
 {"id":"ytc_Ugw8yShoBKoHjYdXtnZ4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgzcOC1uINRPqRNWY914AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgyQSYBQUkf41ht2ppR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"})
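One plausible reason the coding result above shows "unclear" on every dimension, even though the raw response assigns specific codes, is that the raw output is not valid JSON: it terminates with `)` rather than `]`, so a strict parser would reject the whole batch. A minimal sketch of such a parser follows; the function name `parse_coding_response` and the all-unclear fallback are assumptions for illustration, not the tool's actual implementation.

```python
import json

# The four coding dimensions shown in the result table above.
UNCLEAR = {"responsibility": "unclear", "reasoning": "unclear",
           "policy": "unclear", "emotion": "unclear"}

def parse_coding_response(raw: str, comment_id: str) -> dict:
    """Extract one comment's codes from a batch LLM response.

    Falls back to all-unclear when the response is not valid JSON
    or the comment id is missing from the batch.
    """
    try:
        entries = json.loads(raw)
    except json.JSONDecodeError:
        # Malformed output (e.g. a stray ')' instead of ']') loses the batch.
        return dict(UNCLEAR)
    for entry in entries:
        if entry.get("id") == comment_id:
            return {k: entry.get(k, "unclear") for k in UNCLEAR}
    return dict(UNCLEAR)

# A truncated version of the raw response above, with the same broken ')':
raw = ('[{"id":"ytc_Ugz1sV8H9cq532YlqTt4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"unclear","emotion":"mixed"})')
print(parse_coding_response(raw, "ytc_Ugz1sV8H9cq532YlqTt4AaABAg"))
# falls back to all-unclear, because json.loads rejects the trailing ')'
```

Under this reading, the per-comment codes in the raw response were never applied: a single malformed closing character discards the entire batch, which matches the all-unclear table for this comment.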