Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I mean there are populations of people that are being wiped off the map right now and that has nothing to do with AI, well very little anyway, humans can hate a whole lot better than AI will ever be able to. We don't need computers to F ourselves up, we are quite capable of that on our own. The guy at the start of the video saying "we have agency, why would we make something that could hurt us, we just don't make it" what, you mean the same way we don't make guns and bombs anymore because they can hurt us, obviously I'm just pointing out what a ridiculously flawed argument that is. If one country doesn't make it, another will and they very well may be your enemies.
Source: YouTube, AI Harm Incident, 2025-07-26T01:4…
Coding Result
Responsibility: none
Reasoning: virtue
Policy: none
Emotion: resignation
Coded at: 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwLl7k5KIo1GWqYmPN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzilMHBtAM1v95ykKF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugwkd4r0xRW8kbkCIyN4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwgaa9KNdrItNIcCbF4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwAIXo120oJIJ4Q_0B4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzHURxaTi6JLt4yDDh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwmSMJKgn34KLZEG6p4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyZsUv6OkQY_tfdstl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyFtIOUQqK4n9V4PmJ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxt2o7opW8gdwpZyHV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "unclear"}
]
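A raw response like the one above can be parsed and checked before the codes are stored. The sketch below is a minimal example of that step: it parses the JSON array, validates each record against the label vocabularies, and indexes records by comment id. The allowed value sets are inferred only from the labels visible in this response, not from a definitive schema, and the `index_codes` helper name is hypothetical.

```python
import json

# Single-record excerpt of the raw response above, used as sample input.
RAW_RESPONSE = '''[
 {"id": "ytc_Ugwgaa9KNdrItNIcCbF4AaABAg",
  "responsibility": "none", "reasoning": "virtue",
  "policy": "none", "emotion": "resignation"}
]'''

# Label vocabularies inferred from the values seen in this response (assumption).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "mixed", "approval", "unclear"},
}

def index_codes(raw: str) -> dict:
    """Parse a raw LLM coding response and index validated records by comment id."""
    records = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim!r} value {rec.get(dim)!r}")
        records[rec["id"]] = rec
    return records

codes = index_codes(RAW_RESPONSE)
print(codes["ytc_Ugwgaa9KNdrItNIcCbF4AaABAg"]["emotion"])  # resignation
```

A lookup by id reproduces the coding-result block shown above (responsibility none, reasoning virtue, policy none, emotion resignation) for the displayed comment.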