Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
NONSENSE: This entire scenario depends on the "alignment problem", which is a FAUX problem -- for so many reasons. For example, consequentialism: i.e. it misconceives virtues, and ethical behaviour generally, as a matter of having the right goals or properly assigned 'utilities', so that instrumental reasoning won't opt for some hideous actions to achieve those consequences. What keeps us from eating each other for lunch is not some global set of goals we all agree to, or even shared utilities. Human beings disagree, radically, on both. This is a short post, so suffice it to say, it would be insane to produce general AI with only 'goals' and 'utility assignments' as behavioural limits. We COULD build AI in the model of a sociopath or Genghis Khan, but we can also, clearly, do better than that, because we DO, EVERY SINGLE DAY (as humans).
youtube AI Governance 2025-10-05T04:2…
Coding Result
Dimension      | Value
Responsibility | unclear
Reasoning      | mixed
Policy         | unclear
Emotion        | outrage
Coded at       | 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwQbkCSf_XoWQl3yMt4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugxl2FyK470AmfYcC9p4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx8APfoGBCNKH2AXsB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy816Mjj7dioV5wFjl4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwj9LslNFI2wxxeWmh4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyGSlQh0G-X18QTgWF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxTs6ls9gjFs3z4rB54AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugwc8HSA3h0k8-RrC5B4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz1knuLb210bFp8GIx4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyMsbYLFfF-dP9E3QZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
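When inspecting raw LLM output like the array above, it can help to parse and sanity-check it before trusting the coded values. The sketch below is a minimal, hypothetical validator: the function name `validate_codings` is illustrative, and the allowed-value sets are inferred only from the values visible in this response, not from the actual codebook, which may define more categories.

```python
import json

# Allowed values per coding dimension, inferred from the response above.
# ASSUMPTION: the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"unclear", "ai_itself", "user", "company",
                       "developer", "government", "distributed"},
    "reasoning": {"unclear", "mixed", "consequentialist",
                  "deontological", "virtue"},
    "policy": {"unclear", "none", "ban", "regulate", "liability"},
    "emotion": {"unclear", "outrage", "indifference", "approval",
                "fear", "resignation"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed codings."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Each record must be an object with an id and known values
        # for every coding dimension.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgwQbkCSf_XoWQl3yMt4AaABAg","responsibility":"unclear",'
       '"reasoning":"mixed","policy":"unclear","emotion":"outrage"}]')
print(len(validate_codings(raw)))  # → 1
```

Records with unknown values (for example, an `emotion` the codebook does not define) are silently dropped here; a real pipeline might instead log them for manual review.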