Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In summary, the risk is real but it's likely not so immediate that we cease AI development and short term gains for some unknown future date where it will harm us. But likely the harms will be gradual and impact folks at small scales as bad human actors use the tech to do harm before we get to the point where an AI super villain is created
Source: youtube · AI Governance · 2024-11-15T01:3… · ♥ 1
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgwRvWP_k7v_jN9-Te14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyksdh6rn-4hBjfu214AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxlTd1d2AkohR8lVSZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxF1_HmuOODIl8KiOF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzTMs1seu-Hm2wg1tB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzck-R6lKxbvEb8M5Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyyLzF6cJe301DdxjF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyL07Rq-EVfO1ActR94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugz6Llf_yDF9Gc34V9B4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzaIf0jFeodxvBJt2d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
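The raw response above is a JSON array with one coding object per comment, keyed by comment id. A minimal Python sketch of how such a response could be parsed to look up the coding for a single comment (the helper name `coding_for` is hypothetical, not part of the tool; the field names and the example id are taken from the response shown):

```python
import json

# A trimmed example of the raw LLM response format shown above:
# a JSON array of coding objects, one per comment id.
raw_response = '''[
  {"id": "ytc_UgxlTd1d2AkohR8lVSZ4AaABAg",
   "responsibility": "user", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]'''

def coding_for(raw, comment_id):
    """Return the coding object for comment_id, or None if absent."""
    for entry in json.loads(raw):
        if entry["id"] == comment_id:
            return entry
    return None

coding = coding_for(raw_response, "ytc_UgxlTd1d2AkohR8lVSZ4AaABAg")
print(coding["policy"])   # regulate
print(coding["emotion"])  # fear
```

Parsing the whole array first and then filtering by id keeps the lookup robust to the order in which the model emits the objects.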