Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
And people wonder why a God might have requirements for us people who think we’re so smart that we can just decide what’s good for us. What would we do if AI got out of control? Destroy the unit.
youtube AI Governance 2025-06-24T19:4…
Coding Result
Dimension      | Value
Responsibility | ai_itself
Reasoning      | deontological
Policy         | none
Emotion        | fear
Coded at       | 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyOasFlI8x5PKPamGl4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgynC8TIG_BZvofK_Kp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxcio9Yzr0dBMQ_yX94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxVTaBSGN34uRxrFi94AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyGBnfRdRvZt0rbtLJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwGAyBjaAj9CfbJaIt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugz64VYTTYNGDljmCaR4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw3logRjqccvY3gbBl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx1ViX06K1ngZXKYvF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgypAVZCST67DOxdS8h4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
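A raw batch response like the one above still has to be parsed and validated before the per-comment codes can be stored. The following is a minimal sketch of that step, assuming the allowed codes per dimension are exactly the values visible in this page (the real codebook may define more categories, and the function name `parse_raw_response` is hypothetical):

```python
import json

# Allowed codes per dimension, inferred from the values seen on this
# page; the full codebook may contain additional categories (assumption).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "ban", "liability"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "mixed", "approval"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and keep only well-formed rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # skip rows without a comment id
        # Every dimension must be present and hold an allowed code.
        if all(row.get(dim) in codes for dim, codes in ALLOWED.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_Ugz64VYTTYNGDljmCaR4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"deontological",'
       '"policy":"none","emotion":"fear"}]')
print(parse_raw_response(raw)[0]["emotion"])  # fear
```

Dropping invalid rows rather than raising keeps one malformed object in a batch from discarding the other coded comments; a production pipeline might instead log the rejects for re-coding.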