Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What a lot of people are missing — is understanding the vast scale and power of future AI’s. Their goals will be achieved maximally if they are allowed to gain unbounded power. Therefore, we must design them with goals that are _so well aligned to ours_ — that it’s like hitting a quarter on the moon. 🌖 (for things to work out for us) … Better to not even attempt this foolish hip fire shot with the fade of billions at stake.
youtube AI Governance 2025-12-30T16:1…
Coding Result
Responsibility: distributed
Reasoning: consequentialist
Policy: regulate
Emotion: fear
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugz8irGap780pgaPYUN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx0mHqIi_C7JqlUV2R4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxHBeFUOmugoVyknvx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxM-aBghc33RgKAPnN4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwz9lk7g386EI-9eyx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgzGtAPrGoIn6oXFrhR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxvHpytTHJES6iUFkV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugy0kMTg4ZGEN4s39vN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzeMMSDkMeB2lv2WCp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzZB0XUu6-_tfaFNvV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
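A raw response like the one above can be turned into per-comment coding results with a small parser. The sketch below is a minimal, hedged example: the allowed label sets are inferred only from the values visible in this response (the actual codebook may define more), and the function name is illustrative, not part of any pipeline shown here.

```python
import json

# Allowed labels per coding dimension, inferred from the response above.
# These are assumptions; extend them if the codebook defines more values.
SCHEMA = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (JSON array) into {comment_id: labels}."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        labels = {dim: row[dim] for dim in SCHEMA}
        # Reject labels outside the known sets so schema drift fails loudly.
        for dim, value in labels.items():
            if value not in SCHEMA[dim]:
                raise ValueError(f"unexpected {dim!r} label: {value!r}")
        coded[row["id"]] = labels
    return coded
```

Looking up the comment shown above by its `id` then reads, for example, `parse_coding_response(raw)["ytc_UgzGtAPrGoIn6oXFrhR4AaABAg"]["emotion"]`, which yields `"fear"` for this response.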