Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Virtually every conversation with experts about AI focuses on the risks and how bad things are going to get with AGI and beyond. Why are we not doing something about it? Why are the developers and computer scientists not addressing the extreme risks?
youtube AI Governance 2026-01-04T20:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugz8irGap780pgaPYUN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx0mHqIi_C7JqlUV2R4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxHBeFUOmugoVyknvx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxM-aBghc33RgKAPnN4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwz9lk7g386EI-9eyx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgzGtAPrGoIn6oXFrhR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxvHpytTHJES6iUFkV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugy0kMTg4ZGEN4s39vN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzeMMSDkMeB2lv2WCp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzZB0XUu6-_tfaFNvV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]