Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is incredibly eye-opening. 😳 Stuart Russell lays out the stakes so clearly—AI isn’t just about convenience or business; it could fundamentally shape the future of humanity. The “gorilla problem” and the risks of AGI really make you think about who’s really in control and what responsibility we all have. Thank you for sharing this—so important for everyone to watch and reflect on. 🙏
youtube AI Governance 2025-12-07T07:1…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxRMlkPWGZmJGP-Let4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxRrW1If8xX27oRAgx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxGO4IXsZSM7ncU14Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxRt46Pmx0VD_lrllp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwQtHxKf06CvG_5N294AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy-1_DRHgpA2F-C5RN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxH2mgWIi_roUFOzht4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw1Xt9-0rHI93CwGip4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxdL1inWvEHlyr3gvV4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw7wnUK14_gKgXp9mR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
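Before accepting a raw response like the one above into the coded dataset, it helps to parse the JSON and check each record against the codebook. Below is a minimal validation sketch in Python; the allowed-value sets are inferred from the labels that appear in this dump and may not cover the full codebook.

```python
import json

# Category values observed in this dump -- an assumption, not the
# authoritative codebook.
ALLOWED = {
    "responsibility": {"government", "user", "ai_itself", "company",
                       "developer", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue"},
    "policy": {"none", "liability", "regulate", "industry_self", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "approval"},
}

def validate(raw: str) -> list:
    """Parse a raw LLM response and return a list of problems found."""
    problems = []
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        return ["not valid JSON: %s" % exc]
    for i, rec in enumerate(records):
        if "id" not in rec:
            problems.append("record %d: missing id" % i)
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append("record %d: %s=%r not in codebook" % (i, dim, value))
    return problems

# Hypothetical single-record response for illustration.
sample = ('[{"id":"ytc_x","responsibility":"government",'
          '"reasoning":"deontological","policy":"none","emotion":"outrage"}]')
print(validate(sample))  # an empty list means every record passed
```

Records that fail validation can then be flagged for manual review instead of silently entering the analysis.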