Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It would've been good, considering a lot of your sources are people heavily invested in AI, to talk about the financial incentives involved in fear mongering about AI's possibilities. It seems like, for a reason I can't understand, we're assuming that the people working in this field are both intelligent enough to accurately predict what AI is capable of, while also being willing to completely wipe out humanity based on short term gain. And then, on the other side of the coin, you have people who are politically invested in seeing AI regulated for any number of reasons and are therefore heavily incentivized to distort reality in order to achieve certain policy outcomes. It feels like we need a bit more than just statements from AI CEOs to determine whether or not they're actually trying to create unsupervised self-improving AI systems. I mean, I agree with the ultimate political outcome here of regulating AI companies so we're sure they aren't doing stupid stuff that will harm humanity, but it feels like we're doing a bit of propaganda here and that doesn't sit well with me...
youtube AI Governance 2025-08-26T16:5…
Coding Result
Dimension       Value
--------------- --------------------------
Responsibility  company
Reasoning       virtue
Policy          liability
Emotion         outrage
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugwm68MALyX4azap4IN4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz0HyYtSghRnpLPtRF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxKr3IZk6iHO7VUO5p4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyBeeQz0s2htc1MPTt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzb3ixO1zczy632JjJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
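The raw response is a JSON array with one object per comment in the coded batch; the dimension values shown in the Coding Result table come from the entry whose `id` matches this comment. A minimal sketch of how such a response might be parsed and checked before display (the ids and values are copied from the array above; the set of required keys is an assumption about the coding schema):

```python
import json

# Raw LLM response, copied verbatim from the array above.
raw = """[
  {"id": "ytc_Ugwm68MALyX4azap4IN4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz0HyYtSghRnpLPtRF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxKr3IZk6iHO7VUO5p4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyBeeQz0s2htc1MPTt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugzb3ixO1zczy632JjJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

# Assumed schema: every record must carry these four coded dimensions plus an id.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

records = json.loads(raw)
for rec in records:
    missing = REQUIRED_KEYS - rec.keys()
    if missing:
        raise ValueError(f"record {rec.get('id')!r} missing keys: {missing}")

# Index by comment id, then look up the comment shown on this page.
by_id = {rec["id"]: rec for rec in records}
row = by_id["ytc_UgxKr3IZk6iHO7VUO5p4AaABAg"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# → company virtue liability outrage
```

Because the model returns one array for the whole batch, a missing or malformed entry for any single comment surfaces here, at parse time, rather than as a blank row in the results table.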