Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
On the point about "what's wrong with humans?" I think we're willing to entertain the development of such a high-risk technology, as a society, because of what has been promised. We're at a historic inflection point overall, there is massive political unrest across the globe, the economy of the world is extremely unstable, people are starving despite there being more than enough food (due to bad distribution), etc...etc... And here come the AI companies saying "hey, it's all good, just bear with us a bit longer...we're inventing God and when we're done it'll all be better." People are either desperate enough or otherwise preoccupied to realize the danger presented, and many feel like whatever comes out of it...will be better than what we have. There are a LOT of people who have little else to hope for than the idea that AI will save us all. And THAT is far more dangerous I think.
youtube AI Moral Status 2025-11-03T07:1…
Coding Result
Dimension       Value
Responsibility  government
Reasoning       contractualist
Policy          regulate
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwKBnOek438mAagMAd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzxYQRVAegFgHXg7Xx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy1Uh_2A6Hmqz2zX3N4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgyWlZdsdRzUsOyBErZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyhMjYKq1Cxw9NDepx4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxRPbCIL-qFBAmgtih4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw6tbpjSp5ybqmD2ON4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgzOZfFGG5Nz-yNf8cx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxNbi0qF58Lo_Arj2B4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzuuXOlamvh4ku8XWV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"mixed"}
]
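A raw batch response like the one above can be parsed and validated before the rows are stored. The sketch below is a minimal illustration, not the tool's actual pipeline: the allowed category values in SCHEMA are inferred from the responses shown here (the real codebook may define more), and the function name is hypothetical.

```python
import json

# Allowed values per coding dimension, inferred from the responses above;
# the actual codebook may contain additional categories.
SCHEMA = {
    "responsibility": {"government", "company", "developer", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "unclear"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response; keep only rows whose values are in-schema."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

# Usage: one row from the batch above.
raw = ('[{"id":"ytc_UgyhMjYKq1Cxw9NDepx4AaABAg","responsibility":"government",'
       '"reasoning":"contractualist","policy":"regulate","emotion":"outrage"}]')
coded = parse_batch(raw)
print(coded[0]["policy"])  # regulate
```

Dropping (rather than coercing) out-of-schema rows keeps hallucinated category labels from silently entering the coded dataset; rejected rows can be re-queued for a retry prompt instead.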