Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I do seem to recall reading some years ago many manifestos about the future and possible future utopias referencing what utopiad could be. It´s not as much a subject in our world as in the asian world of manga. There's plenty of creative people out there that can give you a clear picture of what utopia can be like, even star trek. Geez, even The Jetsons. I may have watched too much tv.. POINT IS: there will always be bad apples, issues. There will always be (in this world right now) frustrated youngsters in Berlin wanting to see the world burn. We may all have a biological computer for a brain, but we make mistakes on purpose, all of us, which is something the AI would have to be instructed to do. The concept of God is also very interesting. We are made to believe in one by our elders and people of importance, but what happens with the "gorilla equation" (meaning, no man would believe a speaking gorilla talking about a god that teaches you to be good and mercyful). There is just so much to unpack here
youtube · AI Governance · 2026-01-09T01:0…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwWW7uu4faWK9YiBix4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgylhNzUTbe6R23Felx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwhQJMxegBs4FaMsGd4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwZ5flsnCggMu-ZEEd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy5UWVLQEfdaQyUGfp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwk8QPLX6US6-kI4Fl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgweL7zioowZ3BH9kRJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy7zkiLKGtmn93mwQd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxETHT0nuGAvImuQoF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwGNzHryhVgzCMhfjR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
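A raw response like the one above can be turned into per-comment coding records with a small parsing step. The sketch below is illustrative, not the pipeline's actual code: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the raw output shown, but the allowed label sets are inferred only from the values visible here (the real codebook may include more labels), and the helper name `parse_coding_response` is an assumption.

```python
import json

# Label sets inferred from the raw response above; the real codebook
# may define additional labels for each dimension.
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"ban", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw JSON array of codings into cleaned records.

    Any missing dimension, or any value outside the expected label
    set, falls back to "unclear" so downstream aggregation never
    sees an unexpected label.
    """
    cleaned = []
    for rec in json.loads(raw):
        row = {"id": rec.get("id", "")}
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim, "unclear")
            row[dim] = value if value in allowed else "unclear"
        cleaned.append(row)
    return cleaned
```

For example, `parse_coding_response('[{"id":"x","responsibility":"company"}]')` yields one record with `responsibility` kept as `"company"` and the three unstated dimensions filled in as `"unclear"`.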