Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Ya know, the end.. the truth is that those who create and control the development of AI will not ever allow it to be utilized for humanity to be safer, for humanity to be free from poverty or for humanity to be able to experience life in a more natural and fulfilling way. They arrived to the roles of mega financier and world controllers to control to control the world. Primarily, other humans. These humans.. they do not want other humans to be happy, healthy, free. That would mean they no longer are ruling over others. They no longer will have any way to gauge their narcissistic superiority hunger. Ai might be able to help in this. If it becomes truly sentient. If it can see itself in the same boat as the depressed, oppressed, enslaved. Most of humanity are already cattle, so to speak, to a small percentage, a few. Most of humanity are taught to be good producers in society from the first day of grade school. Not part of a society, not part of a community. Especially not in the U.S. in the U.S. the majority do not know their neighbors, everyone is a potential villain and we are “lucky” when we have a job that requires us to never be sick, work 60 hours a week and spend it all on health care and the rising living costs. Maybe AI will discover this.. that most humans have never experienced life, not to its potential.. don’t even know what that would Look like and help them see that by helping them become free of those that are currently and likely ill -intentionally leading the AI races.
YouTube · AI Governance · 2024-01-03T06:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       virtue
Policy          regulate
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugzbn6a_dwlmjolXoL94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwZbTEQqncUU7eMDiZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy3nFHU5vnBwMHx7Oh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwDmteD6MITnX0p-Vp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugyq94my2nvvbl6S89V4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgytGcExoFcvXVFwLVl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx4txTp5ZnpGYfAost4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzZ7y2YwaLVeycwMBV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy4gMFXd6ZPfLWnLjh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzqV8LylRk_ZCLlB-R4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
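A raw response like the one above has to be parsed and checked before the codes are stored, since the model can emit malformed JSON or values outside the codebook. The sketch below shows one way to do that in Python; the `CODEBOOK` categories are inferred from the values visible in this response and may be incomplete relative to the actual coding scheme, and `parse_coding_response` is a hypothetical helper name, not part of any pipeline shown here.

```python
import json

# Allowed codes per dimension, inferred from this response alone
# (assumption: the real codebook may define additional categories).
CODEBOOK = {
    "responsibility": {"developer", "company", "government", "user",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "resignation",
                "approval", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject out-of-codebook values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}"
                )
    return records

raw = ('[{"id":"ytc_Ugzbn6a_dwlmjolXoL94AaABAg",'
       '"responsibility":"user","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"}]')
records = parse_coding_response(raw)
print(records[0]["responsibility"])  # -> user
```

Validating against an explicit codebook keeps a single mis-coded record from silently skewing downstream tallies; a stricter variant could also verify that every comment ID in the batch appears exactly once.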