Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The whole point of AI was to build a technology that is smarter than humanity, an intelligence that has our general welfare in mind. Why would you work in AI if you DIDN'T want to create something that could put you out of work? Seriously, the power of AI has always been essentially a coin flip. If it comes up heads, we gain a superintelligent being that takes care of our every want and need, makes money meaningless, and continues to advance itself, essentially becoming a benevolent God that takes care of its creators. If it comes up tails, it destroys us. I prefer heads, but I can accept tails as well. It is clear beyond any doubt that humanity cannot save itself. We are too primitive, too invested in self-interest and tribal thinking. Technology evolves billions of times faster than our ape brains do; of course we can't adapt properly as a species. If AI does not save us, then we are already doomed; humanity is actively destroying itself and all other life through environmental collapse. I will continue to hope for the best, and be prepared for my destruction should it all truly go wrong.
Source: YouTube · Topic: AI Governance · Posted: 2026-03-17T00:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_Ugwn9XIo1mck0Ky1wMl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxDVE1qxC5mLcrpIY54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzyBD10rSXCQnHWDex4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxqP_uY6qZPHFz0AM54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugz7pJCfPaLyAlzWUR14AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyZGPzdx6LEkDNY1x94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgxdPJEGAbpI5rB-Svh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_UgzHyOQBTTjDmIyBO5d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_UgxG_py5wcrWBqUL0Qp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxHfGnh5L7CdBhzQvF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"} ]