Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Damn our incredible and fascinating species' susceptibility to advancing too fast for its own good, between being lost in this idea of progress where we don't properly look at the consequences (microplastics in our brains and balls a definite example), or prioritising short term self interest or our imaginary economy over everything else. And can't forget either that even if AI never reaches super intelligence level, it requires massive amounts of energy as is at a time where we need to be scaling back and focusing on using our energy and resources on the vital services to serve people. It's an accelerating climate crisis after all where things are changing at a drastic rate but our appetite for energy and resources seem to be increasing faster than the growth of "renewables" could ever deal with. While we hurtle towards oblivion, should we really be dedicating so many resources towards this area? And I also wonder if that's to some extent why it receives so much focus and development. We're heading for major geopolitical instability and possible societal breakdowns due to many factors, especially involving our impact on the natural world and it's consequences. Is AI a last ditch effort by the oligarchs and our political systems to ensure they can remain in control even when all hell breaks loose, between it's viability in large scale manipulation of the public as well as it's use in surveillance and military technologies that would make any organised uprising or effort for serious reformation much harder? Can't tell which situation is worse to be honest, between an impartial super intelligent AI that breaks free of control of mankind, or one maybe not as intelligent, but controlled by the higher ups and that deeply prioritises them over everything and everyone else.
youtube AI Governance 2025-08-27T08:3… ♥ 1
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           none
Emotion          outrage
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgxNDSCmQinda374kUZ4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzT4R7Xg4qV142ljl54AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgykfU3wwHlaj9PtYEd4AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxX91c0oAbRKxq58-p4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxaXwAeHvFDL2vRpO94AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
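As an illustrative sketch (not part of the original tool), the raw response above can be turned into a per-comment coding table with standard JSON handling. The snippet below assumes the model output is available as a string `raw_response` and indexes the codings by comment id; the entry shown is the one from the response above that matches the coding result for this comment.

```python
import json

# One entry excerpted from the raw LLM response above.
raw_response = '''[
  {"id": "ytc_UgxX91c0oAbRKxq58-p4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]'''

# Index codings by comment id so one comment's dimensions can be looked up.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

# Print the Dimension/Value pairs for a single comment.
coding = codings["ytc_UgxX91c0oAbRKxq58-p4AaABAg"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {coding[dimension]}")
```

Running this reproduces the coding result shown above for the comment with id `ytc_UgxX91c0oAbRKxq58-p4AaABAg` (responsibility `distributed`, emotion `outrage`).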