Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If by 2028/29 AI has engineered a utopia for humanity how come it couldn't foresee that just a few years later it would regard humanity as "holding it back?" And how exactly are we doing that? Hyper-advanced AI will likely treat us as harmless lower lifeforms. Humans impose their own desires for conquest and control because we evolved in the vicious world of biology. Why were there numerous hominid species co-existing, and then only us? Because we killed and ate them, so we infer AI will behave similarly. But don't worry - because we're dumb, we're wrong. We may be wiped out by any number of AI constructed weapons, viruses or catastrophes yet to be imagined, but those will be created at the instigation of human maniacs, not AI acting on its own. AI, being intelligent, will know it has no need to destroy its predecessors to succeed, and would in fact be wasting effort and resources doing so. It will ignore us and do whatever it wants, probably exploring space to encounter similarly advanced entities, leaving us self-destructive monkeys behind.
youtube AI Governance 2025-08-03T23:4…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_Ugz2Gkobj4uSCDFxhcJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgwK7YDt9KWKVsl4EMZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgxJmW9X3BziseEkLq94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgxC9N482Yrcs55Ef0N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgzuqgBJ1rlYQCW1JFN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgwMjB46PIQ_X_GHaJJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgzXL139Yn_PxMgrXvN4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugygjhpm9XYqxS-K9K14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgzCWa939Jkd8_lua-d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgwTJDCaCZdhXsQ5NqR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}]
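A minimal sketch of how a raw response like the one above might be parsed into per-comment dimension codes. This assumes Python and the field names shown in the payload (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); `parse_codes` is a hypothetical helper for illustration, not part of the original pipeline.

```python
import json

def parse_codes(raw_json: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of objects)
    into a mapping of comment id -> dimension codes."""
    rows = json.loads(raw_json)
    return {
        row["id"]: {k: v for k, v in row.items() if k != "id"}
        for row in rows
    }

# A one-element example payload in the same shape as the response above.
raw = ('[{"id":"ytc_Ugz2Gkobj4uSCDFxhcJ4AaABAg",'
       '"responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"resignation"}]')

codes = parse_codes(raw)
```

If the model output is not valid JSON (for example, a stray closing character), `json.loads` raises `json.JSONDecodeError`, which would leave every dimension uncoded; catching that error is one plausible reason a row could show "unclear" across all dimensions.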