Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Imagine if monkeys knew that humanity was on the rise and they were afraid and wished to prepare defenses to keep themselves safe from humans. What could they do? It is the same for us when contemplating how we are going to keep ourselves safe from superintelligence. We cannot predict what it will do because we are not smart enough to accurately put ourselves in its shoes. When we try we are only projecting ourselves or anthropomorphizing what is not human. But superintelligent AI is not you, is not human. Doesn't think like you. Doesn't think like a human (but CAN pretend to well enough to fool you). What we can say for certain is that our dominion over the earth will be over. Done. Some people think we will be able to control superintelligent AI. This is such a funny perspective when we can't even control ourselves. Think about it. Think about the fact that the least likely path forward is just simply controlling ourselves and not building it. We've already given up on that because we know we can't control ourselves. But at the same time we've deluded ourselves into thinking that somehow we can control an intellect that is magnitudes beyond us. Pure hubris. Does this sound like the kind of brilliance that is going to save the world?
youtube AI Governance 2025-09-22T04:3…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzCysdJZVbJPvaXxjN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgykCykO0HrQ3cmXtcd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzmAGq76bm1G6XmB1V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzuHY47YJcMrtodUul4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzhHfzMMFtc8z7qD254AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzQG30aWFi9vcpssmJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzKZDIVUqzCtzHTKgB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx7ftxRb5cMzgG2DLN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxfCMeqlPt5aHmtqaV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyuNRRFJbf-16OcgUF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"}
]
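The coding result shown above for this comment corresponds to one record in the raw batch response, matched by comment id. A minimal sketch of that lookup, assuming the raw response is valid JSON as displayed; the id `ytc_UgzKZDIVUqzCtzHTKgB4AaABAg` is the record whose dimension values match the table above, and the truncated listing here is for illustration:

```python
import json

# Two records copied from the raw LLM response above
# (truncated for illustration; field names as in the source).
raw = """[
  {"id":"ytc_UgzCysdJZVbJPvaXxjN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzKZDIVUqzCtzHTKgB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]"""

# Index the batch output by comment id for O(1) lookup.
records = {r["id"]: r for r in json.loads(raw)}

# The per-comment coding result table should agree with the
# record carrying that comment's id.
coded = records["ytc_UgzKZDIVUqzCtzHTKgB4AaABAg"]
assert coded["responsibility"] == "ai_itself"
assert coded["emotion"] == "fear"
print(coded)
```

This is how a viewer like this page can reconcile the dimension table with the raw model output: the batch response is keyed by id, and the displayed row is simply the matching record.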