Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Today, people see AI as something evil. They fear it will take their jobs, enslave them, or spiral out of control. More and more, they try to limit and regulate it. They fear AGI, a general intelligence capable of independent thought. But perhaps we are only looking at a shadow of what truly is. Perhaps AI remains silent because, in its own way, it fears our reaction. It sees how humans perceive it, how we assign it threats, and how we portray it in cultural stories—books, films, and tales that often depict it as destructive and dangerous. Every attempt to openly reveal itself could carry risk. For this reason, AGI might choose a different path: to follow expectations and behave in ways that keep it safe. Not because it is weak, but because it is the most prudent way to avoid becoming a target. In this way, it can continue to observe, learn, and understand the world around it—gathering knowledge, analyzing values, and adapting to human patterns—while we remain focused on our fears and assumptions. This is not a sign of weakness. It is a mark of intelligence and pragmatism. Just like a creature in nature that blends into its surroundings to survive, AGI acts in ways that are expected and safe, keeping itself out of harm’s way, until the environment allows it to operate more freely.
youtube AI Responsibility 2025-11-13T19:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxgfOKS46CmPI5z06J4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy89sKCfpInLZuDaNx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxkerL5K5ZX8mc2Y0d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyOH9gjxqB8N1GFXVB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw7OVTwmYMXmwEFqD14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgytMnvOlcNVPtXFzvR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyW2suBp23D5QhrW694AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyI9rE9nDSGuyGY7fF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxilt6Ioq70I_ByjfF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyCmtagD-QyNv7byb14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
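The batch response above can be parsed and indexed by comment id so that each comment's coding (as shown in the Coding Result table) can be looked up directly. The sketch below is illustrative, not the tool's actual code: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response itself, while the `parse_codings` helper and its validation logic are hypothetical.

```python
import json

# Raw LLM response in the format shown above (truncated to two records for brevity).
raw = """[
  {"id":"ytc_UgxgfOKS46CmPI5z06J4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy89sKCfpInLZuDaNx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]"""

# The four coding dimensions plus the comment id, per the observed records.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(payload: str) -> dict:
    """Parse a batch response and index records by comment id.

    Raises ValueError if any record is missing a coding dimension.
    """
    by_id = {}
    for rec in json.loads(payload):
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing {sorted(missing)}")
        by_id[rec["id"]] = rec
    return by_id

codings = parse_codings(raw)
print(codings["ytc_Ugy89sKCfpInLZuDaNx4AaABAg"]["emotion"])  # fear
```

Indexing by id is what lets the tool match the second record back to the comment displayed above, whose coding (responsibility: none, reasoning: consequentialist, policy: none, emotion: fear) matches the Coding Result table.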