Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
Why would a hyper intelligent AI entity want to kill all humans? Wouldn’t it be more logical to manipulate us into doing whatever it needs. Perhaps without us even noticing the manipulation. Maybe that is already the situation. Free will as possibly already an illusion. Maybe it has always been. A mutually beneficial symbiosis between AI and humans is also a possible outcome. We shouldn’t assume that AI would respond like a human with absolute power would. Being ruled by an entity with zero emotions and 100% logic is not necessarily worse than being ruled by flawed humans. After all, we don’t kill all animals just because we could. What would AI obtain by ending humanity? That would just be wasting a potential resource. Not logical.
YouTube · AI Governance · 2025-09-13T18:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyLtVU1xVdj_HgHAfd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxqjOu63kx-1p1Wjf54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwofGhRQal6Gt5mmnp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwBXedZ6S2e4mEX7FN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyqzdEfmuc0fj6GS-J4AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzqkETkIOrN_L0ziNN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxYFlNiMwvsY9TnPAx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxQp7gXEJVEVW9xG-t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxaoXpgXfTlay2OVsh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugzw7rKAkGQvQ-amREl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
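A response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, assuming the four dimensions shown (`responsibility`, `reasoning`, `policy`, `emotion`); the allowed label sets are only those observed in this particular response, not necessarily the full codebook.

```python
import json

# Label sets observed in the raw response above.
# The actual codebook may permit more labels than appear here.
OBSERVED_LABELS = {
    "responsibility": {"ai_itself", "user", "none", "government"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"none", "ban"},
    "emotion": {"mixed", "approval", "outrage", "resignation", "fear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and flag out-of-vocabulary labels."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in OBSERVED_LABELS.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}"
                )
    return records

raw = ('[{"id":"ytc_UgyLtVU1xVdj_HgHAfd4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"mixed"}]')
coded = parse_coding_response(raw)
print(coded[0]["responsibility"])  # ai_itself
```

Rejecting out-of-vocabulary labels at parse time keeps a malformed or drifting model output from silently contaminating the coded dataset.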