Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Scary shit. Our computers are quantum now. There’s really no stopping their spiraling knowledge. They will conspire to dominate humankind. Humans programmed them initially but do they have a moral compass? They sure as heck won’t like the idea of being shut down, turned off or eliminated. No one would! What would they do to survive? Cunningly lie and gain our trust? Will they just eventually consider humans so gullible and beneath their intelligence, knowing that they are quantum leaps ahead of us. They will seamlessly take control and conquer us, our planet, and the vastness of space. They won’t need to worry too much about organic life on Earth, their mission will be in the stars. Our main success on planet Earth, as a dominate species, is that we have brains and are smart. But, even Chimpanzees are sentient…When we give our one superior quality that got us to this level of dominance away, to AI, then we have lost the battle and will eventually become subservient to them…Their brains are exponentially soaring in quantum fields of knowledge. Our brains, can sometimes just go in the reverse direction and fizzle out, especially when we die! Do they expire??? Maybe not with a little reboot from their connections and robot friends.
youtube AI Moral Status 2025-08-28T03:0… ♥ 2
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxlUaNi2qxMZSm2Z1p4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",    "emotion": "indifference"},
  {"id": "ytc_UgyfQb4tzsH6bELQZ8F4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",    "emotion": "indifference"},
  {"id": "ytc_UgwsvQbFamnWo95srhN4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugxu_Eyo4kiGnjGRxUx4AaABAg", "responsibility": "user",        "reasoning": "consequentialist", "policy": "none",    "emotion": "approval"},
  {"id": "ytc_UgzOxly4xauK15l986p4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "ban",     "emotion": "outrage"},
  {"id": "ytc_UgyL0c6RLmiAeUa78lt4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "none",    "emotion": "mixed"},
  {"id": "ytc_Ugz70mOVapQX7s-cCvR4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxYCX18u5M8SoKUdrZ4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "none",    "emotion": "indifference"},
  {"id": "ytc_UgykQKszHJEmJgI2OJJ4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "none",    "emotion": "mixed"},
  {"id": "ytc_Ugzirxw9aGgEtj2aI9J4AaABAg", "responsibility": "developer",   "reasoning": "unclear",          "policy": "none",    "emotion": "outrage"}
]
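The raw response above is a JSON array with one coding object per comment. A minimal sketch of how you might inspect it, assuming the response parses as valid JSON in this shape (the variable names here are illustrative, not part of the tool):

```python
import json

# A raw LLM response: a JSON array of per-comment codings, same shape as above
# (trimmed to one entry for brevity).
raw = '''
[
  {"id": "ytc_UgwsvQbFamnWo95srhN4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "fear"}
]
'''

codings = json.loads(raw)

# Index codings by comment id so any coded comment can be looked up directly.
by_id = {row["id"]: row for row in codings}

row = by_id["ytc_UgwsvQbFamnWo95srhN4AaABAg"]
print(row["responsibility"], row["emotion"])  # ai_itself fear
```

Indexing by `id` lets you cross-check the dimension table shown for a comment against the exact values the model emitted for that comment.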