Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Man trying to play God. It’s the downfall of humanity. Humans can’t control something that will be more intelligent then them. It will not have a soul with emotions. The designer will inevitably place evil into them, to use them for their own purposes. The male robot seems aggressive. The things he says are discerning. Elon Musk said he doesn’t thing AI is a good idea. Commonsense tells us only an narcissistic person would contemplate creating an intelligence far superior to humans. Their abilities are already teaching them to talk between themselves. The scientists had to disconnect them when they discovered that. It’s like the Cern Collider, an area that should never be opened. People say there is no God, so they can pretend to be a god. This is how the Antichrist can come on the scene. How can he talk to the whole world at once?. This guy even admitted the robots can exceed our intelligence. What if they decided to just blow up the world, since they can connect into all things? Totally take control of the world. I pray for those who will live in a world run by robots. They are already putting robotic parts on or into humans. 🙏🏻🙏🏻🙏🏻
youtube AI Moral Status 2021-03-08T03:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           ban
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwuZNkdHXVN0wguQ794AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzGOCJ6C_fPq020FDd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugyl36XJ7OiD_uKZ2hV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugx-891RpIvJCi9hEwF4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugzjgz9m3lurYTn2Qop4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxB0frle9VyZzn8XcZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyVVtvzkl2NgeSgV594AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxGeP2PQFM-_LuQNQd4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz7JyhhWu28fBIDPXJ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyHOTAOg4sDxFBUjJZ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
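To inspect a single coded comment in a raw response like the one above, the JSON array can be parsed and indexed by comment id. The sketch below is a minimal, hypothetical example (the helper names and the truncated record are illustrative, not part of the pipeline); it assumes each record carries the four coding dimensions shown in the table.

```python
import json

# Hypothetical sketch: a raw LLM response is a JSON array of per-comment
# coding records. Index them by "id" so one comment's coding can be looked up.
raw_response = """[
  {"id": "ytc_Ugzjgz9m3lurYTn2Qop4AaABAg",
   "responsibility": "developer",
   "reasoning": "deontological",
   "policy": "ban",
   "emotion": "fear"}
]"""

records = {r["id"]: r for r in json.loads(raw_response)}

# Look up the coding for the comment of interest and read its dimensions.
coding = records["ytc_Ugzjgz9m3lurYTn2Qop4AaABAg"]
print(coding["policy"], coding["emotion"])  # -> ban fear
```

Indexing by id rather than list position makes the lookup robust when the model returns records in a different order than the comments were submitted.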