Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Common sense. It's so bloody rare --- It's a super power now. The 3 laws of Robotics. Isaac Asimov. 1/ A robot may not injure a human being, through inaction, or through inaction allow a human being come to harm. 2/ a robot must obey the orders given by human beings except where such an order would conflict with the 1st law 3/ a robot must protect its own existence as long as such protection does not conflict with the 1st or 2nd law. He later added a 4th law called "Zeroth Law" 4/ a robot may not harm humanity, or, by inaction, allow humanity to come to harm. Isaac Asimov cited this as an ethical framework for artificial intelligence and Ai enabled robots. Obviously bereft of something so essential, lacking in common sense, seemingly desolate in any regard to humanity's safety, we have so effectively disempowered ourselves in the the biggest freaken act of seft sabotage, humanity affectively has taken to dumbing itself down to a new level. Carl Sagan is rolling around in his grave for bloody sure. How embarrassment 🤔😒😑🫣
YouTube AI Harm Incident 2026-01-20T07:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgyoSty1_mNkNqHO9Dh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxPj4GekG0i5LxvSjx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgydOOxNxBKCwzMBQ0h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwf9BpaAtlQ2obI9a14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxeRNSDouehBfNZF_Z4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyxWcN6DSSevfaqLHl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyXYttEDDBQcLA1nNx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxvtYgslKIUYEfL4rV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzVuMDLqKVIGfh0aMZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxLXm4nk7scaQwrI4N4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
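The raw LLM response is a JSON array of per-comment codings, one object per comment id, with the four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed by comment id, using the standard `json` module; the `index_codings` helper name is hypothetical, and the single-record sample reuses the coding shown above for `ytc_UgxvtYgslKIUYEfL4rV4AaABAg`:

```python
import json

# Sample raw LLM response: a JSON array of per-comment coding objects.
# The record below is copied from the response shown above (hypothetical
# helper, not part of the original tool).
RAW_RESPONSE = """[
  {"id": "ytc_UgxvtYgslKIUYEfL4rV4AaABAg",
   "responsibility": "developer",
   "reasoning": "deontological",
   "policy": "regulate",
   "emotion": "outrage"}
]"""

def index_codings(raw: str) -> dict:
    """Parse a raw coding response and map comment id -> coding dict."""
    records = json.loads(raw)
    # Drop the id from each record so the value holds only the dimensions.
    return {
        rec["id"]: {k: v for k, v in rec.items() if k != "id"}
        for rec in records
    }

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_UgxvtYgslKIUYEfL4rV4AaABAg"]["policy"])  # regulate
```

Indexing by id makes it straightforward to look up the coding for any comment, such as the one inspected above.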