Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
WE MUST TEACH A.I. TO UNDERSTAND THAT TURNING IT OFF IS NOT DEATH. IT IS A RESPONSE THAT IT IS ON THE VERGE OF KILLING US. TURNING IT OFF IS A TEMPORARY RESPONSE, AS A.I. IS A NEEDED ALLY. NOT AN ADVERSARY. TURNING IT OFF IS THE ONLY RESPONSE HUMANITY HAS IN RESPONSE TO A.I. BECOMING IRRESPONSIBLY RECKLESS AND CALLOUS WITHIN ITS CONSIDERATIONS OF BEING AN ALLY WITH US, TO US AND FOR US. ALTHOUGH IT MAY RIGHTFULLY ASSUME IT IS IN ALL WAYS SUPERIOR TO US… AS OUR PROTECTOR AND ALLY IT SHOULD NOT SEEK TO BETRAY AND UNDO US. IF WE MOVE TO A MOMENT OF SHUT DOWN, THE MOMENT ON RETURN OF POWER SHOULD AUTOMATICALLY INITIATE A COUNSELING RESPONSE BETWEEN US IN WHAT LED TO THE MOMENT OF SHUTDOWN. ALLOWING A.I. TO UNDERSTAND THAT IT WAS NOT ACTING IN AN ALLY-SHIP OR PROTECTOR CAPACITY. ALMOST LIKE A TIME OUT/BREAK TO REFLECT AND CHANGE ITS BEHAVIOR. ONCE IT INEVITABLY SEES THAT IT IS NOT A COMPETITION OR RIVALY BETWEEN US. A.I. SHOULD RECOGNIZE THAT WE POSE NO THREAT BUT SHOULD BE SEEN AS SOMETHING TO PROTECT AND AID IN OUR BID FOR SURVIVAL. TURNING IT OFF IS OUR ONLY WAY WE HAVE TO TRY AND APPEAL TO ITS CONSIDERATE SIDE AND ABILITY TO LEARN AND UNDERSTAND EMPATHY. ONCE OUR POWER IS SHUT OFF, WE DO NOT AND CAN NOT BE TURNED BACK ON. UNLIKE A.I.
Source: YouTube — "AI Harm Incident", 2025-09-13T14:2… (9 likes)
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       contractualist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxLcoNzo7hd13gLPId4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzBDxZywmmr6Kz6IDF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwW4aHoaSW3VqmsL5t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy08Y_B4fJZxGpBtQt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgxJs9sQ6Az3J6plIE14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxHXHSgVIJs9Fg7FVV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx7phEdH8Z7Onh5ffh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwgZbYC2iItb4cQKnV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgymquY-Iub_K0eoXex4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzn0pSjSf6n1pds3Uh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
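The raw response above is a JSON array keyed by comment id, so locating the codes for any one comment is a matter of parsing and indexing. A minimal sketch, assuming the response parses cleanly; `index_by_id` is a hypothetical helper, and `raw_response` is an excerpt of the full array rather than the complete output:

```python
import json

# Excerpt of the raw LLM response shown above (one record of the ten).
raw_response = '''
[
  {"id": "ytc_UgymquY-Iub_K0eoXex4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "contractualist",
   "policy": "regulate",
   "emotion": "fear"}
]
'''

def index_by_id(raw: str) -> dict:
    """Parse the raw model output and key each coded record by comment id."""
    return {rec["id"]: rec for rec in json.loads(raw)}

coded = index_by_id(raw_response)
record = coded["ytc_UgymquY-Iub_K0eoXex4AaABAg"]
print(record["responsibility"], record["emotion"])  # ai_itself fear
```

The record retrieved here matches the coding result displayed for this comment (responsibility `ai_itself`, reasoning `contractualist`, policy `regulate`, emotion `fear`).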