Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I stump Google AI today absolutely no context on the matter. it a first for me. usually it tries to convince me that I'm wrong. this is a human contidition not AI. here you go "AI will not be safe to human as long there is one evil programer or hacker, just think of aIl Terrorism or a whole country of terrorists that rape, beat, and kill there own people like the middle east. we can not stop terrorist attacks completely . they happen all over the world all, we cant stop them unless we stop there indoctrination."
YouTube · AI Moral Status · 2025-08-26T01:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzpJM16cXJj7RdyXZF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugz-ibpMyHjVvQQO3q94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyYMElQFK6FE36adpF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgyJUJBKo0y7BTUDMSp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz5RyHpWlofN_hPC5F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgztGHFa-aWRqMLBHox4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxbm1908YYWU7gAFAp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgwKA_eEdDGVVN0c9fZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw8IKq9MY13R9kK3hF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyWeeS7pNNRc7Jvq6N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
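The coded dimensions shown above are recovered from this raw response by parsing the JSON array and indexing on the comment `id`. A minimal sketch (the `raw` string here is a truncated stand-in for the full batch response; the ids and values are taken from the data above):

```python
import json

# Truncated stand-in for the raw LLM response: a JSON array of coding objects.
raw = (
    '[{"id":"ytc_UgzpJM16cXJj7RdyXZF4AaABAg","responsibility":"developer",'
    '"reasoning":"consequentialist","policy":"liability","emotion":"fear"},'
    '{"id":"ytc_Ugz-ibpMyHjVvQQO3q94AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]'
)

# Build an id -> coding lookup from the parsed array.
codings = {item["id"]: item for item in json.loads(raw)}

# Fetch the coding for the comment displayed on this page.
coded = codings["ytc_UgzpJM16cXJj7RdyXZF4AaABAg"]
print(coded["responsibility"])  # developer
print(coded["emotion"])         # fear
```

Because the model returns one object per comment in a single batch, a missing or malformed `id` would silently drop that comment's coding, so production code should validate that every requested id appears in the parsed array.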