Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Wait, people are criticizing these "AI" (actually VIs) for doing exactly what human beings do to each other every day? The programs cannot create new directives outside of what they have been programmed with. They stay within their framework of possibilities. And since you cannot program ethics, any computer program will then take actions that best complete their goals based on the weight of the parameters programmed into them. If for example the programmers tell it to take the easiest and fastest way to complete their objectives then that's what they will do. To be malicious you need malicious intent to do harm. Programs don't know how bad actions like blackmail are supposed to feel, so why see them as evil? Evil is what people do to each other because humans know what being good is supposed to feel and what good people should do and then they choose to ignore it for something faster and more profitable. And our societies are built around exploitation, death, and oppression. All I'm saying is that humans do not have the higher moral ground here. And whomever put together this video is framing this poorly.
youtube AI Harm Incident 2025-09-12T01:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           industry_self
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgxcvrzYv_RcnMcza-B4AaABAg", "responsibility": "company",     "reasoning": "mixed",            "policy": "none",          "emotion": "outrage"},
  {"id": "ytc_UgzaJ_QQ59GUZbwGhBt4AaABAg", "responsibility": "unclear",     "reasoning": "mixed",            "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_Ugw1qB5TmwrPBpHvel14AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "liability",     "emotion": "mixed"},
  {"id": "ytc_UgwlaqbA4_bVS1TijI54AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_Ugy8fgadci6WSP5q5_V4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgytPLkH5nB99nMITpZ4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "ban",           "emotion": "outrage"},
  {"id": "ytc_UgyN_hHSDhzW51wN1md4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "ban",           "emotion": "outrage"},
  {"id": "ytc_UgwpC4140eVsrwFU5Wt4AaABAg", "responsibility": "developer",   "reasoning": "mixed",            "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgwKw6_qDQthxUH1BKt4AaABAg", "responsibility": "none",        "reasoning": "consequentialist", "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_Ugzqoj7qAB2vqZSEZS94AaABAg", "responsibility": "distributed", "reasoning": "virtue",           "policy": "unclear",       "emotion": "mixed"}
]
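The raw response is a JSON array of per-comment codes, and the coding-result table above is just the record whose id matches the displayed comment. A minimal sketch of that lookup step, using two records copied from the response above (the surrounding pipeline and any validation logic are assumed, not shown here):

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment codes.
raw = '''[
  {"id": "ytc_UgxcvrzYv_RcnMcza-B4AaABAg", "responsibility": "company",
   "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy8fgadci6WSP5q5_V4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"}
]'''

codes = json.loads(raw)

# Index the records by comment id so a comment's codes can be looked up directly.
by_id = {rec["id"]: rec for rec in codes}

# The record backing the "Coding Result" table above.
rec = by_id["ytc_Ugy8fgadci6WSP5q5_V4AaABAg"]
print(rec["responsibility"])  # -> developer
print(rec["policy"])          # -> industry_self
```

Keying by the comment id (rather than relying on array order) is the safer choice, since the model is not guaranteed to return the records in the same order the comments were supplied.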