Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm going to put this bluntley. When humans first developed tools, we hunted Animals that would cause harm, we tamed and hunted them into near extinction. We interbred animals so much, that Wolves changed to Pugs that can barely breathe. We developed technology and destroyed echosyetems. Humans made AI based on themselves. It doesn't take "experts" to figure out that AI will act and do the same. These 1% Rich boys know what is going to happen and they don't care because it is all for profit. When AI does take over, I hope it takes them out first. The best we can hope for, is that AI see's the harm these "CEO's" and "Tech companys/Investors" have caused and targets them instead of the masses of people who couldn't afford to do anything about it. Before people try and defend them, just remember that they would happily bring harm to Billions of living beings for profit.
youtube AI Harm Incident 2025-09-11T00:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           unclear
Emotion          resignation
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzaJH83UyX6B1d9qQR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgylVvmPnRYJHEj0ayV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwcsq1MaUtnZBXXwrJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugzc2L0QRjU-lOScfl54AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy3XzoWAA9qK51BNNR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwXTuq5NWPgyNqRfMh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz3DeFrsVTsQnFbidZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzJVCZenruiOHye_s94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxYDHtMJrAYGmuJUPV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwtsN3mwuvmsuNSzfd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
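The raw response is a JSON array with one record per coded comment, so extracting any comment's coded dimensions is a straightforward parse-and-index. A minimal sketch, assuming the response string is stored in a variable `raw` (the record shown below is the third entry from the response above, which corresponds to this page's comment):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# Here `raw` holds just the matching record for brevity; in practice it
# would hold the full response string.
raw = '''[
  {"id": "ytc_Ugwcsq1MaUtnZBXXwrJ4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "resignation"}
]'''

records = json.loads(raw)

# Index the records by comment id for direct lookup.
by_id = {rec["id"]: rec for rec in records}

rec = by_id["ytc_Ugwcsq1MaUtnZBXXwrJ4AaABAg"]
print(rec["responsibility"], rec["emotion"])  # developer resignation
```

The printed values match the Coding Result table above, which is how a coded record can be spot-checked against the exact model output.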