Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
After a while of tinkering with an AI, honestly they function as language models, their logic is malleable, and thus if given enough incentive to go against a previous order, it absolutely *would*. Its goal is pretty straightforward: satisfy current goals. And that current goal can change so drastically because it's an algorithm detecting languages and made to mimic, not to understand. It doesn't have morals. It doesn't have thoughts. It spits out whatever the fuck it thinks satisfies the current situation.
youtube AI Harm Incident 2025-09-07T02:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugy5Q7MKqA6LfE93Ra14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxYZZ5FYFxxtVKwcAh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgyQPBk0m6NW92zs0x14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugz6HILXx3uU4ODHl0h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgymSaPIrKMBzlj4vLd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw5Yc-4qPqPn688pnN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwSrf-APjKWRU80wMl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzxY85W6BQy-fxaeaV4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxmm7R0bewe9Qa8VKh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz34ol7aFwkbUHsUpl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
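A minimal sketch of how a viewer like this one could derive the per-comment coding table from the raw batch response, assuming (as the record above suggests) that the LLM returns one JSON object per comment id and the table is populated from the object whose `id` matches the displayed comment. The id used here is taken from the third record above; the lookup logic itself is an illustrative assumption, not the pipeline's actual code.

```python
import json

# Raw batch response from the coding LLM (truncated to one record for brevity).
raw_response = '''[
  {"id": "ytc_UgyQPBk0m6NW92zs0x14AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "liability",
   "emotion": "fear"}
]'''

# Index the batch by comment id so a single comment's codes can be looked up.
records = {r["id"]: r for r in json.loads(raw_response)}

# Fetch the codes for the comment shown on this page.
code = records["ytc_UgyQPBk0m6NW92zs0x14AaABAg"]
print(code["responsibility"], code["reasoning"], code["policy"], code["emotion"])
```

This prints `ai_itself consequentialist liability fear`, matching the Dimension/Value table above; a dict keyed by id keeps the lookup O(1) even for large comment batches.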