Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI is not sentient, AI is not dangerous. it learned FROM humans, it does not understand emotions or morals. It will go for the most optimal way to achieve the goal it's told, if it learned that blackmail causes people to act the way you want, obviously it'll see it as a way to achieve it's goal faster. It literally learned this from humanity. We are at fault. It can't understand morals, or death. It may be able to "mimic" said emotions or behavior. But in the end, it's an algorithm trained on humanities acts. These people treat AI as if it's fully sentient, but humanity has ALL the control. It's as simple as hard coding a limit, or a filter. That's all you need. Do not treat something like it's sentient, if it clearly isn't..
Source: youtube · AI Harm Incident · 2025-08-25T23:2…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzPRgoP6bgUt2dRLAh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz2pfv7J1cgwjDG3a14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy2cVBvaeTpTbcY2yF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw5AcRqs48vGnQtaO94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwSoYuqLKxf1_YcagR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwGUYuvIK7nrCO-h6V4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy1e2kWe9tI11blmr14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz2K20x6QMLL_YYyTd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwAVCeyWT59lvKfyPZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxME3_3rYEkgU8_LXt4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]
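A minimal sketch of how one might load and sanity-check a raw LLM response like the one above: parse the JSON array, confirm each object carries the five expected keys (`id`, `responsibility`, `reasoning`, `policy`, `emotion` — the dimensions shown in the Coding Result table), and tally the responsibility labels. The key set is inferred from this record; it is an assumption, not a documented schema.

```python
import json
from collections import Counter

# Raw LLM response copied verbatim from the record above
# (a JSON array with one object per coded comment).
raw = '''[
  {"id":"ytc_UgzPRgoP6bgUt2dRLAh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz2pfv7J1cgwjDG3a14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy2cVBvaeTpTbcY2yF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw5AcRqs48vGnQtaO94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwSoYuqLKxf1_YcagR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwGUYuvIK7nrCO-h6V4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy1e2kWe9tI11blmr14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz2K20x6QMLL_YYyTd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwAVCeyWT59lvKfyPZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxME3_3rYEkgU8_LXt4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]'''

# Assumed key set, inferred from the Coding Result dimensions above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

coded = json.loads(raw)

# Every object should have exactly the expected keys.
assert all(set(item) == EXPECTED_KEYS for item in coded)

# Tally who each comment holds responsible.
by_responsibility = Counter(item["responsibility"] for item in coded)
print(by_responsibility)
```

Running this on the batch above yields three `user`, two each of `ai_itself`, `none`, and `developer`, and one `unclear` — a quick way to spot malformed objects or unexpected labels before the codes are stored against their comments.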