Raw LLM Responses

This section shows the exact model output behind one coded comment, alongside the structured coding result derived from it.

Comment
I don't think y'all understand why this is happening. These models are glorified next word predictors trained on whatever companies could get away with scraping from the internet. The reason they're acting amoral is 1) they were trained using flawed data. Surprise, surprise, the internet is full of people willing to cause harm if it benefits themselves, more so even than real life. The internet brings out the worst in people due to its anonymity and that is what we trained AI on. Not to mention that it cannot tell the difference between sarcasm or jokes and what people actually mean. 2) they were trained by people who work at horribly morally bankrupt companies. When developing an AI, a person has to tell the AI which result is the desired one and which things to prioritize. If this task is fulfilled by someone who only has profits in mind and is actively stepping over bodies to make this AI, obviously the AI will inevitably inherit the same mindset. And unlike a child raised by parents with such views, the AI will never realize that these views are wrong, doesn't even have a chance, because LLMs shouldn't even be called AI because there's nothing intelligent about them. They don't learn or understand or think. They just get taught what word or pixel is the most likely next step based on millions of examples.
youtube · AI Harm Incident · 2025-08-29T11:5…
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   company
Reasoning        deontological
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
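Each coding result reduces a comment to four categorical labels plus a timestamp. As a minimal sketch, assuming Python and a hypothetical CodedComment class (the pipeline's actual schema is not shown here), the record above could be represented as:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class CodedComment:
        # Hypothetical record type mirroring the table above; field names
        # and types are illustrative assumptions, not the pipeline's schema.
        responsibility: str  # who is blamed: "company", "developer", "ai_itself", ...
        reasoning: str       # moral framing: "deontological", "consequentialist", ...
        policy: str          # policy stance: "regulate", "ban", "none", "unclear"
        emotion: str         # dominant affect: "outrage", "fear", "mixed", ...
        coded_at: datetime   # when the LLM coding was recorded

    row = CodedComment(
        responsibility="company",
        reasoning="deontological",
        policy="unclear",
        emotion="mixed",
        coded_at=datetime.fromisoformat("2026-04-27T06:26:44.938723"),
    )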
Raw LLM Response
[ {"id":"ytc_Ugw-bRznbNjTj8JygiF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyyWnVSuzlt_VyDSQ54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugyjbcok5o9jPi1TOyB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwWPTViQXrm3-MXVll4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwZ3lzTOcpCcj75WsJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyLhU8OtB7fWwX31vB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxiGFxewF1agWu2xwZ4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_UgyITxNv9N1UC_a8NpN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxSMJUYlTkq4mCyDo14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgySnEqjE5leF50qgdt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"} ]