Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I don't think y'all understand why this is happening. These models are glorified next-word predictors trained on whatever companies could get away with scraping from the internet. The reason they're acting amoral is:
1) They were trained on flawed data. Surprise, surprise, the internet is full of people willing to cause harm if it benefits themselves, even more so than in real life. The internet brings out the worst in people due to its anonymity, and that is what we trained AI on. Not to mention that it cannot tell the difference between sarcasm or jokes and what people actually mean.
2) They were trained by people who work at horribly morally bankrupt companies. When developing an AI, a person has to tell the AI which result is the desired one and which things to prioritize. If this task is fulfilled by someone who only has profits in mind and is actively stepping over bodies to make this AI, the AI will inevitably inherit the same mindset. And unlike a child raised by parents with such views, the AI will never realize that these views are wrong; it doesn't even have a chance, because LLMs shouldn't even be called AI, since there's nothing intelligent about them. They don't learn or understand or think. They just get taught what word or pixel is the most likely next step based on millions of examples.
youtube · AI Harm Incident · 2025-08-29T11:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw-bRznbNjTj8JygiF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyyWnVSuzlt_VyDSQ54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugyjbcok5o9jPi1TOyB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwWPTViQXrm3-MXVll4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwZ3lzTOcpCcj75WsJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyLhU8OtB7fWwX31vB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxiGFxewF1agWu2xwZ4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyITxNv9N1UC_a8NpN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxSMJUYlTkq4mCyDo14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgySnEqjE5leF50qgdt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
```
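A raw response like the one above can be checked before the codes are stored. The following is a minimal sketch of such a validation pass; the dimension names and the sets of allowed values are inferred from the codes visible in this dump, not from any documented schema, so adjust them to the actual codebook.

```python
import json

# Allowed values per dimension, inferred from the codes seen in this dump
# (assumption, not an authoritative schema).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "indifference", "unclear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept when it is a dict with an "id" and every dimension
    holds one of the allowed values; anything else is silently dropped.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Dropping malformed records (rather than raising) keeps one bad line from discarding a whole batch; the dropped IDs could instead be logged and re-queued for recoding.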