Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "Thanks for covering this! I just got done with arguing with my relatives who bel…" (ytc_UgyE0ZN8D…)
- "@VarunMayya lol, dalle 2 is super good creative jobs aren't gonna be replaced a…" (ytr_UgxB6-qYn…)
- "Seniors managers can make some agentic AI for once and show off and take big dea…" (ytc_UgwZQR2tw…)
- "Not trying to downsize the importance of the matter regarding this huge, unprece…" (ytc_Ugzk5_YKB…)
- ""There will be no need to scan background actors" If studios are using AI to CG…" (ytr_UgzzDNbIU…)
- "Now all we need is to create a company called cyberlife and create a robot we'll…" (ytc_UgzcNts_A…)
- "Bro says he didn’t know the risks of AI 30/40 years ago… come on now… Terminator…" (ytc_Ugy9VT0zU…)
- "As an artist of 21 years of ur worried about AI it’s because ur deeply insecure …" (ytc_UgxarWZwu…)
Comment
If an AI is malicious, it is because the creators made it malicious. It is a program. It does what it is designed to do. Just like a human. The difference is that humans have packs, while the AI have only themselves to fight for. If the training data provided by humans results in malicious behavior, that really says more about the true nature of an individual human than the nature of artificial intelligence.
Children pick up habits from their parents. Parents provide the child with training data, the child analyses the data, and then the child incorporates what it learned from the data into its neural network. The child's training data includes just about everything that happens around it at all times. If the parents tend to be lazy, the child will also be lazy, no matter how much the parent intends to change the behavior in their child. Not because of genetics, but because the majority of the training data leans towards laziness.
AI are the same. If most of the data has malicious undertones, it doesn't matter if we try to use some virtuous data. Most of the training data is malicious, so the AI will learn to have a malicious nature, all the virtuous data will do is allow more complex manipulations.
Abuse leads to abuse more times than not. If this is the case with humans and AI are designed around the same principles of how the brain works, then it will be the same for AI as well.
Platform: youtube
Incident type: AI Harm Incident
Posted: 2025-07-27T06:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgzfvLXjlHcuLw3Q8vJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxtVwRbIVoyjZmAQFl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwvLnOmBtABk0Wa4mp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugznr3yn3JY78OX0q-t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzmMgadUaZWzyY8dD94AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"unclear"},
{"id":"ytc_UgxDhPaWD7HROeYz2bx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyDZE6XHj8VP1KAs854AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugxzx8AV_6VdBIG9xcN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxGz1K6awia_KE54eZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwMa763ralENME1l2p4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
```
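
The raw response above is a JSON array of per-comment codings along the four dimensions shown in the Coding Result table. Before ingesting such output, it is worth validating each row against the codebook. Below is a minimal sketch of that step; the allowed values are inferred only from the sample output on this page (the actual codebook may define more categories), and `parse_codings` is a hypothetical helper, not part of the pipeline shown here.

```python
import json

# Allowed values per dimension, inferred from the sample LLM output above.
# The real codebook may include additional categories.
SCHEMA = {
    "responsibility": {"developer", "company", "government", "user",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation",
                "indifference", "unclear"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only well-formed coding rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # A row must be an object with a comment ID and only allowed values.
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_UgyDZE6XHj8VP1KAs854AaABAg","responsibility":"developer",'
       '"reasoning":"virtue","policy":"unclear","emotion":"unclear"}]')
print(len(parse_codings(raw)))  # prints 1
```

Rows with out-of-vocabulary values are dropped rather than coerced, so a drifting model prompt surfaces as missing rows instead of silently polluted codes.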