Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- 😮 you are all completely insane you're talking about AI like it's a moral decisi… — ytc_UgzBIEtJc…
- To me the working class does actual work. AI will only take away paper pushers. … — ytc_UgyFojsyg…
- Perhaps we’ve reached a point where the wanton degeneracy of creators is no long… — ytc_Ugzmjmmk6…
- I am all for the free market, but when choosing between “regulations and human o… — ytc_UgxnLQ1Nb…
- @earthman7678 I don’t have any real fear, but saying that ai ‘art’ has just as m… — ytr_UgxBHFNDL…
- Wouldn't robots and artificial intelligence in general better help us understand… — ytc_UggIJNl8K…
- Since you created AI / Why would you have a concern of AI exterminating humanity … — ytc_UgzgqD44W…
- wouldn't the program steer other cars in a more complex manner? in the scenario … — ytc_UgiOKnl1P…
Comment
I mean, that's expected, we build an AI to be perfect, to look for perfection, as long as we don't also build laws and walls into them to do that inside a frame of morality, safety and not allow certain actions, that AI will use anything it can to accomplish the goal you give them, since, at least for now, an AI has no morality or feelings, it's just a numbers game, and it's only goal is to succeed on their "mission", nothing exists outside that mission, and everything is fair play unless it's stated. An AI it's the perfect example of what the real world is, as long as there is a way, someone will find it and play that, that's why we need regulations and laws same goes for AI, we need to limit their actions, it's quite simple to understand why that's needed, just like we have those to stop people from doing something bad to benefit themselves, we need to code AIs in a way they can't act in a way we don't want them to, or they will, because, sadly, acting maliciously is always the easiest path to fast success.
youtube · AI Harm Incident · 2025-09-11T10:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
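Each coded comment carries one value per dimension. As a minimal sketch, the codebook below is an assumption inferred from the coding table and the raw responses on this page; the actual tool may permit additional values per dimension:

```python
# Hypothetical codebook: dimension names come from the coding table above,
# and the allowed-value sets are inferred from the raw LLM responses shown
# on this page. The real coding scheme may include more categories.
CODEBOOK = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation"},
}

def is_valid_coding(coding: dict) -> bool:
    """Check that a coded comment uses only known dimensions and values."""
    return all(coding.get(dim) in allowed for dim, allowed in CODEBOOK.items())

print(is_valid_coding({"responsibility": "developer",
                       "reasoning": "consequentialist",
                       "policy": "regulate",
                       "emotion": "fear"}))  # True
```

A check like this is useful for catching off-schema labels before they enter the dataset, since LLM coders occasionally emit values outside the requested categories.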
Raw LLM Response
```json
[
  {"id":"ytc_UgzC5nOfEKp81GCwoXl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyaoXPP8dKgiQeN8p14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyQjQ2ySfhN9GTf8St4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxpTOVvOtBlumzXDgJ4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyIUkePR3YLKOhSTkx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwzd6UVsTsxGdoPIPl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_Ugy2cIR1SGRoiyf9Fyp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxTCxZn3VL5P1MRQhJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwYQOW6snXi9PWF_6x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxwu01Y5dea0w6IKEt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
```
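The raw response is a JSON array of objects keyed by comment ID. A minimal sketch of loading it into an ID-indexed lookup table, which supports the "Look up by comment ID" view above (the two sample rows are copied from the response; variable names are illustrative):

```python
import json

# Two rows copied verbatim from the raw LLM response above.
raw = """[
  {"id":"ytc_UgzC5nOfEKp81GCwoXl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugwzd6UVsTsxGdoPIPl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"approval"}
]"""

# Index codings by comment ID so any coded comment can be fetched directly.
codings = {row["id"]: row for row in json.loads(raw)}

print(codings["ytc_UgzC5nOfEKp81GCwoXl4AaABAg"]["policy"])  # ban
```

Indexing by ID also makes it easy to spot duplicates: if the LLM codes the same comment twice, the later row silently overwrites the earlier one, so comparing `len(codings)` against the array length is a cheap consistency check.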