Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- The truth is that Chat GPT's primary goal as a Large Language Model Algorithm is… (`ytc_UgyudGQrh…`)
- What will that self driving truck do when it blows a front steer? It happens.… (`ytc_Ugz6kPPz1…`)
- You can be a game developer without being a coder, there's game engines set up f… (`ytc_UgzOVtH-X…`)
- I've seen so many opinions, especially about AI, from CEO's and I keep wondering… (`rdc_n0i61xu`)
- Right! A utility! OWNERS of AI COULD make it beneficial to humanity; “we the cre… (`ytc_Ugwxzp648…`)
- Reminds me of the guy who ordered 10 waymo taxis and just recorded them driving … (`ytr_UgzREUJVy…`)
- Well, AI images are still useful, i use them to see impossible situations, but i… (`ytc_UgxFdBn_T…`)
- they can't make the difference between time saving and abstraction because in co… (`ytc_UgyD5X8dx…`)
Comment
Well, if Ai would to know all of a history of a one person, the AI might think he is a criminal. Nobody gives you the full context on how they got to the positions
You could also say that AI just mimics human behaviour.
Also, I like how when 300 human kills 300 someone it's not that interesting, but when just 1 AI does it, it is lighted at the top of a tower with bright red arrows pointing at the headlines.
like, literally, AI learns only what it sees, if sees constant aggression towards each other, then it will do it too
Source: youtube | AI Harm Incident | 2025-07-24T01:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzHN0aHaovq7eeuyCV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzXGDEgrTHvwu0uLmJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzVgOy5YXG04NKcc954AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxkKOodE7wVmT03RJV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxAmjXa6TmFc0mYUnJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzDnmttbqB9oF5m0uh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxcukoamWoMCCWN0Eh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwLYHV3zssnKAr7Kyt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugx1cjMRO8w7VjwC8T54AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxsjlRRJmn5aUpBubd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}
]
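The raw response is a JSON array with one record per comment, each carrying the four coding dimensions shown in the table above. A minimal sketch of parsing and validating such a response before accepting the codes (the allowed value sets below are inferred only from the values visible in this output and may be incomplete; `parse_codes` is a hypothetical helper, not part of the tool):

```python
import json

# Allowed values per dimension, inferred from the responses shown above;
# the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself",
                       "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue",
                  "mixed", "unclear"},
    "policy": {"regulate", "ban", "industry_self", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "resignation",
                "indifference", "mixed"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip malformed entries
        # every dimension must be present with an allowed value
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"mixed","policy":"unclear","emotion":"fear"}]')
print(parse_codes(raw))  # the one record, since all four values are allowed
```

Validating against a fixed codebook like this catches the common failure mode where the model invents a category outside the schema, so a bad record is dropped rather than silently stored.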