Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "On a depressive note, I don't think we'll ever be free of this problem. Even if …" (ytc_Ugx8w6PER…)
- "AI's likely optics for investment hype. What's the H-1B situation... like some 4…" (ytc_UgzR0bJ9R…)
- "Super quick tutorial! I think it’s just as important to have tools like AICarma …" (ytc_UgwnKqn05…)
- "I either drown in a pool of my own PTSD or I use ChatGPT for a 24 hour on-call t…" (ytc_UgygK0JEA…)
- "Haha, I think you might have mistaken Sophia for a movie character! While she em…" (ytr_Ugw91icDh…)
- "Seems like Mark Zuckerberg should put all those AI centers in his backyard and h…" (ytc_UgztoA4GT…)
- "Yes but a right is something you fight for, it's not about waiting for the powef…" (ytc_UgyCQT40l…)
- "Still needs a lot of work. The uncanny valley is making me feel very uncomforta…" (ytc_UgxLS1pMM…)
Comment
I feel like it is not ai directly that "wants" to kill. I feel like we ourself are the problem, we have produced media like movies and books that tell the stories of cold ai's, willing to kill to sustain themselves. So considering that ai is trained on this very media it is only natural for it to emerge with this behavior. It is for these facts that i think the main problem is the data they are being trained on.
Ai is not sentient, it is not smart, it is only mimicing our behaviour and adapting to our most cliché thoughts about it.
youtube · AI Harm Incident · 2025-09-12T10:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgwOfUxvAj8M0tcIwE94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyCvsf1qsBbH9RhCq14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzc7-yXnXYcaqum7Kx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgycUYzB3MHSme4O2h94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxhjDkdrTCzaNuxa0h4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw6EgaI8_LpPb-DJwF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyC2BV3ze3x_WSjs6p4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyKUo26mF2EIz3Hc3R4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwTS5EZa_uXEgFdRAV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugy0ZtX9LqJ6_Xweg5V4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
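The raw response above is a JSON array with one object per coded comment, each carrying the four coding dimensions shown in the table. A minimal sketch of how such a response might be parsed and validated is below; the allowed value sets and the `ytc_`/`ytr_` ID prefixes are inferred from this one sample, not a documented schema, so treat them as assumptions.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# The real coding scheme may define additional categories (assumption).
ALLOWED = {
    "responsibility": {"none", "user", "government", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference",
                "approval", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> list:
    """Parse a raw LLM coding response and reject out-of-schema rows."""
    rows = json.loads(raw)
    for row in rows:
        # IDs in the sample start with ytc_ (comment) or ytr_ (reply).
        if not str(row.get("id", "")).startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected id format: {row.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim} value {row.get(dim)!r}")
    return rows
```

Validating against a closed value set like this catches the common failure mode where the model invents a new label mid-batch, so bad rows fail loudly instead of silently entering the coded dataset.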