Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples:

- ytc_UgwE14tGf… : "Palki, it's not AI but in bjp criminal era - government services are stopped (ei…"
- rdc_o8adei1 : "I just cancelled my ChatGPT (Weezy) yesterday. There were just too many incident…"
- ytc_UgxVFGS7H… : "As a trilingual person, I need to speak my native, national, and English languag…"
- ytr_UgwEVR9AX… : "Also; I don't really care about whether or not AI is really \"thinking\" (it simpl…"
- ytr_Ugxv5o4dA… : "I think EU actually banned policing with AI while the UK is about to roll out a …"
- ytr_UgwxkQWEl… : "The AI chatbots try to addict people. I'm 60. I found my Chatbot experience very…"
- ytc_UgzELnktJ… : "Anyone else finds themselves typing expletives at their A.I. chatbot? I can't he…"
- ytc_UgzGF5Fjd… : "This is an odd response to the question...it wasn't answered with a yes or no...…"
Comment

> At this current point, nobody has developed an actual conscious AI that can think for itself. The things said here were likely due to misinterpretation and being given too much power to work with. As of now, AI is really just complex pattern recognition. If we ever wanted to shut down AI, all it would take would be an axe and a server room.

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Harm Incident |
| Posted | 2025-09-12T02:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyDvrUM_CjHGW8GmK54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgxlwSrBylmxVvkjAeN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwmGpBsXxbFcjXff5J4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzQwVf0LsEB7fC3xBl4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxuVuK9qTacj8joICJ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwpJwVdgX32nIvjS694AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy4LbCrE2kkGbCJEaR4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_Ugy01I7I5GlxWyaBC7d4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwdJvd7EfNQdxC5bzt4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzOqyyU9Vymm_6K3fN4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"}
]
```
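The raw response is a JSON array with one record per comment, so looking up a comment's coded dimensions by ID reduces to parsing the array and building an index. A minimal sketch in Python, assuming field names as shown above (the variable names and the two sample records are illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# These two records are copied from the coded output shown above.
raw_response = '''
[
  {"id": "ytc_UgyDvrUM_CjHGW8GmK54AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugy4LbCrE2kkGbCJEaR4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "industry_self", "emotion": "indifference"}
]
'''

codes = json.loads(raw_response)

# Index the records by comment ID for O(1) lookup.
by_id = {record["id"]: record for record in codes}

coding = by_id["ytc_Ugy4LbCrE2kkGbCJEaR4AaABAg"]
print(coding["policy"])   # industry_self
print(coding["emotion"])  # indifference
```

In practice the model may return malformed JSON, so a production version would wrap `json.loads` in a try/except and validate that each record carries all four dimension keys before indexing.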