Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Interesting point, why isn't the self driving car following the general rule of … (ytc_UghNHFfbS…)
- Personally I wish I could draw but no matter how hard I try I usually end up hit… (ytc_Ugx6qFpDh…)
- fuck ai. poison all the images. license it under a cc license and sue the actual… (ytc_UgwL4TCXS…)
- there was also a black kid who unalived himself because his ai girlfriend. i s… (ytc_UgwaIneIr…)
- I'm a truck driver. And I have a hard time believing we're anywhere tlnear these… (ytc_UgwIIsMep…)
- You're truly right. AI is a the best thing can solve our problem and protect us … (ytr_UgxKdBRr_…)
- Well, we are fed up with these AI zealots. They always claim that AI will replac… (ytr_UgztCxzRw…)
- All this talk about AI reminds me of Olovka. It's been my go-to for turning lect… (ytc_Ugx3CN4n1…)
Comment
This is why I use Gemini.
And OpenAI's solution is fckd up. They will treat mentally ill ppl differently? So you are gonna dev a diagnosis bot then continue to allow it communicate? Bye bye OpenAI, see you on the other side.
youtube
AI Harm Incident
2025-11-07T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx1bcv5bB3PM4j-n4h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyWCWOZlOA_6gEyU8F4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgzFXcJjQKGI7vjpXfV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwHJd0OyUUYjMVg6Yd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzDg5Af2pWUCnGAmCR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw43puw_C8NJfGLk_J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx1Cy1JfIqOv1sNv5p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy-a4JrgoaOTErqhCB4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzgPecB9wy4wNDZBpB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzJKH7xAagLU3H3HbF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```
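A raw response like the one above can be parsed and checked before it is stored. The sketch below is a minimal validator, assuming the allowed values per dimension are those seen in the table and JSON on this page (the full codebook may define more); the function name `validate_coding` is hypothetical, not part of any tool shown here.

```python
import json

# Allowed values per coding dimension, inferred from the table and the
# raw response above. Assumption: the real codebook may include more.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "mixed"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each record against ALLOWED.

    Raises ValueError on missing ids or values outside the known
    codebook, so a malformed batch fails loudly instead of being stored.
    """
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={value!r}")
    return records

raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"none","emotion":"indifference"}]')
print(len(validate_coding(raw)))  # 1
```

Validating at ingest keeps downstream counts (e.g. the Coding Result table) free of typo categories that would otherwise silently fragment the tallies.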