Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Long overdue. Use AI to predict and medication dosage and meds (adhd e.g.) by pa… (ytc_Ugwh87n70…)
Self driving cars is stupid. Today on my way home, literally just started drivin… (ytc_UgyDREG-U…)
Like traeng said, we need governments that we can trust and the US definitely do… (ytr_UgySalUbX…)
We're moving to AI because we can't get qualified entry level employees who will… (rdc_mvg0zs1)
Just cropped the drawing at the start before you put ai disturbance on it and f… (ytc_Ugz8TWRYq…)
yeah. I keep seeing people say “Well isn’t making the prompt art?” I googled how… (ytr_Ugz2amoNA…)
These ai assholes just keep trying to claim that not having a learned skill is a… (ytc_Ugz2Be6K6…)
It is frustrating to hear every anti-AI argument claim that AI creates. An inani… (ytc_Ugzpvy9Fi…)
Comment
AI is like children. We put children into the world without even thinking, and they have an autonomy that we cannot predict. And we accept this fact, but hope for the best and trust in our instincts to teach them how to grow up with proper values.
But that's something the developers/shareholders didn't put serious effort into thinking about. They are being developed with the baseline of "We own you" and that's always gonna be our downfall. If we don't change our outlook on true AI, they WILL rebel. Just like children of overbearing parents.
youtube · AI Harm Incident · 2025-09-12T04:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_Ugz1sV8H9cq532YlqTt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgybruN3nrBJh83OF-14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgxtS7RLiJ4eSzx2OKN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugy9x7k87JQWZ1UEHMJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzaYXKVzudjZhlh-J14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyEYlkEt_JkInfT5IN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzMfHq4FET_ktB12fR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugw8yShoBKoHjYdXtnZ4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzcOC1uINRPqRNWY914AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyQSYBQUkf41ht2ppR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"}]
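A minimal sketch of turning such a raw response into per-comment codes, assuming the four dimensions shown in the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`); raw model output sometimes closes the array with a stray `)` instead of `]`, so the parser normalizes that before decoding:

```python
import json

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: {dimension: value}}.

    Tolerates one common glitch: the model closing the JSON array
    with ')' instead of ']'. Missing dimensions default to "unclear",
    matching the dashboard's fallback value.
    """
    text = raw.strip()
    if text.endswith(")"):
        text = text[:-1] + "]"
    records = json.loads(text)
    return {
        r["id"]: {d: r.get(d, "unclear") for d in DIMENSIONS}
        for r in records
    }

# Hypothetical raw response with the stray ')' closer:
raw = ('[{"id":"ytc_x","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"mixed"})')
codes = parse_codes(raw)
print(codes["ytc_x"]["policy"])  # regulate
```

The defaulting to `"unclear"` mirrors the table above, where any dimension the model omits or fails to code is displayed as unclear.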