Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I am not an AI hater, but i dont use it myself. AI in the consumer market looks …" (ytc_UgyIr2DbN…)
- "The internet is not going anywhere, what you are referring to is the evolution o…" (rdc_le59kht)
- "So tell me if putting a prompt into an AI is not 100% prompting then what is it…" (ytr_UgzGpJtJF…)
- "Companies need to start using Ai is a way that actually helps us. I think that a…" (ytc_UgzacE3Wv…)
- "What about the people who are tricked into having emotional (i.e., wrong) reacti…" (ytr_UgwCOtcNx…)
- "At around 1hr 30, tom took this from an intelligent conversation to whatever it …" (ytc_Ugw_ni13A…)
- "As an artist myself, I agree with you on this. AI is implemented in the wrong wa…" (ytc_UgyKV4j76…)
- "It's safe to extrapolate based on an empirical understanding of Moore's Law and …" (ytr_UgjuJ4g_5…)
Comment
> We actually don't. So far, tests have been strictly controlled, and in most cases, they have been the only self-driving car on the road. Not to mention that we have mere hundreds of hours of data. We need hundreds of thousands or millions before we can trust them so implicitly.

- Source: reddit
- Topic: AI Harm Incident
- Posted: 1475430191.0 (Unix timestamp)
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_d8ai0nx","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_d8almjm","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_d8b7vpz","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_d8ar0o0","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"rdc_d8azx7v","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
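A response like the one above can be parsed and sanity-checked before its values land in a coding table. The sketch below is a minimal illustration, not the tool's actual pipeline: it assumes the model returns a JSON array of objects with the four dimensions shown, and the allowed category sets are inferred from the samples on this page, not from the real codebook.

```python
import json

# Allowed values per coding dimension. NOTE: these sets are inferred
# from the sample output above; the actual codebook may define more.
ALLOWED = {
    "responsibility": {"developer", "company", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "approval", "fear", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into a dict keyed by comment ID.

    Raises ValueError if an entry is missing a dimension or uses a
    value outside the inferred category sets.
    """
    codes = {}
    for entry in json.loads(raw):
        comment_id = entry["id"]
        for dim, allowed in ALLOWED.items():
            value = entry.get(dim)
            if value not in allowed:
                raise ValueError(f"{comment_id}: bad {dim!r} value {value!r}")
        codes[comment_id] = {dim: entry[dim] for dim in ALLOWED}
    return codes

# Two entries copied from the raw response above, for illustration.
raw = """[
  {"id":"rdc_d8ai0nx","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_d8azx7v","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

codes = parse_coding_response(raw)
print(codes["rdc_d8azx7v"]["emotion"])  # fear
```

Keying the result by comment ID is what makes the "look up by comment ID" view above cheap: each coded comment's raw dimensions can be fetched directly rather than re-scanned from the batch response.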