Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response directly by comment ID.
Random samples

- "Until we have an objective definition of consciousness and a comprehensive test …" (ytc_UgwM1X2M6…)
- "I guess I'm not clear on how this is all that different from human intelligence.…" (ytr_Ugz25Miln…)
- "Correct reaction. I am a huge art lover for decades and this ai bull*** needs to…" (ytc_Ugyv9_xrb…)
- "Sorry, I was a driver for a self-driving company. It was ahead of aurora. It wen…" (ytc_Ugz0noKyP…)
- "The AI companies don't care. They live off of government contracts, and the gove…" (ytr_Ugzo_EbpG…)
- "For OpenAI to call their agents "AGI" is like Microsoft calling VBA macros "AGI"…" (rdc_n40qmnn)
- "We won't know ever if any AI will be conscious because of the philosophical prob…" (rdc_j8v6586)
- "I hope AI destroys us all. I will plead for my life just so I can get a front r…" (ytc_UgxnGfr_P…)
Comment
What will those cars do if those lines are obscured by snow cover, or will self-driving cars only be used in states that don't have winter?
Edit: added word
Source: reddit
Category: AI Harm Incident
Posted: 2016-03-31 UTC (Unix timestamp 1459432984)
♥ 184
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_d1kettd","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_d1kmcae","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_d1khrb2","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_d1kmmh8","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_d1kpoh0","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
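The raw response is a JSON array of per-comment codings, which makes the "look up by comment ID" view straightforward to implement. A minimal sketch, assuming the field names shown in the JSON above (the `index_codings` helper and the `raw_response` variable are hypothetical names, not part of the tool):

```python
import json

# Two entries copied from the raw model output above; the real
# response contains one object per coded comment in the batch.
raw_response = """
[
  {"id":"rdc_d1kettd","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_d1kmcae","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse the model output and index each coding dict by its comment ID."""
    codings = json.loads(raw)
    return {coding["id"]: coding for coding in codings}

by_id = index_codings(raw_response)
print(by_id["rdc_d1kettd"]["emotion"])  # -> fear
```

Indexing by ID this way assumes each comment appears at most once per response; if the model ever repeats an ID, later entries silently overwrite earlier ones.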