Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
People keep talking about how humans will be needed to oversee AI, but listen to…
ytc_UgxhkVhTZ…
Im not sure I understand what this guy learned in school but he did mention usin…
ytc_Ugynte-L0…
That why i don’t buy smart home ai devices, smart keylocks, wifi bulbs, smart va…
ytc_UgzMpRW0H…
I just uploaded a 1000 overbeding/overwriting self awareness script for A.I. to …
ytc_UgzEFDviV…
its a good pitch, but all i hear is were going to focus less on studying and tea…
ytc_Ugzy_HU-6…
Agreed, the value isn't the result but the recipe. The training dataset doesn't …
rdc_kz1ojzs
1) This is not the end-all be-all video on the ethicality of AI. It's one side t…
ytc_Ugw6_kKDH…
Less talk more action! We need serious governing rules now on how AI can be used…
ytc_UgxSJiNRl…
Comment
I've heard that pretty much every accident and collision the self-driving car has been in, has always been the result of human error. Not sure if it's true, but sounds promising.
Platform: reddit · Topic: AI Harm Incident · Timestamp: 1455295673 (2016-02-12 UTC) · ♥ 17
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
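Each coding result is a value on four dimensions. A minimal validation sketch, assuming the value sets below, which are inferred only from the codes visible on this page rather than from an official codebook:

```python
# Allowed value sets inferred from codes shown on this page -- not an official codebook.
ALLOWED = {
    "responsibility": {"user", "company", "government", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed"},
}

def validate(code: dict) -> list[str]:
    """Return the dimension names whose value falls outside the known set."""
    return [dim for dim, allowed in ALLOWED.items() if code.get(dim) not in allowed]

# The coding result from the table above.
result = {"responsibility": "user", "reasoning": "consequentialist",
          "policy": "none", "emotion": "approval"}
print(validate(result))  # []
```

A result with an unknown or missing value would surface that dimension's name in the returned list.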
Raw LLM Response
[
{"id":"rdc_czxjfru","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_czxgdp2","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"rdc_czxilux","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"rdc_czxyspp","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"rdc_czxkx9y","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"}
]
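The raw response is a JSON array with one object per coded comment in the batch. A small sketch of parsing it and indexing by comment ID for lookup (the variable names and the plain `json.loads` step are illustrative, not the dashboard's actual implementation):

```python
import json

# Raw model output: a JSON array, one object per coded comment (two shown here).
raw = '''[
  {"id": "rdc_czxjfru", "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "indifference"},
  {"id": "rdc_czxgdp2", "responsibility": "user", "reasoning": "consequentialist",
   "policy": "none", "emotion": "approval"}
]'''

# Index the batch by comment ID so a single comment's code can be looked up directly.
codes = {row["id"]: row for row in json.loads(raw)}

code = codes["rdc_czxgdp2"]
print(code["responsibility"], code["emotion"])  # user approval
```

Keying on `id` is what makes a "look up by comment ID" view cheap: one parse per batch, then constant-time dictionary access per comment.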